I. INTRODUCTION: THE OBLIGATION TO TEACH GRAMMAR AND PUNCTUATION SUCCESSFULLY TO ALL STUDENTS

Students enter law school to acquire the specialized knowledge and skills necessary for legal practice.[1] Their likelihood of success in the legal academy is subject to many measures: LSAT, GPA, and undergraduate institution and major. However, for a variety of reasons, students often enter law school without the fundamental skills necessary to thrive. One such skill is the ability to write using standard, correct grammar and punctuation.[2] Content aside, correct writing is a gatekeeper in the legal profession, critical for attorneys[3] and their clients.[4] Further, lack of knowledge of the rules governing standard or professional writing can serve as a proxy for other deficits in students’ previous instruction.

Many law schools have responded to the perceived lack of skills in entering and continuing law students with programs and courses targeted at writing and academic success in general.[5] However, to date, little quantitative data have been gathered and analyzed about entering law students’ grammar and punctuation skills or about satisfactory programs to address the instructional needs students may have. This leaves a gap in the discussion both of student needs and effective institutional responses. This Article discusses the writing seminar program at Michigan State University College of Law, which forms part of the first-year curriculum in legal writing and provides instruction in grammar and punctuation. We analyze our program and its success through quantitative data collected over five years from five cohorts of first-year students.

Our program is modeled on the writing specialist/writing seminar model pioneered at Seattle University School of Law by Professor Anne Enquist, who was gracious enough to consult with the MSU writing program and faculty.[6] Central to the Seattle program and our own are optional instruction—delivered through lectures, workshops, and office hours—full integration with the first- and second-semester legal writing courses, and mandatory assessment. Assessment in the writing skills program at MSU is based on students demonstrating proficiency, not ranking with a forced curve, and is otherwise ungraded. Students are required to demonstrate proficiency before exiting the program rather than exiting at a defined time regardless of skill level.

We collected data regarding students’ entering skills in grammar and punctuation, their attendance at optional writing seminars and office hours, their attainment of proficiency and actual score on the final assessment, and, where possible, cumulative law school GPA and bar results. Using these data, the study sought to answer the following questions:

  • Could a voluntary program that frames fundamental writing skills as professional development rather than remediation and is run on a proficiency model successfully engage students, particularly students most in need of instruction?

  • Did students who used optional program resources demonstrate increased skill with the material and greater improvement than students who did not use the optional resources?

  • Does student performance on an assessment designed to measure skill with grammar and punctuation predict or correlate with other outcomes of legal education, such as cumulative law school GPA or first-time bar passage?

Data indicate that the vast majority of students during the five-year study period voluntarily attended the writing seminars and attained proficiency by the end of the fall semester. The high level of attendance and early acquisition of proficiency testify to the potency of intrinsic student motivation when actions and outcomes are clearly identified and align with students’ aspirations for future professionalism, not remediation of past inadequacy. The data indicate that such quantitative studies, coupled with qualitative comments by students and faculty, are worthwhile measures of the contribution of programs like ours to students’ success. Our methods provide a case study in how to use outcome data to help evaluate instructional design to enable institutions and individuals to supplement more qualitative and subjective measures, such as instructor perception or personal anecdotes.

Part II of this Article analyzes obstacles to teaching conventions of grammar and punctuation in law school. Part III discusses the values of the MSU writing seminar program, intended to lower barriers to students learning fundamental writing skills; the instructional sequence; and the assessment instruments. Part IV indicates the methods used for our study of the program’s effectiveness, including data collection and parameters. Part V provides and discusses the results of the study in terms of student engagement and success within the program. Part VI presents the correlations between internal program data—that is, scores on the students’ initial assessment—and cumulative law school GPA and later performance in bar passage. Finally, Part VII discusses overall implications of our study and our conclusions.

II. OBSTACLES TO TEACHING FUNDAMENTAL SKILLS SUCH AS GRAMMAR AND PUNCTUATION

There are several practical obstacles to teaching grammar and punctuation in law schools. As Alaka’s 2010 article points out, faculty members both inside and outside the legal academy, as products of contemporary education, may feel relatively ill at ease teaching this material.[7] Time is limited in any course, and coverage requirements may discourage teaching writing skills. Teaching skills is demanding, time consuming, and relatively expensive within the context of the first-year curriculum. Yet the discussion surrounding grammar and punctuation frequently moves beyond these concerns to a more value-laden rhetoric than might be expected for an effort to impart a set of writing conventions that, as we will show, are fairly easily grasped by most students. This attitude is all the more striking because the cost to students and future clients of a lack of skill in writing and editing is readily acknowledged to be high.

The debate over writing and particularly writing mechanics is shaped by the traditional dichotomy between reasoning/meaning and expression/ornamentation in which the first set of terms is considered fundamental and controlling while the second is derivative, subordinate, and less valuable.[8] Thus, for example, the LSAT identifies the goal of the scored portions of the test as measuring skills considered “essential for success in law school: the reading and comprehension of complex texts with accuracy and insight; the organization and management of information and the ability to draw reasonable inferences from it; the ability to think critically; and the analysis and evaluation of the reasoning and arguments of others.”[9] The purpose of the writing sample, on the other hand, is not defined, and the sample is not assessed. While the writing sample is frequently said to play little role in admissions, students are warned, “It’s unfortunate, but misspellings and bad grammar can quickly undermine the best of arguments.”[10]

Thus, from the beginning, grammar and punctuation are readily conceptualized as inferior to or separate from reasoning and meaning.[11] Yet a long-standing association of mistakes in writing mechanics with loss of credibility, a fact often invoked to underscore the necessity of error-free expression,[12] increases the perceived significance of such errors for both professors and students. The close linkage of credibility and virtue in our rhetorical tradition, which historically and currently associates credibility with the sincerity and moral worth of the speaker, ties lapses in credibility to lapses in character.[13] Students enter a world in which poor writing mechanics—often a result of inadequate instruction, that is, of institutional failure—are reconceptualized as, or conflated with, students’ personal deficiencies.[14] Thus, faculty can be reluctant to teach writing mechanics on the grounds that students “should” normatively already know these rules.[15] Not surprisingly, students are correspondingly reluctant to accept instruction if “remedial” education is tied to personal deficits in those who are to be remediated.[16]

However, lack of teaching and of learning in this area perpetuates a traditional role for diction and grammar, including writing mechanics: maintenance of social capital and reinforcement of hierarchy. Looking for deviation from elite norms in writing and speaking is a handy way to eliminate, for example, job applicants. Poor grammar and punctuation readily become class markers in a hiring situation in which social class already plays an identifiable role, particularly at elite firms.[17] Thus, the failure to teach grammar and punctuation exacerbates the perception that law school is a sorting mechanism, and may allow sorting on social grounds.

III. VALUES, INSTRUCTIONAL SEQUENCE, AND ASSESSMENT IN THE MSU WRITING SEMINAR PROGRAM

A. Defining Values

1. Reconceptualizing Instruction in Grammar and Punctuation to Remove Potential Stigma

Students are often deterred from seeking instruction when instruction is conceptualized as remedial or indicative of lack of ability.[18] This aversion to seeking help is amplified by students’ previous success with writing as undergraduates, which allows them to believe that their current skills will suffice for the new challenges of writing in law school. To overcome this barrier, we re-framed the discussion surrounding writing mechanics to focus on the transition from discipline-specific undergraduate writing conventions to the conventions employed in the legal academy and legal profession.[19] We cultivate the idea that all students, not just low-performing students, need instruction in what is fundamentally a new discourse.[20] This approach builds on research on undergraduate writers indicating that students who self-identify as “experts” in the conventions of writing from previous experience show less improvement as writers over the long term than students who self-identify as “novices”[21] and that students who are aware that conventions, standards, and techniques differ based on context hold a strategic advantage over their peers who lack such an understanding.[22] This approach harmonizes with the rhetoric of the first year of law school as one in which students are initiated into a new way of thinking with different rules and vocabulary.[23] Our approach invokes students’ professional aspirations for the future rather than orienting them to an inadequate past.[24] Grammatical and stylistic choices—for example, the legal preference for serial commas and maintenance of the “that”/“which” distinction—are presented in the MSU Law program as discipline-specific skills that contribute to the students’ acquisition of a new professional identity[25] and branding as attorneys.[26] Explanation of the role of rules of punctuation and grammar in enhancing clarity and in litigation allows students to incorporate what they are learning into the overall project of the first year, rather than experiencing instruction as filling gaps and correcting past failures or as mitigating problems derived from socio-linguistic provenance.[27]

2. Adoption of an Assessment and Proficiency Approach, Rather Than Evaluation and Ranking

The instructional sequence and assessments are discussed in detail in the next sections (B and C), but, in short, all students take an initial required formative assessment covering grammar and punctuation conventions.[28] Students are provided with a recommended course of study based on their initial score, offered a variety of instructional opportunities, and required to pass a post-test demonstrating proficiency. The score necessary to pass the Proficiency Test is clearly advertised at writing seminars and announced in legal writing classes. Students’ grades in second-semester legal writing are not released until proficiency is demonstrated on one of the Proficiency Tests.

One of the goals of legal education is to develop self-motivated students who have the intellectual and personal characteristics to continue to master new material and meet new challenges throughout their careers.[29] Yet this professionally and personally desirable outcome may be incompatible with the extrinsic motivators of law school, for example, forced curves and high-stakes testing.[30] Further, the evaluate-and-move-on approach of most law school courses, with its final goal of sorting and ranking students[31] (and removing those who do not make the grade), does not effectively facilitate students’ acquisition of the fundamental skills they will need to serve their eventual clients.[32] Finally, research on student help seeking points to the importance of a context of mastery goals, as opposed to comparison-based ability goals, to prompt students to seek out instruction when it is needed.[33]

For our program, student motivation was vital because student participation in the instruction offered in the writing seminar program was explicitly designed as voluntary. Further, our program abandoned the traditional zero-sum, competitive ranking of law school courses and focused instead on what we call a proficiency model, emphasizing that all students could achieve proficiency over time.[34] The positive incentives for high levels of student engagement in the writing seminar program invoke the students’ desire to join a professional community and assuage their anxiety about doing so. The material in the writing seminars is presented as essential for attorneys and common to the professional community. Thus, students are motivated to learn because the material explicitly links to the goals of socialization and formation of professional identity that students and faculty alike see as central to the first year of law school.[35] Further, the proficiency model calms students and allows them to anticipate success during the stressful 1L year because it both requires and allows all students to succeed at some point during their first year, something not true of their other courses. Thus, students are emotionally drawn to and invested in the writing seminar process. Allowing students to choose what instructional resources to access and even when to take the Proficiency Test (some students explicitly postpone it until the second semester) honors students’ independence, puts them in charge of their own education, and contributes to their emerging view of themselves as professionals who must manage competing priorities.

A proficiency model is distinct from a pass/fail model. While a pass/fail model does eliminate internal ranking, it remains an evaluation-only system to the extent that students are not required to learn the material, but simply are evaluated on whether or not they learned it. In many instances, failure has no repercussions beyond notation on a transcript and does not require students to continue learning. Proficiency is both a more demanding standard than passing—we set the bar at 75% correct answers, which would traditionally correlate with a “C,” while institutions may define lower scores as “passing”—and, more importantly, a requirement for exit: no student leaves our program until the student meets or exceeds the 75% requirement. Students and professors continue to work until proficiency is achieved. This is appropriately called a proficiency approach because the goal of instruction, understood by students and professors alike, is becoming proficient with specific content, not evaluation. A proficiency approach is the appropriate model for fundamental skills whose benefits will be experienced by students and clients alike. This approach reflects that of the professional world that students aspire to enter and is most characteristic of clinics, where, as in law practice, poor work product is met with a request that it be redone satisfactorily rather than with a low grade and nothing further.

3. Promotion of Student Autonomy Through Optional Instruction and Student Choice of Timing

Our program couples optional instruction with mandatory assessment.[36] Thus, within limits, this model relies on students to set their own priorities and goals, but we provide them with pertinent information to help guide their choices: roadmaps for success explicitly tied to assessment of their initial skills. In the design phase, we believed that tying the writing seminar program to the students’ own goals, honoring their priorities in the first semester, and amplifying intrinsic motivation would promote students’ enthusiasm and encourage participation in the program.[37] This approach dovetails with the goal of helping students achieve proficiency rather than ranking students based on their scores.

Performance in the writing seminar program had no impact on students’ grades, except to the extent that legal writing professors would also grade on correct mechanics as described in the next subsection. Students could prepare for the test in whatever way they chose: through self-study, office hours, partial engagement with the writing seminars, full attendance at the writing seminars, or any combination. Further, students were free not to engage with the writing seminar program at all during their first semester but to postpone all engagement until the spring semester.[38] The voluntary nature of the program was clearly indicated on all legal writing and program documents and explicitly discussed by all legal writing professors, who also, however, discussed the benefits of early attention to the material.

4. Full Integration with the First-Year Legal Writing Courses

During the study period, students understood the importance of the skills taught in the writing seminar program because the program and content were fully integrated into the legal writing courses, present on a common syllabus, and referenced often by the legal writing professors, who shared common curriculum goals and vocabulary with the seminar program.[39] The writing specialist’s name, contact information, and office hours appeared at the head of every legal writing course syllabus, below the main professor’s name. The optional seminars appeared on the legal writing syllabus and were held, whenever possible, during scheduled class time. The initial Writing Skills Inventory was returned by the legal writing professors during class, and the professors discussed its importance and the significance of the results for each student. Professors used a common “Writing Checklist” with each graded assignment. The checklist keyed students’ errors to the writing seminar and section in the writing textbook discussing the appropriate issue. In addition, legal writing professors were able to refer students to the writing specialist for individual appointments, although, in keeping with the philosophy of student autonomy, students were never required to visit the writing specialist.

Institutionally, the writing specialist, who teaches the writing seminar program, is a full-time clinical faculty member and part of the legal writing program. Further, the writing specialist attends all writing program meetings and makes regular overall and individual reports about student progress.

5. Commitment to Measuring Success, or Failure, and Primacy of Instructional Efficacy

Once the materials for the writing seminar program were finalized, we began collecting the data discussed in the results below. However, our students come first. Because no student leaves the program until proficiency is achieved—a standard no less demanding of the professor than of the students—we have a particular interest in knowing what works and whether students are mastering the material at the earliest possible opportunity consistent with their own priorities. Thus, we continually use program data to improve our program in order to provide our students with the best educational experience possible. For example, the first year after the study period ended, when we realized that students would benefit by switching the order of two seminars, we made the change.

B. Instructional Sequence

The Michigan State University (MSU) program has several parts that span the first year of law school. The sequence has remained unchanged since the inception of the program and is discussed here. For more information about construction and content of the assessments, see the next subsection.

1. Writing Skills Inventory

During fall orientation, entering first-year students take the Writing Skills Inventory (WSI), our pre-instruction assessment. The assessment is in the same format and covers the same content as the Proficiency Tests, our post-instruction assessment. The WSI is a purely formative exam.[40] Students receive the results of the WSI with each error keyed to the appropriate section of the textbook for our program and the time and date of the seminar that addresses that particular skill or subject. In addition, students are given an overall score and advice about the best strategy to pass the Proficiency Test based on their score. The information suggests several possible strategies based on the range into which students’ scores fall: “attend the Writing Seminars,” “review Just Writing and . . . exercises,” and “schedule an appointment with the Writing Specialist . . . to review the test and discuss your plan of action.”[41] After being exposed to the ways to prepare for the Proficiency Test, students are free to pick whatever approach they prefer.

2. Writing Seminars and Office Hours

After students receive the results of the WSI, they may attend any or all of five optional writing seminars taught by the writing specialist, each covering a different set of skills assessed on the WSI and Proficiency Test. In addition, the writing specialist has extensive office hours for individual conferences with students[42] and provides optional editing seminars to help students apply skills taught in the seminars to their own writing.

3. Integration with the First-Year Legal Writing Course

The integration with the first-year legal writing course means that skills taught in the writing seminars and covered in the recommended sections of the textbook are reinforced in first- and second-semester legal writing.

4. Final Proficiency Test

At the end of the fall semester, students take a Proficiency Test (PT). The PT is structured identically to the WSI and covers the same content. However, the PT is written in professional legal English and uses legal content and examples. Students must score at least 75% (24/32) on the PT to receive their grade in their second-semester legal writing course. Otherwise, the score on the PT has no impact on students’ grades. Once a student has demonstrated proficiency, his or her obligations in the writing seminar program end. However, students are free to continue to consult with the writing specialist, and many do.

5. Additional Instruction for Non-Proficient Students

Collection of assessment data for this study ended after the first PT. However, within the program, first-year students who do not demonstrate proficiency on the fall PT take a second PT offered in February of their first year. The writing specialist consults individually with each student about how to prepare for the second test. Students can join small study groups facilitated by the writing specialist, work independently with support from the writing specialist, or study entirely on their own. Students who do not achieve proficiency on the second PT are strongly encouraged to enroll in a “boot camp” led by the writing specialist.[43] After several weeks of intensive review and study with the writing specialist, or whatever preparation they determine is in their best interest, these students take a third PT. During the study period, no first-year student ever had a grade held due to failure to pass a PT.[44]

C. Creation and Content of the Assessments

1. Creation of Assessments

All assessments used in the MSU Law Writing Seminar program were developed in-house by two of the Authors of this Article, Daphne O’Regan, co-director of the legal writing program, and Jeremy Francis, the writing specialist. Each assessment took dozens (if not hundreds) of work hours to complete, test, pilot, validate, and revise. Our operating assumption has always been that our assessments must reflect the pedagogical values guiding the writing seminar program, not be merely a tool for evaluation. They are criteria referenced, not norm referenced, as students’ performance is weighed not against other students, but against the standard of proficiency, which we expect all students to attain.[45]

2. Content of Assessments

We teach and assess only skills we consider essential to clear communication: sentence fragments and run-on sentences; commas, including restrictive and nonrestrictive clauses; semicolons; apostrophes; tense; pronoun and verb agreement; passive voice; and parallel structure.[46]

3. Format of Assessments

Both the WSI and PT have 32 multiple-choice questions with 5 possible answers, listed A–E. The questions are divided into three sections. One section tests students’ ability to read and edit an extended writing sample via multiple-choice questions that allow the students to edit sentences in various ways or identify them as “Correct as written.” A second section provides short scenarios followed by sentences describing the scenario and asks the students to choose which sentence best describes the scenario using correct grammar or punctuation. A third section provides a number of sentences and asks students to choose which sentence or group of sentences uses correct grammar and punctuation.[47]

Multiple-choice testing is well suited for rule-based content such as grammar and punctuation.[48] It allows us to assess all students on the same content. This common, objective assessment would be difficult with student-generated writing samples because students could avoid areas of weakness and discomfort, could limit the length of responses to provide less data, and could fail, therefore, to address all areas we wish to assess.[49] In addition, multiple-choice tests allow assessment of proofreading skills and of correct choices among various punctuation options to convey particular meanings. Further, clearly defined wrong answers provide additional data for professors, students, and researchers about potential sources of student confusion. However, the tests do have shortcomings. First, they only indirectly test students’ positive ability to craft content by choosing among a variety of grammatical tools and punctuation. Second, in common with all multiple-choice tests, the tests may privilege students whose real skill is simply taking multiple-choice tests or who have greater visual acuity or attention to detail.[50] We have attempted to reduce the impact of such differences between students and the impact of reading speed, which is not something we are testing, by making the tests essentially untimed as discussed below.

To minimize the potential impact of differing assessments on student outcomes, we have either reused identical tests or established the equivalency of versions of the test. We use the same WSI every year because the inventory is informative only and students have only recently arrived at the law college when it is administered. However, we rotate three PTs to make sure that student outcomes are not influenced by familiarity with the test or by information from other students. The tests assess the same skills in exactly the same way, but in a different order and with different text. The equivalence of the tests has been demonstrated by an overwhelming consistency from year to year. For example, from 2009 to 2012, the same number of students did not pass the first test, roughly forty out of each cohort of approximately 300. Out of these forty, every year three students did not pass the second test.

4. Tight Focus on Students’ Ability to Apply the Rules of Grammar and Punctuation

To the extent possible, our assessments are designed to measure only a student’s ability to use the skills taught in the writing seminar program.[51] We believe that becoming a competent editor is a realistic goal for all students in a first-year legal writing course, and such competence is what we identify as proficiency. Moreover, the limited time in the writing seminars dictates the number of topics that can be covered effectively.[52] Thus, we avoid teaching and testing vocabulary associated with grammar and punctuation, although we do associate names with the concepts of parallel structure and passive voice because we find those topics are difficult to assess unless students explicitly identify the concepts. In general, for example, students need only demonstrate knowledge that a comma is needed or not needed at a certain point in a sentence, not be able to name the particular rule, among the many governing comma usage, that applies.

To reduce interference from new students’ unfamiliarity with legal writing, the WSI uses vernacular English, not legal English. Thus, the assessment of students’ skills in editing grammar and punctuation is not muddied by regression in skills when students are asked to transfer skills from one context to another.[53] The writing sample for the first section of the WSI is in a genre familiar to entering students: the personal statement.[54] All students who apply to MSU Law have written a personal statement and have probably considered what makes such texts effective. Thus, we hope that students have interacted with the genre in question not only as readers, but also as authors with a vested interest in the efficacy of such documents, extending their familiarity beyond that of a novice. Because failure cannot be attributed to transfer problems, we are confident in the outcome of the WSI as a reliable measure of skill in grammar and punctuation.

Because we wish to test skills independent of reading speed and to limit the impact of visual processing speed and acuity, the PTs were essentially untimed during the study period.[55] Thus, we are not measuring how quickly students can read.[56]

IV. STUDY METHODS

A. Sample Size

We have data from 1,476 first-year students: five cohorts who started at MSU in the falls of the academic years 2007–2011.[57] The number of students in each cohort ranged from 282 to 367, with an average of 295.

B. Data Collection

1. Data Collected

The following objective measures of student performance were collected for every student during each year of the study and entered in a master data set: WSI score, the student’s answer for every question on the WSI, first (fall) PT score, the student’s answer for every question on the PT, attendance at each optional writing seminar, participation in office hours, fall and spring semester writing grades, and first-year GPA. For the earlier cohorts of students that have graduated, graduating GPA and bar passage on the first try were also collected. In addition, qualitative data were collected via the course evaluations students completed at the end of each fall semester.
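A concrete sketch of the resulting master data set may help readers picture the analysis that follows. The record below is illustrative only; the field names and types are ours, not the actual variable names used in the study.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class StudentRecord:
    """One row of a hypothetical master data set; field names are illustrative."""
    student_id: str                       # anonymized identifier
    wsi_score: int                        # 0-32, pre-instruction inventory
    wsi_answers: List[str]                # per-question responses, e.g. ["A", "C", ...]
    pt_score: int                         # 0-32, first (fall) Proficiency Test
    pt_answers: List[str]
    seminars_attended: int                # 0-5 optional writing seminars
    office_hour_visits: int               # visits between the WSI and the fall PT
    fall_writing_grade: Optional[float]
    spring_writing_grade: Optional[float]
    first_year_gpa: Optional[float]
    graduating_gpa: Optional[float]       # None if the student has not yet graduated
    passed_bar_first_try: Optional[bool]  # None if the student has not yet taken the bar
```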

2. Student Achievement Data: WSI and PT Scores and Responses

Students recorded answers for the multiple-choice questions on the WSI and PT on OCR answer sheets (similar to SCANTRONs). We advised students to mark preferred answers on the original testing documents as well, in case of a marking error on the OCR form. The MSU Scoring Office processed tests using the GRADER III[58] system, which generated several reports, including detailed feedback for students, overall scores, item analysis, and basic histograms. The Scoring Office’s “Data Files,” containing student response data and raw scores, were used for some of the student data itemized above.

3. Student Engagement Data: Writing Seminar Attendance and Office Hours Participation

Data were collected on each student’s engagement with the optional instructional resources available.[59] Students signed in at writing seminars, and the writing specialist recorded attendance. The writing specialist also recorded individual student attendance at office hours. For the purposes of this study, we analyzed only attendance at office hours between the date of the WSI, during orientation, and the PT in November. While students, of course, continued to attend office hours after the November PT, these data were not included, as they occurred outside the study period.

4. Instructional Efficacy Data

The combination of assessment data and data about student engagement allows us to track the effectiveness of the seminars and office hours.

5. Predictive Data: Final Grades and Bar Passage

To study the relationship between students’ scores on assessments and subsequent measures of student success, grades for fall and spring semester writing courses, first-year and graduating GPA, and bar passage were obtained from the MSU Law registrar’s office. As noted above, the timing of this Article means that some students have not yet graduated; thus, graduating GPA and bar data were unavailable for those students.

V. RESULTS AND DISCUSSION: STUDENT OUTCOMES AND ENGAGEMENT

A. Positive Student and Faculty Reception of the Writing Seminar Program

The faculty of the law college has enthusiastically embraced the writing seminar program. The legal writing faculty finds the seminar program to be effective and useful as it shifts primary responsibility for grammar and punctuation to the writing specialist and allows the legal writing professors to spend more class time on other topics while remaining confident that students will learn these fundamental skills.[60] The other law college faculty members have also been very supportive. The writing specialist recently became the first member of the teaching staff without a J.D. to become a clinical faculty member with a rolling contract. Just Writing, the text for the writing seminars, is used as a resource in many upper-division courses. The writing specialist is often asked by faculty members, both clinical and tenure-stream, to give seminars at the clinics and to journals and law review and also to help support students with upper-level writing projects, including academic papers, law review notes, and writing samples.

More importantly, and perhaps surprisingly, first-year students have enthusiastically embraced the writing seminar program.

Figure 1 plots the simple percentage of students who attended at least one of the five seminars offered. As the figure makes clear, an overwhelming majority of students attend at least one of the writing seminars during the fall semester. As no student is required to attend a writing seminar, this attendance reflects widespread, voluntary engagement with the program.

Figure 1: Bar Plot of Students Attending at Least One Writing Seminar

Figure 2 further disaggregates participation among students and demonstrates that the strong attendance result just described is not driven by students who attended only a single seminar. To the contrary, the modal observation in our data is a student who attends all five of the seminars offered during the fall semester. Sixty-two percent of our students attended a majority of the seminars.

Figure 2: Histogram of Student Attendance at Writing Seminars

Qualitative evidence also suggests widespread student engagement and satisfaction. Student evaluations of the writing seminar program during the years of this study revealed scores consistently between 4.4 and 4.8 out of 5 possible points in the areas of Course Organization, Course Materials, Professor Availability, and Overall Professor Effectiveness. First-year students frequently contact the writing specialist via email to relay successes and accomplishments related to the seminar material. One first-year student echoed a common sentiment in an email to the writing specialist: “The classes were really helpful, and I can seriously see an improvement in my writing for Advocacy [second-semester legal writing class]!” Even second- and third-year students continue to report long-term success. One second-year student reported on a family issue involving a hired lawyer and the impression this lawyer made: “The pleadings were riddled with these types of mistakes. It’s good to know that I’m starting to get the rules.” A third-year student reported that “of all of the material from 1L [year], I have found myself referring to your writing seminar slides countless times in the spring, summer, and 2L fall. The first slides on commas I open at least a couple times a month.” Particularly pleasing is the fact that some students even report enhanced self-awareness and self-efficacy with regard to their writing. A second-year student wrote, “I was able to recognize a sentence [containing] passive voice in my memo, and I was able to revise it! I know it may sound lame, but even though I have not mastered punctuation and grammar, I know that I am learning how to get better.”[61]

B. 100% Student Proficiency by the End of the 1L Year, Regardless of Entering Scores

Entering students began the writing program with the WSI; Figure 3 below shows the distribution of scores on the WSI. The WSI consists of 32 multiple-choice questions. The x-axis of the figure provides the range of scores, and the y-axis indicates the percent of students receiving a particular score. The average score on the WSI across the five-year study period is 22.3 out of 32 with a standard deviation of 4.3. The median score is 22. The fact that most students scored below 24, the score required for proficiency on the PT, on a test written in vernacular rather than legal English, confirms the general perception on the part of the law faculty that many students, despite generally strong undergraduate GPAs and intellectual strengths, enter the law school unfamiliar with the fundamental rules of grammar and punctuation that the writing seminar program teaches and assesses.[62]

Figure 3: Distribution of Scores on WSI

Student scores in Figure 3 are divided into three regions—low, medium, and high—by two dashed vertical lines (appearing between 19 and 20 and between 25 and 26). As discussed above, when the results of the WSI were returned to students, each student was given information tying particular scores to steps that would maximize the likelihood of achieving proficiency on the first PT, at the end of the fall semester. The information focused on the future and on student effort, minimizing any reference to students’ innate ability or inadequate past. This emphasis primed students to consider help seeking as a positive step to obtain outcomes within their control, which has been shown to enhance help seeking and thus, in this case, to boost attendance at the writing seminars and office hours.

The approximately 25% of students who scored below 20 are identified as low-scoring students. These students were encouraged to contact the writing specialist to develop a plan for the semester.[63] Just under 51% of students, those with scores between 20 and 25, are in the middle group. These students were encouraged to evaluate the items they missed on the WSI to see what patterns emerged and to attend the appropriate seminars.[64] Roughly 24% of students, achieving scores of 26 or higher, were high-scoring. They were encouraged to determine what, if any, items they missed and to review using the textbook or attend the pertinent writing seminar.[65]
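A short sketch may make the feedback rules concrete. The function below restates the thresholds and recommendations just described; the advice strings are paraphrases, and the function itself is ours, not part of the program’s materials.

```python
def wsi_feedback(score):
    """Return the WSI score category and the recommended course of study.

    Thresholds follow the text (below 20 = low, 20-25 = medium, 26 and above = high);
    the advice strings paraphrase the feedback described above rather than quoting
    the actual program documents.
    """
    if score < 20:
        return ("low", "Contact the writing specialist to develop a plan "
                       "for the semester and attend the writing seminars.")
    if score <= 25:
        return ("medium", "Review the items missed on the WSI, look for "
                          "patterns, and attend the appropriate seminars.")
    return ("high", "Review any missed items in the textbook or attend "
                    "the pertinent writing seminar.")

# Example: wsi_feedback(17) -> ('low', 'Contact the writing specialist ...')
```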

Student outcomes were highly satisfactory for all five years of the study. As noted above, all 1,476 students participating during the study period scored at the proficient level by the end of their first year in law school. Thus, over the course of a year in the writing seminar program, even students scoring as low as nine or ten on the 32-question WSI in vernacular English improved to a minimum of 24 correct answers out of a possible 32 when tested on the same skills in the context of legal English by the third PT.

C. Significant Improvement or Proficiency by the End of the Fall Semester for a Vast Majority of Students with Reduced Disparity in Their Skills

Figure 4 provides a closer look at student progress in the fall from WSI to PT score by plotting each student’s WSI score against the student’s later PT score, aggregated into a sunflower plot.[66] The x-axis shows WSI scores. The y-axis shows scores from the PT at the end of fall.

The unique—and potentially confusing—feature of a sunflower plot is the way it portrays multiple students with the same set of WSI/PT scores. A single dot represents an individual student with a particular WSI Score-PT Score pairing. Once more than one student achieves the same pair of scores, each student is instead represented by a line segment, which is often referred to as a “petal.” For example, there were two students who scored 15 on both the WSI and the PT. Similarly, towards the top right corner of the figure, a point with five petals represents five students who scored 28 on the WSI and then scored a perfect 32 on the PT. Each additional petal denotes another student with that pair of WSI/PT scores. Thus, the denser the petals (e.g., WSI = 25, PT = 29), the more students are represented as having progressed from those particular WSI scores to PT scores.
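For readers who want to see how such a plot is built, the sketch below shows one way to draw a sunflower plot in Python with matplotlib. It is an illustration of the technique, not the code used to produce Figure 4.

```python
import numpy as np
import matplotlib.pyplot as plt

def sunflower_plot(wsi_scores, pt_scores, petal_len=0.35, ax=None):
    """Plot paired scores: a dot for a unique (WSI, PT) pair and n radiating
    'petals' when n > 1 students share the same pair of scores."""
    if ax is None:
        _, ax = plt.subplots()
    counts = {}
    for xi, yi in zip(wsi_scores, pt_scores):
        counts[(xi, yi)] = counts.get((xi, yi), 0) + 1
    for (xi, yi), n in counts.items():
        if n == 1:
            ax.plot(xi, yi, "k.", markersize=4)
        else:
            for angle in np.linspace(0, 2 * np.pi, n, endpoint=False):
                ax.plot([xi, xi + petal_len * np.cos(angle)],
                        [yi, yi + petal_len * np.sin(angle)],
                        "k-", linewidth=0.7)
    ax.plot([0, 32], [0, 32], color="gray")       # no-change diagonal
    ax.axhline(24, linestyle="--", color="gray")  # proficiency cut score on the PT
    ax.set_xlabel("WSI score")
    ax.set_ylabel("PT score")
    return ax

# Hypothetical usage, where wsi and pt are equal-length sequences of scores:
# sunflower_plot(wsi, pt); plt.show()
```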

Figure 4: Student Progress from WSI to PT 1

The diagonal, solid line between the lower left and upper right corners of the figure allows easy assessment of whether students scored higher or lower on the PT than on the WSI. Points above this line correspond to students whose scores on the PT were higher than their scores on the WSI. Conversely, students below the line have PT scores lower than their original WSI scores. And, of course, score pairings on the line itself indicate that a student had identical scores on both tests.

The vast majority of students (88%) show an increase in score between the WSI and the PT. The average score on the WSI is 22.3 with a standard deviation of 4.3; the median WSI score is 22. The average score on the PT is 27.3 with a standard deviation of 3.3; the median PT score is 28. Thus, the typical student scored substantially higher on the PT than on the WSI. Only 8% of students decreased in score between the tests.[67] The remaining 4% stayed the same. Because only the PT is in legal English, we can be confident that this improvement did not occur due only to greater familiarity with legal English as the first semester progressed. In fact, the switch to legal English for the PT probably put greater strain on the students’ skills.[68]
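These summary figures are simple descriptive statistics computed from each student’s pair of scores. The sketch below shows one way to reproduce them from a data frame with hypothetical columns wsi_score and pt_score.

```python
import pandas as pd

def summarize_change(df):
    """Summarize WSI-to-PT movement; assumes hypothetical columns
    'wsi_score' and 'pt_score'."""
    change = df["pt_score"] - df["wsi_score"]
    return {
        "wsi_mean": df["wsi_score"].mean(),          # study reports 22.3
        "wsi_sd": df["wsi_score"].std(),             # 4.3
        "pt_mean": df["pt_score"].mean(),            # 27.3
        "pt_sd": df["pt_score"].std(),               # 3.3
        "pct_improved": (change > 0).mean() * 100,   # 88% in the study
        "pct_declined": (change < 0).mean() * 100,   # 8%
        "pct_unchanged": (change == 0).mean() * 100, # 4%
    }
```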

The amount of vertical distance between a score pairing and the line represents the magnitude of the changes in students’ score between the two tests. Score pairings far above or below the line show comparatively larger net changes between the WSI and the PT. Again, as will be confirmed in Figure 5 below, the plot reveals the extent to which students who scored lower on the WSI, that is, students on the left of the figure, showed the greatest gains, often large ones.

Besides general improvement, the plot includes a dashed horizontal line at a y-axis value of 24, the PT score required to demonstrate proficiency and pass the test. Students scoring at or above this value have completed the program. By the end of the fall semester and the writing seminars, 87% of students in the five-year study period have PT scores of 24 or higher. The plot indicates that nearly 700 students scored below 24 on the WSI but nevertheless achieved 24 or above on the PT at the end of the fall. These students fall to the left of 24 on the x-axis (WSI score) but above the horizontal proficiency line. Further, and equally importantly, as evidenced by the smaller standard deviation on the PT results than on the WSI, performance is less widely dispersed around the average score on the PT than it was around the lower average on the WSI. This means that over the course of the fall semester, there is a reduction in the disparity in skills evidenced by entering students. Particularly gratifying are the individual success stories hidden in the dots in the upper left corner of the plot. These are students who scored below 16, that is, below half correct, on the WSI, yet at the end of the fall scored 30 or above: one student’s increase was 21 points, from an initial score of 9, less than one-third correct, to a final score of 30, almost perfect.

On the other hand, the students who scored comparatively well on the WSI, thus indicating that they were more accomplished writers or editors from previous academic writing experiences, were the most likely to show a decrease in scores, perhaps because they were the least likely to attend writing seminars, as discussed below, and because of the transition to legal English. These students fall below the diagonal line in the upper right corner of the plot. This distribution conforms to the theory, discussed above, that students who are proficient in another discourse or genre often have a harder time adjusting to the changed demands of a new genre, perhaps because of their self-perception as already proficient writers.[69] Nevertheless, except for a very few outliers over five years, even students whose scores decreased remained clustered relatively near the diagonal line, indicating that while improvement may not be universal, pronounced decreases in scores were unusual.

Students located below the horizontal line, that is, those who scored below 24 on the PT, continued with additional instruction and testing until they could demonstrate proficiency on a subsequent test. The students who occasioned us the most distress were those very low-performing students on the left of the figure who, despite very large gains in scores between the WSI and the PT, still did not manage to demonstrate proficiency and needed to continue in the program. However, for the most part, these students did not register dissatisfaction. On the contrary, they were generally happy with their improvement and understood the need to continue with further study that they felt would be rewarded. One student who did not pass the first PT commented, “I feel as though failing the first test was actually a long term benefit for me as it caused me to learn and understand the rules [of writing and grammar].” Upon passing the PT on the third and final try for the year, another student commented, with uncharacteristically unrestrained capitalization and punctuation, “OH MY GOODNESS! Thank you!!! I feel like I just won the lotto or something!” Not only did this student feel personal satisfaction at having passed the test, but she also immediately requested “to go over the questions I got wrong,” even though she had no obligation to do so. As indicated by the engagement numbers discussed above, we argue that affirming student autonomy, including for those students who make choices that we might not prefer, is an important part of the success of the program.

The widespread improvement, which exceeds the 83% of students indicated in Figure 1 as accessing the optional writing seminar program, may also indicate the importance of the integration of the writing seminar program—via coordinated instruction and feedback—with the legal writing classes. Such integration may work not only to promote engagement with the writing seminar program, but also to enhance or consolidate skills in those students who chose self-study as opposed to attendance at writing seminars or office hours.

D. Greatest Fall-Semester Engagement by Students with the Lowest Level of Skill

Figures 1 and 2 above have already indicated the high level of student engagement, but a frequent problem with courses perceived as remedial is attracting the very students who may benefit most.[70] We wished to know which students were attending the writing seminars and how often. The writing seminar program presents itself as professional development for all students, not remedial skill building. However, as discussed above, the feedback provided to each student with his or her results on the WSI does recommend courses of action depending on the score on the WSI. Here, what is of interest is the program’s ability to attract low-scoring students to the seminars and the impact of the information returned with the WSI on writing seminar attendance. Figure 5 suggests the writing seminar program is successful in motivating low-scoring students to attend while not sacrificing the higher-scoring students who could benefit from instruction.

Figure 5: Writing Seminar Attendance by WSI Category

Note: Low, Medium, and High refer to the WSI score categories and feedback given to students. See the accompanying text in part V, sub-section B, supra, for additional details.

Along the x-axis, we plot each of the three WSI Score Category feedback types and the number of seminars attended. The y-axis shows the percentage of students attending that number of seminars for each group. As the height of the bars makes clear, we observe the highest level of attendance by students in the low WSI Score Category; conversely, the high group is most likely not to attend any seminars—by a five-times-higher percentage than the low group—and has the smallest percentage of students attending all seminars—less than half the percentage of the medium group and less than one-third the percentage of the low group. Medium students attend 71% more seminars, on average, than do high students.[71] Another way to understand this is by comparing students within categories: of students with low WSI scores, 93% attend at least one seminar; of students scoring in the middle on the WSI, 88% attend at least one seminar; and finally of students scoring high on the WSI, only 64% attend at least one seminar. Further, although 28% of all students attend all five seminars, the scores by group are as follows: 43% of low-scoring students attend all five seminars, 28% of medium-scoring, and 12% of high-scoring.[72] Thus, while there is significant participation across student groups, those with the lowest skills are voluntarily attending the seminars in greater proportions and more consistently.
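The within-category percentages reported here are group-wise proportions. The sketch below shows one way to compute them, assuming hypothetical column names; the numbers in the comments are the values reported in the text.

```python
import pandas as pd

def attendance_by_category(df):
    """Within-category attendance rates, assuming hypothetical columns
    'wsi_category' (low/medium/high) and 'seminars_attended' (0-5)."""
    grouped = df.groupby("wsi_category")["seminars_attended"]
    return pd.DataFrame({
        # share attending at least one seminar; the study reports roughly
        # 93% (low), 88% (medium), and 64% (high)
        "attended_any_pct": grouped.apply(lambda s: (s >= 1).mean() * 100),
        # share attending all five seminars; roughly 43% (low), 28% (medium), 12% (high)
        "attended_all_five_pct": grouped.apply(lambda s: (s == 5).mean() * 100),
    }).round(1)
```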

E. Greatest Fall-Semester Gains for the Students with the Lowest Scores on the WSI and the Most Involvement with the Writing Seminar Program

Engagement as measured by seminar and office hour attendance is linked for all students with performance gains between the WSI and PT and proficiency on the PT.

First, we examined the individual student score changes between the WSI and the PT, which we descriptively summarized in Figure 4. Our dependent variable is the change in each student’s score, which ranges between -8 (i.e., the student scored 8 points lower on the PT than on the WSI) and 21 (i.e., the student scored 21 points higher on the PT than on the WSI). The average score change is 5 points with a standard deviation of 4.1. To explain this variation in score change, we consider several factors. Consistent with our contention that engagement in our program is beneficial, we include the number of writing seminars that the student attended. This variable ranges between 0 and 5 with a mean of 2.9 and a standard deviation of 1.8. In a similar vein, we also include the number of times that a student visited the writing specialist for office hours. This variable ranges between 0 and 7[73] with a mean of 0.28 and a standard deviation of 0.88.[74]

Figure 6 breaks out the improvements suggested above more precisely by low-, medium-, and high-scoring students and attendance at writing seminars, indicating that, as hoped, the program helps all students, but benefits most those who most need to improve their writing skills.

Figure 6: Average Change in Score Between WSI and PT by WSI Score Category and Attendance at Writing Seminars

Students in the low category of WSI scores, who comprise one-fourth of the students in our data, typically achieve score gains of about nine points between the WSI and the PT, with average WSI and PT scores of 16.6 and 25.3, respectively. Comparing average scores for this group of students shows a relative increase of approximately 52%. Fully 71% of the students in this category achieve proficiency on the PT. Students who attend at least one writing seminar can expect to score over two points better than students who never attend a seminar; however, even students who start with a low WSI score and do not attend any seminars can expect to improve their scores by roughly 6.5 points. The improvement for students who did not attend writing seminars may be due to the integration between the legal writing classes and the writing seminar program. As discussed above, students received a separate writing grade and a writing checklist for major assignments that not only identified errors, but keyed those errors to the sections of the textbook where they are discussed.

The 51% of students whose WSI scores placed them in the medium category also show gains between the WSI and the PT, although not as substantial as those of students in the low category. These students have average WSI and PT scores of 22.5 and 27.5, respectively. The median score change for this group is five points, a relative score increase of approximately 22%. Again, students who attend at least one writing seminar show an additional gain of about two points. Ninety percent of students in this middle group achieve proficiency on the PT.

Finally, students with the highest scores on the WSI show the lowest gains on the PT. A typical score increase for a student with an initial WSI score of 26 or greater is just two points; those who do not attend any writing seminars improve by less than half a point. The relative score change for these students is just 5%. As discussed in the context of Figure 4, some students who have high scores on the WSI actually achieve lower scores on the PT. However, for those who do improve, it is worth remembering that these students’ WSI scores are already very high, which leaves comparatively little room for improvement given that the maximum score on the PT is 32. Twelve percent of students in the high WSI category end up achieving this perfect score.

In addition to the lecture-and-workshop format writing seminars, the writing specialist offered, during the study period, twenty to thirty office hours per week, available to students by appointment and on a drop-in basis. Office hours could focus on preparing for assessments, but more often students chose to focus on their draft papers for their legal writing courses, seeking editing support and technical advice on revision. Attendance at office hours is associated with increased scores, although the difference is less dramatic than that associated with attending at least one writing seminar. This is indicated by Figure 7.

Figure 7: Average Change in Score Between WSI and PT by WSI Score Category and Attendance at Office Hours

Unlike writing seminars, office hours are limited. Further, students identified as greatly in need of help could be referred to the writing specialist by legal writing professors. Such students, if they wished, could sign up for weekly office hours, further reducing the availability of office hours for other students. Thus, as expected, the students with the lowest WSI scores show the greatest gains, 1.5 points, for office hour attendance.

The overall impact of engagement with the writing seminar program is indicated in Figure 8. This figure indicates the escalating improvements experienced by students as they engage more with the program.

Figure 8: Average Change in Score Between WSI and PT by Student Engagement with Program and WSI Score Category

Here, “None” denotes a student who neither came to any writing seminars nor attended office hours. “L”[ow] denotes a student who came to 1–2 writing seminars but attended no office hours. “M”[edium] denotes a student who attended 3–5 writing seminars but never attended office hours. “H”[igh] denotes a student who attended 3–5 writing seminars and at least one office hour. We use the value of 3 for writing seminars as it corresponds to the sample median. In terms of office hour attendance, approximately 15% of students attended at least one office hour session.
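Because the figure relies on these joint definitions, a small classification sketch may help. The function below restates the rules just given; the "Other" label is our placeholder for combinations the text does not define.

```python
def engagement_level(seminars, office_hours):
    """Assign the engagement label used in Figure 8.

    Mirrors the definitions in the text: 'None' = no seminars and no office hours;
    'Low' = 1-2 seminars and no office hours; 'Medium' = 3-5 seminars and no office
    hours; 'High' = 3-5 seminars plus at least one office hour. Combinations the
    text does not define (e.g., office hours with 0-2 seminars) are labeled
    'Other' here as a placeholder.
    """
    if seminars == 0 and office_hours == 0:
        return "None"
    if 1 <= seminars <= 2 and office_hours == 0:
        return "Low"
    if seminars >= 3 and office_hours == 0:
        return "Medium"
    if seminars >= 3 and office_hours >= 1:
        return "High"
    return "Other"
```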

All students can anticipate some improvement, which we attribute in large part to participation in the legal writing course, in which they received grades on grammar and punctuation, writing checklists indicating their errors and keyed to the textbook, and varying levels of explicit, in-class instruction depending on the professor. The greatest gains here and throughout are just where we would like to see them: the students with the lowest entering WSI scores. For all students, a low level of engagement resulted in very little additional improvement—less than one point. Much greater gains were experienced by students as they engaged at a medium, and even more, at a high level.

These score gains translate into higher rates of achieving not just improvement, but proficiency, on the PT as demonstrated by Figure 9.

Figure 9: Proficiency Rates by WSI Score Category and Student Engagement

As expected, students who scored high on the WSI to begin with, indicating the highest level of skill at entry to law school, show the smallest gains but the highest rates of proficiency. All of these students scored at least a 26 on the WSI. Proficiency on the PT is set at 24, although the PT is in legal English, not the vernacular. Students in the high group can virtually guarantee proficiency through high engagement: One hundred percent of those students over five years were proficient. Students in the medium group could also virtually assure proficiency through high levels of engagement: 97% of those students were proficient by the end of the fall semester. Not surprisingly, students who demonstrated poor skills on the WSI (i.e., the low group) fared worst in achieving proficiency. These students scored between 9 and 19 on the WSI. Thus, even without the transfer to legal English for the PT, they began the fall semester at a significant disadvantage. However, after high engagement with the program, 77% of these students attained proficiency in their first semester of law school.

The above figures show that score gains between the WSI and PT are not random, but rather appear to be driven by both a student’s baseline level of knowledge (as measured by the WSI) and that student’s level of engagement with the program. Although these results are certainly encouraging, the analysis is descriptive in nature and therefore limited in what it can tell us about how the interplay of these factors affects a student’s score change. Accordingly, to provide a more systematic account, we turn to multivariate analysis and, in particular, multiple regression analysis, which allows us to examine the impact of particular factors while statistically controlling for other potential confounding variables.

Our initial dependent variable is the change between each student’s WSI and PT score. As we note above, there are meaningful differences in score gains across the student groups distinguished by low, medium, and high scores on the WSI and by engagement. To account for these differences, we include a dichotomous indicator for each of the three WSI categories. Substantively speaking, this approach allows us to ask whether, after controlling for a student’s initial WSI score group, the remaining variation in score change can be explained as a function of attendance at writing seminars and office hours.[75]
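To make this specification concrete, the sketch below shows one way such a model could be set up in Python. The DataFrame df, its column names, and the category cutoffs (low at 19 or below, high at 26 or above, mirroring the ranges described above) are illustrative assumptions on our part, not the study’s actual code or data.

```python
# Illustrative sketch only: column names and cutoffs are assumptions, not the study's data.
import pandas as pd
import statsmodels.api as sm

def fit_score_change_model(df: pd.DataFrame):
    """OLS of the WSI-to-PT score change on category indicators and attendance counts."""
    df = df.copy()
    df["score_change"] = df["pt_score"] - df["wsi_score"]

    # Dichotomous indicators for the three WSI score categories
    # (cutoffs mirror the ranges described in the text).
    df["low"] = (df["wsi_score"] <= 19).astype(int)
    df["high"] = (df["wsi_score"] >= 26).astype(int)
    df["medium"] = ((df["wsi_score"] > 19) & (df["wsi_score"] < 26)).astype(int)

    # The three indicators take the place of a single constant, so each
    # category gets its own baseline gain, as in Table 1 below.
    X = df[["high", "medium", "low", "office_hours", "writing_seminars"]]
    y = df["score_change"]
    return sm.OLS(y, X).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

# model = fit_score_change_model(df)
# print(model.summary())
```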

Given the nature of our dependent variable, we estimate a linear regression model. Parameter estimates for this model are reported in Table 1.[76] The results suggest the model fits the data well, explaining about 44% of the variation in score changes with the five independent variables we include. The model results indicate a positive and statistically significant relationship between each of our measures of student engagement and the expected score gain for students in all three groups. That is, as the number of office hours a student attends goes up, so too does her expected score gain between the WSI and the PT. Similarly, higher attendance at writing seminars also yields a higher expected score change.

Table 1: Parameter Estimates for Explaining Variation in WSI-PT Score Difference

Variable                     Coefficient (Robust Standard Error)
WSI Score Category:
  High                       0.387* (0.150)
  Medium                     3.176* (0.178)
  Low                        6.368* (0.285)
Student Attendance at:
  Office Hours               0.230* (0.114)
  Writing Seminars           0.577* (0.048)
R-Squared                    0.444
Root MSE                     3.066
Observations                 1476

Note: Parameter estimates are ordinary least squares.

* denotes p < 0.05, two-tailed test.

The model’s output allows one to estimate the score change for a hypothetical student conditional on the student’s WSI Score Category and her attendance at both office hours and writing seminars. Generating such estimates requires three basic steps. First, we must decide if a student is in the high, medium, or low WSI Score Category and select the appropriate coefficient for that category. If, for example, we were interested in estimating a score gain for a low student, then we would select 6.368, which is the coefficient for that category. Second, we need to specify how many office hours and writing seminars a student attended. Third, we substitute these values into the linear equation to calculate an estimated score change. For a student in the low WSI Score Category, that equation would be: Gain = 6.368 + 0.230 x Office Hours + 0.577 x Writing Seminars, where “Office Hours” and “Writing Seminars” refer, respectively, to the values we selected in step two. Thus, if we picked the modal values for these two measures (0 office hours, 5 writing seminars), then the resulting equation would look like: Gain = 6.368 + 0.230 x 0 + 0.577 x 5, which simplifies to 9.253. That is, for a student in the low WSI Score Category who attends all five writing seminars and no office hours, our best guess is for those efforts to gain her about 9.3 more points on the PT than she scored on the WSI. We can then tweak aspects of this counterfactual to see what would happen if she attended no seminars instead of five. After adjusting the previous equation, we would find that this revised student would be expected to observe a 6.4 point increase in her score, which represents a 31% relative decrease from the gain if all five seminars were attended. These increases are consistent with what we observed in the descriptive analysis presented earlier.
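The same arithmetic can be scripted directly from the Table 1 coefficients. The short sketch below simply reproduces the worked example above; the dictionary and function are our own conveniences, not part of the study.

```python
# Coefficients copied from Table 1; the helper function is illustrative only.
TABLE_1 = {
    "high": 0.387, "medium": 3.176, "low": 6.368,
    "office_hours": 0.230, "writing_seminars": 0.577,
}

def predicted_gain(category, office_hours, writing_seminars):
    """Estimated WSI-to-PT score gain for a hypothetical student."""
    return (TABLE_1[category]
            + TABLE_1["office_hours"] * office_hours
            + TABLE_1["writing_seminars"] * writing_seminars)

print(round(predicted_gain("low", 0, 5), 3))  # 9.253, the worked example above
print(round(predicted_gain("low", 0, 0), 3))  # 6.368, the same student with no seminars
```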

More generally, we can also ignore a student’s initial WSI score and make the general statement that every additional writing seminar attended will translate into roughly 0.6 additional points gained on the PT. Thus, going from zero to five seminars will net a student roughly 3 points on the PT. Similarly, each additional office hour attended will yield roughly 0.2 additional points on the PT. A student who attended zero office hours would be expected to gain about one point less than a student with a very high level of office hour attendance (i.e., five sessions, which is roughly the 99th percentile of the sample data).

It is encouraging that engagement in our program yields statistically and substantively meaningful gains in student performance. However, these results do not speak to the interaction between student engagement in the program and student ability to move from a lower score on the WSI to a proficient score of 24 or above on the PT. To evaluate the effect of program engagement on proficiency, we conducted a second statistical analysis. Here our dependent variable is simply whether a student obtained a proficient score on the PT (i.e., scored 24 or higher), which we code as 1 if the student did and 0 if the student did not. Our independent variables remain unchanged from the previous analysis—i.e., attendance at office hours and writing seminars. Our dependent variable is dichotomous, so we estimate separate logistic regression models for each of the three values of WSI Score Category.[77] Parameter estimates are presented in Table 2.[78]
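Before turning to those estimates, a minimal sketch of how one of these per-category models could be fit appears below. It reuses the hypothetical DataFrame and column names from the earlier sketch and illustrates the approach described in the text; it is not the study’s actual code.

```python
# Illustrative sketch only; assumes the hypothetical DataFrame from the earlier example.
import statsmodels.api as sm

def fit_proficiency_model(df, category):
    """Logit of PT proficiency (a score of 24 or higher) for one WSI score category."""
    sub = df[df[category] == 1].copy()
    sub["proficient"] = (sub["pt_score"] >= 24).astype(int)
    X = sm.add_constant(sub[["office_hours", "writing_seminars"]])
    return sm.Logit(sub["proficient"], X).fit(disp=0)

# models = {c: fit_proficiency_model(df, c) for c in ("high", "medium", "low")}
```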

Table 2: Parameter Estimates for Explaining Proficiency by WSI Score Category

                         WSI Score Category
Variable             High       Medium     Low
Office Hours         -0.619     0.124      0.060
                     (0.689)    (0.389)    (0.088)
Writing Seminars     0.416      0.160*     0.167*
                     (0.280)    (0.073)    (0.074)
Constant             3.613*     1.767*     0.233
                     (0.461)    (0.224)    (0.282)
Observations         355        749        371

Note: Within each cell coefficients appear above standard errors, which are in parentheses. Parameter estimates are maximum likelihood.

* denotes p < 0.05 (two-tailed test).

The results from these models reveal two interesting findings. First, we fail to find a statistically significant effect for attendance at office hours for any of the three WSI Score Categories. This runs contrary to our expectations and to the results from our model of score change, where that variable was statistically significant. Second, although the coefficient on writing seminar attendance is positive for all three WSI categories, we recover a statistically significant effect only for students in the medium and low WSI groupings. In other words, attendance at writing seminars is positively related to whether a student achieves proficiency if his or her initial WSI score was medium or low, but such attendance has no systematic effect on proficiency for students who scored highly on the WSI.

Unlike the previous model, which was a basic linear equation, a logistic regression model is non-linear, and the results are somewhat more difficult to interpret. To aid in that endeavor, we calculated a series of predicted probabilities, portrayed graphically in Figure 10, which shows the substantive impact of writing seminar attendance for all three values of WSI Score Category. The x-axis shows the number of seminars a hypothetical student attended, which ranges between 0 and 5. The y-axis shows the likelihood that a student achieved proficiency on the PT. The three lines within the plot indicate the likelihood for students in each of the WSI score categories: high, medium, and low.

Figure 10: Connected Line Graph of the Effect of Writing Seminar Attendance on a Student’s Likelihood of Passing the Proficiency Test

Starting with a student in the high category, we see that she is, in general, very likely to achieve proficiency on her initial attempt. Indeed, for such a student we estimate between a 97 and 99% chance that she will be proficient, an estimate that is consistent with the actual values we observe in our data (see, e.g., the left side of Figure 10). We also observe only the weakest of increases in the likelihood of proficiency as the number of writing seminars attended by a student with a high WSI score increases. This corresponds to the result from the underlying model, which indicates that seminar attendance has no statistically significant impact on a high-scoring student’s likelihood of proficiency.

Moving to a student with a medium WSI score (i.e., the middle line in the figure), we find that this student also has a relatively high baseline likelihood of being proficient, which is to say a relatively high likelihood of passing even without attendance at the seminars. For such a student, however, unlike her high-scoring counterpart, the effect of seminar attendance is statistically significant. In substantive terms, we estimate that attending 5 seminars as opposed to 0 seminars would increase the likelihood of proficiency from 85% to 93%—a relative gain of about 9%.

Finally, we turn to a student with a low WSI score. For this individual, attendance at seminars has the most positive impact, as the slope of the lowest line makes clear. Indeed, if a student with a low score on the WSI attends no writing seminars, then we estimate that this individual has just over a coin flip’s chance of being proficient on the PT (about 56%). However, a similar low-scoring student who attends all five writing seminars sees the likelihood of proficiency jump to 74%—a relative change of 32%. This impact indicates that the writing seminar program is helping precisely those students at whom it is targeted: students who lack skills in grammar and punctuation at the beginning of fall semester.
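The predicted probabilities just discussed follow directly from the Table 2 coefficients under the standard logistic formula. The short check below holds office hours at zero (an assumption on our part that is consistent with the values reported above) and reproduces the figures for the low and medium groups.

```python
import math

def prob_proficient(constant, b_seminars, seminars):
    """Logistic probability of proficiency, holding office hours at zero."""
    return 1.0 / (1.0 + math.exp(-(constant + b_seminars * seminars)))

print(round(prob_proficient(0.233, 0.167, 0), 2))  # low category, 0 seminars    -> 0.56
print(round(prob_proficient(0.233, 0.167, 5), 2))  # low category, 5 seminars    -> 0.74
print(round(prob_proficient(1.767, 0.160, 0), 2))  # medium category, 0 seminars -> 0.85
print(round(prob_proficient(1.767, 0.160, 5), 2))  # medium category, 5 seminars -> 0.93
```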

VI. Results: Correlations Between Performance on the Assessments, Seminar Attendance, and Other Measures of Success in Law School

The primary goal of our program is to ensure that all students achieve a baseline level of proficiency in writing. We know that they do, and these results confirm that their ability to do so is, in large part, due to the various components our program implements. More generally, however, we were curious about the predictive value of the WSI. There are a variety of quantitative indicators used to predict or measure a student’s success during law school. Of these, undergraduate GPA and LSAT are used, of course, to make admissions decisions and to identify students who may need assistance. Additional predictors, like first-year GPA, which can be used for intervention with at-risk students, come too late.[79] As educators, we are interested in being able to identify matriculated students early in their law school careers who are potentially at risk of performing poorly in their first year and beyond.

Thus, we conducted two exploratory analyses. First, we sought to examine the ability of performance on the WSI to predict a student’s final law school grade point average (GPA). To do so, we simply combined our WSI scores with law school GPA data obtained from the registrar. We then regressed a student’s final GPA on a series of independent variables using linear regression. The results of these models are reported below in Table 3. Model 1 reports a bivariate regression between a student’s WSI score—which, recall, is administered in the fall of her first year of law school—and that student’s final law school GPA. We find a positive and statistically significant relationship. In Model 2, we add the additional control of a student’s final PT score, finding that both of these measures are positively correlated with a student’s final GPA. Models 3 and 4 up the ante, as we now control for a student’s GPA at the end of his or her first year of law school. As one might expect, there is a very strong relationship between first-year and final GPA (the bivariate correlation is 0.91). Nonetheless, we still find a small but positive and statistically significant relationship between a student’s WSI score and her final GPA, even after controlling for her GPA at the end of the first year of law school.

Table 3: Parameter Estimates for Models Explaining Variation in a Student’s Final GPA

                        Dependent Variable: Student’s Final GPA
Independent Variable    (1)        (2)        (3)        (4)
WSI Score               0.025*     0.014*     0.004*     0.005*
                        (0.003)    (0.004)    (0.002)    (0.001)
PT Score                --         0.029*     0.003      --
                                   (0.005)    (0.002)
First Year GPA          --         --         0.681*     0.685*
                                               (0.012)    (0.011)
Constant                2.651*     2.102*     0.977*     1.032*
                        (0.077)    (0.119)    (0.055)    (0.043)
Observations            723        723        723        723
R-Squared               0.07       0.11       0.84       0.84

Note: Cells contain ordinary least squares coefficient estimates and standard errors for those parameter estimates (in parentheses).

* denotes p < 0.05 (two-tailed test).
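As a worked reading of Model 1, the bivariate coefficients in Table 3 imply the following back-of-the-envelope predictions of final GPA from the WSI score alone. The calculation is illustrative and ignores the additional controls in Models 2 through 4.

```python
def predicted_final_gpa(wsi_score):
    """Model 1 of Table 3: final GPA predicted from the WSI score alone."""
    return 2.651 + 0.025 * wsi_score

print(round(predicted_final_gpa(20), 3))  # 3.151
print(round(predicted_final_gpa(30), 3))  # 3.401
```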

The second predictive relationship we considered was first-attempt bar passage. These data also come from the registrar and were obtained for 645 students. Our dependent variable is whether a student passed the bar on that student’s first attempt (0 = no; 1 = yes); 90% of the students in our data were coded as “1.” Our single independent variable is a student’s WSI score, which ranges between 9 and 32. As our dependent variable is dichotomous, we estimate a logistic regression model and recover a statistically significant relationship between bar passage and a student’s WSI score. Given the non-linear nature of our model, we present our results graphically in Figure 11 below.[80] The x-axis shows a student’s WSI score. The y-axis shows the likelihood that a student passed the bar on the first try. As the plot reveals, there is a positive relationship between WSI score and the likelihood of passing the bar. We estimate, for example, that a student with the sample minimum score of 9 on the WSI has approximately a 78% chance of passing the bar. By contrast, a student who achieves a perfect score on the WSI has roughly a 95% chance—a relative increase of approximately 22%. Or, stated differently, a student is over four times more likely to fail the bar on her first attempt when she has a low WSI score than when she has a high score.
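The “over four times” comparison follows from the predicted pass probabilities just reported; a one-line check (the probabilities are the estimates above, not new data):

```python
# Implied first-attempt failure probabilities: 1 - 0.78 at the lowest WSI score,
# 1 - 0.95 at a perfect score, per the estimates reported in the text.
low_fail, high_fail = 1 - 0.78, 1 - 0.95
print(round(low_fail / high_fail, 1))  # 4.4, i.e., over four times as likely to fail
```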

Figure 11: Line Graph of the Effect of a Student’s WSI Score on the Likelihood of Bar Passage

Interestingly, when WSI score and writing seminar attendance are both included in this bar passage model, both variables are statistically significant and positive.[81] Thus, just as WSI score predicts bar passage, as shown in Figure 11, whether a student attends writing seminars similarly predicts bar passage. When WSI score is held constant, students’ chances of passing the bar on the first attempt increase from 85% if they attended no writing seminars to roughly 95% if they attended all five seminars, as shown in Figure 12. In other words, regardless of whether a student started with a low WSI score or a high WSI score, that student’s chance of passing the bar on the first attempt increased the more often that student attended the voluntary seminars. This finding does not suggest that the seminars are in some way teaching bar content or even teaching strategies students might use on the bar. Instead, it is entirely possible that the mere fact of seeking help is an attribute that is, in some way, correlated with success on the bar.

Figure 12: Connected Line Graph of the Effect of a Student’s Writing Seminar Attendance on the Likelihood of Bar Passage, Controlling for WSI Score

We present these data not so that such students can be ranked earlier in law school or eliminated altogether, but so that, as with the WSI itself and the score sheet and recommendations we return to students in our writing seminar program, students can be alerted to the significance of these data and given options to enhance the possibility that they will achieve their goals: in this case, passing the bar and becoming attorneys. Our experience with the writing seminar program suggests offering resources that students can embrace as contributing to professional identity and success while they maintain autonomy and control over the timing of and their choices about engagement. This arrangement empowers students to take their own steps to ensure their own success. The data seem to suggest that, regardless of entering skills with grammar and punctuation, taking these steps of their own accord has a positive effect on bar passage three years later.

VII. Conclusions and Implications

The first and most obvious conclusion from our study is that law faculty and administrators are correct in thinking that many students do not enter law school with the fundamental writing and editing skills necessary in law school and even more necessary in practice. The response to this situation may vary depending on the philosophy of the law school, but we believe that schools have an obligation to equip matriculated students with the tools to succeed, particularly where the gaps in students’ knowledge pre-date law school and may well be due to simple lack of instruction. These gaps are known to undermine success, or even employment, in the legal profession. Explicit instruction in such material, here fundamental writing mechanics, shrinks student differences deriving from pre-existing social capital and focuses law school evaluation on the proper data: students’ legal abilities.

Within the context of a commitment to reducing student disparity and cultivating necessary skills in future legal professionals, our data strongly support the effectiveness of a proficiency model, rather than a ranking model, as the basis for instruction in these skills. A proficiency model is based on the high expectation, for both teachers and students, that all students will rise to a proficient level before instruction ends. While we recognize that a proficiency model demands much of professors and of students and may not be appropriate for every course, we encourage its use in the context of clearly identified fundamental skills necessary to all who enter the legal profession. We believe it is particularly appropriate in the context of the first year of law school, during which students acculturate at different speeds to the new world of the law.

Further, our engagement and efficacy data answer one objection to the proficiency approach, particularly when engagement is at least to some degree voluntary rather than driven by grades or attendance requirements: the concern that, in the competitive first year, students will not engage with a course run on a proficiency basis because such a course may not contribute to a student’s GPA or class rank. Students are informed from the beginning of the program that they will not receive their second-semester writing grade until they pass the PT. However, failure to pass or even to take the test has no impact on their grades. Yet only a handful of students fail to take the test the first time it is offered in the fall semester. Further, attendance data for the writing seminars and demand for office hours indicate that fears that classes offered on a proficiency basis will be neglected are misplaced. Honoring student autonomy while engaging students’ professional aspirations through instruction presented, and perceived, as fundamental to their future role as attorneys can be highly motivating.

This instructional paradigm also promotes other long-term goals of legal education. It prompts help seeking in the students most in need of instruction. In all students, it promotes intrinsic motivation for learning and enhances self-regulation, the development of which is critical for long-term success in the legal profession. The approach reduces student passivity and stress, leading to a more positive experience not only for students, but also for the professor, who is able to work with engaged and positive students. Finally, as indicated by Figure 12, student engagement in a voluntary program is positively correlated with first-time bar passage. Thus, creating an environment in which students are able, and more likely, to engage with voluntary resources can yield positive effects. Additionally, this finding suggests that law schools would be well served to pay attention to, and perhaps even track, how and when students access available resources.

Finally, implicit in our approach is an argument that legal educators should, whenever possible, use formal data collection to guide program design, instruction, and assessment. We hope our study models a way to assess the effectiveness of pedagogical models and instruction. Our early decision to incorporate data collection, and to think about what type of data to collect, shaped our program and instruction. The resulting information will allow professors and institutions to assess which courses and strategies are promoting the desired student outcomes most effectively—and to change those that are not effective. Quantitative data have an important role to play in this endeavor; they move the discussion beyond subjective interpretations of students or instructors. Using data eschews anecdotal evidence in favor of objective measurements of the success or failure of both sub-populations of students and students as a whole. Thus, we are able to base future decisions on the relationship between those outcomes and the design and delivery of instructional programs. This shift in emphasis has the potential to improve the experience of all students, but most importantly it has the potential to improve the experience of those students most dependent on law school instruction for their success.


  1. For preparing students for practice with a focus on their future clients, see Roy Stuckey et al., Best Practices for Legal Education: A Vision and a Roadmap 16–18, 39–40, 62 (2007).

  2. Of course, the problem is not limited to attorneys: “Managers are fighting an epidemic of grammar gaffes in the workplace, the Wall Street Journal reported . . . [s]ome bosses and coworkers step in to correct mistakes. Some offices provide business-grammar guides to employees. And almost half of employers are adding language-skills lessons to employee-training programs.” Bryan A. Garner, The Year 2012 in Language & Writing: June, LawProse, http://www.lawprose.org/the-year-2012-in-language-writing/ (Dec. 29, 2012, 12:21 a.m.) (discussing articles in the Wall Street Journal and New Times, in June of 2012). In other industries, grammar skills are highly valued by employers. Even a sheet-metal-manufacturing business owner noted, “‘My operators are in constant contact with our customers, so they need to be able to articulate through e-mail. But you’d be surprised at how many people can’t do that. I can’t have them e-mailing Boeing or Pfizer if their grammar is terrible.’” Id. (quoting a business owner).

  3. The literature is voluminous. Writing is at the core of the legal profession. See Kathleen Elliot Vinson, Improving Legal Writing: A Life-Long Learning Process and Continuing Professional Challenge, 21 Touro L. Rev. 507 (2005). For anecdotal evidence that “the hiring committee never sees applications when the cover letter contains grammatical errors,” see Susan McClellan & Constance Krontz, Improving Legal Writing Courses: Perspectives from the Bar and Bench, 8 Legal Writing 201, 214 (2002), and Matthew Arnold, The Lack of Basic Writing Skills and Its Impact on the Legal Profession, 24 Cap. U. L. Rev. 227, 237–40 (1995). Less than one-third of judges think lawyers’ writing mechanics are excellent or very good; they rank grammar, style, and tone immediately after analysis and organization. E.g., Judith D. Fischer, Bareheaded and Barefaced Counsel: Courts React to Unprofessionalism in Lawyers’ Papers, 31 Suffolk U. L. Rev. 1 (1997); Susan Hanley Kosse & David T. ButleRitchie, How Judges, Practitioners, and Legal Writing Teachers Assess the Writing Skills of New Law Graduates: A Comparative Study, 53 J. Legal Educ. 80, 85–86 (2003) (nearly 94% of the attorneys, judges, and legal writing professors surveyed “found briefs and memoranda marred by basic writing problems.” Thirty-eight percent mentioned grammar, spelling, or punctuation errors as common); Kristen K. Robbins, The Inside Scoop: What Federal Judges Really Think About the Way Lawyers Write, 8 Legal Writing 257, 275–76, 276 fig. 16 (2002). One solution is skills testing. See Joseph Kimble, The Best Test of a New Lawyer’s Writing, 80 Mich. B.J., July 2001, at 62.

  4. For a survey of bad briefs and their impact on individual plaintiffs and employment law in general, with reasons and solutions, see Scott A. Moss, Bad Briefs, Bad Law, Bad Markets: Documenting the Poor Quality of Plaintiffs’ Briefs, Its Impact on the Law, and the Market Failure It Reflects, 63 Emory L.J. 59, 82 (2013) (commenting in his section on grammar, “Some briefs are so incoherent or ungrammatical it is hard to believe the author is even a college graduate”). The problem is not new; see Fischer, supra note 3, at 20–30 nn.165–253, for problems with writing mechanics and reactions. In recent years, many highly publicized admonitions of attorneys have circulated through the legal blogosphere, including cases dismissed for failure to proofread, Sanches v. Carrollton-Farmers Branch Indep. Sch. Dist., 647 F.3d 156 (5th Cir. 2011); judges questioning a lawyer’s fitness to practice, Stanard v. Nygren, 658 F.3d 792, 801–02 (7th Cir. 2011); fines from the bench for the increased work caused by poor editing and a referral to the office of lawyer regulation, id. at 802; In re Disciplinary Proceedings Against Hudec, 848 N.W.2d 287 (Wis. 2014); and even an order from a judge that the attorney could only charge half his hourly rate for work that was “careless to the point of being disrespectful,” Devore v. City of Philadelphia, No. Civ. A. 00-3598, 2004 WL 414085, at *1 (E.D. Pa. Feb. 20, 2004).

  5. See Aïda M. Alaka, The Grammar Wars Come to Law School, 59 J. Legal Educ. 343, 352–55 (2010) (calling for explicit instruction and attention to grammar and punctuation); see also Neil J. Dilloff, Law School Training: Bridging the Gap Between Legal Education and the Practice of Law, 24 Stan. L. & Pol’y Rev. 425, 431–33 (2013).

  6. This program was researched and proposed by the then co-directors of the Michigan State University College of Law legal writing program, Professors Deanne Lawrence, Nancy Costello, and Daphne O’Regan, and authorized by the MSU Law faculty. We use Professor Enquist and Professor Oates’s writing text as the textbook for the writing seminar program. Anne Enquist & Laurel Currie Oates, Just Writing: Grammar, Punctuation, and Style for the Legal Writer (4th ed. 2013). We have used this book since the second edition, published in 2005. The content of the program was developed by two of the Authors of this Article, Daphne O’Regan, Co-Director of the legal writing program, and Jeremy Francis, Writing Specialist.

  7. Alaka, supra note 5, at 351. She also points out that “[s]everal factors underlie the reluctance to focus on basic skills, including the primary need to teach first-year students legal analysis and reasoning skills. Addressing writing errors amplifies the instructors’ already daunting task of grading student memos and briefs and takes precious time away from their scholarship.” Id. at 351–52 (footnotes omitted). She further explains that “[f]or those who do not teach legal writing courses, the incentives to tackle basic skills are understandably slight.” Id. at 352 (footnotes omitted).

  8. Grammar and punctuation are considered “ancillary,” not central, in a recent article about legal writing pedagogy and best relegated to instruction outside of the legal writing classroom. Miriam E. Felsenburg & Laura P. Graham, Beginning Legal Writers in Their Own Words: Why the First Weeks of Legal Writing Are So Tough and What We Can Do About It, 16 Legal Writing 223, 283 n.141 (2010). Survey results indicating that students view the “mechanical” skills of grammar as important as the substantive skill of analysis are taken as evidence that students are misguided in their failure to understand the classical distinction between form and content. Id. at 261–62. Another way of conceptualizing this divide is in terms of the well-known theory/practice dichotomy that continues to exercise law faculty. See Judith W. Wegner, Reframing Legal Education’s “Wicked Problems”, 61 Rutgers L. Rev. 867, 969–72 (2009) (departing from Aristotle’s original categories). For a similar conception of the guiding role of deeply traditional, and ultimately destructive, dichotomies, see Peggy Cooper Davis et al., Making Law Students Healthy, Skillful, and Wise, 56 N.Y.L. Sch. L. Rev. 487, 492 (2012) (“Our tendency to dichotomize academic and practical skill is a reflection of our culture and socialization. Gender, race, and class-based stereotypes can lead us to distinguish academic and practical work and to prioritize the academic. That is, we may be primed to think that some skills are higher-order and more complex, and to think that other skills are lower-order and more easily learned.” (footnotes omitted)). See also Stewart Harris, Giving up Grammar and Dumping Derrida: How to Make Legal Writing a Respected Part of the Law School Curriculum, 33 Cap. U. L. Rev. 291, 299 (2004); David S. Romantz, The Truth About Cats and Dogs: Legal Writing Courses and the Law School Curriculum, 52 U. Kan. L. Rev. 105 (2003).

  9. Law School Admissions Council, About the LSAT, LSAC.org, http://www.lsac.org/jd/lsat/about-the-lsat/ (last visited July 15, 2016).

  10. SimuGator, The LSAT Writing Sample Matters: Advice from a Law School Admissions Insider, Simugator, http://www.simugator.com/blog/2010/03/20/lsat-writing-sample-matters/ (last visited Aug. 15, 2016).

  11. Alaka points out that the exam structure of some courses may reinforce this message:

    But as it stands now in many institutions, the message for students, regardless of what one expressly says, is that grammar [and] punctuation . . . are important in only one sphere—the legal writing skills courses. Instead of encouraging students to transfer the writing skills they are learning, students are subtly encouraged to leave those concerns behind them. It is not surprising that students appear not to have studied “legal writing” at all once they finish their first year.

    Too often, essay exams in doctrinal courses reinforce the misperception that it is “the thought that counts.”

    Alaka, supra note 5, at 354. She encourages all professors to grade “language and presentation. Doing so would also mimic the ‘real world’ of practice, clerkships, academia, and other settings, where written legal analysis is judged in the context of its presentation.” Id. at 355.

  12. See Lillian B. Hardwick, Classical Persuasion Through Grammar and Punctuation, 3 J. ALWD 75 (2006) (describing the connection between credibility and writing mechanics as described in legal writing textbooks and beyond).

  13. For the traditional, and inherently contradictory, conceptualizations of the techniques of rhetoric and personal virtue as necessary and even exclusive grounds for credibility, see Daphne O’Regan, Eying the Body to Find Truth: How Classical Rhetoric’s Rules for Demeanor Distort and Sustain Our Legal Regime ___ Pace L. Rev. ___ (forthcoming). Although emphasis shifts between the poles of skill and self in the rhetorical tradition, depending on the purpose of the discussion, the traditional connection between personal worth or virtue of the speaker and credibility persists. For a contemporary example, see, e.g., Hardwick, supra note 12, at 78. Thus writing mechanics join the disreputable company of other strategies for enhancing credibility that are at once valueless and supremely important given a world of human ignorance and fallibility. In their overlap with class and their operation in argument, they resemble accent, a measure of credibility at once discredited and profoundly influential. See generally John A. Dixon et al., Accents of Guilt? Effects of Regional Accent, Race, and Crime Type on Attributions of Guilt, 21 J. Language & Soc. Psychol. 162 (2002). This remains true even if the impact of the accent is not to create prejudice, but simply to make the information more difficult to understand. See generally Shiri Lev-Ari & Boaz Keysar, Why Don’t We Believe Non-Native Speakers? The Influence of Accent on Credibility, 46 J. Experimental Soc. Psychol. 1093 (2010).

  14. Dean William Warren stated,

    I believe that, if young men [and women] entering law school possessed a good understanding of rhetoric and grammar, they would have the basic tools with which to go forward rapidly. No course in law school, whether you call it writing or that combination of words, legal writing, can make up for lack of early training.

    Proceedings of the Fifty-Second Annual Meeting of the American Association of Law Libraries, 52 Law Libr. J. 312, 366 (1959) (cited as representative of the traditionalist view of legal writing instruction in J. Christopher Rideout & Jill J. Ramsfield, Legal Writing: A Revised View, 69 Wash. L. Rev. 35, 41–42 n.20 (1994)).

  15. See H. P. Southerland, English as a Second Language—Or Why Lawyers Can’t Write, 18 St. Thomas L. Rev. 53, 65 (2005) (discussing the “deeply rooted” belief that students should already know this material and if they do not, it is “too late”); see also Pamela Edwards, Teaching Legal Writing as Women’s Work: Life on the Fringes of the Academy, 4 Cardozo Women’s L.J. 75, 83 (1997). There is extensive scholarship documenting “the traditional view that casts legal writing either as remedial or as unteachable.” Edwards, supra note 15, at 83; see also Rideout & Ramsfield, supra note 14, at 40 n.16. “It is likely that [law school] educators were at the top of their law school classes. This implies that their orientation to the discourse was so swift that they may be unaware of the steps in the process, a phenomenon of which the other 90 percent of the class was keenly aware.” Rideout & Ramsfield, supra note 14, at 40 (citing Philip C. Kissam, Law School Examinations, 42 Vand. L. Rev. 433 (1989)). To avoid implication in the negative pole of this paradigm, legal writing professors cabin grammar and punctuation as remedial and lower subjects that detract from the higher calling of legal writing as thought and analysis. Of course, this perpetuates the conceptual dichotomy already relegating the entire discipline to lower status. See generally Lisa Eichhorn, The Legal Writing Relay: Preparing Supervising Attorneys to Pick Up the Pedagogical Baton, 5 Legal Writing 143 (1999).

  16. Paradoxically, then, grammar and punctuation become like ethics. The necessity of teaching them implies a profound failure at the heart of students and institutions, a failure so deep and of such consequence that it reflects a conception of instruction as purely superficial, reworking the surface of reality set elsewhere at other times. The extent to which this dilutes institutional responsibility, perpetuates existing social structures, and shifts blame is clear.

  17. See Wegner, supra note 8, at 960 (citing David Wilkins et al., Urban Law School Graduates in Large Law Firms, 36 Sw. U. L. Rev. 433 (2008)); Robert Nelson, The AJD Project: The First National Longitudinal Study of Lawyer Careers, 36 Sw. U. L. Rev. 355 (2008) (discussing background of the After the J.D. project, key data, and conclusions); Joyce Sterling et al., The Changing Social Role of Urban Law Schools, 36 Sw. U. L. Rev. 389 (2007) (discussing patterns involving urban law schools); Ronit Dinovitzer et al., After the J.D.: First Results of a National Study of Legal Careers (2004), available at http://www.americanbarfoundation.org/uploads/cms/documents/ajd.pdf (providing overview of study and key findings).

  18. Researchers have studied this phenomenon among students at various levels, but we were unable to find such studies on law students. For a useful summary of research on achievement theory, attributional theory, and help avoidance for academic support services, but with wider applicability, see William Collins & Brian C. Sims, Help Seeking in Higher Education Academic Support Services, in Help Seeking in Academic Settings 203, 212–13 (Stuart A. Karabenick & Richard S. Newman eds., 2006); Ruth Butler, An Achievement Goal Perspective on Student Help Seeking and Teacher Help Giving in the Classroom: Theory, Research, and Educational Implications, in Help Seeking in Academic Settings: Goals, Groups, and Contexts 15, 24 (Stuart A. Karabenick & Richard S. Newman eds., 2006).

  19. See, e.g., Abigail Salisbury, Skills Without Stigma: Using the JURIST Method to Teach Legal Research and Writing, 59 J. Legal Educ. 173 (2009).

  20. The task of learning how to communicate within the legal profession has been equated to learning a new language. See, e.g., Jim Chen, Law as a Species of Language Acquisition, 73 Wash. U. L.Q. 1263 (1995). For information on how to talk to entering law students about the differences between past writing experiences and the requirements in law school, see Anne Enquist, Talking to Students About the Differences Between Undergraduate Writing and Legal Writing, 13 Persps. 104 (2005). Many professors in the MSU Law legal writing program distribute this article to students at the beginning of the fall semester.

  21. See Nancy Sommers & Laura Saltz, The Novice as Expert: Writing the Freshman Year, 56 Coll. Composition & Comm. 124, 134 (2004).

  22. Id.

  23. Thus, we position our class consciously within the powerful dynamic of the first year to pull students forward as they enter the new discourse community, particularly as they also learn a new way of reading, and a new “language.” See Wegner, supra note 8, at 891–923. She ties the law school project of learning to think like a lawyer to legal literacy, “‘socializing students in the way lawyers engage in discourse about problems,’” id. at 908 (quoting a “thoughtful professor”), creating “a sense of inclusion in a previously foreign professional community, a movement from novice to expert,” id. at 912, and assuming a role, id. at 913–19. Similarly, we draw on apprenticeship and socialization language to indicate to students that we are not remedying a deficiency, but creating professionals and endowing them with tools of the trade.

  24. See Rideout & Ramsfield, supra note 14, at 41–42. Among the traditional (and, they argue, fallacious) views of legal writing they cite, the first is that there are no differences between legal writing and other forms of writing. Id. Thus, according to the logic, legal writing courses must be remedial in nature. Id. at 42.

  25. For more information on the construction of writers’ identities, especially as they access academic discourses, see Roz Ivanič, Writing and Identity: The Discoursal Construction of Identity in Academic Writing (1998).

  26. See Gary Bellow & Randy Hertz, Clinical Studies in Law, in Looking at Law School: A Student Guide from the Society of American Law Teachers 343 (Stephen Gillers ed., 4th ed. 1997). The authors discuss the bifurcation in law study that results in divorcing practical experience from the education process and “leav[ing] it to the practicing bar to socialize new members of the bar upon graduation from law school.” Id. at 342. One remedy is clinical experiences in law school that bridge this bifurcation and begin the professional assimilation process for students. Thus, clinical instruction and experience explicitly invoke students’ future as professionals with clients and the demands of this situation. For an overview of clinical goals and methods, all of which are forward looking, see Carolyn Grose, Beyond Skills Training, Revisited: The Clinical Education Spiral, 19 Clinical L. Rev. 489, 490–501 (2013).

  27. Thus, we position the writing seminars in the project of entering the discourse community of the legal profession.

    [L]aw school offers an invitation into one of the richest and most complex of the professional discourses: a community that is demanding in its argumentative and analytical paradigms, challenging in its research and writing processes, and complicated by its social pressures. Such a complex discourse and its accompanying social contexts require strategies for discovering and mastering its conventions, for writing as a situated member of the legal community. The legal writing classroom should, appropriately, initiate students into these conventions and practices. And that process of initiation should continue through the three years.

    Rideout & Ramsfield, supra note 14, at 99; see also J. Christopher Rideout & Jill J. Ramsfield, Legal Writing: The View from Within, 61 Mercer L. Rev. 705 (2010) (discussing discoursal creation of the “self”).

  28. This assessment is formative because it is intended primarily to alert students themselves to what they need to learn to succeed. See Emily Zimmerman, An Interdisciplinary Framework for Understanding and Cultivating Law Student Enthusiasm, 58 DePaul L. Rev. 851, 892 (2009) (discussing William M. Sullivan et al., Educating Lawyers: Preparation for the Profession of Law 171 (2007) [hereinafter Carnegie Report]). Such initial formative assessments are often called “Diagnostic” tests. We avoided this term as the connotations did not harmonize with the goals and philosophy of our program. However, the initial assessment is used to alert teachers to the skill level of their class, and if a particularly low score is achieved, to alert professors early to students who may need extra help.

  29. Carnegie Report, supra note 28, at 85–86, 160–61, 173 (2007) (“The situational character of practical expertise strongly suggests that one essential goal of professional schools must be to form practitioners who are aware of what it takes to become competent in their chosen domain and to equip them with the reflective capacity and motivation to pursue genuine expertise. They must become ‘metacognitive’ about their own learning . . . To accomplish this, the over-all educational context must be a formative one that can encourage students.” Id. at 160–61.); see also Stuckey et al., supra note 1, at 66–67 (discussing the principle, “[g]raduates demonstrate self-reflection and lifelong learning skills”).

  30. See Wegner, supra note 8, at 885–87 (discussing the self-fulfilling role of assessments and sorting in legal education). “The fixed curve interferes with learning. It motivates students to work for grades rather than for comprehension or skill development.” Peggy Cooper Davis, Slay the Three-Headed Demon!, 43 Harv. C.R.-C.L. L. Rev. 619, 622 (2008). Thus, the curve interferes with a mastery mindset and undermines the values of independence and substantive goals needed in practice. For “fixed theories of intelligence, identity threat and anxiety about rank combine in the zero-sum competition for law school rank [working to] elevate stress and inhibit learning,” see Davis et al., supra note 8, at 489–95. Confirming Davis et al., we observe that while many students enter the writing seminar program with significant anxiety about punctuation and grammar and not a few with the belief that they cannot perform, students shed this as they proceed.

  31. For the prevalence of sorting/ranking as the goal of assessment in law schools and the cost in learning, see Stuckey et al., supra note 1, at 236–39 (discussing Wegner’s work and the Carnegie Report, supra note 28, at 168 (“Those who champion grading on the curve assume that legal education largely serves a sorting function. The intent is to identify the best and the brightest . . . and reward[ ] those few who will carry on the tradition of legal scholarship as professors, scholars, and jurists. . . .”)).

  32. See Stuckey et al., supra note 1, at 235 (“In law schools, as in medical schools, one purpose of assessment is to determine which students should receive degrees, but other purposes of assessment are more important. . . . ‘Assessment methods and requirements probably have a greater influence on how and what students learn than any other single factor.’” Id. (footnotes omitted) (quoting Alison Bone, National Centre for Legal Education, Ensuring Successful Assessment 3 (Roger Burridge & Tracey Varnava eds., 1999), available at http://ials.sas.ac.uk/library/archives/ukcle/78.158.56.101/archive/law/files/ downloads/131/682.7564f85f.bone.pdf)).

  33. Butler, supra note 18, at 24–25. For a good summary of help seeking in terms of achievement goal theory and attribution theory, see Collins & Sims, supra note 18, at 210–12. For a discussion of mastery goals versus performance goals (which have a large overlap with what Butler calls ability goals) in the law school context, see Leah M. Christensen, The Power of Skills: An Empirical Study of Lawyering Skills Grades as the Strongest Predictor of Law School Success (or in Other Words, It’s Time for Legal Education to Get Serious About Integrating Skills Training Throughout the Law School Curriculum If We Care About How Our Students Learn), 83 St. John’s L. Rev. 795, 799–800 (2009). As stated in Butler and implied in Christensen, mastery goals are tightly linked with personal autonomy as they are self regarding rather than other regarding. Butler, supra note 18, at 24–25; see Christensen, supra note 33, at 799–800.

  34. Students demonstrate proficiency by scoring 24/32 on a Proficiency Test. Although many students do, in fact, score higher, we do not believe this 24/32 identifies expertise; therefore, our goal for all students is proficiency. However, a proficiency model does assure students and professors that individuals have demonstrated skills in an absolute sense, not just relative to other students as in a ranking, class-curve regime. For a summary of literature on problems with summative, ranking, curved exams and grades, see A. Benjamin Spencer, The Law School Critique in Historical Perspective, 69 Wash. & Lee L. Rev. 1949, 2039–47 (2012) (arguing for what he identifies as achievement based grading).

  35. Wegner, supra note 8, at 991 (comparing the more well documented professional formation of doctoral students). Wegner also summarizes the research about learning theory for emerging adults, ages 18–29, the age of many law students, that marks the tie between learning and identity during this period. Id. at 996.

  36. For an example of a mandatory program, see Ann L. Nowak, Tough Love: The Law School That Required Its Students to Learn Good Grammar, 28 Touro L. Rev. 1369 (2012).

  37. See Zimmerman, supra note 28, at 890–92 (discussing studies showing that respect for student autonomy enhances satisfaction, learning, and even GPA). The literature is extensive. For literature on motivation (intrinsic and extrinsic), its link to assessment and teaching practices, and ways to revive and harness students’ entering enthusiasm, see, e.g., id. at 878–92. Promotion of student autonomy ranks immediately after doing no harm to the students as a best practice for delivering instruction. Stuckey et al., supra note 1, at 113–14. Autonomy requires teachers to “give students as much choice as possible . . . explain the rationale for teaching methodologies and assignments, [and] assessments.” Id. at 114. Autonomy results in more favorable student outcomes. Id.

  38. Additional instruction and further Proficiency Tests were available in the spring semester for students who chose not to engage with the writing seminar program in the fall or did not achieve proficiency on the first test.

  39. This consistency means that when students switch professors between fall and spring semesters, the students can continue to consolidate their skills without confusion. Faculty members outside of the writing program have begun to refer to the textbook and to adopt the common standards and vocabulary as well.

  40. See Carol Springer Sargent & Andrea A. Curcio, Empirical Evidence That Formative Assessments Improve Final Exams, 61 J. Legal Educ. 379 (2012) (explaining the benefits of formative exams and the importance of prompt and specific feedback). “Feedback allows learners to calibrate their progress towards academic goals. The effect is greater when the feedback offers an explanation rather than just a correct response . . . .” Id. at 381 (footnotes omitted); see also Wegner, supra note 8, at 886–87 (commenting that “[t]he role of assessment in driving learning is often forgotten by educators even though evidence of that truism lies all around”).

  41. For the exact text of advice given to students, see part V(B), infra. Providing positive information on the usefulness of help seeking with recommended actions and tied to a desirable outcome works better than feedback alone, particularly for low performing students most likely to benefit from help (and least likely to seek it). Russell Ames & Sing Lau, An Attributional Analysis of Student Help-Seeking in Academic Settings, 74 J. Educ. Psychol. 414, 421 (1982). The presentation of explicit, accurate, and neutral information about appropriate actions for success also primes students to focus on what they can do to improve performance rather than isolating personal or extrinsic reasons for failure. Jeffery G. Noel et al., Improving the Performance of Failing Students by Overcoming Their Self- Serving Attributional Biases, 8 Basic & Applied Soc. Psychol. 151, 152, 157 (1987). The specificity of the feedback may correct for the problem noted by Sargent and Curcio: that formative assessments did not help the bottom 30% of students for reasons that they hypothesize could be due to the fact that “those with higher LSAT scores and UGPAs may be more experienced with adjusting their study habits, re-working content units, organizing their answers to mirror the models given, and reflecting on gaps between attained and desired levels of performance” or lower performing students’ “difficulty in perceiving the feedback messages or calibrating their comprehension.” Sargent & Curcio, supra note 40, at 395–96.

  42. For the difference between instrumental (learning for mastery of material) and executive (task completion) help seeking and the desirability of fostering instrumental help seeking, see Collins & Sims, supra note 18, at 210. In keeping with the general dichotomy between instrumental and executive help seeking, the writing specialist does not serve as a proofreading service. Instead, students must come with specific questions on specific passages in their active documents. In this way, students drive the conference with the writing specialist as a resource and educator, not a proofreader.

  43. Over the study period, nearly all students chose to participate in these intensive review sessions. The writing specialist advised the few students who opted not to participate that their non-participation was ill-advised and potentially held the very serious consequence of not receiving their spring semester writing course grade. Still, three students opted to self-prepare because of professional or personal commitments—internships, political campaign participation, commute, or family commitments—and all passed the final test.

  44. Only one student has ever failed to show proficiency by the third PT. However, this event occurred before the five-year study period began. That student was given a fourth alternative assessment to ensure that the assessment tool itself was not the obstacle. The student passed the final assessment. The instrument, in this case, was the obstacle. The student had a very difficult time with multiple-choice tests, but was able to articulate and demonstrate the concepts measured on the test with little to no difficulty.

  45. See Stuckey et al., supra note 1, at 243–45 (pointing out that criterion-referenced exams, similar to our PT, are normally pass/fail).

  46. Legal writing professors cover quotation mechanics in the Bluebooking curriculum, although the writing seminars touch on punctuation usage in the context of quotations. See, e.g., Enquist & Oates, supra note 6. The text for the seminars does address more advanced concepts, and many of these concepts are discussed and taught by some legal writing professors, primarily in the second semester.

  47. We cannot provide examples from the assessments because we continue to use the assessments in our program. Scale reliability for WSI was 0.73 while the PTs ranged from 0.66 to 0.70, not adjusted for question type.

  48. For the virtues of multiple-choice exams in coverage and fairness, see Susan M. Case & Beth E. Donahue, Developing High-Quality Multiple-Choice Questions for Assessment in Legal Education, 58 J. Legal Educ. 372, 373 (2008). The authors caution against multiple-choice exams for “writing,” id., but our tests are of rules of grammar and punctuation, not writing as understood in that article. Grammar and punctuation, arguably, fall in the domain of editing or proofreading. For the importance of matching the type of test to teaching goals, see Janet W. Fisher, Multiple-Choice: Choosing the Best Options for More Effective and Less Frustrating Law School Testing, 37 Cap. U. L. Rev. 119, 131–36 (2008); Howard Gensler, Valid Objective Test Construction, 60 St. John’s L. Rev. 288 (1986); Charles J. Senger, Perfect Compromise or Perfectly Compromised Tests: Law School Examinations That Mimic a Bar Examination’s Format? 2–6 (Jan. 10, 2010) (unpublished manuscript), available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1962809&download=yes.

  49. See John D. Schunk, Indirectly Assessing Writing and Analysis Skills in a First-Year Legal Writing Course, 40 S.U. L. Rev. 47 (2012) (explaining multiple-choice tests as the appropriate vehicle in writing assessments).

  50. See Jeremy Francis, The Silent Scream: How Soon Can Students Let Us Know They Are Struggling?, 27 Second Draft, Summer 2013, at 32, 33. In an informal study, testing proctors identified and separated students who had made marking errors on the OCR test answer sheet, ranging from failing to bubble information to entering incorrect section numbers. Id. The study found that students who had marking errors on the OCR sheet were twice as likely to fail the PT as students who had no marking errors. Id. We assume that skills involved with careful and meticulous editing are likely linked to performing many skills involved in academics. Id. at 32–33. This indicates that low performing students could benefit from instruction simply focusing them on the importance of detail, independent of content. See id. at 33.

  51. See Stuckey et al., supra note 1, at 241–43.

  52. In each year in the study period, the writing seminars totaled 250 minutes of course time, 5 seminars at 50 minutes each. In each seminar, roughly 35 minutes were dedicated to instruction and 15 minutes dedicated to practice and application. Thus, we had roughly 175 minutes to deliver all of the content for the seminar program. That content was also reinforced in the legal writing courses as described above.

  53. See, e.g., Aïda M. Alaka, Phenomenology of Error in Legal Writing, 28 Quinnipiac L. Rev. 1, 36–38, 38 n.189 (2009); Joseph M. Williams, On the Maturing of Legal Writers: Two Models of Growth and Development, 1 Legal Writing 1 (1991). For transfer theory and its application to law schools generally, see Tonya Kowalski, True North: Navigating for the Transfer of Learning in Legal Education, 34 Seattle U. L. Rev. 51 (2010).

  54. Research from the field of rhetoric and composition overwhelmingly supports the notion that students must become familiar with a new genre to become proficient writers in that genre. See Carol Berkenkotter & Thomas N. Huckin, Genre Knowledge in Disciplinary Communication: Cognition/Culture/Power 1 (1995) (“Genres are intimately linked to a discipline’s methodology, and they package information in ways that conform to a discipline’s norms, values, and ideology.”). Students or novices in a new genre have limited ability to participate in the written discourse when they enter the process of apprenticeship. Legal language is not just outside the domain of an entering student’s linguistic abilities; it is foreign as a form of meaningful linguistic interaction. Ivanič, supra note 25, at 104 (describing how, as students cross genre and disciplinary boundaries when entering a new discipline, they do not receive “precise instructions on how to play this role, specifically, on how to produce the written ‘performances’ known as ‘assignments’”). Students, thus, “have to piece together what they know about writing from other roles—knowledge which will not necessarily be adequate to the situation.” Id.; see M. M. Bakhtin, The Dialogic Imagination: Four Essays 289 (1981) (explaining that while experts in a discourse community are able to “denote and express directly and fully, and are capable of expressing themselves without mediation,” novices, lacking acculturation to the full range of linguistic activities that surround utterances, are relegated to a position in which “the intentions permeating these languages become things, limited in their meaning and expression” (emphasis in original)).

  55. Over a two-year observation period (years two and three of the overall study period), only a handful of students ever worked on the PT longer than 100 minutes. Due to scheduling and building usage concerns, we now limit students to 120 minutes to complete the PT. Students who require additional time make arrangements for time accommodations through the registrar or dean of students.

  56. On the negative implications of timed tests and the likelihood that the use of time-pressured law school exams may both explain the predictive quality of the equally time-pressured LSAT and distort evaluation of some students’ reasoning and knowledge, see William D. Henderson, The LSAT, Law School Exams, and Meritocracy: The Surprising and Undertheorized Role of Test-Taking Speed, 82 Tex. L. Rev. 975, 1033, 1034–45 (2004).

  57. This project required Human Subjects IRB approval for two reasons. First, the law school’s agreement with MSU requests that faculty and staff of the law school adhere to the codes and norms of the larger university. Second, our project dealt with human participants and potentially sensitive data, including grades, ID numbers, and bar passage data. We applied under the “exempt” category because all of the data collected were for educational purposes and data were not entered into the database until the students’ enrollment in the legal writing program had concluded. All of the data collected were regular program and administrative data to which the primary investigators had routine and unrestricted access. None of the data required special permissions to obtain.

  58. Grader III is owned by Michigan State University. For more information on this system, please see Welcome to the Scoring Office, Michigan State University Scoring Office, https://tech.msu.edu/teaching/test-scanning-scoring/.

  59. Thus, we use the term “engagement” in the relatively narrow sense of whether a student accessed the optional instructional opportunities. For the wider sense, see Wegner, supra note 8, at 961–64.

  60. Faculty comment on the positive effect the program has on student writing. One legal writing professor commented in a departmental email, “I think this is working. By my informal measure, I’m noticing a definite increase in the overall quality of the students’ writing.” Another legal writing professor commented on a student after referral to the writing specialist, “I was just looking at [the student’s] memo, and I was very impressed with her progress on grammar and punctuation.”

  61. Not surprisingly, the writing seminar program does receive critical and negative comments from time to time. These comments tend to focus on how instruction or assessments are not needed or welcome in a law school, how the material should be available online instead of in class, how students should be able to “pass out” by earning a high enough score on the first test, and how the instructor moved too fast or did not explain something clearly enough.

  62. There are other factors that could contribute to our WSI score distribution. For instance, MSU Law admits a great number of students with science backgrounds; these students might not have had as many opportunities as humanities or social science undergraduates to receive extensive feedback on their writing. Additionally, MSU Law maintains a commitment to admitting students whose academic history may be less strong but who nevertheless have a great deal to offer both the profession and their future clients. Also worth noting is the increase over time in the number of English-as-a-Second-Language students entering the J.D. program.

  63. Low-scoring students receive the following feedback: “Scores below 20 are low. Schedule an appointment with the Writing Specialist soon to review the test and discuss your plan of action. Data indicate that without further action, many students in this range will fail the Proficiency Test.”

  64. Medium-scoring students receive the following feedback: “Scores between 20 and 25 indicated that you should attend the Writing Seminars, as well as review Just Writing and the exercises . . . Diagnose your errors to learn which seminars you should attend.”

  65. High-scoring students receive the following feedback:

    Scores of 26 and above probably indicate that you should review before the Proficiency Test. Determine what you missed and review the pertinent sections of Just Writing and the exercises . . . in the book. Diagnose your errors, and, if many of them were in the same areas, attend the pertinent Writing Seminars.

  66. Sunflower plots are useful in comparing the joint distribution of two variables with discrete (as opposed to continuous) numerical values. An alternative (and more common) mode of presentation for showing a bivariate relationship is a scatterplot. Scatterplots are less helpful when working with data like ours because there will often be multiple observations with identical values. That information is lost in a typical scatterplot unless the values are “jittered,” which involves adding or subtracting very small random values to each x/y pairing. We simply, as a stylistic matter, prefer the sunflower plot.
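
  A minimal sketch of the “jittering” alternative described above, written in Python on entirely hypothetical score pairs (the variable names and values are assumptions, not the study data):

      # Illustrative sketch: jittering discrete score pairs so overlapping
      # observations remain visible in an ordinary scatterplot.
      import numpy as np
      import matplotlib.pyplot as plt

      rng = np.random.default_rng(42)
      wsi = rng.integers(10, 31, size=200)                       # hypothetical discrete WSI scores
      pt = np.clip(wsi + rng.integers(-3, 6, size=200), 0, 34)   # hypothetical discrete PT scores

      # Without jitter, identical (wsi, pt) pairs plot on top of one another.
      jitter_x = wsi + rng.uniform(-0.2, 0.2, size=200)
      jitter_y = pt + rng.uniform(-0.2, 0.2, size=200)

      plt.scatter(jitter_x, jitter_y, s=10, alpha=0.6)
      plt.xlabel("WSI score")
      plt.ylabel("PT score")
      plt.title("Jittered scatterplot of discrete score pairs")
      plt.show()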

  67. One potentially interesting question is: who are the students who lose points between the WSI and the PT? Of the 119 students whose scores decreased, fully 68% initially scored in the high category on the WSI. Twenty-seven percent were students in the medium category, and the remaining 5% were students in the low category. In terms of student engagement, 45% of those who lost points between the WSI and the PT never attended a writing seminar, and an astonishing 93% never attended office hours.

  68. See Anis Bawarshi, Genre & The Invention of the Writer (2003); Berkenkotter & Huckin, supra note 54, at 7; Ivanič, supra note 25, at 7.

  69. See Sommers & Saltz, supra note 21.

  70. This frequent perception among professors is confirmed by research. Butler summarized the relevant research and concluded that “the most disturbing implication of classroom research is that students who are most in need of help seem to be least likely to request it.” Butler, supra note 18, at 24 (citation omitted).

  71. The seminar averages for medium and high students are 3.06 and 1.79, respectively. All of these pairwise comparisons are statistically significant via a difference in means test (p < 0.05, two-tailed test).
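
  A minimal sketch of the kind of two-sample difference-in-means (t) test described above, run on hypothetical seminar-attendance counts; the group sizes and counts are assumptions, not the study data:

      # Illustrative sketch: two-sample t-test on hypothetical attendance counts.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      medium_group = rng.poisson(3.0, size=120)   # hypothetical counts, mean near 3.06
      high_group = rng.poisson(1.8, size=120)     # hypothetical counts, mean near 1.79

      t_stat, p_value = stats.ttest_ind(medium_group, high_group)  # two-tailed by default
      print(f"t = {t_stat:.2f}, two-tailed p = {p_value:.4f}")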

  72. A more formal chi-square test of independence informs us that we must reject the null hypothesis that no relationship exists between one’s WSI Score Category and the number of writing seminars attended (chi-square = 230, d.f. = 10, p < 0.001).
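
  A minimal sketch of a chi-square test of independence like the one reported above, run on a hypothetical 3 x 6 table of WSI Score Category by number of seminars attended (0–5); the counts are assumptions, but a 3 x 6 table yields (3 - 1)(6 - 1) = 10 degrees of freedom, matching the d.f. reported:

      # Illustrative sketch: chi-square test of independence on a hypothetical
      # contingency table (rows: low/medium/high WSI category; columns: 0-5
      # seminars attended). Counts are invented for illustration.
      import numpy as np
      from scipy.stats import chi2_contingency

      table = np.array([
          [ 2,  4,  8, 15, 25, 40],   # low-scoring students
          [10, 20, 30, 35, 25, 15],   # medium-scoring students
          [60, 40, 25, 10,  5,  3],   # high-scoring students
      ])

      chi2, p, dof, expected = chi2_contingency(table)
      print(f"chi-square = {chi2:.1f}, d.f. = {dof}, p = {p:.4g}")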

  73. Seven is not the actual maximum number of times a particular student may have attended office hours. Some students, particularly very low-scoring students, visited the writing specialist on a weekly or bi-weekly basis. The number of these students was small, between one and four per year. We decided to cap recorded office-hours visits at seven, as that number represented an approximate maximum for most students.

  74. Given the skewed nature of this variable, we also estimated our statistical models using a dichotomous measure for whether a student attended any office hours (0 = no; 1 = yes). Our results are unchanged if we use this alternative operationalization.

  75. Although this approach allows for differential baseline gains in performance based on a student’s WSI Score group, it assumes that the effect of attending office hours or a writing seminar is the same across all students. To determine if this assumption was valid, we have also estimated separate models for each WSI Score Category. The data suggest that the assumption is appropriate. That is, the effect of engagement (both seminars and office hours) is about the same for students in all three groups.

  76. As we note above (see part III(A)(5), supra), data from early program years were used to make a variety of tweaks to the program and curriculum in later years of the study. One might reasonably hope (and expect) that such refinements translate into bigger gains between the WSI and the PT. We have re-estimated both this model and the proficient/not proficient model featured in Table 2. We find that more recent program years are associated with larger WSI-PT gains than earlier ones, but controlling for program year does not affect any of our other results. See infra note 78 (discussing the effects on our other model).

  77. This differs from our approach in the previous model. We initially estimated a single model with separate indicators for each WSI Score Category (i.e., just like Table 1). Subsequent analysis revealed that unlike the previous model, the effect of writing seminars was significantly different across the groups. As Table 2 makes clear, it is positively and statistically significant for the medium and low values of WSI Score Category but is statistically insignificant for the high group (p = 0.14, two-tailed test).

  78. As discussed in footnote 76, supra, we also conducted a secondary analysis to control for the effect of program year. We find no relationship between program year and proficiency achievement among students with medium (p = 0.50) or low (p = 0.09) WSI scores. We do, however, find a positive and statistically significant relationship for students with high WSI scores. High WSI students in 2011 (the final year of our analysis) had, holding all else equal, a 99.8% chance of achieving proficiency compared to a 95.2% chance for high WSI students in 2007 (the first year of the program). The significance of no other variables is affected by controlling for program year.

  79. Other measures, for example legal writing grades, have been identified:

    Lawyering Skills Grade was the strongest predictor of law school success. Lawyering Skills Grade had a positive statistical correlation to class rank at a level of 0.57. There was a moderate positive correlation between UGPA and class rank at 0.46. Lastly, a weak correlation at 0.23 was found between LSAT score and class rank.

    Christensen, supra note 33, at 805 (footnotes omitted). However, this also comes late. Christensen also ties success to a mastery orientation and argues that skills courses, particularly legal writing, promote mastery learning. Id. at 806–12.

  80. The specific equation is 0.579 + 0.074 x WSI Score. The standard errors for the constant and WSI Score variables are 0.655 and 0.030, respectively.

  81. The specific equation is -0.85 + 0.11 x WSI Score + 0.23 x Seminar Attendance. The standard errors for the constant and independent variables are 0.82, 0.03, and 0.08, respectively.
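
  As a worked illustration of the equations in notes 80 and 81, consider a hypothetical student with a WSI score of 25 who attended three writing seminars; the inputs are assumptions, and only the coefficients come from the notes above:

      # Worked illustration of the estimated equations in notes 80 and 81.
      # The inputs (WSI score of 25, three seminars) are hypothetical.
      def predict_note_80(wsi_score: float) -> float:
          return 0.579 + 0.074 * wsi_score

      def predict_note_81(wsi_score: float, seminars: int) -> float:
          return -0.85 + 0.11 * wsi_score + 0.23 * seminars

      print(round(predict_note_80(25), 3))      # 0.579 + 0.074 * 25 = 2.429
      print(round(predict_note_81(25, 3), 3))   # -0.85 + 0.11 * 25 + 0.23 * 3 = 2.59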