If you are a law professor who gives issue-spotting essay exams (and most law professors do), then this article is for you. If you are a law professor at a law school that has seen a decline in its bar passage rate (and most law schools have), then this article is also for you. If you are a law professor who cares deeply about your students’ ability to perform the essential lawyering skill of applying law to facts, then this article is most certainly for you. The purpose of this article is to explain how law professors—even with the many demands on their time—can teach exam-writing skills in their classrooms and the reasons they should do so.
It begins, in Part I, with the many reasons why law professors should teach exam-writing skills in the doctrinal classroom. Perhaps the most important reason for explicitly teaching exam-writing skills is that teaching students to write a structured, coherent exam teaches them how to reason in a structured, coherent way. Doctrinal professors want their students to learn legal doctrine, whether that be Torts, Contracts, or some other body of law. But for most professors, having their students learn a body of law is not enough. Doctrinal professors also want their students to become experts at “analytical reasoning,” which “involves the application of general principles to specific sets of facts to discover the relationship between rules and circumstances.” Teaching law students how to write an essay exam achieves that second, equally important, goal. Accordingly, teaching exam-writing skills is at the heart of what law professors wish to accomplish in their classrooms.
Teaching exam-writing skills has other significant benefits. The same skills that are necessary to write a structured law school exam are also essential to passing the bar and are foundational to the practice of law. Finally, teaching such skills explicitly in the doctrinal classroom is likely to close performance gaps and mitigate students’ mental distress.
Although there are many reasons to teach exam-writing skills, few professors do so—at least not explicitly. That failure to provide instruction is a missed opportunity. As explained in Part II, a number of recent studies have shown that when professors provide opportunities for practice and feedback, students’ exam performance improves.
The articles that document the relationship between practice, feedback, and performance do not, however, provide a method for teaching exam-writing skills. This article adds to those earlier studies by recommending a comprehensive method for teaching exam-writing skills. It takes as its starting point the cognitive apprenticeship model for learning described in Educating Lawyers, the landmark report on improving legal education better known as the Carnegie Report. Under a cognitive apprenticeship model, learning is most likely to occur when an expert makes key features of a skill visible to the novice so that the novice can appropriate that skill. Where law school exams are concerned, the skill that needs to be made visible is the application of law to facts. That is the skill that professors want to see on an exam, and that is the skill that students must appropriate to pass the bar and to practice law. Despite the fundamental importance of that skill, students are often mystified about how exactly to apply law to facts.
Part III of this article addresses that gap in law students’ education. It provides a conceptual model that identifies the key features of an effective exam answer. It focuses particularly on how to effectively apply the law to facts. It then explains how that conceptual model can be integrated into students’ coursework so that professors can better guide their students toward the appropriation of expert exam-writing skills. Although providing explicit instruction about how to write an effective exam will take some extra work, that extra work need not be as burdensome as professors might think, and, as explained below, the payoffs are great.
I. Reasons Exam Writing Should Be Taught in Doctrinal Classes and Possible Reasons It Is Not Taught More Often
Although law professors do not regularly teach exam-writing skills in their doctrinal classes, there are at least five reasons they should do so. Each is discussed below, followed by a brief examination of why more professors do not currently teach these important skills.
A. It Is the Outcome Professors Want to See
That professors want students to be able to effectively apply the law to facts is evidenced in how they grade their exams. Professors typically give disproportionate weight to a student’s application of law to facts when evaluating an exam. As one professor explained, he and other law professors assume that students know the rules and can recall them. On the exam, the professor wants to see “what you can do with the law . . . . Your ability to apply the law to facts is what’s being tested . . . .” As another professor put it, “‘My exam assumes you know the law; it only tests whether you can apply it.’”
Accordingly, writing an effective law school exam depends upon the ability to apply law to facts. More specifically, professors look for students’ ability to reason deductively. Classically, deductive reasoning requires a major premise, a minor premise, and a conclusion that flows from those two. In that form of reasoning, a person is moving from a general proposition to a more specific conclusion. A traditional issue-spotting exam relies almost exclusively on such deductive—or syllogistic—reasoning. The rule of law is the major premise. The facts of the hypothetical are the minor premise. The conclusion flows from the two. This form of reasoning is also called rule-based reasoning because the rule drives the analysis.
Rule-based reasoning is a distinct approach to legal analysis. It stands in contrast to analogical reasoning. When reasoning by analogy, a lawyer compares the facts of a previously decided case to a novel fact pattern. If the two fact patterns are sufficiently similar, the lawyer can predict that the novel fact pattern will lead to the same legal conclusion as in the previously decided case. Analogical reasoning is a form of inductive reasoning because the analysis proceeds from a comparison of particular facts toward a more general conclusion.
Although students may rely on analogical reasoning on a law school exam, the dominant mode of analysis on a traditional issue-spotting exam is rule-based reasoning. Teaching exam-writing skills teaches students this particular form of structured reasoning.
Law professors want students to produce structured analyses on their final exams, but classroom instruction regularly fails to provide students with the necessary foundation. As Professor Philip Kissam explained in his extended essay, Law School Examinations, “the assigned readings and classroom discussions in most law school courses are concerned with the explanation, interpretation, and evaluation of pivotal cases that establish contemporary doctrine.” By contrast, “the final exam is typically concerned with the conventional application of doctrine to new and unexpected ‘borderline’ situations.” Thus, the daily classroom work largely “fails to provide models, instruction, or practice for the issue spotting and rule applications required for Blue Book exams.”
Law professors do provide hypotheticals and ask students to consider the likely outcome. In that exchange, the professor is asking a student to apply the law to facts and the remaining students in the class have the opportunity to observe that application of law to facts.
That oral exchange, however, has significant limitations. First, the exchange is oral. An oral exchange provides the opportunity for a professor to assist the student through the analysis—help that a student will not have on the exam. Second, students are often unaware that the in-class oral exchange is similar to the analysis they will need to perform on their exam because the professor does not explain the similarity. Rather, the student is more likely to think that the exchange is simply a way to flesh out the contours of the law. The student is likely unaware that there are other skills that can be learned from the exchange—specifically, a method of applying law to facts. Thus, “most law students face their examinations with substantial uncertainty about what they should know and do.” Students are “left largely on their own to acquire the skills” they will need to succeed on their law school exams.
As a result, students often lack the skills they need to take a law school exam. As Professor Mary Beth Beazley explained, even after a semester of law school exams, many students do not know how to “apply law to the facts”:
[I]n the second semester, I find some students who are unfamiliar with the phrase “applying the law to the facts,” or who don’t have a concrete image of what this concept means. This occurs despite the fact that they have spent a semester witnessing conversations in which they, their classmates, and their teachers have been applying various laws to various sets of facts every day.
Professor Michael Hunter Schwartz reported that students often do not understand the difference between knowing the rules about a given subject area and using those rules on an exam:
Every semester since I started teaching law [15 years ago], there are a few students who come to see me and express shock at the low grades they received for my course. In each instance, the student has said something like, “But I knew the law cold. I knew it backwards and forwards and every other which way.” Some students have even asked me to test their knowledge on the spot so that I would know how much they knew. . . . They failed to realize, however, that such knowledge was not enough.
Is it any wonder, then, that professors are dismayed by the results that they see on their final exams? Law professors describe grading final exams in a number of ways, none of them good: “miserable,” “painful,” “distasteful,” a “‘deadening intimacy with ignorance and mental fog.’” As one professor explained, “[Y]ou repeatedly . . . find unrelated concepts woven together, producing exam answers that defy both legal logic and common sense.”
However, as explained in Parts II and III below, law professors can make changes that will provide students with the opportunity to learn the skills they need to write more effective law school exams. Doing so will help students learn the kind of structured thinking that law professors want their students to learn, and it has the potential to mitigate the pain that has traditionally accompanied grading law school exams.
B. Improved Bar Exam Skills
Law professors are not the only ones who want to see legal analyses with carefully constructed applications of law to facts. Bar examiners across the nation are looking for that same skill. Most bar exams include a series of essay exams and one or more performance tests. Together, those two components—both of which require the applicant to use rule-based arguments—account for between 50% and 67% of an applicant’s overall score.
The essay exams on the bar exams are like traditional law school issue-spotters. The applicant will see a fact pattern and a prompt. Then the applicant has to apply previously memorized legal rules to the fact pattern. On the essay portion of the bar exam, applicants rely entirely on rule-based reasoning—the same skill they need for their law school exams.
Applicants also rely on rule-based reasoning when taking the bar exam’s performance tests. During the performance test, applicants are provided with a case file and a “library” of legal authorities. They then produce a document in which they explain their legal analysis of the client’s problem. Although applicants can reason by analogy, they will frequently rely on rule-based arguments, either because the limited legal authorities in the case file do not support an analogical argument or because time constraints force them to rely on the more quickly produced rule-based arguments.
Thus, a second reason to teach exam-writing skills is that doing so can improve students’ chances of passing the bar. Over the last thirty years, one out of every four applicants who sat for the bar exam has failed. That is to say, one in four of our students—students who are paying tens or hundreds of thousands of dollars to become lawyers and who are also paying our salaries and providing us with our jobs—is not prepared to pass the bar.
Failure to pass the bar can have “catastrophic results.” Some of our graduates who fail to pass the bar may lose their jobs, while others may be unable to find a job. In either case, graduates may be “saddled with debt from law school” and their undergraduate education that they may not be able to repay. Graduating cohorts of students who are unable to practice in the profession for which they are trained impacts not just the individual graduate but also their alma mater. Recent headlines have read: “An Expensive Law Degree and No Place to Use It,” “Why Attending Law School Is the Worst Career Decision You’ll Ever Make,” and “Burdened with Debt, Law School Graduates Struggle in Job Market.” Of course, law faculty cannot control the economy and the availability of legal jobs. But law faculty can, at the very least, help their students attain the skills they need to become eligible for jobs that are available.
Our graduates may fail the bar for a number of different reasons. They may underestimate the amount of work necessary to pass or they may fail to plan financially and have to work while they study. But some of our former students fail because they possess deficient legal writing and analytical skills. They are unable to identify issues, state rules, or apply those rules to facts in a coherent way. At the very least, law professors can do their part to ensure that those graduates who want to pass the bar have the skills to do so.
As noted above, bar examiners look for answers that demonstrate the same kinds of skills that law professors would like to see reflected on their exams. On the essays, bar examiners expect applicants to be able to identify legal issues; demonstrate an understanding of the law relevant to resolving the factual situation; present a reasoned analysis of the issues in a clear, concise, and well-organized manner; sort the relevant from irrelevant information; and demonstrate an ability to communicate effectively in writing. Similarly, the performance test seeks to test applicants’ ability to extract from statutes, cases, and administrative materials applicable principles of law; apply the relevant law to relevant facts; sort detailed factual materials and separate the relevant from the irrelevant; communicate effectively; and complete a lawyering task within time constraints.
Bar examiners’ interest in well-structured writing is evident in their grading methods. Two-thirds of all state examiners grade essays and performance tests on a holistic basis. Under a holistic grading method, the grader makes an overall judgment about the quality of the work and that assessment is translated to a score or grade. Most jurisdictions grading on a holistic scale use a five-, six-, or seven-point scale (although some use a ten- or twenty-point scale). A typical one-to-six grading scale might look like this:
A 6 answer is a very good answer. A 6 answer usually indicates that the applicant has a thorough understanding of the facts, a recognition of the issues presented and the applicable principles of law, and the ability to reason to a conclusion in a well-written paper.
A 5 answer is an above average answer. A 5 answer usually indicates that the applicant has a fairly complete understanding of the facts, recognizes most of the issues and the applicable principles of law, and has the ability to reason fairly well to a conclusion in a relatively well-written paper.
A 4 answer is an average answer. A 4 answer usually indicates that the applicant understands the facts fairly well, recognizes most of the issues and the applicable principles of law, and has the ability to reason to a conclusion in a satisfactorily written paper.
A 3 answer is a somewhat below average answer; it is, on balance, inadequate. It shows that the applicant has only a limited understanding of the facts and issues and the applicable principles of law, and a limited ability to reason to a conclusion in a below average written paper.
A 2 answer is a below average answer; it is, on balance, significantly flawed. It shows that the applicant has only a rudimentary understanding of the facts and/or law, very limited ability to reason to a conclusion, and poor writing ability.
A 1 answer is among the worst answers. A 1 answer usually indicates a failure to understand the facts and the law. A 1 answer shows virtually no ability to identify issues, reason, or write in a cogent manner.
A 0 answer indicates that there is no response to the question or that it is completely unresponsive to the question.
The important point is that applicants do not get points for simply putting ideas down on the paper. To receive credit, applicants must effectively structure those ideas “in a well-written paper.”
Many students do not practice writing well-structured exam answers that include a thoughtful application of the rules to the facts. Rather, they practice a form of “outline dumping,” writing every concept they can think of in their exam answers. They “outline dump” because many professors award points for including correct ideas—no matter the order of those ideas—and do not deduct points if the ideas are disjointed or if the answers include extraneous ideas. Thus, on law school exams, outline dumping is rewarded. The same cannot be said for the bar exam.
To develop the skills they will need for the bar exam, students need more practice writing exam answers with tightly structured applications of law to fact. Although some schools provide a third-year bar exam preparation class, such a class is often too little, too late. Students need to be practicing these skills consistently throughout their law school careers.
C. Creating Practice-Ready Students
The same skills that law professors would like to see on their exams and that bar examiners expect to see in essays and performance tests are essential to effective lawyering.
Practicing lawyers repeatedly state that effective written communication of a lawyer’s legal analysis and reasoning is an essential skill. For example, a 2005 survey of members of the Arizona bar asked those practicing lawyers to assess the importance of a variety of skills to the success of an associate at the end of the associate’s first year of practice. The top two skills—rated by 96% of those surveyed as “essential” or “very important”—were “[l]egal analysis and reasoning” and “written communication.” A 1999 survey of Minnesota’s lawyers revealed that 97.2% of lawyers surveyed regarded “written communication” as “extremely important” or “important.” Similarly, a 1993 survey of lawyers in Chicago and Missouri listed oral and written communication skills as the top skills new lawyers need.
The ability to systematically apply law to facts is not just a writing skill; it is a fundamental thinking skill. When a client enters a law office and explains a tale of woe, the lawyer must identify the legal issues and begin to assess which issues have merit and which issues do not. The lawyer does that by comparing the elements of potential claims to the facts the client has described. To draft a complaint, lawyers must match the elements of each claim to the alleged facts. To plan discovery or to interview a witness, a lawyer must know the elements that must be proved and seek out the facts that will prove each element. In all of these cases, a lawyer must have a mental image of the issues, their elements, and the facts that relate to those elements. Thus, core lawyering skills require the same kind of structured thinking that students are expected to show on their exams.
Teaching students to write in that structured way will teach them to think in that way. Because structured analytical thinking is at the heart of being a lawyer, we fail our students if we do not teach them this fundamental skill.
D. Closing Performance Gaps
Providing exam-writing instruction and feedback may also close performance gaps. A number of studies have noted a performance gap between white law students and law students of color. For example, several studies have established a tendency for LSAT scores and undergraduate GPAs to “overpredict” the performance of minority students in law school. That is to say, statistically, students of color perform less well in law school than would be anticipated based on their scores alone.
Some part of the performance gap may be attributable to the “need for speed” on law school exams. In the field of psychometrics, testing theorists distinguish between tests that measure “power” and tests that measure “speed.” The LSAT, for example, is intended to measure a student’s ability to reason and is thus intended to be a “power” test. However, a series of studies have shown that speed rather than reasoning may play a significant role in an applicant’s LSAT score. For example, one study examined the number of unreached items on each section of the LSAT by ethnic subgroup and found a “particularly dramatic” speed differential between white test-takers and minority test-takers, including African American, Puerto Rican, and Hispanic test-takers. Those groups had more trouble reaching the end of the exam. Similarly, another study found that students from predominantly black colleges had more difficulty finishing an experimental reading comprehension section of the LSAT than did students from the general college population.
Just as minority students struggle with the LSAT due to its demand for speed, they may similarly struggle with law school exams. In a 2004 study, Professor William Henderson first examined the extent to which the LSAT’s predictive power for law school success relates to test-taking speed. He concluded that LSAT scores were “a relatively robust predictor of performance on in-class exams but a relatively weak predictor of performance on take-home exams and paper assignments.” These results supported Professor Henderson’s hypothesis that test-taking speed is a variable that affects both the LSAT and law school performance.
Professor Henderson then examined whether the time pressures of law school exams have a particular effect on minority students. He assigned students an ordinal class rank based on their GPA derived from in-class exams and another ordinal class rank based on their GPA derived from take-home exams. White students tended to have higher rankings when that ranking depended on in-class exams than when their rankings depended on take-home exams. By contrast, minority student rankings tended to be lower when that ranking was based exclusively on in-class exams. Professor Henderson cautioned that the sample size of minority students in his study was small. Nevertheless, within that small sample, he saw a “clear trend” in which white students’ class rank was positively affected by time-compressed, in-class exams, while minority students’ class rank was negatively affected.
Since time-compressed exams dominate the first year at most law schools, the demand for speed can be particularly problematic for minority students. As Professor Henderson noted, “Numerous academic and career opportunities often hinge on relatively small variations in law school grades.”
Some research suggests that taking additional practice exams and the feedback from doing so can help close a performance gap between majority and minority students. For example, researchers compared how women, who were a minority group in an introductory statistics class, performed based on the amount of feedback that they received. In the first class, students took two mid-term exams and a final exam. The next year, in the same course, students took six bi-weekly exams and a final. Although more frequent testing helped all students perform better, the researchers found that women who were tested more frequently experienced the greatest increase in final exam scores and final grades in the course. Another study at the University of Texas at Austin School of Law examined the effect of daily testing on student performance. It found that daily testing improved the performance “for all students and the achievement gap between upper-middle class and lower-middle class students shrunk.”
Another study, this one of students at the University of California Berkeley School of Law, also pointed to practice exams and feedback as a key component in improving minority student performance. In that study, Professor Sean Darling-Hammond interviewed minority students who had been especially successful in law school. He asked those students to identify “transformative professors.” Professor Darling-Hammond then interviewed those professors to determine what made those professors “transformative.” The “most resounding and consistent recommendation” from the transformative professors was to “use multiple assessment tools over the course of the semester.” As one transformative professor, a constitutional law professor, explained, “We should be treating exam taking, or legal analysis, as a skill that itself needs to be worked on. You practice, you get feedback, you work that feedback in the next time you practice—like learning to swing a baseball bat.” Minority students agreed. “[Eighty-nine percent] of Latino students and 86% of Black students indicated that they believed their performance would have been . . . improved if professors had provided ‘different kinds of assessments—papers, practice motions, etc.,—rather than an exam that is most or all of the grade.’”
E. Decreased Student Alienation
Teaching exam-writing skills in doctrinal classrooms may mitigate the psychological distress caused by law school. During their first year of law school, law students’ psychological health collapses, and it continues to decline in the remaining years of law school. Students enter law school with a psychological health profile similar to that of the general population. Between 3% and 9% of the student population—like the general population—suffer from symptoms of depression. By the end of their first semester of law school, over 25% of all students suffer from symptoms of depression. That number rises to 34% by the end of their first year. By the end of their third year, 40% of all students will report symptoms of depression.
One of the many causes of law students’ psychological distress is the lack of feedback regarding their progress in acquiring the skills necessary to succeed. When interviewing law students, the authors of the Carnegie Report found a recurring theme: Consistently, students described the traditional, single-end-of-year form of assessment as “unfair, counterproductive, demoralizing, and arbitrary.” The students complained that they “had little opportunity to practice and hone the skills that were tested and, in the absence of feedback during the semester, no basis on which to gauge whether they were mastering the material or making adequate progress toward the desired proficiencies.” The authors noted that these comments came “not only from marginal students but [also] from students who made the cut at highly selective universities with eminent teaching faculties.” Notably, the students “directed their criticism not so much at the content of the examination as at what preceded—rather, what did not precede—the exam.”
By contrast, students express deep gratitude for professors who align their course content with final modes of assessment. The authors of the book What the Best Law Professors Do sought to identify law professors who generated “exceptional learning” in their students. The authors then sought to identify themes in their teaching that promoted exceptional learning. Consistently, one reason that certain professors were labeled as the “best law professors” was that their course instruction prepared students for the final exam: “[T]hey are transparent about their expectations for student performance. They provide their students with extensive . . . feedback. . . . Their exams and paper assignments are notably congruent with how they have taught their courses: they test what they teach.”
Notably, students reacted positively to the instruction even though they found the assessments demanding: “Their students report that they feel well prepared for exams and paper assignments even though they regard those assessments as being among their hardest in law school.”
Similarly, individual professors who have examined the relationship between practice, feedback, and performance on student exams have found that students appreciate the opportunity to practice exam skills and to receive feedback. For example, Professor Curcio gave her students five single-issue essay questions over the course of a semester. She then asked her students about the practice essay questions, which were additional work for the students, and the feedback they received. “[A]lmost all the students” felt that the additional assignments and feedback were helpful and should be continued in future years—despite the extra work involved for students. Professor Burman gave his students two out-of-class essay-writing assignments, one of which was a typical issue-spotting essay. He noted that, of the students who provided comments about the writing assignments, nearly all were positive about the experience, including these comments:
“I thought the two written assignments were excellent methods of studying for the final. . . .”
“I enjoyed writing the papers. This experience taught me a lot and made me feel like I was learning some practical skills.”
“The papers we wrote were a great idea because it helped students study as well as gave us desperately needed feedback & relieves pressure.”
“Papers give the students a chance to see if they are really understanding the subject matter.”
In sum, one way to mitigate the stress our students feel is to more clearly explain what they are expected to produce on their law school exams and help them practice the skills that they will need.
F. Possible Reasons Exam-Writing Skills Are Not Taught in Doctrinal Classes More Often
Although there are many reasons why doctrinal professors should teach exam-writing skills explicitly, few do. Perhaps the most widely cited explanation for why law professors do not teach exam-writing skills is the large student-teacher ratio in most law school classrooms. Teaching writing skills requires feedback, and providing feedback to sixty, seventy, eighty, or more students seems a daunting task.
Other law professors may avoid addressing how to write an exam because they do not have the training to do so. Law professors are more likely than most to have intuitively absorbed and deployed exam-writing skills. Having never examined the process closely, many professors may not be able to explain the task to their students.
Still other doctrinal professors may believe that students are learning exam-writing skills in their Legal Research & Writing class. I have heard more than one professor advise a student, with respect to taking an exam, “Just follow IRAC, like you’ve learned in Legal Research & Writing.” That advice conflates what is taught in legal writing classes with the skills that are needed on law school exams. In most legal writing classes, students are asked to resolve a client’s (hypothetical) legal problem. To resolve a client’s legal problem, students will more often rely on analogical reasoning, a form of inductive reasoning; they will rely on rule-based reasoning less frequently. Thus, although students will be exposed to rule-based reasoning in their legal writing classes and may rely on it in their assignments, rule-based reasoning is unlikely to be a central focus of those classes.
Moreover, even after having had some practice in developing rule-based arguments, students are unlikely to transfer that skill to their doctrinal classes because the context in which they are using the same skill will look so different. Students tend to tie information to a particular subject or a particular environment in which that information was learned. To transfer skills from one class to another, they need to be made aware that the skill can be transferred. In a legal writing class, they may see rule-based reasoning, but the context is an office memorandum or a brief. Law school exams look different. Thus, students are unlikely to see how the rule-based reasoning skills they learn in legal writing can help them in their doctrinal classes.
For these reasons, doctrinal professors should not assume that legal writing classes teach the skills that are needed on law school exams, nor should they expect that students would intuitively understand how to transfer the skills taught in their legal writing classes to an exam context.
Finally, some law professors may not want to “teach to the exam.” However, teaching law students how to organize their thoughts is not limiting what they can think; it is simply providing them with a structure within which to present their thoughts. As discussed above, the ability to apply the law to facts in a coherent way is a fundamental lawyering skill. Thus, teaching law students how to think in an organized way is not teaching to the exam; it is teaching them how to think like a lawyer.
For these reasons—and perhaps others—doctrinal professors often avoid teaching exam-writing skills in their classes. Their failure to teach exam-writing skills is a missed opportunity. The next section explains that simply adding practice and feedback appears to improve exam performance.
II. Practice and Feedback Alone Improve Students’ Exam Performance
A series of recent studies has shown that when law students practice writing exams and then receive feedback about their efforts, the quality of students’ final exams improves.
Most recently, in 2016, Professor Daniel Schwarcz and Dion Farganis conducted an investigation at the University of Minnesota Law School regarding the effects of taking practice exams and receiving feedback on those exams. At the University of Minnesota Law School, students are assigned to sections of 40 to 50 students. Occasionally, for staffing reasons, students from two different sections are combined into a double-section of 80 to 100 students. Schwarcz and Farganis compared the performance of students in double sections when only one-half of the double section had previously had an opportunity to practice and receive feedback on their exams. They found that “students from the sections receiving individualized feedback outperformed the students from the section that did not in every single class.”  Importantly, they found that the positive impact was concentrated among students who had below-median LSATs. The median LSAT score for students in the study was either 164 or 165.
The impact of the intervention was significant. In Schwarcz and Farganis’s study, a student who had previously received feedback improved his or her grade in the double section class by .118 (so, for example, from a 3.000 to a 3.118), when all of the other variables were held constant. A difference of roughly .1 on a 4.0 GPA scale can impact student outcomes. In a typical year at the University of Minnesota Law School, “a .134 increase in GPA can improve a student’s class rank by a full 50 places, which in turn can mean the difference between being ranked in the second versus the first quartile.” That improvement was the result of “receiving individualized feedback in one—or at most two—classes during the first year of law school.”
Professors Andrea Curcio, Gregory Jones, and Tanya Washington also investigated the effect of feedback on students’ performance in a civil procedure class. They, too, found positive effects, although the positive effects were not as pervasive. In 2007, Professor Curcio gave her first-year Civil Procedure class five single-issue essay questions over the course of the semester. After students turned each assignment in, the professor provided an annotated model answer. In addition, the professor spent some class time discussing the models and providing time for self- and peer-edits. The students in Professor Curcio’s class and in another Civil Procedure class then took the same final exam. Professor Curcio and the other professor graded all the exams from both classes. The results showed that students who practiced and received feedback performed better than those students who had not had the opportunity to practice and receive feedback. Most of the benefit, however, accrued to students who had above-the-median LSAT scores. For those students with below-the-median LSAT scores, the study found no statistically significant difference between scores of those who had received the feedback and those who had not. In 2010, the school’s median LSAT was 161.
A few years later, Professor Curcio and her co-author Professor Carol Sargent conducted another study in which they investigated the effect of practice and feedback on students’ exam performance. In that study, Professor Curcio compared the final exam grades from her 2008 Evidence class with final exam grades from her 2009 Evidence class. The students in the 2008 Evidence class had only one final exam and no opportunities for practice or feedback during the semester. In 2009, Professor Curcio required her Evidence students to take five ungraded quizzes and a graded midterm. After each, students were given model answers and grading rubrics to evaluate their work. They were also asked to engage in reflective exercises intended to help them assess their comprehension and prepare for the final exam. She found that, overall, students in the 2009 class performed better than the students in her 2008 class. Looking at the results more closely, Professor Curcio determined that the majority of the benefit accrued to those in the top two-thirds of the class, as measured by LSAT scores and undergraduate GPAs. For those in the top two-thirds of the class, the benefit was “moderate to large,” with students seeing scores that were almost a full letter grade higher as compared to those with similar academic credentials in the 2008 class.
Other anecdotal evidence suggests that practice and feedback enhance student performance. In a first-year Torts class, Professor John Burman introduced two out-of-class writing assignments. One assignment asked students to argue whether a court should apply a strict liability standard or a negligence standard. The other assignment was a typical issue-spotting essay. Students received minimal individual feedback, but Professor Burman reviewed the assignments in class and discussed common mistakes or misunderstandings. In this admittedly “unscientific experiment,” Professor Burman wrote that the students “appeared to have learned the material better than in my previous classes. . . .” After grading the final exam, a traditional issue-spotting exam similar to his previous exams, Professor Burman concluded that although the “class average was similar to the previous year’s there was a significant difference: I found no bad exams. No exams were even close to a D—a rare occurrence in a first-year course. In particular, students did a much better job of discerning the issues, organizing their answers, and discussing the elements of the defenses to those claims.”
Together, the results of these studies suggest that practice with feedback improves final performance. None of these articles, however, suggests that students received any instruction on how to effectively apply legal rules to the hypothetical fact patterns. This article adds to those studies by suggesting a conceptual model that can better guide both the students’ efforts and the professors’ feedback.
III. A More Complete Approach: The Apprenticeship Model
Although practice and feedback will help students improve their exam performance, professors can do more to help their students develop the analytical reasoning skills that are essential, not only to law school exams, but also to passing the bar and practicing law. The apprenticeship model describes teaching strategies that law professors can deploy to facilitate skill development.
Under the apprenticeship model for learning, skill development is understood as a journey from novice to expert. An expert, unlike a novice, has “mastered well-rehearsed procedures, or ‘schemas,’ for thinking and acting.” Those schemas “enable experts to bring their knowledge to bear on situations with remarkable speed and accuracy.” Often, however, the schemas that experts can so rapidly deploy are invisible to the novice and are, therefore, more difficult for novices to learn. Under this theory, learning is most likely to occur when the expert makes key features of a skill visible to the novice so that the novice can appropriate the skill. In particular, a novice is likely to learn best when an expert “model[s] performance in such a way that the learner can imitate the performance while the expert provides feedback to guide the learner.” Gradually, through attempted imitation and feedback, the novice can make the expert performance his or her own. As the novice internalizes the expert’s skills and gains competence, the need for and the reliance on the expert fades.
Thus, if professors want to help move their students from their position as novices and toward competency, certain strategies will be particularly effective. Professors should
- Be explicit about learning goals and the means to meet those learning goals.
- Provide scaffolding, or schemas: conceptual models that guide the learner in mastering a complex skill through smaller steps.
- Model the skill being taught by performing the skill in front of students.
- Ask students to perform to make their skill level visible and available for critique and analysis.
- Coach students by providing feedback.
The sections that follow examine the research that relates to each of these strategies and how each strategy can be applied to teaching exam-writing skills in the doctrinal classroom. Because each strategy has been the subject of significant scholarly research and analysis, I begin each section with a short explanation of the research that supports a particular strategy. Then, I recommend ways in which each strategy can be used to teach exam-writing skills in the doctrinal classroom.
Throughout, my goal is to keep one eye on what students need and the other eye on what professors can reasonably give. The task of providing students with necessary exam-taking skills need not be onerous and, as discussed above, the benefits can be great.
A. Be Explicit
This first strategy—to be explicit—underlies all others. To facilitate skill development, professors must explicitly state what skills they want students to learn and the reasons for learning those skills. A statement that a skill is important, that it is being taught, and, perhaps most importantly, that it will be tested in the future prompts a student to take notice. A student who is on notice is more likely to see the skill and absorb it than a student who is unaware that the skill is being taught. Once a student sees a skill and begins to absorb it, a statement that the same skill will be useful in future contexts also makes it more likely that the student will transfer that knowledge to those contexts.
Thus, for professors who want to teach exam-writing skills in class, the first step is to simply acknowledge that that is a goal. Then, when those skills are being developed or tested in class, professors should call attention to that fact. Professors should remind students that the ability to apply the law to facts is one of the skills that the professor is looking for in an exam answer and that that same skill is necessary to pass the bar.
Often, professors are not sufficiently explicit with students. Many professors do spend class time walking students through the process of applying law to facts. Frequently, a professor will provide a hypothetical fact pattern and ask a student (the “on call” student) to apply a rule, previously discussed, to the hypothetical fact pattern. But, in that exchange, what is the professor trying to teach? What is the student supposed to learn? The professor could be demonstrating how to break a rule down into its component parts and test for each component part—the very skill that students will need for their exams. Or, the professor may be providing a new fact pattern to illustrate the contours of the rule and the facts that can trigger one outcome versus another. Or perhaps the professor is helping students to develop their issue-spotting abilities by identifying a novel fact pattern that can trigger the same rule. The professor could be teaching some, none, or all of those skills. When the professor fails to identify the skill or skills being taught, the students are far less likely to notice, absorb, or be able to use them. Thus, one necessary step in bringing skills instruction into the doctrinal classroom is to simply identify the skills that are already being taught.
In addition to identifying skills as they are being developed, professors can use their syllabi as a way to communicate the skills they want students to learn. Currently, most syllabi (and textbooks) are organized according to the legal concepts, and not the skills, that are being taught. By failing to articulate the skills that are being taught, the professor makes it more difficult for the student to identify the skills that they could potentially learn in the classroom.
Being explicit will take a little thinking time. It requires professors who have not done so already to think about their goals and what they hope their students will learn. But then, all that is necessary is to say those thoughts out loud or to write them down on a syllabus.
B. Provide a Conceptual Model
One way in which a law professor could be more explicit is by providing a strategy for approaching the kinds of legal problems that regularly appear on law school exams. One of the signature differences between an expert and a novice is that, as mentioned above, the expert has “well-rehearsed procedures, or schemas, for thinking and acting.” These practiced patterns of thought allow experts to resolve problems “with remarkable speed and accuracy.”
The novice does not have any schemas that will structure her thoughts or actions. As a result, the novice “may understand the specifics of a substantive area,” but she “will be unable to use her knowledge effectively because she will not know the structure of the discourse, the order in which to present ideas, . . . [or] what information she needs to make explicit versus what information is understood implicitly.”
For the novice to progress toward competency, those schemas for thinking and acting—sometimes called “heuristic strategies”—must be made available to the novice.
In the law school classroom, law professors are the experts. Having excelled at law school, they have proved themselves able to resolve legal problems (particularly those that show up on law school exams) “with remarkable speed and accuracy.” Law students (as evidenced by their exams) struggle with that same task. One reason they struggle is that they do not have a concrete idea of what it means to apply the law to facts.
Providing a conceptual model or a heuristic strategy for applying law to facts has several benefits. First, a conceptual model can break a complex process into its constituent steps. By focusing on one small step at a time, the learner can avoid cognitive overload, which can inhibit learning. Cognitive overload occurs when learners are overwhelmed by the number of interactive information elements that need to be processed simultaneously. Applying the law to facts coherently and in writing is a complex task. It requires the novice to work with several different elements at once: the relevant legal rules, which the student has had only limited practice working with; the facts from a new, complicated hypothetical; and a method for getting the relationship between the law and facts onto the paper. A schema allows a student to identify the different steps in the process and focus on one at a time.
Second, the mere articulation of the underlying processes opens the possibility for dialogue. Having named the steps in a process, the student and teacher can have a dialogue about whether the student has demonstrated proficiency. Without a common vocabulary, the teacher and student have little ability to analyze or discuss the student’s progress.
Third, a conceptual model can facilitate “transfer.” Transfer is the carrying over of learning from one domain to another. Ideally, students would transfer the ability to apply the rules to facts from one class to another, to the bar exam, and ultimately to practice. A conceptual model can help students do that. With a conceptual model—one that is devoid of specific doctrine or facts—students can more easily see how an analytical process available in one domain can also be used in another. By contrast, if a student does not understand the structure underlying an analytical process, the student will be unable to apply that process to a future problem.
One conceptual model that professors do provide is the “IRAC” structure. “IRAC” is a strategy for solving a legal question. It tells students to solve a legal question by stating the issue (I), stating the rule that will resolve the issue (R), applying the rule to the facts (A), and stating a conclusion (C).
Although that approach is a helpful starting point, it provides insufficient guidance to students. IRAC does not explain how to apply the law to facts. Without a method for applying law to facts, students often get lost and write incomplete or incoherent analyses. For example, in their applications students are apt to address some, but not all, of the rule in reaching their conclusion. Alternatively, they may state a lot—or even all—of the relevant facts and a conclusion, but they may fail to explain how those facts relate to the rule. In a variety of ways, students can apply the law to facts but fail to do so effectively.
Thus, students need a conceptual model solely for the “A” or application. Specifically, professors can provide students with this model:
- State the issue (I).
- State the rule (R).
- Apply the rule (A).
  - Divide the rule into its elements.
  - Make an assertion about whether one element is satisfied or not.
  - Follow that assertion with the facts that prove the assertion.
  - Repeat steps two and three until you have addressed all the elements in the rule.
- Conclude (C).
The above steps provide a conceptual model for students to follow as they approach the application of law to facts.
The schema helps correct problems that often arise in students’ exams. Specifically, students often fail to explain the relationship between the elements of a rule and particular facts. The above schema provides students with a model for creating a clear connection between aspects of the rule and facts. It also explains how to organize an analysis—one element of the rule at a time. Finally, because it requires students to identify and address each element in a rule, the schema helps ensure that the analysis will be complete.
If a student were to use this schema to address a simple exam issue, the student’s answer might look like the example below. Elements are in bold, and the facts that prove the assertion are underlined.
[I:] False imprisonment. [R:] To prove false imprisonment, a plaintiff must show that the defendant intentionally confined the plaintiff within boundaries set by defendant. No physical barrier is necessary to prove confinement; threats of physical harm suffice. Plaintiff is not required to resist or attempt to escape. Plaintiff must be aware of the confinement. The length of confinement is immaterial as to making a prima facie case but is relevant as to the amount of damages.
[A:] Here, Edwina intentionally confined Lenny to boundaries she set by threatening physical harm. As Lenny lay on the ground, Edwina leaned over him, snapping her “razor-sharp shears” close to his stomach and “angrily snarled, ‘Make my day.’” Lenny was aware of the confinement. He was embarrassed and “cowered before Edwina.” The facts do not state how long he was confined, but as stated above, the length of confinement is immaterial. [C:] Thus, Lenny has a claim for false imprisonment.
Professors can explain the approach, as I have here, but it helps to also represent the idea graphically. Students’ learning and the likelihood for transfer can be substantially increased if a conceptual model is presented graphically, in diagrams or flow charts. Moreover, transfer between contexts is more likely when students have been presented with a number of different examples. Thus, providing the same schema in different forms can also improve student learning. A professor might diagram the approach this way:
Legal issue 1
Rule for legal issue 1 = A + B + C
A is satisfied because fact, fact, fact.
B is satisfied. Fact, fact, fact.
C is satisfied. Fact, fact, fact.
Outlined or diagrammed in the above manner, the approach may seem overly mechanical—much in the same way that IRAC might seem overly mechanical—but at this stage in their learning, law students need a mechanical approach. As explained in the Carnegie Report, the novice law student must, first, “learn to recognize certain well-defined elements of [a] situation and apply precise and formal rules to these elements . . . . Following the rules allows for a gradual accumulation of experience.”
As they gain experience, students can use the model more flexibly. For example, if a student wants to prove that a party will not succeed on a claim or defense, the student can march down only those elements that cannot be met. If a student wants to provide a counter-argument, the student can make a contrary assertion and then explain the facts that support that contrary position.
Similarly, professors can approach the model flexibly. Some professors believe that exam answers do not need to state the rule and then also apply the rule: Because an effective application refers to the rule, the application demonstrates the student’s knowledge of the rule. These professors can omit the initial rule statement from their description of the approach.
Given the speed with which students must complete exams, they are unlikely to perfectly execute an application of law to facts without practice; however, a conceptual model gives them a framework with which to start, a model against which they can compare their own efforts, and a vocabulary with which to discuss those efforts.
Providing students with detailed schemas with which to approach their exam answers may be an important step toward improving the quality of student work. As described above, three studies in the law school setting have chronicled the benefits of providing students with the opportunity to practice taking exams. In those studies, although students were provided with feedback, it seems that they were not provided with a detailed schema with which to approach their exam answers. Providing a detailed schema may be an important step in helping a wider range of students benefit from additional practice and feedback.
C. Model the Skill
In addition to providing a conceptual model for working through a legal analysis, professors can also model the process. Modeling “involves demonstrating a skill or process” while providing a “‘running monologue.’” The running monologue describes what is happening and explains the decision-making that occurs along the way.
One way to provide a “running monologue” about writing is to provide a “worked example” of a legal analysis. A worked example is an example with a “step-by-step explanation of a solution to a problem.” A worked example provides students with the opportunity to observe and absorb the conceptual model before they have to actually work with it.
In that way, examples can help students avoid a cognitive load problem. A cognitive load problem occurs when the student is trying to juggle too many mental tasks at once. Composing imposes a “large cognitive load” on students. Certainly that is true on law school exams when they must simultaneously accomplish two novel and arguably sophisticated tasks. They must deploy the substantive law that they have learned—law that is often new, unfamiliar, and seemingly unwieldy—using a degree of precision in their writing that they are likely unaccustomed to. At the same time, they must structure that explanation and application of the law in an effective way. Observing an example allows the students an opportunity to absorb the conceptual model and visualize its use before they must actually work with it themselves.
Numerous studies have shown that providing novices with examples or models allows them to learn more quickly and more easily. In one study, researchers provided one group of undergraduate psychology majors with examples of a well-written “methods section” of a research report. A second group was given examples of a methods section with varying quality. A final group was given no models. Each group was then asked to write a “methods section.” The study concluded that those groups that had received models scored higher on organization and that the models improved the value of the information that the students provided in their reports.
Interestingly, studies suggest that weaker students benefit most from critiquing and analyzing weak models. Weaker models can illustrate predictable problems and allow students to see what not to do “without the cognitive load involved in composition.” By contrast, more advanced students learn best from more advanced examples. A professor can balance the needs of weaker and stronger students by providing “weaker examples at the start of the semester when students have less-developed schemas” and by providing stronger examples later in the semester.
Some form of this modeling already occurs in doctrinal classrooms. When asking a student to analyze how a rule applies to a fact pattern, a professor could have the student walk through the process described above. The student would first identify the elements of the rule. Then, one at a time, the student would state whether each element is met and the facts that lead to that conclusion.
Students can work with examples in a number of different ways, all of which will allow students to begin to absorb the conceptual model. Professors can provide an annotated exam answer that identifies each aspect of the conceptual model and ask students to review it. Although an annotated exam answer provides some assistance, students are likely to better absorb the conceptual model if they actively engage with it. By working with the conceptual model, students are forced to process the model, and they are more likely to remember and later use it.
Professors can help students engage with model answers in any number of ways. Professors might provide an exam answer without annotations and ask the students to annotate it by noting the various components of the conceptual model. Professors might provide a weak exam answer and ask students to identify where the answer strays from the conceptual model. Professors could also ask students to revise the answer. Or a professor might provide two different exam answers and ask students to list the ways in which one is stronger or weaker than the other relative to the conceptual model. To avoid taking too much time away from class, professors can ask the students to work with the models as homework.
Again, professors should be explicit about the reasons for the assignment. Professors will want to tell students that the goal is for them to learn a structured way to think about and write an exam answer and for them to recognize the qualities of a “good” legal analysis.
After students turn in the assignment, professors can provide feedback in a number of ways. Professors can provide an answer key that identifies the areas where the sample answer strays from the schema. Alternatively, professors can take ten or fifteen minutes of class time to have that same discussion in class. If a teaching assistant is available, professors could ask that assistant to address the assignment with students outside of class time. Professors who want to review the work themselves might consider asking students to work in groups of two or three. Working in groups allows students to provide each other with feedback, and it reduces the number of assignments that need to be reviewed.
Although creating annotated models of exam answers can take some time, once those models are created, they can be used year after year. Many professors already have model answers or easy access to very strong student answers that can be tweaked to create a model of a strong legal analysis. To create a weaker exam answer, professors can simply revise the strong answer so that it includes the problems that arise most frequently or that do the most damage.
Although there are many approaches professors can take, the fundamental idea is that students—especially weaker students—will be helped if they first have an opportunity to internalize the conceptual model before they have to actually produce their own work product.
D. Ask Students to Perform
After having an opportunity to internalize the conceptual model, students must be given the opportunity to practice. To learn, students must mentally process the lessons of the day. Practice requires students to retrieve information provided in class. The process of retrieving information embeds the information in long-term memory and makes it more likely that students will be able to retrieve and use the information in future contexts. In addition, practice—which allows students to actually use the material taught in class—is likely to keep students more engaged and deepen their understanding of the doctrine being taught.
To help students improve their exam-taking skills, students should be given practice exams that are like the exams they will take at the end of the year. Studies in other disciplines have shown that practice tests are most effective in improving student performance if the practice exams are similar in format and difficulty to the final exam.
Preferably, students will be required to practice their exam-writing skills on multiple occasions. Experts are able to apply skills “fairly automatically.” Practice provides the novice with the opportunity for skills to become more automatic. Through practice, a student can “own” the information that was merely described in class.
To promote transfer, the practice should occur in a variety of settings. As mentioned above, students tend to tie information to the particular subject or environment in which it was learned. When students see the same structure in different contexts, they can generalize the structure, making it more likely that they will retrieve it in a new context.
The exercises should also become increasingly challenging as the semester progresses. By beginning with less challenging problems, students have the opportunity to practice and absorb the underlying schema, so that by the time the law and analysis become more challenging, their use of the conceptual model is more automatic.
The need for multiple practice opportunities and for practice in a variety of contexts creates an opportunity to share the burden. At most law schools, students are organized into sections. Within one section, a group of students attends all the same classes with the same professors. Students in another section take classes in the same subjects, but with a different set of professors providing the instruction. Professors working with the same group of students could coordinate the instruction and workload. Each professor could require students to take one practice exam. The professors could agree to space out the exams so that they occur at different times of the semester. The professor giving the earliest practice exam would give an easier, shorter—perhaps single-issue—practice question. The professor giving the last opportunity to practice could give a multi-issue question. Each professor would give only one practice opportunity, but in the aggregate, students would have a series of increasingly challenging opportunities to practice their written legal analysis.
Ideally, the professors would first discuss among themselves their expectations for students’ work product. Perhaps they would each agree to the conceptual model described in this article, or perhaps some variation. By discussing their expectations in advance, the professors can be explicit with students about the goal of the assignment. They should explain to students that the assignments are coordinated and that they will become increasingly challenging as the semester progresses.
Professors should also acknowledge that—at a general level—they expect a similar work product. To that end, doctrinal professors should discuss not only amongst themselves but also with their legal research and writing colleagues the vocabulary being used to describe rule-based reasoning. Understanding other professors’ approaches will allow all professors to explain any differences in their expectations or in the language they use. Doing so will allow students to see how the structure underlying their legal analysis can vary and the ways in which it remains the same in different contexts.
Practice is essential to improving a skill. As one scholar points out, “[a]thletes learn skills, not by talking about them, or by doing them once or twice, but by doing the same (or a very similar) exercise over and over again.” However, for that practice to lead to improvement, the practice must be accompanied by feedback.
E. Provide Opportunities for Feedback (But Not Always From the Professor)
For practice to be effective and assist student learning, students must receive prompt feedback on their efforts. This feedback should identify gaps in students’ learning and help students close those gaps. To help students close the gap, feedback should provide more than a correct answer; it should explain why an answer is correct or, in the case of an incorrect answer, explain how to improve.
Feedback can come in many different forms. The sections below describe different kinds of feedback that, given what we know, seem likely to help students close the gap between learning goals and their performance. The forms of feedback include self-assessment, peer assessment, and professor assessment.
As always, professors should be explicit about the reason for the feedback and the goals. A professor might, for example, explain that one goal for the assessment is for students to be able to improve their exam-writing skills before the final exam. In the case of self-assessment, a professor might also point out that being able to self-assess one’s own work is key to one’s professional development. This self-assessment is practice in developing that key skill. If it is a peer assessment, a professor might point out that reviewing another person’s work will provide fresh insight into students’ own work.
1. Self-assessment and Reflection
The ability to self-assess is a necessary pre-condition for learning. If students do not understand the weaknesses in their own work, they have no chance of bridging the gap between their level of performance and a more expert level of performance. Unfortunately, weaker students are significantly less able to see the gaps in their own work, even when provided with an example of excellent work. Instead of seeing the differences between their work and the work in the stronger example, weaker students tend to defend their response and assert that it has the same or equivalent content.
Even if they do see the gaps in their work product, weaker students are less likely to know what steps to take to bridge the gap. Thus, students may need to be prompted to consider what caused the gaps in their work and how they might bridge those gaps.
Professors have a variety of ways to help students in their self-assessments. An annotated model answer is a good start. An annotated model answer would point out key aspects of an effective exam answer. The model should point out not just the substantive areas that are addressed in the answer; that will be obvious to the students. It should also point out the structural and organizational aspects of the answer. It might, for example, highlight in different colors the rules and facts, so that students can see how the application tracks with the rules and connects an assertion about the rule to the facts.
Students will, however, be much more likely to learn from the model answer if a professor creates an assignment that requires students to engage with the model. For example, a professor might ask students to read an annotated model answer and then submit a statement about what the students have learned from comparing the model answer to their answer and what steps they plan to take to continue to improve. The professor can then review the students’ self-assessments rather than the actual exams. In the past, I have asked my students to submit these kinds of self-assessments. They are great to read. The statements generally show that students have identified some important aspect that they need to work on and some reasonable way to improve their work. On those papers, I can simply write, “Great! You are seeing how to improve your work.” The students may not identify every aspect that needs to be improved, but as long as students identify one aspect of their answer that they can work on, the students are likely to improve on their next attempt. For those self-assessments that seem to miss the mark, I take a little bit longer to look at their work product and suggest an area for improvement. Because most students can identify one area that needs improvement and a reasonable method for doing so, my workload is greatly reduced.
Professors can also provide students with a rubric such as the one in Appendix B. That rubric asks students to compare their work to a model answer. Because it tracks the conceptual model, the rubric helps students identify gaps in their learning. Importantly, the rubric also asks students to think about why their answer differs from the model and what they might do about it. That directed reflection can help students determine the steps that they need to take to close the gap between the model answer and their own. Professors can keep their workload to a minimum by reviewing the students’ self-assessment for a good faith effort.
A teaching assistant—if one is available—can also help with the workload. A teaching assistant could compare the self-assessment rubric to the student’s work-product. If the self-assessment seems to correctly diagnose the problem, the teaching assistant can simply write a note that the student is correctly seeing the areas that the student needs to work on. The teaching assistant can also pull any self-assessments that do not properly identify the most significant problems. The teaching assistant can either point out on the paper areas where the student is not seeing problems or, if time permits, meet with students who struggle to appropriately self-assess their work.
Importantly, students can review model answers and complete the rubric as part of their homework. Thus, students can receive feedback with minimal sacrifice of class time.
2. Peer Assessment
Peer assessment is another way in which students can receive feedback on their work. Just as a model answer and rubric can be given to individuals to self-assess their work, the same model answer and very similar rubric can be given to students to assess the work of a peer.
Peer assessment has some advantages. First, it provides “new eyes.” Those new eyes will not suffer from any bias the writer might have toward his or her own work product, nor will the new eyes fill in gaps with what the writer intended, but did not, in fact, say. Second, reviewing someone else’s work puts students in the position of the audience and allows them to imagine how written work can appear to the person who has not written it. That same position as a reader allows students to see alternative approaches and different analyses. Finally, peer feedback requires active engagement with the material because every student must produce feedback for another. Like other forms of active learning, peer feedback helps students internalize earlier instruction by actually using it.
However, as with model answers, the peer who receives the feedback must engage with the feedback. Thus, professors might require students to submit a statement about what they learned from receiving feedback or questions that they have.
Another approach to peer feedback is to require a small group of two or three students to meet to discuss their feedback. The peer group can then provide the professor with a statement about what group members have learned or questions that came up in the process. That could be the end of the peer assessment, or the students could engage in one more task: after providing feedback to each other, the students can choose the best answer among the group and submit it, annotated to indicate aspects that they believe are strong and aspects that can be improved further.
Peer assessments keep the burdens on the professor manageable. There are, of course, administrative tasks of assigning groups and accounting for assignments. Some learning management systems, such as Canvas and Blackboard, can automatically and randomly assign students to review other students’ work. Alternatively, if a professor wants to ensure that peer groups are academically balanced, professors could ask the registrar to create three-person groups in which each group has a student with high, middle, and low academic profiles. Those same learning management systems can also keep track of assignments and allow a professor to easily determine whether everyone has handed in an assignment.
Peer assessments may generate some questions that you will need to address in class or in office hours; however, they allow students to engage with the material and reflect upon it, and professors can avoid the more significant burden of personally providing individual feedback.
3. Professor Assessment
Not surprisingly, students like to receive feedback on their work from the professor who will be grading their final exam. Moreover, feedback from professors is “one of the most instructionally powerful” tools, even if it is one of the “least understood features in instructional design.” Professors can provide feedback in a number of ways, many of which will minimize the burden on the professor. The discussion above has already touched on some of those methods.
First, professors can work together. By sequencing assignments across classes in a section, students can get both the practice and feedback that they need without overburdening any one professor.
Second, students can collaborate on their work, and the professor can provide feedback on the collaboration. If students work in groups of two or three to produce a practice exam answer, a professor’s workload will be reduced by half or a third. As one commentator pointed out, the important point is that the feedback be specific. It need not always be individualized.
Professors can also save time by reviewing self-assessments. As mentioned above, a professor can provide students with a model answer or rubric and ask them to self-assess. You can review the self-assessment to see if it seems like the students have learned something valuable—even if it is not everything you would want them to learn.
Finally, professors can provide class-wide feedback. An annotated model answer is one example of class-wide feedback. That model answer will, however, be more effective if the professor provides additional commentary. For example, a professor can review student practice exams and identify those problems that crop up most frequently. The professor can then create a hand-out or have a classroom discussion in which the professor identifies both strong work and why it is strong as well as typically weak work and why it is weak. For example, a professor could provide an “A” answer, a “B” answer, and a “C” answer, with an explanation about why each exam answer received the grade that it did. In the first year or two, reviewing student work (even without providing individual feedback) will create some burden. However, students tend to make the same mistakes year after year. Thus, whatever materials professors create in the first year or two of the assignment can likely be reused in future years.
All of the above methods are opportunities to provide feedback, which will help students identify the gaps in their learning and help them close those gaps. Of course, many professors will still want to provide individual feedback to each student. In my experience, providing individual feedback helps students connect the conceptual model with what they are actually doing in their work product. Students, for example, often do not know what it means to provide “specific” facts. First-year students tend to provide more generalized versions of facts. A comment to a student that, “Here, on your exam, these facts are not specific enough. Compare them to the model answer,” helps that student appreciate what exactly “provide specific facts” means.
Providing at least one round of individualized feedback does create additional work, but it is manageable. One professor estimated that grading 62 take-home essay questions and providing “significant individualized feedback” took her approximately 15 hours. She then spent another four or five hours in student meetings because students who did not do well wanted to discuss their performance. In addition, professors will likely spend a few more hours drafting practice questions, model answers, rubrics, and self-reflective exercises. Happily, those materials can be re-used each year.
One final consideration is whether to grade students’ work. Grades are another form of feedback because they inform a student about the quality of his or her performance. Moreover, to the extent that a practice exam accounts for a part of a student’s final grade, it may justify a shorter final exam, thereby keeping a professor’s overall workload more manageable. Grades can, however, be detrimental to learning.
In particular, norm-referenced grades can be detrimental to a student’s learning. In a norm-referenced grading system, grades are awarded by comparing the performance of one student against the performance of another. The purpose of a normative grading system is to sort students. Unfortunately, assessments that compare a student’s performance to others in his or her class have been shown to inhibit learning, especially for poor performers. This is so because poor performers are most likely to attribute their poor performance to lack of ability and foresee future poor performance. The negative comparison thereby decreases a student’s motivation to improve. Most law school grades are norm-referenced.
Criterion-referenced grading presents an alternative to norm-referenced grading. In criterion-referenced grading, a professor awards grades by comparing work product against a performance standard. The philosophy underlying a criterion-referenced grading system is that the purpose of education is to produce competent professionals, and the grade reflects the level of competency. Studies have shown that when teachers focus on mastery of a skill rather than on grade performance, students are more likely to absorb a lesson and to be able to transfer that lesson to future contexts.
Although criterion-referenced grading represents an alternative to norm-referenced grading, it can still interfere with student learning. Studies suggest that when students receive a grade, they focus on the grade rather than on the feedback that comes with it.
For these reasons, students will most likely benefit if initial exam-writing practice is ungraded. Ideally, students would have one or more opportunities to practice writing exams and to receive feedback before they receive a grade.
To make grading a little less time-consuming, professors can grade holistically. A holistic grade assesses the overall quality of an exam answer; it would not assign points to each individual issue spotted. The 1 to 6 scale described in Part II.B, Improved Bar Exam Skills, is a norm-referenced holistic scale that many bar examiners use to grade essay exams. Appendix C provides a criterion-referenced holistic grading scale, which may be more supportive of student learning.
Law professors want their students to expertly apply law to facts, and they want that skill on display in students’ final exams. Yet, professors frequently fail to provide the necessary instruction. This article intends to fill that gap by explaining a comprehensive method for teaching exam-writing skills:
First, professors should provide students with a conceptual model. IRAC is a start, but because students struggle with applying law to facts in a coherent, systematic way, professors should provide a more detailed conceptual model for the application. This article provides one such conceptual model. That conceptual model can then be the basis for professors and students to think about and communicate about their analyses.
Second, students should work with the conceptual model before they have to actually produce their own exam answer. Doing so allows them time to absorb the model. Students can work with a model exam answer by labeling the parts of the example that correspond with different parts of the conceptual model, or students can be asked to improve a weak example. Weaker students may benefit most from seeing a weak model.
Third, students must practice and then practice some more. Professors can alleviate the burden of providing multiple practice opportunities by coordinating with colleagues who are teaching the same group of students. Students will benefit not only from multiple practice opportunities, but also by seeing how the same or a similar model can be used in different contexts.
Fourth, provide feedback. Feedback can come in many forms. This article explains a variety of ways students can receive feedback, including self-assessment, peer-assessment, and assessment from the professor. At bottom, “[m]ost feedback improves student performance.” Whatever the form of the feedback, the goal should be to help students to identify gaps in their learning and understand how to close those gaps.
Finally, professors must be explicit. At every step, professors should explain what they are doing and why they are doing it so that students can follow along.
The above represents an ideal model for teaching law students the essential skill of applying law to facts. Life is often, of course, less than ideal. If, however, each law professor does just a little bit more to explicitly teach exam-writing skills, students will, over the course of their three years in law school, more effectively learn the skills they need to succeed on law school exams, to pass the bar, and to practice law. Perhaps equally important, providing students with exam-writing instruction will make law school a healthier, more welcoming, and more productive place for all of our students.
Appendix A provides another example of how the model described above can be used to answer an exam question. The sample exam answer below addresses whether a state can ban cryogenic freezing. The exam question asked students to consider all claims that the executrix of an estate would have against a state, which had instituted a ban against cryogenic freezing. The excerpt below addresses only the first part of a substantive due process claim—the analysis of whether the right to cryogenic freezing could be deemed a fundamental right.
Although this constitutional law exam answer is less fact-rich than other exam answers might be, it demonstrates how the model described above can be used in different contexts. As suggested by the model, the exam answer uses key aspects of the rule to organize the analysis and it repeats key aspects of the rule in the application to make the relationship between the rules and the facts clear.
[I:] Bobby will have a substantive due process argument. The first step in any substantive due process analysis is to determine whether there is a fundamental right involved.
[R:] Fundamental rights include the right to autonomy and bodily integrity. In particular, in both Roe and Lawrence, the Court recognized the individual’s fundamental interest in autonomous decision-making regarding one’s own body. With respect to end of life decisions, the Supreme Court has held in Cruzan that a person has a fundamental right to refuse medical treatment, even if that refusal results in the person’s death. By contrast, in Glucksberg the Supreme Court held that the right to commit suicide (or to have someone assist a suicide) is not a fundamental right. In Glucksberg, the majority concluded that assisted suicide is not a fundamental right because it was not a right that was “deeply rooted in the history and tradition of our nation.” The Glucksberg court distinguished the Cruzan decision by explaining that, historically, forced medical treatment was considered a battery, and, therefore, historically there had been a fundamental right to refuse medical treatment, but there was no historical access to suicide.
[A:] The state has a strong argument that cryogenic freezing is not a fundamental right. Most notably, there is not a “deeply rooted” tradition in our nation of cryogenic freezing nor is there a deeply rooted tradition of preserving oneself in a way that will allow oneself to come back to life after death. Moreover, this case is distinguishable from Cruzan in that Cruzan allowed individuals to refuse treatment based on an historical right to avoid forced medical treatment. There has never been a fundamental right to obtain treatment that will allow someone to one day come back to life.
[C:] The state will argue that no such right should be acknowledged here.
[Counter-argument:] Bobby will, of course, have to argue that the right to preserve himself cryogenically is a fundamental right. To make this argument, he will have to argue not that it is a fundamental right to choose cryogenic freezing, but that it is a fundamental right for the individual to choose how his body is treated after death. [Application of above rules:] First, he might argue that the right to determine what happens with one’s remains implicates one’s interest in bodily integrity. Moreover, historically, the law has enforced wishes expressed by the living after they have died. Respecting the last wishes of the living regarding the disposition of their property—including their body—is deeply rooted in the history of our nation.
[I:] Moreover, he might argue that the right to choose how to dispose of his body also implicates his right to privacy. [R:] The right to privacy includes the right to make the “most intimate and personal choices . . . central to personal dignity and autonomy” (Casey). [A/C:] The right to choose how one’s body is disposed of is also an intimate personal choice central to a person’s dignity.
[I:] Finally, the inability to choose cryogenic freezing could also implicate the right to order one’s family living arrangements. [R:] The Court has frequently protected an individual’s right to make personal decisions about one’s family life—whether the choice be about with whom to live (Moore) or how to educate one’s children (Pierce). [A:] Banning certain choices about how to dispose of the body of a family member interferes with an individual’s right to make a personal decision about family rituals surrounding death. [C:] By interfering with those family rituals, the ban on cryogenic freezing arguably intrudes into the fundamental right to choose how to order one’s family life.
Ultimately, whether Bobby’s wife can establish a fundamental right to freeze her husband cryogenically will depend on how the right is framed: Is it the right to cryogenically freeze oneself, or is it the right to determine what happens with one’s body after death?
If the court holds that there is not a fundamental right to cryogenically freezing one’s body . . .
Essay Grading Standards
90-95-100 A grade of 90-100 demonstrates a high degree of competence in response to the question. While not reserved for a perfect answer, a 90-100 answer demonstrates a full understanding of the facts, a complete recognition of the issues presented and the applicable principles of law, and a good ability to reason to a conclusion. A 90-100 answer is clear, concise and complete.
80-85 A grade of 80-85 demonstrates clear competence in response to the question. An 80-85 answer demonstrates a fairly complete understanding of the facts, recognizes more of the issues and applicable law, and reasons fairly well to a conclusion.
70-75 A grade of 70-75 demonstrates competence in response to the question. A 70-75 answer demonstrates an adequate understanding of the facts, an adequate recognition of most of the issues and law, and adequate ability to reason to a conclusion.
60-65 A grade of 60-65 demonstrates some competence in response to the question but is inadequate. A 60-65 answer demonstrates a weak understanding of the facts, misses significant issues, fails to recognize applicable law, and demonstrates inadequate reasoning ability.
50-55 A grade of 50-55 demonstrates only limited competence in response to the question and is seriously flawed. A 50-55 answer demonstrates little understanding of the facts or law and little ability to reason to a conclusion.
40-45 A grade of 40-45 demonstrates fundamental deficiencies in understanding facts and law. A 40-45 answer shows virtually no ability to reason or analyze.
0 A grade of “0” should be assigned only when the applicant makes no attempt to answer the question, or when the answer shows no reasonable attempt to identify or address the issues raised by the question.
In connection with assignment of very high grades, it should be emphasized that the grade of 100 is not reserved for a “perfect” answer; it is not reserved even for a single “best” answer, which a grader may encounter to a particular question or on a particular examination. A grade of 100 may be assigned if the grader believes that the applicant has done an exceptional job considering the time and circumstances.
See, e.g., Robert C. Downs & Nancy Levit, If It Can’t Be Lake Woebegone . . . A Nationwide Survey of Law School Grading and Grade Normalization Practices, 65 UMKC L. Rev. 819, 822–23 (1997); Steven Friedland, A Critical Inquiry Into the Traditional Uses of Law School Evaluation, 23 Pace L. Rev. 147, 164 (2002); Herbert N. Ramy, Moving Students from Hearing and Forgetting to Doing and Understanding: A Manual for Assessment in Law School, 41 Cap. U. L. Rev. 837, 839 (2013); Roy Stuckey et al., Best Practices for Legal Education: A Vision and a Roadmap 236 (2007).
Whether you see a recent “decline” in bar passage rates may depend on your perspective. In 2008, the overall bar passage rate for applicants across the nation was 71%. Since then, bar passage rates have declined. The most notable decline was between 2014 and 2015 when national bar passage rates declined from 64% to 59%. See National Conference of Bar Examiners, 2015 MBE Statistics, http://www.ncbex.org/assets/media_files/Bar-Examiner/articles/2016/BE-March2016-2015Statistics.pdf (last visited Sept. 2, 2017). Although law schools have felt a decline in the past eight years, the pass rates are not (as of yet) lower than they have been historically. Throughout the eighties, nineties, and up until 2008, overall pass rates fluctuated from a low of 63% to a high of 74%. See statistics provided by National Conference of Bar Examiners Publications and Research (May 31, 2016, 12:21 p.m.) (email on file with author). Thus, 2008 was a high-water mark, and 2015 was a low-water mark. The question is whether 2015 marks the beginning of a trend toward significantly lower bar passage rates or whether that year was an aberration.
Friedland, supra note 1, at 197–98.
See infra Part I.B.
See infra Part I.C.
See infra Part I.D–E.
See, e.g., Judith Wegner, Better Writing, Better Thinking: Thinking Like a Lawyer, 10 Leg. Writing 9, 18–19 (2004) (explaining that law professors have “tacit knowledge”); Suzanne J. Schmitz & Alice M. Noble-Allgire, Reinvigorating the 1L Curriculum: Sequenced “Writing Across the Curriculum” Assignments as the Foundation for Producing Practice-Ready Law Graduates, 36 S. Ill. U. L.J. 287, 288–89 (2012) (“[F]ew professors offer instruction on how to draft the analysis . . . .”).
William M. Sullivan et al., Educating Lawyers: Preparation for the Profession of Law 25–27, 61, 98 (2007) [hereinafter Carnegie Report].
Id. at 98.
Susan J. Becker, Advice for the New Law Professor: A View from the Trenches, 42 J. Leg. Educ. 432, 442 (1992).
Wegner, supra note 7, at 18 (“Law professors also expect students in class to develop the ability to apply the law to novel fact patterns.”); see also Becker, supra note 10 (“Law professors repeatedly pontificate that mastery of law requires much more than mere memorizing of black letter law: one must develop razor-sharp analytical skills.”); James Jay Brown, Forging an Analytical Mind: The Law School Classroom Experience, 29 Stetson L. Rev. 1135, 1139–40 (2000) (identifying “[a]nalytical reasoning skills” and “[a]n ability to effectively communicate [that reasoning]” as two of the “six educational goals that are generally espoused by professors”); Donald J. Kochan, “Thinking” in a Deweyan Perspective: The Law School Exam as a Case Study for Thinking in Lawyering, 12 Nev. L.J. 395, 407 (2012) (“[T]he effective exam taker must master the skill of assessment of the relevant law and facts and application of law to facts.”).
See, e.g., Andrea A. Curcio, Gregory Todd Jones & Tanya M. Washington, Does Practice Make Perfect?: An Empirical Examination of the Impact of Practice Essays on Essay Exam Performance, 35 Fla. St. U. L. Rev. 271, 288 (2008) (“[T]he majority of points were allocated to factual application and analysis.”); Elizabeth Fajans, Hitting the Wall as a Legal Writer, 18 Leg. Writing 3, 7 (2012) (quoting Email from Ruth McKinney, Clinical Prof. Law, U. N.C. Sch. L., to Elizabeth Fajans, Assoc. Prof. Leg. Writing & Writing Specialist, Brooklyn L. Sch., A Papers (July 7, 2010, 9:32 a.m. EST) (copy on file with Elizabeth Fajans)) (“The thing that is MOST important [to professors on exams] is pristine logic followed by precise communication of that logic.”); Michael T. Gibson, A Critique of Best Practices in Legal Education: Five Things All Law Professors Should Know, 42 U. Balt. L. Rev. 1, 8–10 (2012) (“Application is the key to many essay exam questions, and its value is why we encourage students to write out answers to old exams.”); Ramy, supra note 1, at 859 (“For example, many professors value a student’s legal analysis more than a correct recitation of a memorized legal principle.”); see also Steve Sheppard, An Informal History of How Law Schools Evaluate Students, with a Predictable Emphasis on Law School Final Exams, 65 UMKC L. Rev. 657, 679–80 (1997) (noting that a 1952 Handbook of Law Study stated that the modern essay questions was a test “not of knowledge of the law, but of its use”).
Patrick Wiseman, “When You Come to a Fork in the Road, Take It,” and Other Sage Advice for First-Time Law School Exam Takers, 22 Ga. St. U. L. Rev. 653, 655 (2006).
Andrew B. Ayers, A Student’s Guide to Law School: What Counts, What Helps, and What Matters 34 (2013) (quoting several of his law professors when he was in law school); see also Friedland, supra note 1, at 197–98 (“The primary skill tested [in law school exams] is analytical reasoning.”).
See, e.g., Philip C. Kissam, Law School Examinations, 42 Vand. L. Rev. 433, 440–41 (1989) (identifying “rule application” as one of three critical exam skills); Peter F. Lake, When Fear Knocks: The Myths and Realities of Law School, 29 Stetson L. Rev. 1015, 1047–48 (2000) (“In general, when using cases on exams, realize that the maximum effectiveness of case usage is when a student can take a case and identify a specific rule of law and work with the facts of that particular case to compare and contrast it with the issue that is presented.”); Wiseman, supra note 13, at 660 (providing example).
Nelson P. Miller & Bradley J. Charles, Meeting the Carnegie Report’s Challenge to Make Legal Analysis Explicit—Subsidiary Skills to the IRAC Framework, 59 J. Leg. Educ. 192, 208 (2009).
There are law school essay exams that do not require or do not emphasize the application of law to facts, such as an essay exam that asks students to evaluate the policy reasons behind the law or to advocate for changes in the law. This article is focused on the traditional issue-spotting essay exam in which the student is presented with a complex fact pattern and the student must identify the legal issues it raises and assess the arguments that opposing parties will make about those issues. That kind of essay exam remains central to law school education. See, e.g., Friedland, supra note 1 (identifying the “issue spotter” as the “classic” final examination); Kissam, supra note 15, at 438–39 (identifying the many different kinds of exams, including “the classic ‘issue-spotter’”); Sheppard, supra note 12, at 657 (“The exams of the many American schools now follow a surprisingly few patterns based on a few hypothetical questions and, less often, on a group of many multiple-choice questions.”); Ramy, supra note 1 (describing the law school examination as “most typically . . . a series of lengthy hypothetical fact patterns”).
See Christine Coughlin, Joan Malmud Rocklin & Sandy Patrick, A Lawyer Writes: A Practical Guide to Legal Analysis 131 (2d ed. 2013).
See id. at 135.
Miller & Charles, supra note 16, at 209 (identifying analogical reasoning as one of two forms of inductive reasoning).
Application is not, however, the only skill that is necessary to write a strong exam. Before exam day, students must synthesize a legal framework from cases they have read and the class discussion about those cases. They must then either memorize the rules within that legal framework or, if the exam is an “open book” exam, create a system by which those rules can be easily retrieved. On exam day, students must be able to read fact patterns carefully; spot the legal issues raised by those facts, which requires analogizing the facts presented in the exam hypothetical to previous fact patterns; recall the precise rules that govern those legal issues; apply those rules to the hypothetical facts to predict the most likely resolution of each issue; and convey the above analysis coherently, in writing, and under strict time constraints. See, e.g., Brown, supra note 11; Kissam, supra note 15 (describing the skills that are necessary to succeed on a law school exam); Greg Sergienko, New Modes of Assessment, 38 San Diego L. Rev. 463, 469 (2001) (same); Ayers, supra note 14, at 10 (same). A poor exam can result from a failure at any one of the points listed above. Students would benefit from explicit instruction about each of these skills. This article focuses on the application of law to fact because of the central importance of that skill.
Kissam, supra note 15, at 470.
Id.; see also Michael Hunter Schwartz, Teaching Law by Design: How Learning Theory and Instructional Design Can Inform and Reform Law Teaching, 38 San Diego L. Rev. 347, 352 (2001) (“[W]hile most professors critique the selected students’ classroom attempts to perform legal analysis, law professors fail to state explicitly what students need to know, or to explain how to spot legal issues or to perform legal analysis.”).
Kissam, supra note 15, at 470–71; see also Mary Beth Beazley, Better Writing, Better Thinking: Using Legal Writing Pedagogy in the “Casebook” Classroom (Without Grading Papers), 10 Leg. Writing 23, 70–71 (2004) (critiquing law professors’ failure to explicitly label the steps in an analysis).
Kissam, supra note 15, at 470; see also Schwartz, supra note 24 (“[L]aw professors devote considerable classroom time to critiquing students’ case reading and case evaluation skills even though, ironically (or, perhaps, perversely), law professors seldom test case reading skills explicitly.”).
Beazley, supra note 25, at 67 n.185.
Michael Hunter Schwartz, Expert Learning for Law Students 8 (2d ed. 2008).
C. Steven Bradford, The Gettysburg Address as Written by Law Students Taking an Exam, 86 Nw. U. L. Rev. 1094, 1095 (1992).
Schwartz, supra note 24, at 357.
Downs & Levit, supra note 1, at 824 (quoting Paul T. Wangerin, “Alternative” Grading in Large Section Law School Classes, 6 Fla. J.L. & Pub. Pol’y 53, 54 (1993)).
Alfred Z. Reed, Training for the Public Profession of the Law: A Report to the Carnegie Foundation for the Advancement of Teaching, Bulletin No. 15, at 359–60 (1921).
Becker, supra note 10, at 443. See also Bradford, supra note 29, at 1094–95; David Nadvorney, Teaching Legal Reasoning Skills in Substantive Courses: A Practical View, 5 N.Y. City L. Rev. 109, 109 (2002); and Reed, supra note 32, at 303 for further discussion of professors’ dismay at the quality of student exam-writing.
Expecting expert written analyses but failing to provide the instruction that will help students produce them is not only illogical but also at odds with the American Bar Association’s newly revised accreditation standards. In 2015, the American Bar Association issued new educational standards. Among those new standards is a requirement that law schools establish learning outcomes (ABA Standard 302) and that law schools “utilize both formative and summative assessment methods in its curriculum to measure and improve student learning and provide meaningful feedback.” Am. Bar Ass’n, ABA Standards and Rules of Procedure for Approval of Law Schools 2017–2018, at 15–16, 23 (2017) (ABA Standards 302 and 314). Law schools must then assess the degree to which students are achieving competency in the school’s stated learning outcomes. Id. at 23–24 (ABA Standard 315). Thus, another reason for professors to teach exam-writing skills is that doing so will allow them to “measure and improve student learning and provide meaningful feedback” in compliance with the new ABA standards. Id. at 23 (ABA Standard 314).
National Conference of Bar Examiners, Comprehensive Guide to Bar Admissions Requirements 2017, at 30–32, http://www.ncbex.org/publications/bar-admissions-guide/ (last visited Sept. 4, 2017).
See Denise Riebe & Michael Hunter Schwartz, Pass the Bar! 141–53 (2006); Wanda M. Temm, Clearing the Last Hurdle: Mapping Success on the Bar Exam 38–41 (2015).
Riebe & Schwartz, supra note 37, at 179–81; Temm, supra note 37, at 51–53. Most typically, students produce objective memoranda and persuasive briefs. But they can also be asked to draft letters to clients, letters to opposing counsel, or parts of contracts, wills, complaints, or other legal documents. See Riebe & Schwartz, supra note 37, at 181.
See Riebe & Schwartz, supra note 37, at 146 (emphasizing importance of following “CRAC” format); Temm, supra note 37, at 76–77 (providing a sample answer that relies on a combination of rule-based and analogical reasoning).
See supra note 2.
Christian C. Day, Law Schools Can Solve the “Bar Pass Problem”—“Do the Work!”, 40 Cal. W. L. Rev. 321, 326 (2004).
Noam Scheiber, An Expensive Law Degree, and No Place to Use It, N.Y. Times (June 17, 2016), http://www.nytimes.com/2016/06/19/business/dealbook/an-expensive-law-degree-and-no-place-to-use-it.html?_r=0.
J. Maureen Henderson, Why Attending Law School Is the Worst Career Decision You’ll Ever Make, Forbes (June 26, 2012, 10:21 a.m.), http://www.forbes.com/sites/jmaureenhenderson/2012/06/26/why-attending-law-school-is-the-worst-career-decision-youll-ever-make/#620170bb1f1e.
Elizabeth Olson, Burdened with Debt, Law School Graduates Struggle in Job Market, N.Y. Times (April 26, 2015) http://www.nytimes.com/2015/04/27/business/dealbook/burdened-with-debt-law-school-graduates-struggle-in-job-market.html?_r=0.
Denise Riebe, A Bar Review for Law Schools: Getting Students on Board to Pass Their Bar Exams, 45 Brandeis L.J. 269, 279–80 (2007).
National Conference of Bar Examiners, Instructions for Taking the MEE, http://www.ncbex.org/pdfviewer/?file=%2Fdmsdocument%2F25.
National Conference of Bar Examiners, Preparing for the MPT, http://www.ncbex.org/exams/mpt/preparing/ (last visited Sept. 11, 2017).
Research on file with author.
Kissam, supra note 15, at 497.
Research on file with author.
The scale that follows is the grading scale that Washington State uses to grade essay answers. Washington State Bar Examiners, MEE Grading Standards, http://www.wsba.org/~/media/Files/Licensing_Lawyer Conduct/Admissions/MEE and MPT Grading Standards.ashx (last visited Sept. 11, 2017). The National Conference of Bar Examiners also uses a one-to-six scale to train people who will be grading the bar exam. E-mail from Judith Gundersen, Director of Test Operations, National Conference of Bar Examiners, to Joan M. Rocklin (Aug. 18, 2016, 1:14 p.m. PT) (on file with author). The National Conference of Bar Examiners describes Washington State’s scale as “not identical” to, but “not inconsistent with” the scale it uses to train people who will be grading the bar exam. Id.
Kochan, supra note 11, at 407 (recommending that “[i]nstead of outline dumping . . ., the effective exam taker must master the skill of assessment of the relevant law and facts and application of law to facts”).
See, e.g., Riebe, supra note 46, at 297 (indicating that as of 2006 more than 80% of law schools offered some form of bar-preparation course).
See infra notes 211–224 and accompanying text.
Stephen Gerst & Gerald Hess, Professional Skills and Values in Legal Education: The GPS Model, 43 Valparaiso U. L. Rev. 513, 524–25 (2009).
John O. Sonsteng & David Camarotto, Minnesota Lawyers Evaluate Law Schools, Training and Job Satisfaction, 26 Wm. Mitchell L. Rev. 327, 343–45 (2000).
Bryant G. Garth & Joanne Martin, Law Schools and the Construction of Competence, 43 J. Legal Educ. 469, 474 (1993) (“The clear winners on the hierarchy, according to the mean score and in terms of the ‘extremely important’ category, are communication skills—written and oral.”).
The reasons for the performance gap are “multiple and complexly interrelated.” See Nat’l Educ. Assoc., Identifying Factors that Contribute to Achievement Gaps: Discussion Guide 2, http://www.nea.org/home/17413.htm (last visited Sept. 11, 2017). This article does not address the causes of the performance gap; rather, it suggests that additional training in taking exams might help close that gap. Id.
William D. Henderson, The LSAT, Law School Exams, and Meritocracy: The Surprising and Undertheorized Role of Test-Taking Speed, 82 Tex. L. Rev. 975, 998 (2004) (summarizing studies).
Speed may be an issue for other groups who perform less well than anticipated. For example, some studies in the 1990s found a performance gap between men and women in law schools. See, e.g., Lani Guinier, Michelle Fine & Jane Balin, Becoming Gentlemen: Women, Law School, and Institutional Change 37 (1997) (“[D]espite equivalent entry profiles, there is a solid and stable gender difference in performance.”); Allison Bowers, Women at the University of Texas School of Law: A Call for Action, 9 Tex. J. Women & L. 117, 135–37 (2000) (finding a small, but persistent performance gap between men and women at the University of Texas School of Law); Felice Batlan, Kelly Hradsky, Kristen Jeschke, LaVonne Meyer & Jill Roberts, Not Our Mother’s Law School?: A Third-Wave Feminist Study of Women’s Experience in Law School, 39 U. Balt. L.F. 124, 139 (“[W]omen have slightly lower law school GPAs than men.”). But see William D. Henderson, The LSAT, Law School Exams, and Meritocracy: The Surprising and Undertheorized Role of Test-Taking Speed, 82 Tex. L. Rev. 975, 1029 n.153 (2004) (finding no male-female performance gap in law school); Schwartz, supra note 24, at 21–22 (same); Alexia Brunet Marks & Scott A. Moss, What Makes a Law Student Succeed or Fail? A Longitudinal Study Correlating Law Student Applicant Data and Law School Outcomes at 42 (July 6, 2015) (available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2627330) (finding gender disparities have “abated”). To the extent there is a male-female performance gap, some have suggested that timed exams play a role. See, e.g., Lani Guinier, Lessons and Challenges of Becoming Gentlemen, 24 N.Y.U. Rev. L. & Soc. Change 1, 7–8 (1998) (noting that in her “informal discussion[s] with various professors, the observation has been made that many women perform better on take-home exams and research assignments that give them ample opportunity to think and reflect”); Daniel E. Ho & Mark G. Kelman, Does Class Size Affect the Gender Gap? A Natural Experiment in Law, 43 J. Legal Stud. 291 (2014) (observing a pre-existing gender gap eliminated by reducing class size and eliminating timed, in-class exams).
Henderson, supra note 61, at 979–80.
Id. at 997 (citing Linda F. Wightman & David G. Muller, Comparison of LSAT Performance Among Selected Subgroups 6 (LSAC, Statistical Rep. No. 9001, 1990)).
Id. at 997.
Id. at 984–85.
Henderson, supra note 61, at 981.
Id. at 1015.
Id. at 1027.
Id. at 1028–29.
Henderson, supra note 61, at 1029.
Id. at 1029.
In the Henderson study, between 75% and 80% of students’ first year grades were dependent on in-class exams, and between 61% and 75% of cumulative GPA was based on in-class exams. Id. at 1039.
Id. at 982; see also Daniel Schwarcz & Dion Farganis, The Impact of Individualized Feedback on Law Student Performance, 67 J. Legal Educ. __ (forthcoming 2017) (Apr. 27, 2016 draft available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2772393) (noting that at the University of Minnesota Law School “a .11 increase in GPA can improve a student’s class rank by a full 40 places” and that class rank is often very important when seeking summer jobs or a judicial clerkship).
Emily Zimmerman, What Do Law Students Want?: The Missing Piece of the Assessment Puzzle, 42 Rutgers L. J. 1, 25 (2010).
Sean Darling-Hammond & Kristen Holmquist, Creating Wise Classrooms to Empower Diverse Law Students: Lessons in Pedagogy From Transformative Law Professors, 33 Chicana/o-Latina/o L. Rev. 1, 20 (2015).
Id. at 21.
Darling-Hammond & Holmquist, supra note 84, at 55.
Id. at 55–56.
Id. at 57.
Am. Bar Ass’n, Law Schools Take Aim at Mental Illness, americanbar.org (Nov. 2015), http://www.americanbar.org/publications/youraba/2015/november-2015/law-schools-take-aim-at-mental-illness.html (last visited Sept. 12, 2017); G. Andrew H. Benjamin et al., The Role of Legal Education in Producing Psychological Distress Among Law Students and Lawyers, 11 Am. B. Found. Res. J. 225, 246 (1986).
Benjamin et al., supra note 93, at 247 (“Specifically, on the basis of epidemiological data, only 3–9% of individuals in industrial nations suffer from depression; prelaw subject group means did not differ from normative expectations.”) (internal citations omitted).
Am. Bar Ass’n, supra note 93.
See Stuckey et al., supra note 1, at 30–36 (listing the many causes of law student psychological distress).
Paula J. Manning, Understanding the Impact of Inadequate Feedback: A Means to Reduce Law Student Psychological Distress, Increase Motivation, and Improve Learning Outcomes, 43 Cumb. L. Rev. 225, 227 (2013) (“Law student psychological distress has been attributed to a number of concurrent causes—including the lack of adequate performance feedback, which, in addition to being a stressor on its own, exacerbates the pressure caused by the competitive law school environment.”); Richard Sheehy & John J. Horan, Effects of Stress Inoculation Training for 1st-Year Law Students, 11 Int’l J. Stress Mgmt. 41, 42 (2004) (citing studies that have concluded lack of feedback causes stress in law students). See also Lawrence S. Krieger, Institutional Denial About the Dark Side of Law School, and Fresh Empirical Guidance for Constructively Breaking the Silence, 52 J. Legal Ed. 112 (2002) for a more complete discussion about the relationship between law school and law students’ mental distress.
Carnegie Report, supra note 8, at 165.
Id. at 166.
Id.; see also Grant H. Morris, Preparing Law Students for Disappointing Exam Results: Lessons from Casey at the Bat, 45 San Diego L. Rev. 441, 448–49 (2008) (noting the psychological distress that “is surely multiplied when little or no feedback is provided for an entire semester”); Kissam, supra note 15, at 456 (“[T]he use of one examination in each course, the lack of much instruction and practice for, or feedback on, a student’s performance of the basic examination functions, and the marked discontinuity between course work and exam work noted previously, produce additional mysteries about what is expected on these examinations and about what factors determine the grades.”).
Michael Hunter Schwartz, Gerald F. Hess & Sophie M. Sparrow, What the Best Law Professors Do 5 (2013).
Id. at 11.
Id. at 21.
Curcio, Jones & Washington, supra note 12, at 287.
Id. at 309.
John M. Burman, Out-of-Class Assignments as a Method of Teaching and Evaluating Law Students, 42 J. Legal Educ. 447, 455 (1992).
Id. at 456.
See, e.g., Elizabeth M. Bloom, A Law School Game Changer: (Trans)formative Feedback, 41 Ohio N.U. L. Rev. 227, 236, 236 n.60 (2015) (“Teachers bemoan the amount of time it takes to provide feedback . . . .”); Philip C. Kissam, Thinking (By Writing) About Legal Writing, 40 Vand. L. Rev. 135, 141 (1987) (noting the large class size in law schools, which makes it “impractical for overburdened law professors (or anybody else for that matter) to provide much writing experience for their students”).
See, e.g., Barbara J. Busharis & Suzanne E. Rowe, The Gordian Knot: Uniting Skills and Substance in Employment Discrimination and Federal Taxation Courses, 33 J. Marshall L. Rev. 303, 312 (2000); Paula Lustbader, Construction Sites, Building Types, and Bridging Gaps: A Cognitive Theory of the Learning Progression of Law Students, 33 Willamette L. Rev. 315, 321 (1997) (“Even when law teachers want to be more explicit, they often cannot break down the reasoning to the degree necessary . . . . As experts, . . . they are not consciously aware of all that goes into their analysis.”).
Beazley, supra note 25, at 67 (“Traditional law school pedagogy does not give the casebook faculty a ready vocabulary to communicate failings, and traditional casebooks, which contain only cases and discussion questions, do not lay out and label common steps in the analytical process.”).
“IRAC” is a mnemonic device that lists the component parts of an argument. “IRAC” stands for “Issue,” “Rule,” “Application,” “Conclusion.” See Coughlin, Rocklin & Patrick, supra note 18, at 82.
In law review articles, professors similarly recommend adopting IRAC for exams, often without explaining how precisely that structure applies to exams. See, e.g., Curcio, Jones & Washington, supra note 12, at 287 (commenting that before giving a practice exam the professor reviewed IRAC); Lake, supra note 15, at 1023 (“One method that I strongly recommend to law students is as follows. After you review this first set of exams, affirmatively attempt to identify your strengths and weaknesses in terms of what we call the IRAC Formula.”); Miller & Charles, supra note 16, at 193 (“Most law schools teach the IRAC method in the first term of law school.”); Ramy, supra note 1, at 845–47 (recommending IRAC as an appropriate exam strategy but not providing any specific advice about how to structure the “A”).
Compare Tracy Turner, Finding Consensus in Legal Writing Discourse Regarding Organizational Structure: A Review and Analysis of the Use of IRAC and Its Progenies, 9 Legal Comm. & Rhetoric 351, 359 (2012) (describing IRAC in terms of analogical arguments) with Miller & Charles, supra note 16, at 208 (describing IRAC as a deductive problem of applying rules to facts). See also Lake, supra note 15, at 1026 (noting that professors mean different things when they say IRAC); Adam G. Todd, Exam Writing As Legal Writing: Teaching and Critiquing Law School Examination Discourse, 76 Temp. L. Rev. 69, 71–73 (2003) (arguing that legal writing professors should teach exam-writing skills because such instruction is not regularly included in legal writing classes).
See Tonya Kowalski, True North: Navigating for the Transfer of Learning in Legal Education, 34 Seattle U. L. Rev. 51, 54 (2010).
See id. at 55.
See also infra notes 208–210 and accompanying text.
Schmitz & Noble-Allgire, supra note 7, at 302.
Schwarcz & Farganis, supra note 79, at 4.
Id. at 4–5.
Id. at 5.
Id. at 23.
Id. at 23.
Id. at 29.
Curcio, Jones & Washington, supra note 12, at 286–90.
Id. at 288.
Id. at 289.
Id. at 290.
Id. at 291.
Id. at 294.
Schwarcz & Farganis, supra note 79, at 30 n.72.
Carol Springer Sargent & Andrea A. Curcio, Empirical Evidence that Formative Assessments Improve Final Exams, 61 J. Legal Educ. 379, 385 (2012).
Id. at 385–86.
Id. at 385–88.
Id. at 391.
Id. The median LSAT score for each class was 159. Id. at 390 tbl.2.
Burman, supra note 110, at 451.
Id. at 451–52.
Id. at 452 (“I did not write any extensive comments . . . .”).
Id. at 456.
Id. at 453.
Id. See also Schmitz & Noble-Allgire, supra note 7, at 302, in which the authors discuss improvements that faculty saw after instituting a Writing Across the Curriculum program at Southern Illinois University School of Law. Faculty commonly saw improved organization on final exams; however, lower-performing students still struggled. Id. at 304–05.
See also Curcio, Jones & Washington, supra note 12, at 278–79 & nn.33–38 (citing studies).
Some books provide appropriate advice about how to apply the law to facts. See, e.g., John C. Dernbach, Writing Essay Exams to Succeed in Law School Not Just to Survive 47 (4th ed. 2014) (urging students to “[e]xplain the relationship between the facts and your conclusion”); Richard Michael Fischl & Jeremy Paul, Getting to Maybe: How to Excel on Law School Exams 253 (1999) (emphasizing the importance of the word “because” to connect the law and facts); and Peter T. Wendel, Deconstructing Legal Analysis: A 1L Primer 110–11 (2009) (explaining how to write an analysis by connecting key aspects of the rule to facts). However, in the glutted market of books providing advice to law students, students may not come across these books. And, even if they do, the advice—often buried more than a hundred pages into the book—may be hard to find. Law students need their professors to highlight the key skills that will be most important in succeeding on law school exams and the bar.
Carnegie Report, supra note 8, at 25; see also Lustbader, supra note 113, at 326–27.
Carnegie Report, supra note 8, at 25. The other significant feature that distinguishes expert from novice performance is that the experts can more quickly recognize contexts in which particular schemas should be deployed. Id. Here, I focus on the first trait—the development of a schema—because the context in which students will perform is limited to exams, and thus, for exams, students do not need to identify the particular schema that will be relevant to this context.
Carnegie Report, supra note 8, at 26, 98.
Id. at 26.
Id. at 26, 61, 98.
Id. at 61, 98.
Id. at 61.
See supra note 123; see also Gibson, supra note 12, at 21 (“Best Practices repeatedly says that a good teacher expressly tells her students what she wants them to learn.”); Deborah Zalesne & David Nadvorney, Integrating Academic Skills into First Year Curricula: Using Wood v. Lucy, Lady Duff-Gordon to Teach the Role of Facts in Legal Reasoning, 28 Pace L. Rev. 271, 271 (2008) (citing Karl N. Llewellyn, The Current Crisis in Legal Education, 1 J. Legal Educ. 211, 213 (1948–49)) (“[E]ven when [a] skill is intended to be a central value of a course, the skill as such will be absorbed by the bulk of the students only if the skill is made explicitly, sustainedly, insistently the focus of organization and of class treatment.”); Riebe, supra note 46, at 331 (“[A]n especially powerful approach [to teaching] is to explicitly explain the learning process and skills to be learned and to ground the learning in a substantive context.”).
Friedland, supra note 1, at 204–05 (“[P]rior to evaluating law students on their knowledge, skill, and problem-solving proficiency, students should be fully informed about what they should be learning. . . . The more importance and clarity attached to the notice to the students, the more likely the students will pay attention to such insights.”); Paul T. Wangerin, Skills Training in “Legal Analysis”: A Systematic Approach, 40 U. Miami L. Rev. 409, 464–68 (1986) (“A teacher’s mere description to students of skills sought to be taught in law school classes will generate immediate positive results. Students, who quite often are . . . somewhat confused regarding what teachers are trying to do in connection with substantive components of given courses, will quickly grab tight hold of carefully defined skills.”); Darling-Hammond & Holmquist, supra note 84, at 38 (" ‘Most students are confused about what their professors expect of them and what sorts of skills they are supposed to be developing … Once they see what your goals are, once they see what you expect of them, your students will be in a position to adjust their approach to the material. By failing to tell them, you leave them in the dark about what they should be doing.’ ") (quoting Howard E. Katz & Kevin Francis O’Neill, Strategies And Techniques Of Law School Teaching: A Primer For New (And Not So New) Professors 4 (2009)).
Friedland, supra note 1, at 204–05.
Kowalski, supra note 118, at 99–100; Laurel Currie Oates, I Know that I Taught Them How to Do That, 7 Leg. Writing 1, 5 (2001); Deborah Maranville, Transfer of Learning, in Building on Best Practices: Transforming Legal Education in a Changing World 90, 92 (Deborah Maranville, Lisa Radtke Bliss, Carolyn Wilkes Kaas & Antoinette Sedillo López eds., 2015).
Wangerin, supra note 170, at 412–14 n.4; Terrill Pollman, The Sincerest Form of Flattery: Examples and Model-Based Learning in the Classroom, 64 J. Leg. Educ. 298, 319 (2014) (noting that students “may view the syllabus as a miscellaneous group of topics” and suggesting that professors explicitly identify the skills that the professor expects the students to learn from a given assignment); Gibson, supra note 12, at 13–14 (“Our syllabi and tables of contents identify the doctrinal patterns that students need to learn. . . . However, when it comes to skills, we rarely identify patterns explicitly.”); Zalesne & Nadvorney, supra note 169, at 272–73 (lamenting that the typical syllabus and casebook “fail[s] to advise students of any skills” that will result from their reading and noting that they “describe only, or primarily, the doctrine to be covered in the book”).
Anthony Niedwiecki, Teaching for Lifelong Learning: Improving the Metacognitive Skills of Law Students Through More Effective Formative Assessment Techniques, 40 Cap. U. L. Rev. 149, 167–69 (2012) (noting the lack of explicit discussion about forms of reasoning).
Carnegie Report, supra note 8, at 25; see also Lustbader, supra note 113, at 326–27.
Carnegie Report, supra note 8, at 25.
Lustbader, supra note 113, at 327.
Beazley, supra note 25, at 46; Carnegie Report, supra note 8, at 100; Maranville, supra note 172, at 92.
Carnegie Report, supra note 8, at 25.
See supra notes 24–25 & accompanying text.
Carnegie Report, supra note 8, at 98.
Pollman, supra note 173, at 299.
Id. at 303–04 (“Intrinsic cognitive load is generated when learning material requires the learner to hold many novel elements in working memory at once.”).
Kowalski, supra note 118, at 60–61 (providing various definitions of “transfer”).
Id. at 54 (“Although normally knowledge is highly bound to specific patterns, people can learn to create more freedom of access between stored areas of knowledge by using broader, intersecting schematics.”).
Oates, supra note 172, at 6; Niedwiecki, supra note 174, at 169 (“The failure to explicitly detail the underlying thought process that gets the students to the answer or end product has a detrimental effect on the students’ ability to transfer their learning to new and novel situations.”).
Thanks go to Professor Caroline Forell for providing this exam issue.
Oates, supra note 172, at 9 (“[R]esearchers have found that in some situations transfer can be enhanced through knowledge mapping. Instead of presenting underlying structures in text form, they are presented through diagrams that emphasize how the various pieces of information are related.”).
Schwartz, supra note 24, at 379 (" ‘[I]nstruction should present or encourage multiple representations of material to be learned.’ . . . [I]nstructors should generate multiple examples so that students learn to identify and emphasize key features and thereby avoid confusion.").
Oates, supra note 172, at 7.
Carnegie Report, supra note 8, at 117.
Appendix A provides an example of a constitutional law exam answer in which the answer considers the arguments on either side of the question.
Wiseman, supra note 13, at 660–61 (providing examples of a “long-winded” and concise version of an exam answer and expressing a preference for the concise version).
See supra Part II.
Providing a detailed schema is not a cure-all. Ultimately, students must be motivated to act upon the feedback that they receive. Without that motivation, outcomes will not change no matter how effective the conceptual models, opportunities for practice, or feedback. That said, I believe providing students with a more effective schema is an important step in helping students develop effective analyses.
Kowalski, supra note 118, at 98.
Pollman, supra note 173, at 305.
Id. at 306 (“ ‘[S]tudying worked examples may facilitate schema construction and transfer performance more than actually solving the equivalent problems.’ ”) (quoting cognitive scientist John Sweller et al., Cognitive Architecture and Instructional Design, 10 Educ. Psychol. Rev. 251, 273 (1998)).
Id. at 301–02 (explaining that “attempting many sophisticated tasks at once can make learning slow, difficult, and laborious”).
Id. at 300.
Id. at 305–06, 311–12 (“[O]bservational learning, compared with learning by doing, enabled students to better cope with ‘the double agenda of task execution and learning’ and more easily learn complex skills.”).
Id. at 315.
Id. at 309 (citing Davida H. Charney & Richard A. Carson, Learning to Write in a Genre: What Student Writers Take from Model Texts, 29 Res. Teaching Eng. 88, 88, 90, 92–96 (1995)).
Id. at 317.
Id. at 316.
Id. at 316–17.
Michael Hunter Schwartz, A Review of Teaching and Learning Theory, in Building on Best Practices: Transforming Legal Education in a Changing World, supra note 172, at 67, 68 (“Active learning activities are those in which students cannot simply sit and listen but must mentally process the lessons teachers want them to learn. . . . [Through such activities] students are more likely to retain them. The more deeply students work with what they are learning, the more likely they are to remember and use it.”); Bloom, supra note 112, at 242 (“Providing a model answer alone is not enough to ensure students will engage actively with feedback. This strategy works best when combined with exercises that encourage students to engage with the criteria and exemplars before attempting to apply them to their own work.”).
Bloom, supra note 112, at 242 (“[A] professor could assign small groups of students to review different examples of a written piece of work (such as a good and a weak essay) and discuss what makes each succeed or fail, using the rubrics to help make the judgment.”).
Carnegie Report, supra note 8, at 95 (citing Stuckey et al., supra note 1, at 109) (“ ‘Students cannot become effective legal problem-solvers unless they have opportunities to engage in problem-solving activities in hypothetical or real legal contexts.’ ”); Schwartz, supra note 209, at 68; Alice M. Noble-Allgire, Desegregating the Law School Curriculum: How to Integrate More of the Skills and Values Identified by the MacCrate Report into a Doctrinal Course, 3 Nev. L.J. 32, 37 (2002) (“The development of skills, whether practical or analytical, requires practice. To become effective legal writers, for example, students ‘need to write in the discipline - a lot - to really understand how it functions.’ ”) (quoting Carol McCrehan Parker, Writing Throughout the Curriculum: Why Law Schools Need It and How to Achieve It, 76 Neb. L. Rev. 561, 566–67 (1997)); Riebe, supra note 46, at 331 (“[G]ive students new learning skills and provide them an opportunity to apply them immediately to substantive material.”).
E. Scott Fruehwald, How to Help Students from Disadvantaged Backgrounds Succeed in Law School, 1 Tex. A&M L. Rev. 83, 115–16 (2013).
Riebe, supra note 46, at 294 (“One of the most significant findings in the UCLA study was that teaching skills combined with substantive material was more effective than teaching either skills or substantive material alone. Upon reflection, this makes sense: skills are more effectively learned when students can immediately practice and apply those skills in connection with learning substantive material, and substantive material is more effectively learned when students actively process it in skills-based tasks.”); see also Lustbader, supra note 113, at 329 (“[L]earning occurs when the student’s experience intertwines with the discipline-based knowledge being taught. Thus, effective teaching occurs when the instruction combines both the student’s experience and domain-specific experience.”).
Curcio, Jones & Washington, supra note 12, at 278–79.
Fruehwald, supra note 212, at 87–88 (“Finally, repetition is essential to learning because it affects long-term memory and the connections within long-term memory. . . . [N]eural patterns that are not reactivated are hard to retrieve and may decay. To become a permanent memory, a pattern needs to be retrieved again and again.”); Curcio, Jones & Washington, supra note 12, at 304 & n.142 (“Mastery of the skill [IRAC] requires repeated exposure to the formula and repetitious application of the formula in the context of different exam questions.”).
Riebe, supra note 46, at 331.
Richard E. Redding, The Legal Academy Under Erasure, 64 Cath. U.L. Rev. 359, 390 (2015).
Kowalski, supra note 118, at 54.
Oates, supra note 172, at 7.
Schmitz & Noble-Allgire, supra note 7, at 296–97.
Schwartz, supra note 24, at 415.
Darling-Hammond & Holmquist, supra note 84, at 58.
See Schmitz & Noble-Allgire, supra note 7, at 297 (discussing how professors at Southern Illinois University School of Law took the approach described above).
Wangerin, supra note 173, at 468.
Stuckey et al., supra note 1, at 125 (“Educational theorists agree on the importance of providing prompt feedback.”); see Sargent & Curcio, supra note 143, at 381–83; Bloom, supra note 112, at 232–33 (citing studies); Fruehwald, supra note 212, at 115–16.
Niedwiecki, supra note 174, at 177; Elizabeth Ruiz Frost, Feedback Distortion: The Shortcomings of Model Answers as Formative Feedback, 65 J. Legal Educ. 938, 942 (2016); Sargent & Curcio, supra note 143, at 381–83.
Sargent & Curcio, supra note 143, at 381–83.
Sergienko, supra note 21, at 484–85.
Sergienko, supra note 21, at 484.
See Frost, supra note 226, at 947–50 (“[P]oor performers are . . . unable to see that their work is inadequate.”).
Bloom, supra note 112, at 244 (“One challenge is addressing the ubiquitous initial reaction—a student’s certainty that her own exam, which did not receive the grade she expected, had exactly the same content as the model strong exam.”); Andrea A. Curcio, Moving in the Direction of Best Practices and the Carnegie Report: Reflections on Using Multiple Assessments in a Large-Section Doctrinal Course, 19 Widener L.J. 159, 168 (2009) (“It was fascinating that students, especially those who had not done well, often self-assessed as having addressed an issue or argument when that issue or argument was nowhere to be found in their answer.”); Frost, supra note 226, at 950 (describing instances in which weaker students were unable to perceive their weakness).
Curcio, supra note 228, at 168 (“I found that the self-assessment exercise confirmed what the literature suggests: struggling students often have no idea that they are struggling or have no idea how to improve.”).
See Bloom, supra note 112, at 241–42.
See supra note 188 and accompanying text. In that example, bold and underlining replace the use of color.
Frost, supra note 226, at 962.
See also Bloom, supra note 112, at 244 (recommending a similar exercise).
Cassandra L. Hill, Peer Editing: A Comprehensive Pedagogical Approach to Maximize Assessment Opportunities, Integrate Collaborative Learning, and Achieve Desired Outcomes, 11 Nev. L.J. 667, 689–90 (2011) (describing peer editing checklist); Patricia Grande Montana, Peer Review Across the Curriculum, 91 Or. L. Rev. 783, 809–11 (2013) (providing examples of peer editing checklists). See Appendix B for a rubric that can be provided for peer assessment.
Denise Riebe, Reader’s Expectations, Discourse Communities, and Effective Bar Exam Answers, 41 Gonz. L. Rev. 481, 501 (2006) (quoting Linda L. Berger, Applying New Rhetoric to Legal Discourse: The Ebb and Flow of Reader and Writer, Text and Context, 49 J. Legal Educ. 155, 179 (1999)).
Kirsten K. Davis, Designing and Using Peer Review in a First-Year Legal Research & Writing Course, 9 Leg. Writing 1, 2 & n.6 (2003).
Ramy, supra note 1, at 863–64.
Davis, supra note 239, at 2 & n.7.
See supra notes 86–92, 108–10 & accompanying text (describing student interest in feedback); see also Sargent & Curcio, supra note 143, at 379 (“Students also believe they could learn better if they had more feedback, and many voice deep frustration at the low quality and quantity of feedback during the semester from their professors.”); Curcio, Jones & Washington, supra note 12, at 309 (describing students’ interest in additional feedback from the professor after receiving some feedback during the semester).
Bloom, supra note 112, at 233 (citing Valerie J. Shute, Focus on Formative Feedback, 78 Rev. of Educ. Research 153, 153 (2008)). But see Curcio, Jones & Washington, supra note 12, at 274–75 (noting the lack of research regarding the effect different teaching methodologies have on the acquisition of specific skills).
See supra notes 187–88 & accompanying text.
Ramy, supra note 1, at 853–54.
Curcio, supra note 227, at 161, 167–68.
Burman, supra note 110, at 452 n.19.
Carnegie Report, supra note 8, at 168.
Sargent & Curcio, supra note 143, at 382–83.
Carnegie Report, supra note 8, at 168.
Kowalski, supra note 118, at 104–05.
Sargent & Curcio, supra note 143, at 382–83; see also P.J. Black, Chris Harrison, & Clara Lee, Assessment for Learning 43 (2003) (discussing students’ tendency to focus on their “mark” rather than the teacher’s comments).
Sargent & Curcio, supra note 143, at 381.
This exam question was created by Professor Louis Virelli at Stetson University College of Law (available at http://www.stetson.edu/law/faculty/virelli-louis-j/course-resources.php). His website provides the question, student answer, and his comments, all of which I used to create this sample answer.
See supra Part III.B.
See the 2017 essay grading standards for the Idaho bar, https://isb.idaho.gov/pdf/admissions/grading_standards.pdf.