I. Grading Methods Defined
   A. Norm-Referenced Grading
   B. Criteria-Referenced Grading
   C. Pass-Fail Grading
   D. Another Method—Open Grading
II. Grading in Law School
   A. Most Law Schools Use Norm-Referenced Grading
   B. The Significance of Grading in Legal Writing Classes
III. How Norm-Referenced Grading Is Inconsistent with Current Trends in Legal Education
   A. Overview
   B. Assessment of Student Learning
   C. The Humanizing Law School Movement
IV. Why Legal Writing Is Well-Suited for Criteria-Referenced Grading
   A. Legal Writing Classes Are Too Small for Norm-Referenced Grading
   B. Legal Writing Professors Use Rubrics to Evaluate Student Performance and Provide Frequent Feedback and Opportunities for Improvement
   C. The Benefits Outweigh the Risks for Legal Writing
V. Moving Beyond Legal Writing: Applying Criteria-Referenced Grading to Other Classes and Responding to Criticisms
   A. Criteria-Referenced Grading Is Fair and Consistent
   B. Criteria-Referenced Grading Should Not, By Itself, Lead to Grade Inflation
   C. Criteria-Referenced Grading Can Effectively Communicate Student Competency to Employers
   D. Overcoming Faculty Resistance to Change
Conclusion
Testing and grading are not incidental acts that come at the end of teaching but powerful aspects of education that have an enormous influence on the entire enterprise of helping and encouraging students to learn.1
Introduction
Grades matter. Ranking based on grades is an ingrained part of the law school experience. Grades are used to dole out rewards such as scholarships, law review positions, and access to prestigious clerkships. Conversely, grades are used to impose punishments such as academic probation and disqualification from law school.2
Most law schools rely on norm-referenced grading systems, commonly referred to as grading curves, to evaluate students.3 Under this approach, students are evaluated in comparison to each other, with specific limitations placed on how many students can receive certain grades, with the fewest at the top and the bottom.4 This grading system has been criticized because it is based on the assumption that teachers cannot improve student competence, and because it increases student stress, interferes with deep learning, and does not adequately inform students whether they have reached a level of competence.5 Criteria-referenced grading, in which students are evaluated based on objective standards of competency rather than in comparison to other students, avoids many of the negative aspects of norm-referenced grading and is more consistent with current trends in legal education.
Two recent reports—Best Practices for Legal Education: A Vision and a Road Map,6 published by the Clinical Legal Education Association (Best Practices), and Educating Lawyers: Preparation for the Profession of Law,7 published by the Carnegie Foundation for the Advancement of Teaching (Carnegie Report)—advocate that law schools focus more on teaching professionalism, skills, and ethics and on integrating these topics into the traditional curriculum. They also recommend that schools set explicit learning objectives for their students and that they do a better job of assessing whether those objectives have been met. Another trend—exemplified by the humanizing law school movement—seeks to improve both learning and student well-being by decreasing some of the well-documented negative psychological effects of law school created in part by the focus on competition and extrinsic motivation.8 Law schools are beginning to respond to these reports by revising their curricula and preparing for anticipated changes in the American Bar Association (ABA) standards for law school accreditation that will require a greater focus on student assessment and outcome measures.9
The authors of Best Practices, the Carnegie Report, and the literature on assessment and humanizing law school are unanimous in their criticism of norm-referenced grading policies.10 They favor criteria-referenced systems because they more reliably communicate whether students are proficient in the skills required of competent professionals.11
In the current environment of curricular innovation and the increased focus on assessment methods, the time is ripe to reexamine grading practices. Part I of this Article defines basic grading principles. Part II summarizes the current state of grading in law school generally, and in legal writing specifically. Part III reviews the current trends in legal education and the related criticism of norm-referenced grading policies. Part IV explains why criteria-referenced grading should be adopted in legal writing12 classes. Part V argues that criteria-referenced grading should be adopted in other courses and responds to the concerns that such a proposal might raise. The Article concludes that the benefits of criteria-referenced grading outweigh the negatives and that legal writing can provide a model for other courses, as law schools begin to incorporate the recommendations of Best Practices and the Carnegie Report.
I. Grading Methods Defined

A. Norm-Referenced Grading
Norm-referenced grading is “the measurement of a student’s performance in relationship to the performance of other students” and involves a ranking process based on some type of grading curve.13 Such a system can use letters or numbers. Usually, under this system of grading, students are ranked from best to worst, and then grades are awarded based on that ordering, using some set distribution of grades.14 This is accomplished by requiring teachers to assign a specific percentage of the class to each grade, or by conforming to a prescribed mean or median.15
This grading method does not require that students meet an objective standard of achievement, and individual professors are limited in their ability to grade students on their proficiency relative to objective criteria.16 In short, “norm-referenced assessments are based on how students perform in relation to other students in a course rather than how well they achieve the educational objectives of the course.”17
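To make the mechanics concrete, the short Python sketch below illustrates how a percentage-quota curve converts a ranked list of raw scores into letter grades. It is purely illustrative; the cumulative cutoffs, class size, and scores are assumptions made for the example rather than any school’s actual policy.

```python
# Illustrative sketch of norm-referenced ("curved") grading.
# Hypothetical cumulative cutoffs: top 10% receive an A, the next 25% a B,
# the next 50% a C, the next 10% a D, and the remainder an F.

def curve_grades(raw_scores,
                 cutoffs=(("A", 0.10), ("B", 0.35), ("C", 0.85),
                          ("D", 0.95), ("F", 1.00))):
    """Map {student: raw score} to letter grades by class rank and quota."""
    ranked = sorted(raw_scores, key=raw_scores.get, reverse=True)
    grades = {}
    for position, student in enumerate(ranked, start=1):
        fraction = position / len(ranked)   # share of the class at or above this rank
        grades[student] = next(letter for letter, cumulative in cutoffs
                               if fraction <= cumulative)
    return grades

scores = {"Student %d" % i: s
          for i, s in enumerate([95, 91, 88, 87, 86, 84, 83, 80, 78, 72], start=1)}
print(curve_grades(scores))
# The lowest-ranked student receives an F because of rank alone,
# whether or not a score of 72 reflects minimal competence.
```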
The classic form of norm-referenced grading is based on the distribution of a “bell curve,” in which most grades cluster in the middle range, at the peak of the bell, with the highest and lowest grades at the extreme ends.18 A recent survey revealed that only 16.8 percent of college and university professors rely on bell curves to grade students.19
Proponents of norm-referenced grading argue that it results in greater fairness and consistency among professors and sections, protecting students from professors who give extreme grades, at either the high or low end.20 This type of grading can be helpful when only a limited number of students can be eligible for a particular reward, like a scholarship or a job opportunity.21
Opponents have questioned whether norm-referenced systems really facilitate grading in an “absolutely fair” manner and worry that teachers improperly rely on “the normal probability curve to be some sort of scientific finite reality from which they can predict the nature of their classes.”22 One professor has accused those who use norm-referenced grading of assuming that “for all positive characteristics represented in the class which might contribute to grade achievement, there are in existence exactly equal counterbalancing negative characteristics” and that “any positive changes wrought by the teacher must be counterbalanced by negative changes.”23 Grading on a bell curve has long had critics throughout the field of education.24
B. Criteria-Referenced Grading
Criteria-referenced grading measures “a student’s performance against an established standard,” rather than in comparison to other students.25 Under this approach, a professor can determine a student’s grade based on a “numerical scale of quality.”26 For example, on a 100-point scale, any student who scores 94 or above would receive an A; any student who scores between 85 and 93 would receive a B; and so on. “No predetermined distribution of grades is required.”27 Criteria-referenced evaluation measures student performance against a standard of competence.28 This grading method is often accomplished by providing students with “rubrics, or detailed written grading criteria, which describe both what students should learn and how they will be evaluated.”29
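For contrast, a minimal criteria-referenced version of the same exercise might look like the sketch below. The 94-and-above and 85-to-93 bands follow the example in the preceding paragraph; the lower cutoffs are assumptions added only to complete the scale, and no quota caps the number of students who may earn any grade.

```python
# Illustrative sketch of criteria-referenced grading on a 100-point scale.
# The A and B thresholds follow the example in the text (94+ = A, 85-93 = B);
# the C and D thresholds are assumed for illustration.

GRADE_FLOORS = ((94, "A"), (85, "B"), (76, "C"), (67, "D"))

def criteria_grade(score):
    """Return the letter grade earned by a score, independent of classmates."""
    for floor, letter in GRADE_FLOORS:
        if score >= floor:
            return letter
    return "F"

scores = {"Student 1": 96, "Student 2": 95, "Student 3": 90,
          "Student 4": 88, "Student 5": 72}
print({student: criteria_grade(s) for student, s in scores.items()})
# If every student scores 94 or above, every student earns an A;
# no predetermined distribution is imposed.
```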
Supporters of criteria-referenced assessment argue that “grades should reflect students’ absolute level of accomplishment” and that students should be judged on “the inherent quality of what is produced, not on the basis of what other students have produced.”30 Critics of this system worry that it does not adequately identify low performing students and note that establishing and defending criterion levels for each grade can be challenging and time-consuming for professors.31
C. Pass-Fail Grading
A pass-fail system limits the final grade a professor can award in a course to two possible outcomes: pass or fail.32 In undergraduate education, a pass-fail grading system gained popularity in the late 1960s as a way “to remove the stigma of traditional letter grades, to open the academy, freeing students and the process of learning from punitive ranking while retaining standards.”33 “It was thought of as a way of reducing anxiety and pressure and of encouraging students to explore other disciplines without the fear of lowering their GPA.”34
Critics argue that a pass-fail system can decrease student motivation by eliminating the reward of a higher grade for more work and can be difficult to assess when students apply to graduate schools.35
D. Another Method—Open Grading
Where a mandatory curve, or some other grading standard, is not imposed, professors may be free to grade without specific guidelines. Sometimes referred to as “open grading,”36 such a method is usually based on an individual professor’s experience and judgment “evolved and refined over time.”37 This method is increasingly rare, and can result in grading disparities.38
II. Grading in Law School
A. Most Law Schools Use Norm-Referenced Grading
Most law schools now use some type of norm-referenced grading system. However, in 1976, only nine percent of the 102 schools responding to a survey used some form of grade normalization.39 A 1993 study of law school grading practices concluded that grading curves were becoming more popular, particularly in first-year courses and in large upper-division classes.40 By 1995, eighty-four percent of the 116 schools responding to a survey used grade normalization.41 In 2003, the Association of American Law Schools (AALS) conducted its own survey of law school grading policies. Of the 145 schools responding, 115 (79.3%) had a formal grading policy, and at 81 of those schools the policy was mandatory.42 The study found that the most popular type of grading curve was the use of specific percentages for each grade, followed by the use of a mean. Some schools based their curve on a median grade, and a number of schools used multiple types of curves. While the majority of schools required the curve to be applied in all courses, some exempted small classes, legal writing, and seminars. Many schools also provided that the dean could override the policy.43
Recently, some of the country’s most elite schools, including Harvard, Yale, and Stanford, have switched to a modified pass-fail grading system.44 Under these systems, professors do not award letter grades, but choose from options that may include high honors, honors, pass, low pass, and fail. These grading policies seem to be a form of norm-referenced grading because they restrict the number of students that can be in each of the possible categories.45 Other schools have made smaller changes to boost the final GPAs of their students. These schools have recognized the potential competitive disadvantages in the job market if students have lower GPAs than students at similarly ranked schools. While these recent changes have created some buzz in the media and on law-related blogs,46 this is not the first time that a significant number of law schools have reevaluated their grading policies.47
B. The Significance of Grading in Legal Writing Classes
Each year, the Legal Writing Institute (LWI) and the Association of Legal Writing Directors (ALWD) conduct a national survey of legal writing programs.48 The 2010 survey shows that almost all required legal writing classes are graded, with grades that are included in students’ GPAs.49 Most law schools grade the required legal writing program based on the same mandatory curve as other required first-year courses.50
In the past two decades, the field of legal writing has made great strides within the academy.51 The course is now a required part of the law school curriculum, pursuant to the ABA’s Standards for Approval of Law Schools.52 It is taught, most often, by full-time faculty who specialize in teaching legal writing, and who, increasingly, have similar titles, benefits, and rights to participate in school governance as faculty who teach doctrinal courses.53
It took a long time and enormous effort to get to this place.54 In the 1950s, 1960s, and 1970s, legal writing courses “remained marginal and peripheral” and the faculty teaching them were treated differently from “regular faculty.”55 In the 1980s, law schools began to devote more resources to their legal writing programs,56 and by 1994, legal writing had succeeded in becoming “a permanent part of the law school core curriculum.”57 In most courses, students received grades that were included in their GPAs,58 but the “wholesale acceptance into the legal academic community” had yet to be achieved.59
The surveys, which are now conducted annually, continue to document progress for legal writing and those who teach it.60 The Sourcebook on Legal Writing Programs, published by the ABA Section of Legal Education and Admissions to the Bar, notes the following: “The historical results of these surveys, and others, clearly show a distinct national trend of upgrading and professionalizing legal writing faculty positions.”61 For example, the 2010 survey revealed that the salaries of directors and full-time faculty continue to increase;62 that while most faculty are on short-term contracts, the vast majority are not limited in the number of years they may be renewed; and that the number of programs offering long-term contracts and tenure-track positions has increased.63 Even with these achievements, equality for full-time legal writing faculty has not been reached at all schools. In addition, this progress may be threatened by a recent proposal to the ABA Standards Review Committee to eliminate Standard 405(d), which provides some security of position to legal writing faculty.64
Equal grading policies have been one of the benchmarks in evaluating the progress of the field of legal writing and the seriousness with which it is treated by both students and other faculty. Historically, legal writing was not graded at all, or, if graded, not included in a student’s GPA. In her article assessing the state of legal writing programs in 2000, Professor Jo Anne Durako wrote, “While not a direct measure of the status of LRW professionals, grading policies for LRW courses reflect the status and value placed on the field. . . . If the course is valued, as evidenced by parity in grading policies, perhaps that parity will someday extend to the teachers.”65
The ABA’s Sourcebook on Legal Writing Programs has noted that grading legal writing the same as other courses, and including it in GPA and class rank calculations, helps both students and doctrinal faculty view the course as a serious and integrated part of the first-year curriculum. Doctrinal faculty are less likely to resent the time students spend on legal writing assignments “even when that time competes with preparation for other subjects.”66
Students whose legal writing grades are part of the GPA may also take legal writing more seriously because the grading method sends the message that the course is just as important as all other first-year subjects. When the legal writing grade counts in students’ overall average, they may be more likely to expend the necessary effort to learn the important analytical, research, and writing skills taught in the course.67
In addition, grading the course “indirectly recognizes that students have diverse abilities,” and that they may do better in this course than in a time-pressured, memorization-based final exam.68
As noted above, all but a handful of schools have moved beyond grading the course pass-fail. Legal writing grades are included in students’ GPAs and graded on the same curve as other required courses. One of the questions raised by this Article is whether grading the course in a different manner—using a criteria-referenced grading method, rather than a curve—will cause any slippage of the gains achieved.69
III. How Norm-Referenced Grading Is Inconsistent with Current Trends in Legal Education

A. Overview
Legal education is going through a transformation. Several current trends are having an impact on a new period of openness to change in the curriculum and pedagogy of law school.70 Perhaps the most influential impetuses for this process are Best Practices71 and the Carnegie Report.72 Both recommend that law schools more consciously integrate skills, professionalism, and ethics into the curriculum. They are just the most recent in a long line of reports recommending that law school become more relevant to practice.73
The reports also recommend that law schools pay more attention to their assessment of student learning and institutional effectiveness by setting explicit goals and developing methods to determine whether those goals have been met. They are thus part of the larger assessment movement, spurred by the requirements of regional accreditation agencies and by anticipated changes to the American Bar Association (ABA) standards for accreditation of law schools.74
Another trend is the humanizing legal education movement, which has sounded an alarm about the increasing anxiety and depression experienced by law students and lawyers, and how that negatively impacts those who enter the profession and their clients. The scholars writing in this field have drawn on the work of education experts and cognitive psychologists to determine how to improve both law student well-being and learning outcomes.
The issue of grading practices is relevant to all these trends. The authors of Best Practices, the Carnegie Report, and the literature on assessment and humanizing law school are united in their criticism of norm-referenced grading. They recognize that mandatory curves are inconsistent with the crux of their recommendations and the future of law school that each envisions.
B. Assessment of Student Learning
“The assessment movement is knocking at the door of American legal education.”75 Professor Gregory Munro76 made the statement above in his 2000 book, Outcomes Assessment for Law Schools, the first to comprehensively analyze assessment in the context of legal education. Ten years later, it is fair to say that the door has been opened, and the assessment movement has taken up residence in the living room. In 2011, any conference on legal education will undoubtedly include several panels and discussions on assessment,77 and a number of conferences are devoted entirely to the topic. It is a concept that has long been a staple of undergraduate education, but has only recently been on the radar for legal education.
1. Definitions and Purpose
Munro defines assessment as “a set of practices by which an educational institution adopts a mission, identifies desired student and institutional goals and objectives (‘outcomes’), and measures its effectiveness in attaining these outcomes.”78 The term is used to discuss both the evaluation of student learning and the evaluation of the educational effectiveness of the institution.79 Linda Suskie, an expert on assessment in higher education,80 describes the assessment of student learning as an “ongoing process” that establishes clear, measurable goals; ensures that students have adequate opportunities to meet those goals; and uses the information gathered “to improve teaching and learning.”81
Assessment of student learning should be designed to determine what and how students are learning, and to act as a learning tool with the goal of improving student performance.82 It is a “process, integral to learning, that involves observation and judgment of each student’s performance on the basis of explicit criteria, with resulting feedback to the student.”83
Students can be evaluated using summative or formative assessment. The traditional law school exam at the end of the semester is an example of a summative assessment. Its purpose is to “measure student performance and assign grades, rather than to provide extensive feedback.”84 Summative assessment is usually conducted at the end of the student learning process to measure the net effects of instruction “after the fact.”85 In contrast, formative assessment provides students with feedback86 “during instruction and is intended to guide the teaching-learning process.”87 It can be used during the course as “a diagnostic tool or instructional device for student learning”88 by helping teachers discover which pedagogical techniques are effective and which are not, thereby allowing them to improve their courses.89 In a class using formative assessment, “[s]tudents perform tasks, are evaluated, are provided feedback, and learn at the same time.”90
Prompt formative feedback is key to effective student learning, achievement, and satisfaction. “Frequent positive feedback helps students become self-motivated, independent learners.”91 Such feedback is most “valuable when teachers clearly articulate the criteria for competent student performance (for example, the elements of a convincing written argument), the students perform, and the students receive feedback based on the criteria.”92
Both Best Practices and the Carnegie Report devote substantial space to the topic of assessment. They recommend a greater focus on setting learning goals and assessing both students and the institution, and urge law schools to include more formative assessments, moving away from the traditional system of an entire grade based on one end-of-semester exam.93
2. “Norm-Referenced Grading Is Inconsistent with Sound Assessment Principles”94
Judith Wegner, former dean of the University of North Carolina School of Law and principal investigator for the Carnegie Report, has commented that educators need to go beyond Carnegie’s call for more formative assessment:
However, I think we need to do more than [formative assessment]. We have conflated some of what we do with our grading curves and approaches to grading. We are telling students about their comparative standing when that does not make much sense to them and does not help them build expertise, which is really the point. We confuse students because we do not give them meaningful benchmarks about the progress they are making toward the goal of being effective, talented lawyers. We need to do more about that.95
Both the Carnegie Report and Best Practices criticize the use of mandatory curves and favor criteria-referenced grading as a more reliable assessment method because it is based on “explicit criteria rather than the instructor’s gestalt sense of the correct answer or performance.”96 The authors of Best Practices could not be clearer in stating their preference: “Mandatory grade curves are not consistent with best practices for assessing student learning. A bell curve outcome actually reflects a failure of instruction.”97 The Carnegie Report characterizes norm-referenced and criteria-referenced grading as representing “fundamentally opposed philosophies about the purpose of assessment in professional education”:
Those who champion grading on the curve assume that legal education largely serves a sorting function. . . . On the one hand, the benefits to society, it is argued, in identifying, recognizing, and rewarding those few who will carry on the tradition of legal scholarship as professors, scholars, and jurists are obvious, and to many they outweigh the negatives associated with this grading scheme. On the other hand, the implicit pedagogical philosophy underlying criterion-referenced assessment is that the fundamental purpose of professional education is not sorting but producing as many individuals proficient in legal reasoning and competent practice as possible.98
In addition, norm-referenced grading goes hand-in-hand with a belief that any assessment system can do little more than sort and that teachers cannot raise the performance of most students.99
3. ABA Standards Revision
In 2007, the ABA began to review its accreditation policies and formed a special committee to study “output measures.” The 2008 report issued by this committee used the recommendations of Best Practices and the Carnegie Report as jumping-off points, describing them as “influential” and representative of “the current state of thought about law school pedagogy.”100 The committee acknowledged the criticism of current grading systems and the arguments for greater use of formative assessment and criteria-referenced, rather than norm-referenced, grading.101
The committee also recognized that the regional accreditation agencies that govern the universities that house most law schools are requiring law schools to more actively participate in the regional accreditation process. These agencies have for some time focused on outcome-based measures, thus forcing law schools to move in this direction.102 The committee’s report recommended that the ABA reexamine its accreditation standards with a goal of shifting towards outcome measures based on “the latest and best thinking of U.S. legal educators (as reflected in the Carnegie Foundation and ‘Best Practices’ reports) and legal educators in other countries,” as well as “the best thinking and practices of accreditors in other fields.”103 The latest draft of the revisions to Chapter Three of the ABA’s accreditation standards, which addresses the “Program of Legal Education,” includes new rules on learning objectives and assessment. Proposed Standard 304 is entitled “Assessment of Student Learning” and provides: “A law school shall apply a variety of formative and summative assessment methods across the curriculum to provide meaningful feedback to students.”104 Whatever language is ultimately approved, there is little doubt that law schools will be required to “reevaluate and perhaps adjust their delivery of legal education.”105 Any such reevaluation should include a critical review of law school grading methods.
C. The Humanizing Law School Movement
The negative psychological effects of law school on students have been well documented. Evidence is mounting that this process begins in the first year of law school.106 Specifically, students enter law school with emotional characteristics no different from other students, but they end the first year exhibiting signs of “declining happiness and well-being.”107 The authors of one study have suggested that “various problems reported in the legal profession, such as depression, excessive commercialism and image-consciousness, and lack of ethical and moral behavior, may have significant roots in the law-school experience.”108
The movement to humanize legal education represents a response to these studies.109 At its heart, the movement seeks to “create positive learning environments for students”110 by reducing or eliminating, to the extent possible, the “undue and unnecessary stress” of traditional legal education, which interferes with learning.111 Barbara Glesner Fines, one of the movement’s leading scholars, has described its advocates as focusing on the professional development of law students, including a focus on competency and ethics with a goal of graduating “confident, caring, reflective professionals, discerning their own values and purposes, and knowing how to work with others collaboratively and to understand diverse perspectives.”112 The movement has been growing in adherents since 1991, and its principles were highlighted in both Best Practices and the Carnegie Report.113 The Association of American Law Schools (AALS) created a Balance in Legal Education section in 2006, providing further evidence of the movement’s influence.114
Research in this field has also demonstrated that stress and anxiety can have a negative effect on students’ ability to learn. Stress and anxiety interfere with receiving and processing information, affecting “not only cognitive aspects of learning but emotional and attitudinal components as well.”115 Students may cope by procrastinating, “to provide an excuse for failure and to reduce the threat to self-esteem.”116
Norm-referenced grading is inconsistent with the principles of the humanizing legal education movement because it not only fosters a stress-inducing competitive atmosphere, but it also interferes with the deep learning created by intrinsic motivation, autonomy support, and self-efficacy.117 For example, several scholars have looked to the self-determination theory of human motivation (SDT) to better explain and understand how to help their students succeed in law school and in their careers. Specifically, such research demonstrates that students learn more effectively and deeply when they are intrinsically motivated and are offered autonomy support. Under this theory, motivation can be viewed as occurring on a continuum between extrinsic and intrinsic motivation.118 Students flourish and perform better as “motivation moves . . . from external and controlled to internal and chosen.”119
According to SDT, all human beings require regular experiences of autonomy, competence, and relatedness to thrive and maximize their positive motivation. In other words, people need to feel that they are good at what they do or at least can become good at it (competence); that they are doing what they choose and want to be doing, that is what they enjoy or at least believe in (autonomy); and that they are relating meaningfully to others in the process, that is, connecting with the selves of other people (relatedness).120
Mandatory curves interfere with autonomy needs and create an “external locus of control” for a student’s learning efforts, displacing intrinsic motivation.121 Students’ experience of institutional control interferes with “learning performance, well-being, and enjoyment of the process.”122 In short, “[t]he more controlled learners feel, the less they learn.”123 In contrast, criteria-referenced grading can provide autonomy support because students know their grade is not constrained by an external, predetermined limit on the number of high or low grades.
Without a mandatory curve, if the same students were to receive the same grades, they would be more likely to experience the locus of causation as internal—relating to their own effort, understanding, and level of achievement. In that case the lack of imposed control and the greater perceived autonomy support would promote a greater sense of personal responsibility, more internal motivation for students to apply themselves, and predictably enhanced well-being and learning performance.124
Best Practices also recognized that norm-referenced grading can have a “negative effect on student motivation and learning” because it informs students only how they have performed compared to other students. It does not tell them to what extent they have met the educational goals of the course.125
In addition, curved grading interferes with the “inherent, natural desire to learn” and negatively impacts both well-being and academic performance.126 A grading curve is unrealistic because it assumes that students will perform the same in every class subject to the curve, failing to account for different responses to, for example, a particularly effective teacher or a particularly engaging subject.127
Another psychological theory relevant to student learning is based on self-efficacy—“the personal belief that you can control an outcome—that you can achieve a desired result.”128 Research in self-efficacy theory has shown that “students are more likely to study efficiently and longer when they believe they will master the material than when they have doubts about their ability to learn.”129 The converse is that students can become depressed and anxious when they “value a goal highly but develop low self-efficacy in relation to their ability to achieve that goal.”130 These findings are not affected by the ability level at which students begin their efforts to achieve a goal. Thus, helping students to increase their self-efficacy will increase the likelihood they will do well. This includes helping “students establish goals that are attainable” and reducing “the threat of negative consequences over which they have no control.”131
The traditional structure of law school provides a rich breeding ground for low self-efficacy and thus helps to explain the high levels of stress and anxiety found among law students, particularly in the first year.132 This is due to the challenges of the new skills, the lack of direct feedback, and the norm-referenced grading system.133
Another way normalization policies contribute to student stress is by magnifying an already competitive atmosphere.134 After the first semester of law school, students are keenly aware of where they stand compared to their classmates, even if they are not aware of how that ranking takes place.135 The educational literature demonstrates that students who do not do as well during the first semester as they may have expected, frequently believe that they cannot change “their place in the grade hierarchy.”136 Such students,
accept their place in the system and subsequently may expend less effort and actually achieve less than they are capable of in subsequent tests. On the other hand, in an environment that emphasizes the possibility of achievement through criterion-referenced evaluation, students have greater incentive to perform better because the possibility of success is not limited by the performance of their classmates.137
Criteria-referenced grading is a good alternative to norm-referenced grading because “the process is efficient both for teacher and students; it communicates high expectations, encourages focus, and generally provides increased transparency and a sense of fairness to grading.”138 Although there are signs that law schools are adopting some of the recommendations of Best Practices, the Carnegie Report, and the humanizing law school movement, there is no evidence that the consistent recommendations to institute criteria-referenced rather than norm-referenced grading systems are having much impact. Legal writing is a good course in which to demonstrate the merits of this grading method. The next section will focus on why legal writing courses may be the best place to take the next step.
IV. Why Legal Writing Is Well-Suited for Criteria-Referenced Grading
The arguments against norm-referenced grading apply with particular force to legal writing classes. First, most legal writing classes are too small for a curve to be valid. Second, these classes are particularly suited to criteria-referenced grading because professors already evaluate their students based on explicit criteria, even though they must conform the results of that evaluation into a final grade that is based on a curve.
In short, legal writing professors already use good assessment practices139 by communicating clear standards of competency to students and by using formative assessment through frequent feedback on multiple assignments. As one scholar has noted, when the ABA revises its standards to include outcomes measures,
legal writing programs may experience less of a sea change than other areas in the legal academy because many of the underlying philosophies and practices associated with an outcomes-based approach are already accepted and being utilized by legal writing professors. Many legal writing professors already identify concrete objectives for student learning, assess that learning, and use the results of the assessments to improve their classes.140
Because legal writing classes effectively incorporate the key theories discussed in the assessment literature, they provide “excellent models to imitate.”141
A. Legal Writing Classes Are Too Small for Norm-Referenced Grading
Curved grading systems have limited validity in small classes. Thirty to thirty-five students is generally the minimum number for a valid sample for grade normalization.142 For example, under a curve based on the GPAs of the students in a particular class, smaller numbers decrease the likelihood that the comparative student performances will be consistent with predicted performances.143 In addition, educational literature demonstrates that students in smaller classes may legitimately achieve higher grades because students learn better in classes of fewer than thirty students.144
Legal writing classes are usually too small for a curve to effectively apply. The ABA Sourcebook on Legal Writing Programs recommends that in a program using tenure-track professors “each professor in a required first-year legal writing course should have no more than 30 to 35 students” and that this faculty/student ratio should be reduced when the writing professor teaches another course at the same time.145 In a program using full-time legal writing professors on long-term or short-term contracts, each professor should have no more than 30 to 45 students each semester, “assuming the professor is not teaching any other course,” and “[s]maller numbers are better.”146 Classes taught by adjunct professors “should never have more than 15 students per class; many schools limit the size of adjunct-taught writing classes to 10 or fewer.”147 The 2010 ALWD/LWI Survey indicates that actual numbers are slightly above, but close to, these recommendations.148 In addition, the authors of the Carnegie Report noted that the legal writing classes they observed were “typically small, with around twenty students.”149
B. Legal Writing Professors Use Rubrics to Evaluate Student Performance and Provide Frequent Feedback and Opportunities for Improvement
As discussed above, criteria-referenced grading is accomplished by evaluating students based on explicit, objective standards. This is often done through the use of rubrics, which are frequently used by legal writing professors.150 “Rubrics are sets of detailed written criteria used to assess student performance . . . based on the learning goals of the course. These goals are what the professor has identified students should learn by the end of the course. Within these goals, benchmarks may describe varying levels of student performance.”151 This method tells students where they are “in relation to mastering the material,”152 rather than where they are in relation to other students in the class.
Rubrics can assist with both student learning and assessment and can make the grading process more efficient.153 At its most basic, a rubric is a “scoring guide” that allows a professor to evaluate student work based on specific guidelines.154 As Linda Suskie notes, “There is no single correct way to write or format rubrics.”155
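A rubric of this kind can be represented, in rough outline, as a weighted set of criteria with benchmark levels. The sketch below is hypothetical: the criteria, weights, and benchmark descriptions are illustrative assumptions rather than a standard legal writing rubric.

```python
# Hypothetical rubric for an office memorandum, expressed as a scoring guide.
# The criteria, weights, and benchmark descriptions are illustrative only.

RUBRIC = {
    "Issue identification": {"weight": 0.25, "benchmarks": {
        3: "identifies all legal issues", 2: "identifies most issues", 1: "misses key issues"}},
    "Rule explanation":     {"weight": 0.25, "benchmarks": {
        3: "accurate and well organized", 2: "mostly accurate", 1: "incomplete or inaccurate"}},
    "Application to facts": {"weight": 0.30, "benchmarks": {
        3: "facts tied to rules throughout", 2: "analysis partly conclusory", 1: "largely conclusory"}},
    "Citation and style":   {"weight": 0.20, "benchmarks": {
        3: "clean and correct", 2: "minor errors", 1: "frequent errors"}},
}

def score_memo(levels):
    """levels maps each criterion to a benchmark level (1-3); returns a 0-100 score."""
    return round(100 * sum(RUBRIC[criterion]["weight"] * level / 3
                           for criterion, level in levels.items()), 1)

print(score_memo({"Issue identification": 3, "Rule explanation": 2,
                  "Application to facts": 3, "Citation and style": 2}))  # 85.0
```

Scores generated this way could then be mapped to letter grades against fixed thresholds, like those sketched in Part I, rather than against a curve.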
Rubrics have been used, for example, by elementary school teachers, to assess the reading and writing skills of their students. Such “performance-based assessments” are mandated by a number of states. Rubrics used to assess proficiency in reading and writing “assist both the teacher and the learner in determining each level of performance.” One teacher noted that when she showed her students “a set of criteria with examples for establishing performance levels, [her] students were supported and were more successful in meeting performance goals.”156 Rubrics can be used to evaluate what students know about a topic.157
In the law school context, rubrics are an effective and efficient way for law professors to communicate their learning goals for students. In his book on outcomes assessment, Gregory Munro uses legal writing to illustrate this point: “the learning of effective legal writing increases if the teacher has identified the standards for good legal writing, conveyed those standards in advance to the students, and evaluated the writing on the basis of those standards.”158
Legal writing, through its use of good assessment practices, can provide a model as the law school as a whole is required to adapt to the need for outcome-based measurements. For example, while traditional law school classes have historically focused on the end-of-semester final exam, providing little or no formative assessment, legal writing classes regularly use formative assessments. Gregory Munro has acknowledged that although examples of formative assessment in law school are rare, they can frequently be found in clinics and legal writing courses.159 Professors in these courses provide frequent oral and written feedback. Often, this is “the only systematic opportunity in the first year to criticize students’ degree of mastery over course material.”160 These critiques are essential to acquiring the important basic skills of legal thinking.161 In addition, clinicians and legal writing faculty “spend the most individualized time with students.”162
Effective assessment is an important component of successful teaching and learning environments. An “[e]ffective assessment system[]” allows students to develop expertise by providing them with frequent feedback and opportunities to revise their work, by teaching them techniques for self-assessment, and by measuring “their achievement of the course goals.”163 These are common practices in the legal writing classroom.164
Because students have multiple opportunities to improve and receive individual attention, it is likely that a larger percentage of the class will reach at least a minimum level of competency in the skills being assessed and that more of them will excel than in a larger class using only summative assessment. In addition, curves are often based on the previous grades of the class on the assumption that students will continue to perform similarly. Such a system does not make sense in a skills class in which students may do better because their grades are not tied to the memorization and timed performance usually required in an exam.165
Although legal writing uses formative assessment, explicit criteria, and opportunities for improvement, the course is still subject, at the vast majority of schools, to the same mandatory curve as other required courses.166 This may be particularly disillusioning for students, as the course gives the illusion of criteria-referenced grading, by using rubrics, or at least stated standards for what is expected on individual assignments, when, ultimately, there can be only a limited number of top grades, and often, an unavoidable percentage of grades at the bottom. Even those who favor grade normalization acknowledge “grade normalization is inherently incompatible” with teaching competency based on specific criteria because such an approach employs “intensive efforts on the individual level to develop abilities.”167
C. The Benefits Outweigh the Risks for Legal Writing
There are risks, however, in placing legal writing at the forefront of a movement to change grading policies. Over the past several decades, the legal writing community has struggled and succeeded in achieving more status and recognition for both the course itself and for those who teach it. However, grading without a curve, when other required classes are graded with a curve, could cause a slippage of these hard-won gains. To gain recognition and respect for legal writing as a discipline in its own right, the legal writing community has sought pay and title equity, increased credits for the courses, and full rights to participate in law school governance. Part of gaining the respect and attention of students and other faculty has been to grade the course in the same manner as other courses.168 The thought has been that students will not put the same effort or see the same value in a course that is not as significant in the doling out of rewards and that “[n]on-legal writing faculty may see legal writing as less substantial than the doctrinal courses.”169
This fear is based on the history of not grading legal writing courses, or grading them under a pass-fail system.170 The same problems should not arise under a criteria-referenced system. Criteria-referenced grading is still grading and still communicates distinctions between students. In fact, a criteria-referenced method does so more accurately. There is no reason that a grade that results from a criteria-referenced system should not be included in the GPA or used to determine the traditional law school accolades like law review, scholarships, and prestigious jobs.
The question remains whether the legal writing field could suffer a setback in the gains achieved over the past two decades if a different system is used to grade the course. This is less likely to be a problem at schools where legal writing faculty members have been integrated into the general faculty and the law school community takes the course seriously.171 In other words, basing a grade on objective standards, rather than a curve, is less likely to cause a problem where the gains sought by legal writing professionals have already been substantially achieved.
Such a change may instead be a benefit to the legal writing program at a school, rather than a risk, because of the trend toward outcomes assessment. Law schools may welcome the opportunity to demonstrate to accrediting agencies that they are beginning to institute best practices for assessment. By grading the course in the manner recommended by Best Practices, the Carnegie Report, and assessment scholars, legal writing faculty can become the assessment experts at their schools. The program can be held out as an example to the ABA and regional accrediting agencies that the school is serious about assessment.172
The anticipated inclusion of formative assessment in the revised ABA standards makes this experience one that is valuable to the rest of the law school. Legal writing professors “are particularly well suited to help other faculty members as this shift occurs,”173 and “will be natural leaders for their colleagues both within and without the legal writing discipline as everyone adapts to this new paradigm.”174
V. Moving Beyond Legal Writing: Applying Criteria-Referenced Grading to Other Classes and Responding to Criticisms
Although it makes sense to begin the process of grading reform with legal writing, the goals of the Carnegie Report, Best Practices, and the humanizing law school movement will not be achieved with a change in just one course. A larger shift that encompasses other law school courses is a reachable goal that is worth the attention of law school reform advocates. Realistically, school-wide grading reform is no small challenge because many law schools are comfortable with the current norm-based system and may fear a lack of grading consistency, the perception of grade inflation, and the difficulty in collaborating on standards. As discussed below, these concerns can be addressed and the principles of criteria-referenced grading effectively adapted to other courses. Pursuant to the principles set forth in the Carnegie Report, the integration of skills and doctrine requires a school-wide effort. Criteria-referenced grading can be part of that effort.
A. Criteria-Referenced Grading Is Fair and Consistent
Some schools seek uniformity in grading through a norm-referenced grading system.175 “Institutional grading policies often are justified as necessary to even out differences among faculty in grading practices.”176 A concern for fairness in the sorting function by which rewards are distributed to students is one of the strongest arguments in favor of mandatory curves.177 The goal is to protect students from the effects of assignment to a professor with a tendency to assign extreme grades.178
The advantage of normalization is that it reduces or eliminates the variability of grading practices among individual professors. Two sections of the same course are normalized when it is assumed that the students in each course are of roughly comparable ability, so that differences in grades are the product of differences in the professors’ grading policies or of their teaching practices. Normalization is used to prevent the inequity that otherwise would result from random section assignment.179
However, norm-referenced grading may not always solve the problem of grades that are perceived as too high or too low. In a system based on a required mean, for example, professors may still achieve the mean by awarding extreme grades.180 One scholar has suggested that “the best means of furthering uniformity is not through rule, but through consensus,” and that grading is an issue that should regularly be discussed by the law school community.181 Others have suggested that the notion that grading on a curve is fairer and more equitable than other grading systems is “a myth.”182
Fairness can be achieved just as well, if not more effectively, through criteria-referenced grading. The use of explicit written criteria, or rubrics, can result in grading that is more efficient and more consistent, particularly after a professor has gained some experience using them.183 Moreover, if a professor’s grades seem particularly high or low, such grades should be easy to justify using a criteria-referenced system.184 If the grade reflects scores on exams or assignments, each assessed based on a specific set of standards, the professor will be able to demonstrate how the grades in that class were determined. A property professor would be able to explain, for example, that twenty-five percent of the class failed to identify the future interests issue on the midterm exam, resulting in more low grades.
B. Criteria-Referenced Grading Should Not, By Itself, Lead to Grade Inflation
There is a tendency to see any change in grading policy as representing lax standards and lack of rigor. In particular, proponents of norm-referenced grading argue that without the curve, grades will be inflated.185 As the authors of the Carnegie Report point out, this concern has been refuted in other fields, including medicine.186
Grades by themselves do not demonstrate rigor or the lack thereof. A more relevant question is whether students are being held to standards that are both sufficiently high and reasonable. Grading on a curve does not provide this information. If the curve mandates that a certain percentage of a class receive high grades, then students who do not necessarily meet a high standard set for certain skills can still achieve a high grade, merely by performing better than their classmates. On the other end, students who have achieved an acceptable level of competency and met the standard may receive a low grade because their classmates have performed better. In addition, minimal differences between the students may be exaggerated under certain norm-referenced systems, particularly when a certain percentage of each grade is mandated.
Criteria-referenced grading can, on the other hand, more effectively address the concern of grade inflation and rigor. Under this system, standards are set, with specific criteria to be met. The rubric can be detailed, so that the result provides more information about an individual student’s level of achievement. In his influential book, What the Best College Teachers Do, Ken Bain noted that a good method for deciding if a course is graded too leniently is to examine the course materials and the methods used to assess student performance.187
But even with rubrics, students have performed at the very highest to lowest levels, including failing. . . . [I]t is likely professors will continue to see students perform across a spectrum, even when they provide students with rubrics. In fact, should students show improved work, rubrics could provide administrators with concrete evidence to show why mandatory means and curves are inappropriate. Specific data about student performance, collected over several years, may indicate that clusters of students do well or do poorly in a way that does not correspond to a perfect curve or pattern.188
The key is to set standards that are realistic—standards that challenge students but are attainable. This is consistent with sound assessment practices: teachers set learning objectives, then determine how to assess whether they have been met.189
A criteria-referenced system can be designed so that grades can be high or low. The advantage is that a standard can be set, rather than an arbitrary distribution determined in advance, regardless of actual student performance. Ultimately, a criteria-referenced system forces teachers to apply greater “intellectual rigor” to the grading process itself, requiring the same depth of analysis that teachers expect of their students.190
C. Criteria-Referenced Grading Can Effectively Communicate Student Competency to Employers
Another issue that can arise is one of “consumer acceptance” and problems for graduates if employers are not familiar with a new grading system.191 Grades serve an important external function by aiding potential employers in making hiring decisions. However, unlike a pass-fail system, under a criteria-referenced grading system, students will still have grades, GPAs, and a class rank. The fear that it would be “unfair to students and potential employers to gloss over differences in student preparation and proficiency under a criterion-referenced grading system” is unfounded.192 Employers should have no reason to notice any change in the grades, ranking, or other information normally provided by law school graduates applying for jobs.
While grades certainly serve a function by communicating some information to employers about students, that information is probably less useful and less accurate than is commonly thought. The naked grade does not tell the prospective employer anything about a particular school’s grading system, or about its standards. The GPA does not tell the employer whether the student was particularly adept at the skills needed for that particular job. And even under a norm-referenced system, employers do not usually have sufficient information to compare the various curves used at different schools.193
A law school that engages in curricular change, along with a change in grading procedures, could consider preemptive publicity about the meaning and advantages of the change. Through stories in legal publications, which may then be picked up on widely read blogs, the “real” meaning of a school’s grading system could be communicated to make clear that the change was not designed to give higher grades to students, but to give grades that more accurately reflect their ability to master particular skills. Hopefully, as more schools adopt the recommendations of the Carnegie Report and Best Practices, legal employers will also adapt and look more deeply and more broadly at the graduates they interview. Because students will have the opportunity to take more courses with a skills component, an employer might ask more specifically about grades in specific courses, or, beyond grades, about the specific skills that were acquired.
D. Overcoming Faculty Resistance to Change
Several scholars who have praised criteria-referenced grading have also questioned the likelihood that law schools will change their grading policies,194 acknowledging that such a move “would be a fundamental change in law school culture.”195 The authors of a recent study of law student depression similarly concluded that although the current grading system is one of the sources of law school stress, “it seems unlikely that most law schools will abandon traditional grading methods.”196
One difficulty with criteria-referenced assessment is getting faculty to agree on standards of performance. However, this problem has been overcome in other fields of study, including medicine.197 In addition, legal writing professors, who may be more accustomed to collaborating on standards, may be able to advise their colleagues on strategies for reaching consensus.
Faculty should be able to agree on school-wide goals regarding what students should be able to do at the end of, for example, the first year, the second year, and at graduation. These goals can guide professors who teach a particular subject to agree on general course objectives. Reaching consensus on what students should be learning would still allow individual faculty members to create their own rubrics and objective grading criteria for a particular assignment or exam.198
Additional steps can be taken to avoid issues with consistency and grade inflation, real or perceived. For example, in an adjunct-taught legal writing program, or with any courses taught by adjunct faculty, a director or associate dean can provide oversight to assist in the development of rubrics and objective grading criteria. Faculty teaching a particular course, or in a particular program, would have to work together to achieve a level of consistency in standards across sections—particularly in programs that extend beyond the first year, when students can presumably choose their section and “teacher-shopping” for a good grade becomes a risk. For each assignment, faculty can agree, for example, on a rubric that explains the qualities of an “A” paper, a “B” paper, and so on.199 Faculty can share examples of best and worst papers and discuss their strengths and weaknesses. The same process can be done with essay exams in a doctrinal course, or with scholarly papers in a seminar course. Although such a process will be time-consuming at the start, it will become more efficient with experience.
In What the Best College Teachers Do, Ken Bain summarized the methods used by the educators he studied to evaluate their students. The teachers focused on what students needed to learn to achieve a particular grade—grades represented “clearly articulated levels of achievement.”200 Students were expected to meet standards of excellence that were neither absolute nor arbitrary.201 The primary goal of these teachers was “to help students learn to think about their own thinking so they can use the standards of the discipline or profession to recognize shortcomings and correct their reasoning as they go. . . . Grading on a curve, therefore, makes no sense in this world.”202 These are worthy goals for law schools as well.
CONCLUSION
As legal education moves toward more integration of skills and doctrine, as recommended by Best Practices and the Carnegie Report,203 the traditional methods of law school assessment will become more difficult to justify. The changes already under way in law schools make grading reform more urgent: norm-referenced grading is largely inconsistent with the push toward curricular innovation, learning goals, and outcomes assessment, and with the humanizing law school movement.
Even without full-scale integration, if only some of the recommendations are adopted, and the ABA’s accreditation standards change to require greater use of formative assessment, the benefits of a criteria-referenced system will be hard to deny. One author has noted that the ideas represented in the draft standards—“articulating the knowledge and professional skills that students should learn in courses, designing curriculum to serve those goals, assessing students’ progress with reference to those goals and sharing that evaluation with students”—are consistent with the “signature pedagogy of legal writing,” a pedagogy that other law school programs might find it useful to adopt.204
Criteria-referenced grading will require some increased effort at the start, but it is likely to reap great rewards in both improved student well-being and academic success. It is the right thing to do for students, and for the profession as a whole. Legal writing professors can lead the way by becoming “proponents of conducting evaluation in the service of learning.”205 We need to “know what, how, and whether our students are learning and in what ways our practices—both in instruction and in assessment—are helping them to learn.”206 Criteria-referenced grading is a step in the direction of achieving that goal.
* © 2011, Leslie M. Rose. All rights reserved. Professor and Director, Advanced Legal Writing Program, Golden Gate University School of Law. Thank you to my wonderful research assistant Steffanie Bevington, to Eric Christiansen and Susan Rutberg for their helpful comments and cheerleading, to Ellie Margolis and Kristen Tiscione for their insightful critique, and to the Golden Gate Scholarship Support Group for providing a forum for me to share my ideas and receive encouragement.
1. Ken Bain, What the Best College Teachers Do 150 (Harv. U. Press 2004).
2. See e.g. Robert C. Downs & Nancy Levit, If It Can’t Be Lake Woebegone . . . A Nationwide Survey of Law School Grading and Grade Normalization Practices, 65 UMKC L. Rev. 819, 819–820 (1997); Barbara Glesner Fines, Competition and the Curve, 65 UMKC L. Rev. 879, 892 (1997); Jeffrey Evans Stake, Making the Grade: Some Principles of Comparative Grading, 52 J. Leg. Educ. 583, 584–585 (2002); Paul T. Wangerin, Calculating Rank-in-Class Numbers: The Impact of Grading Differences Among Law School Teachers, 51 J. Leg. Educ. 98, 104 (2001) (“Indeed, it probably is no exaggeration to say that a single year of grades in law school can have life-changing consequences for individual students.”).
3. Andy Mroch, Law School Grading Curves 2–5 (Am. Assn. of L. Schs. 2005) (available at https://www.aals.org/deansmemos/Attachment05-14.pdf).
4. See Jay M. Feinman, Law School Grading, 65 UMKC L. Rev. 647, 648 (1997); James O. Hammons & Janice R. Barnsley, Everything You Need to Know about Developing a Grading Plan for Your Course (Well, Almost), 3 J. on Excellence in College Teaching 51, 53 (1992).
5. See e.g. Leah M. Christensen, Enhancing Law School Success: A Study of Goal Orientations, Academic Achievement and the Declining Self-Efficacy of Our Law Students, 33 L. & Psychol. Rev. 57, 81 (2009); Peggy Cooper Davis, Slay the Three-Headed Demon! 43 Harv. Civ. Rights-Civ. Liberties L. Rev. 619, 622 (2008); Fines, supra n. 2, at 883–886; Hammons & Barnsley, supra n. 4, at 54; Emily Zimmerman, An Interdisciplinary Framework for Understanding and Cultivating Law Student Enthusiasm, 58 DePaul L. Rev. 851, 897 (2009).
6. Roy Stuckey et al., Best Practices for Legal Education: A Vision and a Road Map (Clin. Leg. Educ. Assn. 2007) [hereinafter Best Practices].
7. William M. Sullivan et al., Educating Lawyers: Preparation for the Profession of Law (Jossey-Bass 2007) [hereinafter Carnegie Report].
8. See infra sec. III(C).
9. See Susan Hanley Duncan, The New Accreditation Standards Are Coming to a Law School Near You—What You Need to Know About Outcomes & Assessment, 16 Leg. Writing 605 (2010); ABA Sec. of Leg. Educ. & Admis. to B., Stands. Rev. Comm., Student Learning Outcomes Subcommittee May 5, 2010 Draft, https://www.abanet.org/legaled/committees/comstandards.html (last visited June 1, 2011) (click on “Report of Subcommittee on Student Learning Outcomes,” under the “Meeting Date: July 24–25, 2010” heading) [hereinafter Student Learning Outcomes Draft].
10. See infra pt. III.
11. See infra pt. III.
12. Throughout this Article, the term “legal writing” will be used as shorthand to refer to the required course (encompassing a two-semester course in the first year, and, increasingly, an additional semester in the second year, and sometimes, the third year) that covers written and oral communication, advocacy, legal research, analysis, and depending on the program, additional skills. For an overview of what is typically covered in this course, see David S. Romantz, The Truth about Cats and Dogs: Legal Writing Courses and the Law School Curriculum, 52 U. Kan. L. Rev. 105, 139, 145–146 (2003).
13. Feinman, supra n. 4, at 648 (discussing ranking, which involves grading on a curve); see also Hammons & Barnsley, supra n. 4, at 53. In the Hammons and Barnsley article, a professor of higher education leadership and a doctoral program graduate summarize the pros and cons of several grading methods. See id. at 51.
14. Feinman, supra n. 4, at 649–652.
15. See Stake, supra n. 2, at 599.
16. Feinman, supra n. 4, at 649–652; see also Fines, supra n. 2, at 880–881 (distinguishing between the “assessment process” and the “reporting process”).
17. Best Practices, supra n. 6, at 243.
18. Maurice Scharton, The Politics of Validity, in Assessment of Writing: Politics, Policies, Practices 69 (Edward White et al. eds., Modern Language Assn. of Am. 1996) [hereinafter Assessment of Writing].
19. The American College Teacher: National Norms for 2007–2008, Research Br. (Newsltr. of Higher Educ. Research Inst. at UCLA) (Mar. 2009) (available at https://learningoutcomesassessment.org/documents/brief-pr030508-08faculty.pdf). The survey results appeared in the newsletter of the Higher Education Research Institute at UCLA. The results are based on the responses of 22,562 full-time faculty members at 372 colleges and universities around the U.S. Id. at 1. The report did not indicate what grading method other than the bell curve was used by faculty. Id. at 2.
20. Downs & Levit, supra n. 2, at 843–844, 855.
21. See Hammons & Barnsley, supra n. 4, at 53–54.
22. Gary R. Taylor, The Bell Curve Has an Ominous Ring, 46 Clearing House 119, 120 (Oct. 1971).
23. Id. at 121.
24. See Scharton, supra n. 18, at 70 (calling the decision to use a bell curve “ethically dangerous”); Hammons & Barnsley, supra n. 4, at 54.
25. Feinman, supra n. 4, at 648; see also Hammons & Barnsley, supra n. 4, at 54–55.
26. Carnegie Report, supra n. 7, at 170.
27. Id.
28. Feinman, supra n. 4, at 648–649.
29. Sophie M. Sparrow, Describing the Ball: Improve Teaching by Using Rubrics—Explicit Grading Criteria, 2004 Mich. St. L. Rev. 1, 6.
30. Carnegie Report, supra n. 7, at 170.
31. Hammons & Barnsley, supra n. 4, at 55.
32. See id. at 56–57.
33. Deborah H. Holdstein, Gender, Feminism, and Institution-Wide Assessment Programs, in Assessment of Writing, supra n. 18, at 204.
34. Michalis Michaelides & Ben Kirshner, Graduate Student Attitudes toward Grading Systems, 8 College Q. (Fall 2005) (available at https://www.senecac.on.ca/quarterly/2005-vol08-num04-fall/michaelides_kirshner.html).
35. Id.; Hammons & Barnsley, supra n. 4, at 57.
36. Lawrence Krieger, Human Nature as a New Guiding Philosophy for Legal Education and the Profession, 47 Washburn L.J. 247, 301 (2008).
37. Downs & Levit, supra n. 2, at 824, 852–853.
38. See id. at 836; Nancy H. Kaufman, A Survey of Law School Grading Practices, 44 J. Leg. Educ. 415, 417–418 (1994); Mroch, supra n. 3, at 2–3.
39. Downs & Levit, supra n. 2, at 820.
40. Kaufman, supra n. 38, at 423.
41. Downs & Levit, supra n. 2, at 836; see also Kaufman, supra n. 38, at 417–418 (finding that 66.4 percent of the 119 schools responding to a 1993 survey used “some form of curve” for “some classes”).
42. Mroch, supra n. 3, at 2–3.
43. Id. at 4–6.
44. Vesna Jaksic, Grading Policies Get a Tweaking: Several Schools in Recent Months Have Revamped Their Evaluation System to Improve Fairness, Natl. L.J. S1, S1 (Feb. 23, 2009); Catherine Rampell, In Law Schools, Grades Go Up, Just Like That, N.Y. Times A1, A1 (June 22, 2010).
45. Jaksic, supra n. 44, at S1; Rampell, supra n. 44, at A3. Most law schools post specific information about their grading policies in the student handbooks that are available on each school’s website. See e.g. Harv. L. Sch., Handbook of Academic Policies 2010–2011, Requirements for the J.D. Degree, https://www.law.harvard.edu/academics/handbook/rules-relating-to-law-school-studies/2010-2011-requirements-for-the-j.d.-degree-.html#J.GradesforJ.D.Students (accessed Apr. 15, 2011); Stanford L. Sch., Student Handbook 2010–2011, at 33, https://www.law.stanford.edu/experience/studentlife/SLS_Student_Handbook.pdf (accessed Apr. 15, 2011); Yale L. Sch., Yale Law School 2010–2011: Bulletin of Yale University 85 (2010) (available at https://www.yale.edu/printer/bulletin/pdffiles/law.pdf).
46. See e.g. Brian Leiter, Brian Leiter’s Law School Reports, NYU’s New Grading Curve, https://leiterlawschool.typepad.com/leiter/2008/12/nyus-new-grading-curve.html (posted Dec. 3, 2008, 9:15 a.m. CST); Brian Leiter, Brian Leiter’s Law School Reports, Will Other Schools Follow the Yale/Harvard/Stanford Lead of Effectively Eliminating Grades? https://leiterlawschool.typepad.com/leiter/2008/10/will-other-schools-follow-the-yaleharvardstanford-lead-of-effectively-eliminating-grades.html (posted Oct. 27, 2008, 12:15 p.m. CST); Elie Mystal, Above the Law, Harsh Curve: Competing Thoughts from Florida International and Loyola–Los Angeles, https://abovethelaw.com/2009/11/harsh_curve_competing_thoughts.php (posted Nov. 9, 2009, 6:14 p.m. EST); Elie Mystal, Above the Law, Harvard and Georgetown Law Make Grading Easier, https://abovethelaw.com/2009/12/hls_and_gulc_make_grading_easier.php#more (posted Dec. 3, 2009, 12:51 p.m. EST); Elie Mystal, Above the Law, Loyola Law School (L.A.) Retroactively Inflates Grades, https://abovethelaw.com/2010/03/loyola-law-school-la-retroactively-inflates-grades/#more-9204 (posted Mar. 31, 2010, 7:44 p.m. EST).
47. See e.g. Kaufman, supra n. 38, at 422 (results of 1993 survey indicated that forty-four law schools had changed their grading policies in the preceding five years and that four were considering a change); Deborah Waire Post, Power and Morality of Grading—A Case Study and a Few Critical Thoughts on Grade Normalization, 65 UMKC L. Rev. 777, 786 (1997) (noting law students’ awareness that grading practices at their school might put them at a disadvantage in a time of downsizing by employers).
48. In 2010, 191 schools responded to the survey. The survey is sent to all United States AALS member law schools, AALS Non-Member Fee-Paying schools, and the University of Windsor in Ontario, Canada. ALWD & Leg. Writing Inst., 2010 Survey Results, at iii (available at https://www.alwd.org/surveys/survey_results/2010_Survey_Results.pdf) [hereinafter 2010 Survey Results].
49. Only one school reported that legal writing grades were not included in the students’ GPAs. Id. at 9. Four schools reported that the course was graded purely pass-fail. Id.
50. Id. at 10 (indicating that 107 schools reported that legal writing is graded the same as other first-year courses, 46 schools reported that the course is graded on a curve specifically for legal writing, and 8 reported grading on some other curve or mean). In addition, more than half of the 120 schools responding to a 1993 study reported that they graded legal writing the same as other courses. Kaufman, supra n. 38, at 416.
51. See Linda H. Edwards, Reflections on Legal Writing: A Writing Life, 61 Mercer L. Rev. 867, 878 (2010).
52. Sec. of Leg. Educ. & Admis. to B., 2010–2011 ABA Standards and Rules of Procedure for Approval of Law Schools, at Stand. 302(a)(2), (a)(3) (ABA 2010) (available at https://www.abanet.org/legaled/standards/standards.html) [hereinafter ABA Standards].
53. See 2010 Survey Results, supra n. 48, at v–viii.
54. See Karin Mika, Acknowledging our Roots: Setting the Stage for the Legal Writing Institute, 24 Second Draft (bull. of Leg. Writing Inst.) 4, 4–6 (Spring 2010); Jill J. Ramsfield, Legal Writing in the Twenty-First Century: A Sharper Image, 2 Leg. Writing 1, 15 (1996).
55. Romantz, supra n. 12, at 133.
56. Mary S. Lawrence, The Legal Writing Institute, The Beginning: Extraordinary Vision, Extraordinary Accomplishment, 11 Leg. Writing 213, 224 (2005).
57. Ramsfield, supra n. 54, at 3–4.
58. Id. at 5.
59. Id. at 25.
60. Results of the annual survey are available at https://www.alwd.org.
61. Commun. Skills Comm., Sec. Leg. Educ. & Admis. to B., Sourcebook on Legal Writing Programs 85 (Eric B. Easton ed., 2d ed., ABA 2006) [hereinafter Sourcebook] (referencing multiple other surveys).
62. 2010 Survey Results, supra n. 48, at v–vi.
63. Id. at viii.
64. See ABA Sec. of Leg. Educ. & Admis. to B., Stands. Rev. Comm., Report of Subcommittee on Academic Freedom and Status of Position 4 (Draft of July 15, 2010) (available at https://www.abanet.org/legaled/committees/comstandards.html) (click on “Report of the Subcommittee on Academic Freedom and Status of Position,” under the “Meeting Date: July 24–25, 2010” heading); ABA Standards, supra n. 52, at stand. 405. The current standard governing legal writing professors can be found in Standard 405(d), which provides that a law school “shall afford legal writing teachers such security of position and other rights and privileges of faculty membership as may be necessary to (1) attract and retain a faculty that is well qualified to provide legal writing instruction as required by Standard 302(a)(3), and (2) safeguard academic freedom.” ABA Standards, supra n. 52.
65. Jo Anne Durako, A Snapshot of Legal Writing Programs at the Millennium, 6 Leg. Writing 95, 114 (2000) (analyzing results of the 1999 ALWD survey); see also Helene S. Shapo & Christina L. Kunz, Brutal Choices, 2 Persps. 6, 6–8 (1993). For an alternative view, see Steve J. Johansen, Life without Grades: Creating a Successful Pass/Fail Legal Writing Program, 6 Persps. 119, 119–121 (1998).
66. Sourcebook, supra n. 61, at 75–76.
67. Id. at 76.
68. Id.
69. See infra sec. IV(C).
70. See e.g. Jill Schachner Chanen, Re-Engineering the J.D.: Schools Across the Country Are Teaching Less about the Law and More about Lawyering, ABA J. 42 (July 2007); Edward Rubin, What’s Wrong with Langdell’s Method, and What to Do About It, 60 Vand. L. Rev. 609 (2007).
71. Best Practices, supra n. 6.
72. Carnegie Report, supra n. 7.
73. For a summary of the history of criticism of legal education, see David I.C. Thomson, Law School 2.0: Legal Education for a Digital Age 57–72 (LexisNexis 2009).
74. See infra sec. III(B)(3).
75. Gregory S. Munro, Outcomes Assessment for Law Schools 3 (Inst. for L. Sch. Teaching 2000) (available at https://lawteaching.org/publications/books/outcomesassessment/munro-gregory-outcomesassessment2000.pdf).
76. Munro teaches at the University of Montana Law School and is recognized as an expert on assessment in law school. He is cited in the Carnegie Report’s chapter on “Assessment and How to Make it Work,” Carnegie Report, supra n. 7, at 181–182, and was part of the steering committee that put together Best Practices, Best Practices, supra n. 6, at x. His work is relied on heavily in the Best Practices chapter on assessing student learning. See Best Practices, supra n. 6, at 239, 241, 253–254, 257–259. He is a frequent speaker on the topic of assessment. See e.g. Gregory S. Munro, Presentation, Assessment in Law Schools (Denver, Colo. Sept. 12, 2009) (video of presentation available at https://www.law.du.edu/index.php/assessment-conference/program); Gregory S. Munro, Presentation, The Importance of Student Assessment (S.F., Cal. Jan. 6, 2011).
77. See e.g. Am. Assn. of L. Schs., 2011 Annual Meeting Final Program 26 (How Legal Writing Faculty Can Contribute to Their Law School’s Assessment Plan), 31–32 (The Importance of Student Assessment: Part I: Why Student Assessment Matters, Part II: Improving Learning and Student Engagement Through Assessment) (2011).
78. Munro, supra n. 75, at 11. The assessment movement encompasses both student learning and institutional effectiveness; this Article addresses only the former.
79. Best Practices, supra n. 6, at chs. 7–8; Munro, supra n. 75, at 12; see also Victoria L. VanZandt, Creating Assessment Plans for Introductory Legal Research and Writing Courses, 16 Leg. Writing 313, 320–321 (2010). The proposed changes to the ABA accreditation standards also separate assessment of student learning (Standard 304) from institutional effectiveness (Standard 305). ABA Sec. of Leg. Educ. & Admis. to B., Student Learning Outcomes Draft, supra n. 9, at 4.
80. Linda Suskie, Assessing Student Learning: A Common Sense Guide, at xi (2d ed., Jossey-Bass 2009).
81. Id. at 36, 38, 43, 50.
82. Munro, supra n. 75, at 11; see also Robert J. Marzano, Classroom Assessment & Grading That Work 104 (Assn. for Supervision & Curriculum Dev. 2006).
83. Munro, supra n. 75, at 12 (internal quotations and citations omitted).
84. Gerald F. Hess, Heads and Hearts: The Teaching and Learning Environment in Law School, 52 J. Leg. Educ. 75, 105–106 (2002).
85. Munro, supra n. 75, at 35–36; see also Scharton, supra n. 18, at 69.
86. Hess, supra n. 84, at 105–106.
87. Scharton, supra n. 18, at 69.
88. Munro, supra n. 75, at 35–36.
89. Duncan, supra n. 9, at 623.
90. Munro, supra n. 75, at 36.
91. Hess, supra n. 84, at 106.
92. Id.
93. See Best Practices, supra n. 6, at 235–263; Carnegie Report, supra n. 7, at 162–184.
94. Munro, supra n. 75, at 119–120.
95. The Opportunity for Legal Education—A Symposium of the Mercer Law Review, 59 Mercer L. Rev. 821, 837 (2007–2008) (This portion of the symposium issue included a transcript of the morning session, held on November 9, 2007, at which Dean Wegner spoke.).
96. Best Practices, supra n. 6, at 245; see also Carnegie Report, supra n. 7, at 169.
97. Best Practices, supra n. 6, at 244 (“Norm-referenced assessment allows grades to be distributed along a bell curve. We should not be concerned about whether students’ performances will be distributed along a normal ‘bell curve’ because one should not expect it to be.”).
98. Carnegie Report, supra n. 7, at 168.
99. Id.
100. ABA Sec. of Leg. Educ. & Admis. to B., Report of Special Committee on Outcome Measures 1, 6–11 (July 27, 2008) (available at https://apps.americanbar.org/legaled/committees/comstandards.html) [hereinafter Outcome Measures Report].
101. Id. at 9–10.
102. Id. at 46–47; see e.g. W. Assn. of Schs. & Colleges, Criteria for Accreditation, https://www.acswasc.org/about_criteria.htm (accessed June 1, 2011); N.C. Assn. of Colleges & Schs., Criteria for Accreditation, https://www.ncahlc.org/information-for-institutions/criteria-for-accreditation.html (accessed June 1, 2011).
103. Sec. of Leg. Educ. & Admis. to B., Outcome Measures Report, supra n. 100, at 54.
104. Sec. of Leg. Educ. & Admis. to B., Student Learning Outcomes Draft, supra n. 9, at 4.
105. Duncan, supra n. 9, at 611.
106. See Todd David Peterson & Elizabeth Waters Peterson, Stemming the Tide of Law Student Depression: What Law Schools Need to Learn from the Science of Positive Psychology, 9 Yale J. Health Policy, L. & Ethics 357, 358–359 (2009); Kennon M. Sheldon & Lawrence S. Krieger, Does Legal Education Have Undermining Effects on Law Students? Evaluating Changes in Motivation, Values, and Well-Being, 22 Behav. Sci. & L. 261, 262 (2004).
107. Sheldon & Krieger, supra n. 106, at 275–276, 280.
108. Id. at 283.
109. The brochure and other symposium materials from the Humanizing Legal Education Symposium, held on October 19–21, 2007, can be found at https://washburnlaw.edu/humanizinglegaleducation/.
110. Barbara Glesner Fines, Fundamental Principles and Challenges of Humanizing Legal Education, 47 Washburn L.J. 313, 318 (2007–2008).
111. Id. at 314.
112. Id. at 320 (footnote omitted).
113. See Michael Hunter Schwartz, Humanizing Legal Education: An Introduction to a Symposium Whose Time Came, 47 Washburn L.J. 235, 235–236 (2007–2008).
114. Fines, supra n. 110, at 316; Bruce J. Winick, Greetings from the Chair, Equipose (Newsltr. of Am. Assn. of L. Schs., Sec. on Balance in Leg. Educ.) 1 (Dec. 2009) (available at https://www.aals.org/documents/sections/balance/BalanceInLegalEdDec_09.pdf).
115. Hess, supra n. 84, at 80.
116. Id.
117. See e.g. Rebecca Flanagan, Lucifer Goes to Law School: Towards Explaining and Minimizing Law Student Peer-to-Peer Harassment and Intimidation, 47 Washburn L.J. 453, 461–464 (2007–2008); Susan Grover, Personal Integration and Outsider Status as Factors in Law Student Well-Being, 47 Washburn L.J. 419, 427 (2007–2008); Hess, supra n. 84, at 78; Krieger, supra n. 36, at 274, 297–299.
118. Carol L. Wallinger, Moving from First to Final Draft: Offering Autonomy-Supportive Choices to Motivate Students to Internalize the Writing Process, 54 Loy. L. Rev. 820, 824 (2008).
119. Krieger, supra n. 36, at 298.
120. Kennon M. Sheldon & Lawrence S. Krieger, Understanding the Negative Effects of Legal Education on Law Students: A Longitudinal Test of Self-Determination Theory, 33 Pers. Soc. Psychol. Bull. 883, 885 (2007).
121. Krieger, supra n. 36, at 298.
122. Id.
123. Wallinger, supra n. 118, at 826.
124. Krieger, supra n. 36, at 298.
125. Best Practices, supra n. 6, at 243.
126. Krieger, supra n. 36, at 297.
127. Id. at 299.
128. Ruth Ann McKinney, Depression & Anxiety in Law Students: Are We Part of the Problem and Can We Be Part of the Solution? 8 Leg. Writing 229, 233 (2002).
129. Id. at 234.
130. Id. at 235.
131. Id. at 236.
132. Id. at 240–241.
133. Id. at 242–244; see also Christensen, supra n. 5, at 79.
134. Fines, supra n. 2, at 896.
135. Id.
136. Feinman, supra n. 4, at 650.
137. Id.
138. Krieger, supra n. 36, at 301–302; see also Fines, supra n. 110, at 318 (arguing that only “fundamental institutional reform” can counteract the negative impacts of competition, Fines proposes that one part of the reform include allowing students “to work against a pre-determined set of criteria rather than grading them on a comparative basis”).
139. Best Practices, supra n. 6, at 239 (“[E]xcept perhaps in legal writing and research courses, the current assessment practices used by most law teachers are abominable.”).
140. Duncan, supra n. 9, at 611; see also McKinney, supra n. 128, at 232 (noting that legal writing professors are in the best position to “take a leadership role” in experimenting with change).
141. Duncan, supra n. 9, at 621.
142. See Downs & Levit, supra n. 2, at 835; Stake, supra n. 2, at 591–592 n. 19.
143. Stake, supra n. 2, at 601.
144. Fines, supra n. 2, at 894.
145. Sourcebook, supra n. 61, at 89 (This recommendation refers to workload, rather than class size.).
146. Id. at 95, 100. The ALWD Survey indicates that most programs use full-time faculty on short or long-term contracts, with many using a hybrid system that includes tenure-track, contract, and adjunct faculty. 2010 Survey Results, supra n. 48, at iii, 5.
147. Sourcebook, supra n. 61, at 112.
148. 2010 Survey Results, supra n. 48, at viii, 84, B-20.
149. Carnegie Report, supra n. 7, at 104.
150. Sparrow, supra n. 29, at 8.
151. Id. at 7. Professor Sparrow has included examples of rubrics for a variety of courses, including Civil Procedure, at the end of her article. Additional examples of rubrics used for briefs, memos, client letters, and other documents, submitted by legal writing professors, can be found at https://www.lwionline.org/grading_rubrics.html.
152. Sparrow, supra n. 29, at 9 (emphasis omitted).
153. Suskie, supra n. 80, at 137–139.
154. Id. at 138.
155. Id.
156. Mary Jo Skillings & Robbin Ferrell, Student-Generated Rubrics: Bringing Students into the Assessment Process, 53 Reading Teacher 452, 452 (Mar. 2000).
157. Id. at 455.
158. Munro, supra n. 75, at 15.
159. Id. at 36; see also Carnegie Report, supra n. 7, at 104–111 (praising legal writing classes for providing frequent feedback and opportunities for simulated practice).
160. Romantz, supra n. 12, at 144.
161. Id.
162. Fines, supra n. 110, at 317.
163. Hess, supra n. 84, at 86.
164. Duncan, supra n. 9, at 621–622.
165. See Feinman, supra n. 4, at 652 (noting that normalization “limits professorial flexibility” and “may mask real differences in student learning” by failing to recognize differences in student achievement from year to year).
166. See e.g. 2010 Survey Results, supra n. 48, at 10.
167. Downs & Levit, supra n. 2, at 856.
168. See supra sec. II(B).
169. Sourcebook, supra n. 61, at 77.
170. See id.
171. Id. at 77–78 (discussing effects from different grading policies between legal writing and doctrinal classes).
172. For an overview of developing learning outcomes in a legal writing class and specific examples of assessment plans for an introductory course, see VanZandt, supra n. 79, at 324–336, 352–360. See also Mary A. Crossley & Lu-in Wang, Learning by Doing: An Experience with Outcomes Assessment, 41 U. Toledo L. Rev. 269 (2010) (describing one law school’s experience in developing a system to assess student learning).
173. Duncan, supra n. 9, at 609; see also Thomson, supra n. 73, at 135 (noting that “skills teachers serve as catalysts in their law schools”).
174. Duncan, supra n. 9, at 611.
175. Downs & Levit, supra n. 2, at 821–822, 843–844; see also Daniel Keating, Ten Myths About Law School Grading, 76 Wash. U. L.Q. 171, 186–188 (1998) (discussing the potential unfairness of “unregulated” grading).
176. Fines, supra n. 2, at 892.
177. Id. at 895; see also Carnegie Report, supra n. 7, at 169–170 (discussing and responding to arguments in favor of norm-referenced grading).
178. Fines, supra n. 2, at 893, 897.
179. Feinman, supra n. 4, at 652.
180. Fines, supra n. 2, at 893.
181. Feinman, supra n. 4, at 652.
182. Thomas R. Guskey, Making the Grade: What Benefits Students? 52 Educ. Leadership 14, 16 (Oct. 1994).
183. Sparrow, supra n. 29, at 28–30 (“Just as with teaching a new course, or adopting a new text, creating rubrics becomes easier over time, and the investment is worth it.”).
184. See Krieger, supra n. 36, at 303.
185. See e.g. Downs & Levit, supra n. 2, at 819, 843–844, 854 (noting that even a change to a grade normalization policy has the potential to create at least the perception of grade inflation); Krieger, supra n. 36, at 301 (responding to concerns that “open grading” will cause grade inflation or grade deflation).
186. Carnegie Report, supra n. 7, at 169–170.
187. Bain, supra n. 1, at 172. A similar study focusing on law teachers will be published by Harvard University Press in 2011, and is described at https://washburnlaw.edu/bestlawteachers/.
188. Sparrow, supra n. 29, at 36.
189. See e.g. Barbara Glesner Fines, Incorporating Effective Formative Assessment into Course Planning: A Demonstration and Toolbox, in Legal Education at the Crossroads vol. 3 (2009) (available at https://www.law.du.edu/assessment-conference/program). The handout for this presentation includes a role play in which a professor brainstorms with the associate dean about developing and implementing learning goals for a Trusts and Estates course.
190. Sparrow, supra n. 29, at 37.
191. Downs & Levit, supra n. 2, at 824.
192. Carnegie Report, supra n. 7, at 169–170.
193. See Jaksic, supra n. 44, at S1 (explaining that some top law schools had revised their grading policies “to better convey their students’ accomplishments to employers”).
194. Fines, supra n. 2, at 908.
195. VanZandt, supra n. 79, at 341 n. 129.
196. Peterson & Peterson, supra n. 106, at 381.
197. Carnegie Report, supra n. 7, at 170–171.
198. Krieger, supra n. 36, at 301–303.
199. See e.g. Mary Beth Beazley, A Practical Guide to Appellate Advocacy, Teacher’s Manual 20–22 (2d ed., Aspen Publishers 2006).
200. Bain, supra n. 1, at 160.
201. Id.
202. Id.
203. See e.g. Erwin Chemerinsky, Rethinking Legal Education, 43 Harv. Civ. Rights-Civ. Liberties L. Rev. 595, 595–597 (2008); Legal Education at the Crossroads—Ideas to Accomplishments: Sharing New Ideas for Integrated Curriculum (Sept. 2008) (available at https://bestpracticeslegaled.files.wordpress.com/2008/09/crossroadsmatlsonline.pdf) (The extensive materials from this conference include reports of numerous curricular reform projects by law schools across the country.).
204. Carol McCrehan Parker, The Signature Pedagogy of Legal Writing, 16 Leg. Writing 464, 472, 474 (2010).
205. Rebecca S. Anderson & Bruce W. Speck, Suggestions for Responding to the Dilemma of Grading Students’ Writing, English J. 21, 25 (Jan. 1997).
206. Id.