I. Introduction

The arrival of generative artificial intelligence (GenAI) chatbots such as ChatGPT[1] is revolutionizing the way law students learn. As this technological shift reshapes legal education, it is critical that legal research and writing (LRW) professors adapt to adequately equip their students for the NextGen bar exam and subsequent practice. GenAI has the potential to hide a student’s lack of skill. LRW professors often use a student’s written work product as the sole basis for assessing that student’s skills. But if GenAI is the true author of the work, then the professor is evaluating the GenAI’s skillset, not the student’s. The traditional model of assessment in LRW courses may therefore mask a student’s lack of skill. In short, GenAI has the potential to allow students to circumvent the core of what LRW professors are attempting to teach them, albeit at the students’ own peril.

The following three scenarios, in which only one student passes the NextGen bar exam, illustrate the issue. First, we have Professor A, a legal writing veteran, who lacks understanding of GenAI chatbots and therefore forbids his students from utilizing them. He believes that his traditional method of grading solely on the final written memorandum or motion brief will sufficiently indicate if Student A is ready for the NextGen bar, which he is aware will directly assess legal writing. Seeing an opportunity to save time and knowing that there is no way their professor can definitively detect whether they used GenAI, Student A leverages GenAI chatbots to produce well-structured, B+ written work. Professor A’s AI detection software unsurprisingly fails to detect this, and Professor A gives high-quality feedback on the work, oblivious to the use of GenAI. Over time, Student A becomes dependent on GenAI for most of the writing process and consequently fails the NextGen bar exam, which demands original writing skills and legal analysis. This would come as a surprise to Professor A because Student A’s written work suggested they had achieved Professor A’s course learning outcomes.

Conversely, Professor B wholeheartedly embraces GenAI chatbots. Believing they represent the legal profession’s future, she teaches students how to incorporate GenAI into their legal writing. Like Professor A, she still relies on memoranda or motion briefs as the sole graded assessment for her writing courses, and grades Student B’s paper as a B+. However, Professor B neglects to consider that the NextGen bar exam requires independent drafting and editing. As a consequence, Student B, who overly depends on GenAI for tasks necessitating independent writing expertise, fails the exam. This outcome would surprise Professor B as well because the written work she used to appraise Student B’s abilities suggested that the course learning outcomes had been achieved.

Finally, we have Professor C who, being aware of GenAI chatbots and the potential for students to use them, adapts her assessments. She recognizes the potential for GenAI to mask the true level of a student’s understanding and skills related to legal analysis and other lawyering abilities, which could lead to insufficient preparation for the NextGen bar and practice. Consequently, Professor C employs a variety of assessment methods to accurately measure Student C’s knowledge and abilities against the learning outcomes of the course. This mix of assessments uses more than one graded assessment and does not solely depend on the final written product as the only proof of acquired skills and knowledge. Student C, like the other students, achieved a B+ average on the combination of assessments, but was able to pass the NextGen bar exam. Even though Student C used GenAI chatbots to assist with her legal writing courses, Professor C’s diverse assessments accounted for that and evaluated the skills needed for Student C to pass the bar in a manner that prevented an over-reliance on GenAI.

The difference in the scenarios is the validity[2] of the assessments employed; it is not dependent on professors allowing GenAI use. Professor C found ways to measure learning outcomes that accounted for the fact that GenAI can mimic the writing product, and in so doing, both she and her students had a more accurate picture of the students’ knowledge and skills.

Requiring students to draft legal documents has its place. It is true that lawyers need to produce well-written documents.[3] Many scholars argue that good writing is “essential to ethical, competent legal representation.”[4] And writing is instrumental in developing thoughts.[5] The writing process, then, is a way to learn.[6] Writing helps students to think critically and creatively.[7] By “demanding reflection and deliberation,” it helps students overcome “cognitive bias and related logical reasoning fallacies.”[8] Thus, writing develops our students’ legal reasoning.[9]

Consequently, conventional wisdom has been that if a legal writing professor “identifie[s] the standards for good legal writing, convey[s] those standards in advance to the students, and evaluate[s] the writing on the basis of those standards,” then students will more effectively learn the skills needed to produce good legal writing.[10] Students who compose carelessly with GenAI chatbots, however, will learn no such skills. This conventional wisdom was the downfall of Professors A and B and their students.

With the release of ChatGPT[11] to the general public in November 2022,[12] anyone with access to the internet can now “write” with GenAI.[13] The term generative artificial intelligence (also known as generative AI or GenAI) refers to a type of “artificial intelligence capable of generating text, images, or other media” in response to prompts.[14] ChatGPT, created by OpenAI,[15] is a type of GenAI chatbot[16] that uses a generative pretrained transformer (GPT) large language model to hold written, human-like conversations with users.[17]

Although ChatGPT is far from the only generative AI chatbot,[18] it is the most advanced AI chatbot so far.[19] ChatGPT has a deep understanding of human language[20] and thus generates human-like text to interact with users in a conversational way.[21] ChatGPT can answer prompts and follow-up questions and reject inappropriate requests, such as how to build weapons.[22] It can produce a recipe;[23] write poems, essays, and screenplays;[24] draft emails; and help users learn a different language.[25] It can spruce up a resume with more active, concise language.[26] Most relevant here, it can be prompted to draft legal documents[27] and, when coupled with Lexis and Westlaw legal databases, will revolutionize what it means to draft legal documents.[28] Alarmingly, AI detectors fail more than they succeed at accurately determining whether text is human-generated or AI-generated.[29]

At the same time as the emergence of this new technology, the National Conference of Bar Examiners (NCBE) is developing a new bar exam, termed the NextGen bar exam.[30] The NextGen bar exam will reduce its coverage of subject matter knowledge that students need to memorize and increase its testing of skills necessary for lawyering.[31] Two of the foundational skills it will now explicitly test are legal writing and legal research.[32] The content scope for those foundational skills will require examinees to draft and edit many different types of legal documents when provided a client scenario and legal sources.[33] While some of the concepts tested will overlap with skills students may learn from using GenAI chatbots, students will still need knowledge and skills to pass the legal writing portion of the NextGen bar, and they cannot develop that knowledge and those skills if they rely too heavily on AI-generated writing.

The influence of assessment methods on how and what students learn is more significant than any other single factor.[34] Thus, professors must be thoughtful about the assessments that evaluate student learning.[35] Considerable academic research exists regarding assessments and how they work.[36] Assessment can refer to evaluating various levels of the legal educational process, but this article deals exclusively with assessment of individual students, which occurs when professors gather information on how well students are learning the objectives in a single course.[37]

To create an assessment-centered course, a professor goes through four stages.[38] First, the professor develops student learning outcomes that identify the knowledge, skills, abilities, experiences, perspectives, and personal attributes that the professor expects the students to exhibit at the end of the course.[39] Second, the professor designs and assigns measures that the professor will use to evaluate whether students have met the learning outcomes created in stage one.[40] Third, the professor evaluates the student work produced as a result of implementing the assessments assigned in stage two and gives feedback.[41] And fourth, both students and professors use the information obtained from the third stage to improve student learning.[42]

This article focuses only on the first part of stage two: designing assessments that will determine whether students have actually achieved the learning outcomes LRW professors have identified. Thus, this article assumes LRW professors are aware that their courses should have articulated learning outcomes,[43] have already articulated the learning outcomes for their courses (stage one), and are seeking ways to adequately assess them in light of GenAI and the NextGen bar exam.[44] Likewise, this article does not discuss a professor’s feedback[45] or tools such as rubrics[46] (stage three), despite the importance of both. It also does not discuss GenAI policies in legal writing courses because that would be putting the cart before the horse. LRW professors should choose the best mix of assessments and then craft a GenAI policy rather than crafting a GenAI policy and trying to create assessments around it.

This article aims to inform those who, like Professor A, misunderstand GenAI chatbots’ capabilities, prevalence, and detection tools. And it aims to caution those who share Professor B’s enthusiasm for GenAI by using the NextGen bar exam as an example of when students may need to exercise skills without the help of GenAI. Although this article does not directly advocate for any one approach professors should take regarding GenAI’s use in the legal writing classroom, it does tackle the problem from the point of view that GenAI chatbots are here to stay, and LRW professors must think critically about how that fact affects their assessments.

LRW professors should develop a range of assessments that help prepare students for the NextGen bar exam while reducing the odds that students rely too heavily on generative AI to produce a written product without engaging in critical reading, critical thinking, and legal analysis, a shortcut that would give professors and the students themselves a false sense of the students’ abilities. Part II reviews some foundational knowledge about assessments and describes contemporary assessment practices in legal writing. Part III considers two rationales for changing the status quo. First, it explores how GenAI chatbots work and then considers their strengths and weaknesses. Next, it explains how current assessments in legal writing fail to account for four realities: students are already using GenAI chatbots; GenAI chatbots can already produce legal documents; GenAI detectors fail more than they work; and the NextGen bar exam may require students to be adept at both drafting from scratch and revising. Part IV discusses a wide range of assessments, both formative and summative, that LRW professors can use to properly measure student learning in light of GenAI and the skills needed to pass the NextGen bar exam.

II. Assessment Basics and Contemporary Practices in Legal Writing

This Part reviews some basic tenets of student assessment in the classroom generally and then describes contemporary assessment practices in legal writing classrooms.

A. Assessment Terminology, Categories, and Effectiveness

The term “assessment measures,” as used in this article, means tools used to obtain and document information about student knowledge, skills, and ability in a course.[47] Assessments are categorized in many overlapping ways, so a single assessment can fall into more than one category. There are norm-referenced assessments,[48] as opposed to criterion-referenced assessments;[49] there are direct assessments[50] as opposed to indirect assessments;[51] and there are performative assessments[52] as opposed to cognitive assessments.[53] Assessments are also categorized as formative, summative, or both.[54] No matter the category, each course should have multiple assessments,[55] and the assessment measures must be valid, fair, and reliable to be effective.[56]

1. Multiple Formative and Summative Assessments

Assessments can be formative, summative, or both.[57] The ABA requires law schools to “utilize both formative and summative assessment methods in its curriculum to measure and improve student learning and provide meaningful feedback to students.”[58] The ABA defines formative assessments as “measurements at different points during a particular course . . . that provide meaningful feedback to improve student learning.”[59] In a formative assessment, the teacher gives students timely, helpful feedback on how well they are learning the course content or demonstrating particular skills, and the students then have opportunities to practice those skills again after the feedback.[60] In contrast to ranking or grading students, formative assessment’s purpose is to aid learning: to surface what students can and cannot demonstrate they know and to correct misunderstandings.[61]

The ABA defines summative assessment as “measurements at the culmination of a particular course . . . that measure the degree of student learning.”[62] Most educational research, however, does not limit summative assessments to the end of the course, but defines a summative assessment as one “that is used for assigning a grade or otherwise indicating a student’s level of achievement”[63] at a certain point in time.[64] Summative assessments, then, indicate how well a student has learned the knowledge and skills the professor meant to teach in a course.[65] Usually, summative assessments come in the form of a grade on an assignment.[66] Podium professors have notoriously given only one summative assessment at the end of a law school course: the dreaded final exam.[67] Many LRW professors have also chosen to summatively assess only the final draft of a memorandum or motion at the end of the semester.[68]

All law school courses, including legal writing, should provide a number of summative assessments during the semester, not just one or two.[69] No research supports professors determining student grades based on one final draft of a legal document.[70] In fact, a lone summative assessment possesses substantial potential for error.[71] Incorporating multiple summative assessments throughout the semester instead of one exam or one paper that accounts for the majority of students’ grades is a more accurate methodology for determining students’ aptitude.[72] Educators now realize that students need multiple opportunities to prove what they know through summative assessments in part because not all students demonstrate their skills and knowledge in the same way.[73]

Summative assessments can also be formative.[74] Professors can assign grades to papers on which they also provide feedback.[75] While purely formative assessments offer students a chance to enhance their skills, some students may not allot adequate time and effort to ungraded work.[76] But if all assessments are summative, the lasting ramifications of a grade diminish their efficacy as a tool for improvement.[77] Thus, student learning thrives with the combination of assessments that are summative, formative, and both, as well as prompt feedback on those assessments.[78]

2. Validity, Fairness, and Reliability

To be effective, an assessment measure must be valid, fair, and reliable.[79] An assessment is valid if it measures whether students are learning what the professor taught and intended them to learn in the course (i.e., whether students are meeting the course’s learning objectives).[80] To be valid assessments, assignments in a legal writing course “need to be designed so that they provide an opportunity for students to learn [the material], show what they have learned and demonstrate that they have achieved competency in the knowledge [and] skills . . . [that] the module covers.”[81] As explained below, a legal writing assessment may not be valid if a student can use GenAI to produce text without needing to understand how to execute the skills in the learning outcome that the assessment was meant to measure. In other words, if students use GenAI as a shortcut to produce a product that bypasses their own thinking and skill development, and professors are assessing only the GenAI-written product to determine whether the students can perform those mental tasks, the assessment is invalid.

To be fair, an assessment must be “equitable in both process and results.”[82] As LRW professors consider their courses’ assessments, they should ask whether their current assessments, coupled with their policy on GenAI, are unfair because the assessments ignore the inevitable pull GenAI will have on some competitive students. If a written legal document is the only product a student must produce for the professor to assess, then GenAI is like an answer key to a multiple-choice test posted in a busy hallway where anyone can take a discreet picture, while the professor simply says, “Don’t look at the answer key.” Some students won’t look, but some will, and there is no way of knowing who is who.[83] A professor who says, “Don’t use GenAI,” while knowing the tool is available to everyone, creates a process that is unfair to the students who follow the rules and will grade their written products unfairly against the products of those who do not.

Grade reliability depends on two elements: scoring consistency—the assessment must “yield[] the same results on repeated trials”[84]—and content sampling—the measure should cover enough content that students’ performance reflects the extent to which they met the learning objectives.[85] With huge piles of legal documents that take days and even weeks to grade, LRW professors risk scoring student papers inconsistently.[86] To achieve reliability, LRW professors should make sure that their mix of assessments as a whole, not each individual assessment, measures all of the course’s learning outcomes.

B. LRW Course Grades Based Almost Exclusively on One Written Product

The primary purpose of assessments within legal writing classrooms is to ascertain whether students are successfully acquiring the knowledge and skills that professors believe they should be learning to pass the bar and succeed in practice.[87]

Although LRW professors have embraced teaching the process of writing,[88] and many incorporate various forms of purely formative assessment throughout their courses,[89] LRW professors may rely heavily on summatively assessing students’ knowledge via the end product, not during the process, to arrive at the students’ final grades.[90] Although there is no definitive data as to what percentage of LRW professors rely almost exclusively on one grade for the summative assessment, there is evidence it occurs. Students’ final grades in many fall 1L legal writing courses are based overwhelmingly—and sometimes exclusively—on the grade assigned to their final draft of an objective office memorandum.[91] And while professors teach the process of writing in the second semester too, most spring 1L assessments assign the greatest weight in the final grade to producing a written document—usually an appellate brief or a trial motion—for the professor to use as a measure of student learning.[92] Some professors may assign shorter skill-building or process-focused assignments, including oral argument, in either or both semesters, but those assignments do not always count toward the students’ overall grade.[93] Thus, despite professors’ best efforts, students may focus on producing the end product rather than on the process of producing it and the learning that accompanies that process.[94]

This pattern of grading reveals LRW professors’ assumption that if students produce a legal document that meets the standards of the assignment, the student necessarily engaged in the analysis and critical thinking needed to do so. This assumption had its pitfalls in the past;[95] however, applying the same assumption today in the wake of GenAI could be disastrous to student learning. While there are good reasons LRW professors previously structured their courses to summatively assess students almost exclusively on one final draft of a document,[96] the rise of GenAI and the NextGen Bar exam necessitates adding or restructuring some assessments.

III. Rationales for Change: Generative AI and the NextGen Bar Exam

With the rise of generative AI, students may not be practicing the skills required by course learning outcomes when they draft legal documents. Instead, students may be relying on GenAI in ways that circumvent learning and practicing those skills—skills they will need to pass the NextGen bar and practice law. To understand how to craft valid, fair, and reliable assessments, professors must have a good grasp of the powers and limitations of GenAI chatbots. Part A of this section explains how GenAI chatbots work and what their strengths and weaknesses are. Part B then explores the development of the NextGen bar exam and what skills it will test. Finally, Part C explains how these two developments create a perfect storm for law students learning the art of legal writing.

A. GenAI Chatbots

For simplicity’s sake, this section explains GenAI chatbots through the lens of ChatGPT because these chatbots generally work the same way, and ChatGPT is the most popular one.[97] All of the ChatGPT models were trained on publicly available internet data to predict the next word in a response to a prompt.[98] This background is crucial to understanding what ChatGPT can do and what its limitations are.

1. How GenAI Chatbots Work

A large language model (LLM) is a recent technological advancement that forms the algorithmic basis of ChatGPT and other chatbots.[99] An LLM is a type of neural network[100]—“a mathematical system that learns skills by finding statistical patterns in enormous amounts of data.”[101] The neural network architecture of LLMs, called a transformer, has many layers.[102] Neurons are the smallest components of an LLM; they are the mathematical functions that calculate an output based on input.[103] Like those in our brains, the neurons of an LLM are connected to other neurons, and each of these billions of connections has its own weight.[104] The greater the weight of a connection, the stronger the link between neurons.[105] The stronger the link between neurons, the more one neuron pays attention to the input (words or parts of words) from the other; the weaker the link, the less attention it pays.[106]
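To make this concrete, here is a minimal sketch in Python (an illustration only, not OpenAI’s architecture or code) of a single artificial “neuron” combining weighted inputs, where a heavier weight means the neuron “pays more attention” to that input:

```python
import math

def neuron(inputs, weights, bias=0.0):
    """A toy neuron: a weighted sum of inputs squashed to a value between 0 and 1."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation

# Hypothetical numeric signals standing in for three input tokens.
token_signals = [0.2, 0.9, 0.4]

# The second connection carries the most weight, so this neuron effectively
# "pays the most attention" to the second token.
connection_weights = [0.1, 2.5, 0.3]

print(neuron(token_signals, connection_weights))  # prints a value between 0 and 1
```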

GenAI chatbots do not behave under the typical programming method of simply executing pre-programmed instructions in the form of computer code written by a computer programmer.[107] Instead, the LLM learns the behaviors from a broad range of data selected by a developer.[108] LLM programmers define the LLM’s architecture and the rules to build it, but the model itself creates the neurons, generates connections, and determines the weights of the connections by following instructions through a process called “training.”[109]

Like all LLMs, ChatGPT relies on probability—through extensive training, it mathematically predicts what the next word should be in a sentence.[110] The data sets LLMs are trained on determine what the responses to prompts will be, and the responses are limited by that data.[111] Initially, ChatGPT went through a “pre-training” phase in which programmers fed 300 billion words into the system, including 570GB from the internet.[112] The model learned to mathematically calculate the probability of the next word in a sentence by studying this mass of internet text.[113] During pre-training, the LLM broke each word or part of a word into a “token.”[114] The LLM used the tokens to calculate the strength of the bonds between neurons as it identified patterns and relationships between tokens.[115] LLMs can also recognize the context of various tokens, so they can tell when the same letters have different meanings (e.g., “bank” means both a financial institution and the place where land meets a body of water).[116] Human reviewers then asked the LLM questions with a correct output in mind, and if the program got an answer wrong, the reviewers inputted the correct answer back into the system, essentially teaching it correct answers to build its knowledge base.[117]
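As a deliberately tiny illustration of the “predict the next word” idea (nothing like the scale or sophistication of a real LLM), one can count how often each word follows another in a small text and turn those counts into probabilities:

```python
from collections import Counter, defaultdict

# A toy "training corpus" and a crude stand-in for tokenization.
corpus = "the court held that the statute applies . the court denied the motion ."
tokens = corpus.split()

# Count, for each word, which words follow it and how often.
counts = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    counts[current][nxt] += 1

def next_word_probabilities(word):
    """Turn raw follow-on counts into probabilities for the given word."""
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

print(next_word_probabilities("the"))
# {'court': 0.5, 'statute': 0.25, 'motion': 0.25}
```

A real LLM does something far more elaborate, weighing the entire context rather than just the previous word, but the underlying task is the same: estimate which token is most likely to come next.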

In the second phase of training, human reviewers fine-tune the LLM’s responses.[118] Essentially, reviewers rank several responses to one prompt from best to worst.[119] OpenAI provides human reviewers with a few categories of guidelines that they use to review and rate the model’s outputs.[120] Some of the guidance is on certain types of output (e.g., “do not complete requests for illegal content”);[121] some guidance instructs the model to “avoid taking a position on controversial topics.”[122] As human reviewers rank the various responses, the LLM improves its understanding of prompts and what word sequence is most likely responsive.[123]
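One simplified way to picture how those rankings become a training signal (an assumption for illustration, not necessarily OpenAI’s exact pipeline) is to convert each human ranking into pairs of preferred and rejected responses:

```python
def ranking_to_preference_pairs(ranked_responses):
    """ranked_responses is ordered best-first by a human reviewer."""
    pairs = []
    for i, better in enumerate(ranked_responses):
        for worse in ranked_responses[i + 1:]:
            pairs.append({"preferred": better, "rejected": worse})
    return pairs

# Hypothetical candidate responses to one prompt, ordered best to worst.
candidates = [
    "A clear, accurate answer.",
    "A vague but harmless answer.",
    "An answer that takes a side on a controversial topic.",
]

for pair in ranking_to_preference_pairs(candidates):
    print(pair)  # each pair tells the model which of two responses to prefer
```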

After training, users input a prompt—words to which the model is meant to respond.[124] The LLM’s algorithm processes the text in the prompt by breaking it into tokens and having neurons calculate which connections to pay attention to.[125] It continues to run calculations for each token, predicting one word after another, until the response is complete.[126] GenAI chatbots generate a response to the tokens that sounds right based on the LLM’s training and data.[127] Currently, most GenAI chatbots are not researching by mining information from internet sources.[128] They do not have a memory that captures and saves facts, dates, and the like.[129] Instead, for each token of the prompt, the LLM runs mathematical calculations to determine which token it could generate has the highest probability of “sounding right” as the next piece of the response.[130] Crucially, the LLMs that underlie ChatGPT and other GenAI chatbots can only produce writing that sounds right based on their training on other texts; they cannot guarantee that the response is factually correct.[131]
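A toy sketch of that inference loop (again, an illustration only) shows the model repeatedly appending whichever next token “sounds” most probable until nothing plausible follows:

```python
# Hand-written probabilities for illustration; a real LLM computes these
# from billions of learned connection weights.
next_token_probs = {
    "the":   {"court": 0.6, "statute": 0.4},
    "court": {"held": 0.7, "denied": 0.3},
    "held":  {"that": 0.9, ".": 0.1},
    "that":  {".": 1.0},
}

def generate(prompt_tokens, max_new_tokens=5):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        options = next_token_probs.get(tokens[-1])
        if not options:  # nothing plausible follows; stop
            break
        # pick the next token with the highest probability of "sounding right"
        tokens.append(max(options, key=options.get))
    return " ".join(tokens)

print(generate(["the"]))  # "the court held that ."
```

Nothing in that loop checks whether the generated sentence is true; it only checks what is statistically likely to come next.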

As of September 2023, the free version of ChatGPT uses GPT-3.5 as its LLM and the paid versions, including ChatGPT Enterprise, use GPT-4.[132] ChatGPT Enterprise is a version of ChatGPT tailored for large enterprises (e.g., businesses or universities) that offers upgrades specifically geared to address problems those organizations face when choosing to use GenAI, such as security, privacy, and enhanced analytics.[133]

In sum, LLMs are quite good at generating text because grammar, spelling, and punctuation have rules and patterns they follow, and language is redundant.[134] Given sufficient time, appropriate training, ample data, and necessary adjustments, LLMs can “learn” to generate text so realistic that it’s virtually impossible to tell it apart from text composed by a human.[135] With this knowledge of how GenAI chatbots work, it’s easier to understand their strengths and weaknesses.

2. The Strengths of GenAI Chatbots

ChatGPT has strengths inherent in its design. ChatGPT can keep track of existing prompts in a conversation string and remember information and rules set in prompts earlier in the conversation.[136] ChatGPT’s responses to essay prompts have no grammatical errors or typos, are drafted clearly, have consistent and appropriate tone, and are well organized.[137] ChatGPT is exceptionally good at modifying text to meet various difficulty levels (e.g., “explain [the following text] at a high school level”)[138] and editing a user’s own text.[139] It can even imitate a user’s own writing style if provided examples of it.[140] It helps brainstorm ideas to combat writer’s block and quickly summarizes large amounts of text.[141] Although ChatGPT has a tendency to hallucinate,[142] it will not create case law when prompted not to and given a limited number of cases to work with.[143]

It excels on exams, both multiple-choice and essay. In one study, GPT-4 scored in the eighty-eighth percentile on the LSAT and ninetieth percentile on the Uniform Bar Exam.[144] In another study, four law professors tested how well ChatGPT could perform on law school exams that had both multiple-choice and essay questions: Constitutional Law, Employee Benefits, Taxation, and Torts.[145] ChatGPT passed all four exams with an average of C+, enough to successfully pass law school if it performed consistently across all exams.[146] It scored statistically better than chance on multiple-choice questions, and it performed better on essay questions than on multiple-choice questions.[147] As for the essay answers, it did a good job “accurately summarizing appropriate legal doctrines and correctly reciting the facts and holdings of specific cases.”[148]

The paid versions of ChatGPT can now provide timely information.[149] Because the data ChatGPT learned from went only up to September 2021, during the first ten months after ChatGPT’s release it had little knowledge of more recent events.[150] Students using ChatGPT during that time could therefore miss legal developments crucial to their analysis.[151] That changed in September 2023, when OpenAI announced that the paid versions of ChatGPT—the “Plus” version and the Enterprise version, both of which use GPT-4—can now browse the internet to respond to prompts with up-to-date information.[152] The free version is still limited by the knowledge cutoff but will be connected to the internet “soon.”[153]

3. The Weaknesses of ChatGPT and the Rapidly Evolving Resolutions

According to OpenAI, the GPT-3.5 and GPT-4 technology has some limitations.[154] The three weaknesses most relevant to the assessment of legal writing in law school are 1) ChatGPT’s tendency to hallucinate case law and legal sources; 2) its inability to distinguish truth from plausible-sounding falsehoods; and 3) analytical weaknesses that carry over to legal writing.

First, OpenAI admits that GPT-4 still hallucinates,[155] which occurs when LLMs provide “an answer that is factually incorrect, irrelevant, or nonsensical.”[156] For example, Professor Ashley Binetti Armstrong asked ChatGPT to draft a legal memorandum, and it created one, but with fake statutes.[157] And when she asked it whether there were any other cases she should review on the memorandum’s topic, it responded with a list of fake cases and real cases that were irrelevant.[158] This weakness spelled trouble for two New York attorneys who were sanctioned after including case law ChatGPT provided in a brief they filed in court without checking it.[159]

Second, although ChatGPT produces plausible-sounding answers, it does not necessarily produce “right” answers because its responses are based on probability.[160] Consequently, it does not always tell the truth, even when it is not inventing sources.[161] One law professor found this out in dramatic fashion. As part of a research study, ChatGPT was prompted to generate a list of legal scholars who had sexually harassed someone, and it listed the professor, even though he had never been accused of anything of that nature and the class trip on which the alleged harassment supposedly occurred never happened.[162]

Third, ChatGPT’s legal essay answers have analysis weaknesses that apply to legal writing. In one study on how well ChatGPT could perform on law school exams, ChatGPT struggled to spot issues, failed to go into sufficient detail when applying legal rules to hypothetical facts, and misunderstood some legal terms.[163] If not prompted to follow IRAC order (Issue, Rule, Analysis, Conclusion) or another similar structure in its essay responses, it did not consistently do so.[164] Additionally, ChatGPT would include material in exam answers not covered in the particular course.[165]

Solutions to these deficiencies are quickly emerging. One development that addresses multiple weaknesses is ChatGPT plugins.[166] Plugins are AI chatbot enhancements that add search capabilities by connecting ChatGPT to specific third-party services, websites, or databases so it can search them in real time.[167] In March 2023, plugins became available in the paid version of ChatGPT, allowing it to hallucinate less and to access more current information and private databases.[168] Some plugins help with drafting or analyzing information.[169] For example, the AskYourPDF plugin allows users to upload a PDF (e.g., an opinion) to its website and then ask ChatGPT questions about it.[170] Thus, this plugin could enhance ChatGPT’s ability to summarize caselaw and synthesize rules.

Because it was trained on public sources on the internet, ChatGPT cannot answer questions pertaining to information in private databases.[171] However, in addition to plugins, technology is being developed to give certain LLMs access to companies’ internal knowledge databases, which will extend their capabilities.[172] Most importantly, LLMs are being trained on legal-specific databases. Thomson Reuters acquired Casetext, whose flagship product is CoCounsel, in a deal that closed in August 2023.[173] CoCounsel uses AI to review documents, help with legal research memoranda, prepare depositions, and analyze contracts.[174] Casetext had early access to OpenAI’s GPT-4 language model, which serves as the LLM model for CoCounsel.[175] Casetext boasts that with this advantage, the platform “can read, comprehend, and write at a postgraduate level.”[176] Thomson Reuters has also incorporated GenAI into Westlaw Precision.[177]

Not to be outdone, in May 2023, LexisNexis announced the development of Lexis+ AI, a chatbot that works like ChatGPT but was trained on LexisNexis’ database of legal documents.[178] It “features conversational search, insightful summarization, and intelligent legal drafting capabilities, all supported by state-of-the-art encryption and privacy technology to keep sensitive data secure.”[179] It will also give users citations to check the responses.[180]

Training on legal-specific databases would reduce hallucinations pertaining to legal sources and give students access to the most current legal information. Training on legal-specific databases and in legal writing could also potentially reduce errors having to do with IRAC and misunderstood legal terms. Resolving these weaknesses will encourage students to embrace GenAI chatbots even more.

The next section explains what we know so far about the NextGen bar exam and the scope of legal writing skills that will be tested on it.

B. The NextGen Bar Exam

Thirty-nine states and the District of Columbia have adopted the Uniform Bar Exam (UBE).[181] The NCBE develops the components of the UBE.[182] The UBE comprises the Multistate Bar Examination (MBE), the Multistate Essay Examination (MEE), and two written exercises that make up the Multistate Performance Test (MPT).[183]

The NCBE created the Testing Task Force (TTF) in January 2018 to undertake a three-year study “to ensure that the bar examination continues to test the knowledge, skills, and abilities required for competent entry-level legal practice in a changing legal profession.”[184] Although the NCBE originally approached the study “with no preconceived conclusions about whether the bar exam will or should change, whether in terms of content, format, timing, or method of delivery,”[185] it is now clear that the bar exam will change drastically beginning in the summer of 2026.[186]

1. The Current State of the Bar Exam

The NCBE calculates an examinee’s UBE score by compiling the three component test scores.[187] The MBE is weighted as 50% of the UBE overall score, the MEE as 30%, and the MPT as 20%.[188] The MBE is a six-hour examination with 200 multiple-choice questions focused on seven subject areas: civil procedure, constitutional law, contracts, criminal law and procedure, evidence, real property, and torts.[189]
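Expressed as a simplified formula (an illustration only; the NCBE first converts each component to a scaled score before weighting), the combination works roughly as follows:

```latex
\text{UBE total} \approx 0.50\,(\text{MBE}) + 0.30\,(\text{MEE}) + 0.20\,(\text{MPT})
```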

The MEE takes half the time of the MBE; it consists of six thirty-minute essay questions on thirteen substantive subjects.[190] The MEE does not explicitly test legal writing.[191]

Each of the two exercises in the MPT consists of one ninety-minute problem that outlines a written lawyering task the examinee must complete, such as drafting a contract, memorandum, letter, or brief.[192] Each problem has a File and a Library.[193] The File consists of documents that contain client facts.[194] The Library consists of law that may be applicable.[195] Examinees are tested on their ability “to extract from the Library the legal principles necessary to analyze the problem and perform the task.”[196] As for effectively communicating in writing, the NCBE only explicitly tests the examinee’s “ability to [] assess the perspective of the recipient of the communication; and [] organize and express ideas with precision, clarity, logic, and economy.”[197]

2. The Development of the NextGen Bar Exam

In the first phase of the study, the TTF held a series of listening sessions from November 2018 through June 2019 wherein it asked a myriad of interested parties for input on how the NCBE can accomplish its goal of testing the knowledge, skills, and abilities required for competent contemporary entry-level legal practice.[198] In formulating its final recommendations, the TTF was guided in part “by the prevailing views expressed by stakeholders that the bar exam should test fewer subjects and should test less broadly and deeply within the subjects covered [and] greater emphasis should be placed on assessment of lawyering skills to better reflect real-world practice and the types of activities [newly licensed lawyers] perform.”[199]

In the second phase, the TTF undertook “a national practice analysis to provide empirical data on the job activities of newly licensed lawyers.”[200] The survey results showed that more than 90% of newly licensed lawyers performed the most common tasks almost weekly and that respondents classified those common tasks as “high importance”—those essential for their job.[201] One theme common to the top five essential tasks was “written and spoken communications.”[202] In addition to tasks performed, the survey asked respondents to rate the skills, abilities, and other characteristics that were critical to a newly licensed lawyer’s practice.[203] Four of the five skills, abilities, and other characteristics with the highest average criticality ratings are the focus of teaching in legal writing courses: written/reading comprehension, critical/analytical thinking, written expression, and identifying issues.[204]

Phase three of the study combined the results from phase one and phase two to create a final recommended blueprint for the bar exam’s redesign.[205] The TTF recommended that legal research be weighted as 17.5% of the NextGen bar exam and legal writing and drafting as 14.5%.[206] This percentage is one and a half to two times the recommended percentage of individual subject matter areas.[207] In January 2021, the NCBE Board of Trustees approved the TTF’s final recommendation of changes to the existing bar exam.[208] The recommendations will transform the exam’s content, structure and format, scoring, timing, frequency, and delivery.[209]

As for the content, the NextGen bar exam originally eliminated several subject matter areas, leaving only eight.[210] In their place, it added seven new foundational skills that it will explicitly test.[211] LRW professors already test most of these new skills: legal research; legal writing; issue spotting and analysis; investigation and evaluation; client counseling and advising; negotiation and dispute resolution; and client relationship and management.[212]

The NextGen bar exam will be an integrated exam that will test all these subjects and foundational concepts and skills, resulting in a single score[213] derived from stand-alone multiple-choice questions (almost half of the final score), item sets (just under one-third of the final score), and two longer writing tasks (25% of the final score).[214] “An item set is a collection of test questions based on a single scenario or stimulus such that the questions pertaining to that scenario are developed and presented as a unit.”[215] The questions within an item set can be in the same format or different formats (e.g., short answer, multiple choice, or essay).[216] Some item sets will provide legal resources such as case law, statutes, rules, and regulations and will be focused on drafting or editing a legal document.[217] The long writing tasks will resemble the current MPT, except that one of the two will pair the long writing task with preceding multiple-choice and short-answer questions focused on legal research.[218]

Although we do not know yet exactly how these new foundational skills will be assessed, it is clear that the NCBE intends to follow the interested parties’ desires from phase one and the survey’s results from phase two to emphasize skills acquired in legal research and writing courses and deemphasize memorization of subject matter knowledge.

The NextGen bar exam will be administered for the first time in July 2026.[219] As has always been the case, jurisdictions are free to choose to adopt it or not.[220] Jurisdictions are currently considering whether to adopt the NextGen bar exam when it is available, although as of October 2023, none have formally announced plans to do so.[221] For two years after its rollout, the NCBE will produce both the UBE and the NextGen bar exam,[222] although only one or the other will be available to each jurisdiction.[223] The February 2028 UBE will be the last one the NCBE offers.[224]

These changes demand that LRW professors closely examine the content scope of legal writing to ensure students have opportunities to practice the skills the NextGen bar exam will test.

In May 2023, the NCBE released the content scope of the foundational skills that will be tested on the NextGen bar exam.[225] The content specifications for each foundational skill will “guide development of test questions and provide notice to candidates of what may be tested and how.”[226] The tasks for legal writing and drafting that the NextGen bar exam will test include:

  • “Draft or edit correspondence to a client explaining the legal implications of a course of action, updating the client on the status of the client’s matter, and/or providing advice on the next steps to be taken in the matter.”

  • “Given draft sections of a complaint or an answer to a complaint in a matter, identify language that should be changed, and make suggestions for how that language should change, consistent with the facts, the relevant legal rules and standards, and the client’s objectives, interests, and constraints.”

  • “Given draft sections of affidavits that must be submitted to a court or other tribunal in a matter, identify the best affiant and best language to support each element to be proved, consistent with the facts, the relevant legal rules and standards, and the client’s objectives, interests, and constraints.”

  • “Given draft provisions of a contract, identify language that should be changed, and make suggestions for how that language should change, consistent with the facts, the relevant legal rules and standards, and the client’s objectives, interests, and constraints.”

  • “Given a collection of legal sources, draft specified section(s) of a document, demonstrating skill at formulating an original legal analysis. This task may include:

    • an objective memorandum;

    • a persuasive brief or letter; or

    • another common document, such as a mediation brief, an opinion letter, or a draft proposal for a contract.”[227]

Only one of these five legal writing and drafting tasks requires examinees to draft a memorandum, brief, letter, or other legal document from scratch.[228] Four of the five (the first four listed above) do not necessarily involve drafting.[229] Instead, the first task asks examinees to “draft or edit correspondence to a client.”[230] Three others anticipate that examinees will be “given draft sections of” a complaint, answer, affidavit, or contract,[231] implying that examinees will not draft those sections from scratch. Once given those draft sections, examinees must identify changes to the language that will meet the document’s objectives, “consistent with the facts, the relevant legal rules and standards, and the client’s objectives, interests, and constraints.”[232] In other words, examinees must know how to edit a draft they did not write, which is what lawyers will increasingly be doing in practice with the rise of GenAI.

With this background, the next section explains how many LRW professors’ current assessment practices may not consider current realities regarding GenAI and the NextGen bar exam.

C. The Collision of GenAI, the NextGen Bar Exam, and Assessment

GenAI chatbots are rapidly improving. Since December 2022, OpenAI, the developer of ChatGPT, has announced a new improvement roughly once or twice per month.[233] Plugins became available within six months of ChatGPT’s release.[234] ChatGPT Enterprise, with privacy protections, was released in August 2023.[235] And ChatGPT Plus and ChatGPT Enterprise are now connected to the internet, making the answers to prompts more timely.[236]

In that same time frame, the NCBE began pilot testing draft items and item types and completed the content scope outline.[237] In July 2023, it released the first set of sample questions;[238] it plans on releasing more sample questions throughout 2023 and 2024.[239] It will begin field testing operational items in January 2024 and plans on publishing the final exam design and test content specifications in early to mid-2024.[240] It is on a track that will result in some students who started law school in fall 2023 taking the NextGen bar exam in 2026.[241]

The reality is that students are using GenAI chatbots, GenAI chatbots can produce current legal writing assessment products, AI detectors are inadequate, and now more than ever, law students will need to understand the process of writing to pass the bar exam. The following section addresses each of these considerations in turn.

1. GenAI Chatbots in Higher Education

Students are using GenAI chatbots.[242] “On TikTok, the hashtag #chatgpt had more than 578 million views” as of January 2023 and included videos showing ChatGPT writing papers.[243] One recent survey “showed that within just 100 days of ChatGPT’s launch in November 2022, nearly one in three [undergraduate] students reported regular use of GenAI tools, compared to only about one in 10 faculty members.”[244] Additionally, the survey showed that 69% of students that are already using GenAI reported they “will continue to use generative AI tools even if their instructor or institutions prohibits it.”[245] “[Within] four years, every freshman will have grown up writing their high school essays with ChatGPT.”[246]

Students are using GenAI to enhance their learning process.[247] They are using GenAI as an on-campus tutor to break down challenging concepts.[248] They are using it as an editor to help with grammar, tone, and word choices.[249] They are using it to identify potential counter-arguments to a stance a particular student intends to adopt, enabling the student to address those counter-arguments in their paper.[250] And students are using it to prepare for presentations by asking it to provide a list of potential questions from the audience.[251]

And it is not only undergraduate students. LexisNexis conducted a survey in March 2023 that included lawyers, law students, and legal consumers in the United States.[252] It found that 44% of law students had used GenAI.[253] Moreover, 9% of law students reported currently using GenAI for school and 25% planned to eventually incorporate it into their school or work.[254]

Educators across the nation are grappling with how to adapt learning in light of GenAI chatbots.[255] Law students themselves are worried that despite the accuracy and validity issues that plague GenAI chatbots, competitive students will try to use GenAI to get ahead of the curve.[256] But how hard would it be for ChatGPT or the like to write a legal document?

2. GenAI Chatbots Can Produce Legal Writing Assessment Products

Using prompt engineering, students can walk GenAI chatbots through legal drafting steps instead of doing the steps on their own.[257] “Prompt engineering is the process of using specific words and phrases along with choices about the structure and organization of those words and phrases to write instructions that improve generative AI’s ability to provide a response that is useful to a human prompter.”[258] To elicit more effective responses, users write prompts informing ChatGPT of a specific rhetorical situation.[259] To draft an essay, for example, a student uses ChatGPT to brainstorm paper topics.[260] The student then writes a prompt explaining the topic of the essay and asking ChatGPT to come up with a thesis.[261] Then the student prompts it for an outline.[262] Next, the student requests that ChatGPT draft specific paragraphs or sentences based upon that outline.[263] Finally, the student edits ChatGPT’s draft.[264]
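As a rough illustration, a student’s prompt chain might look like the sketch below, which assumes the openai Python package (v1-style interface), an API key in the environment, and placeholder prompts and model name:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt, history):
    """Send one more prompt in an ongoing conversation and record the reply."""
    history.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(model="gpt-4", messages=history)
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

conversation = [{"role": "system", "content": "You are helping draft a persuasive essay."}]

ask("Brainstorm five paper topics about landlord-tenant law.", conversation)
ask("For topic 2, propose a one-sentence thesis.", conversation)
ask("Outline an essay supporting that thesis.", conversation)
ask("Draft the opening paragraph of section I of that outline.", conversation)
print(conversation[-1]["content"])  # the AI-drafted paragraph the student then edits
```

Because each call passes the prior conversation back to the model, the chatbot “remembers” the topic, thesis, and outline from the earlier prompts.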

Using prompt engineering, law professors have explored how GenAI chatbots can produce first drafts of memoranda,[265] briefs,[266] complaints,[267] contracts,[268] and wills.[269] For example, Professor Ashley Binetti Armstrong explored ChatGPT’s ability to draft a legal memorandum.[270] She gave it a client’s fact pattern and the issue, narrowed down the analysis, asked it to apply caselaw to the client’s facts, and prompted it to draft a question presented.[271] Although it made up legal sources, had it been prompted with relevant, real caselaw, the resulting text would have been comparable to a first-year law student’s work.[272]

LLMs’ ability to perform the complex reasoning needed to draft legal memoranda or motions significantly improves if the prompts include a “series of intermediate steps” termed a “chain of thought.”[273] Although a chain of thought imitates complex human reasoning, this does not prove that an LLM is actually reasoning.[274] But students can certainly use this technique to generate AI-written text that mimics complex reasoning. Professor Andrew Perlman demonstrated the power of a GenAI chatbot’s “reasoning” in a series of prompts and responses designed to write a legal analysis of several client issues.[275] The responses were mostly correct and, at the very least, would pass for a middle-of-the-road law student’s writing.[276]
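A hypothetical chain-of-thought prompt (illustrative only) spells out the intermediate analytical steps rather than asking for the finished analysis in one leap:

```python
# A hypothetical prompt a student might assemble; the bracketed placeholders
# stand in for pasted source material.
chain_of_thought_prompt = """
You are drafting the discussion section of an office memorandum.
Work through the following steps in order, showing each step:
1. State the governing rule from the statute excerpt below.
2. Identify which client facts satisfy or fail each element of the rule.
3. Compare those facts to the precedent excerpt and explain the analogy.
4. Only then, state a conclusion in one paragraph.

Statute excerpt: [pasted text]
Client facts: [pasted text]
Precedent excerpt: [pasted text]
"""

print(chain_of_thought_prompt)
```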

With the right prompts, GenAI chatbots can perform the tasks needed to draft legal documents. ChatGPT can identify relevant rules[277] and can accurately summarize and synthesize rules.[278] It will not create a fake case or legal sources when prompted not to.[279] Instead, because users can input prompts into ChatGPT-4 that are up to 25,000 words,[280] students can easily feed it client facts, statutes, or portions of opinions.[281] ChatGPT can quickly summarize case law.[282] It can suggest arguments to make in a brief,[283] apply law to a client’s facts,[284] and analogize precedent opinions to, and distinguish them from, client facts.[285] ChatGPT can create thesis sentences, improve flow, revise for conciseness,[286] correct grammar and mechanical issues, and increase text’s persuasiveness.[287]

With a little practice, students can produce all sorts of legal documents with ChatGPT. This would not be a problem if there were a way to tell whether a GenAI chatbot wrote the product a student turned in. But as the next section describes, AI detectors fail miserably at this task.

3. The Miserable Failure of AI Detectors

Anyone can now access and utilize GenAI, which is becoming more and more adept at deceiving individuals with its ability to produce text, audio, images, and videos that appear to be created and captured by human beings.[288] As one student put it, many professors “assume that if an essay is written with the help of ChatGPT, there will be some sort of evidence—it will have a distinctive ‘voice,’ it won’t make very complex arguments, or it will be written in a way that AI-detection programs will pick up on. Those are dangerous misconceptions.”[289]

Plagiarism checkers that educators have used in the past are not effective against ChatGPT because, while GenAI learns from datasets, it does not copy text from them.[290] By May 2023, more than a dozen companies offered tools to identify whether something was generated with AI.[291] AI detectors are tools “claiming they can reliably differentiate between human and non-human text.”[292] But more often than not, they mistake AI-generated text for human-generated text, or designate human-generated text as coming from AI.[293] Particularly problematic is research showing that multiple AI detectors disproportionately and incorrectly flag non-native English speakers’ writing as AI-generated.[294]

And the AI detector developers know their products are not reliable. GPTZero is one of the programs that claims to detect text written by ChatGPT.[295] But it admits upfront that “at GPTZero, we don’t believe that any AI detector is perfect,” and it cautions against using it to “punish” students for using GenAI.[296] One user inputted content that was 100% AI generated, but GPTZero concluded there was only a 55% probability that it was AI-generated.[297] OpenAI admitted that an early classifier tool it released that purported to distinguish between human-written and AI-written text missed an estimated 74% of AI-generated text;[298] its AI detector also falsely identified 9% of human-generated text as AI-generated.[299] A mere six months after releasing its AI detector, OpenAI shut it down due to its low rate of accuracy.[300]

AI detectors have also failed independent evaluations. TechCrunch ran a test wherein it gave an AI chatbot eight prompts and then ran the responses through seven popular AI text detectors: OpenAI’s own classifier, AI Writing Check, GPTZero, Copyleaks, GPT Radar, CatchGPT, and Originality.ai.[301] The results marked a spectacular failure for the detectors:

  • None of the detectors identified the text generated by a prompt to draft a marketing email as AI text;

  • Only one of the detectors (GPTZero) identified the text generated by a prompt to draft a news article as AI text;

  • Only one of the detectors (CatchGPT) identified the text generated by a prompt to draft a resume as AI text;

  • Only one detector (GPTZero) identified the text generated by a prompt to draft an encyclopedia entry as AI text;

  • Only two of the detectors (GPTZero and CatchGPT) identified the text generated by a prompt to draft a college essay as AI text;

  • Only two of the detectors (GPTZero and CatchGPT) identified the text generated by a prompt to draft a cover letter as AI text; and

  • Three of the detectors (GPTZero, OpenAI classifier, and CatchGPT) identified the text generated by a prompt to draft an essay outline as AI text.[302]

AI detection tools such as Turnitin erroneously flag student work as having been generated by AI chatbots.[303] A recent experiment tested sixteen samples that were drafted by a mix of five high school students and GenAI.[304] Turnitin incorrectly identified 50% of the test samples.[305] It has the most trouble identifying text that is a mix of AI-generated and human-generated writing.[306] Turnitin admits that its AI detection tool has a higher false positive rate in real life than it originally reported from controlled studies.[307]

Understanding how AI detectors work exposes the problems with applying them to legal writing. AI detectors look for writing that is consistently average, which is what AI chatbots strive for.[308] Of course, human-generated writing can also be consistently average, which would trigger the detectors.[309] Documents that follow set styles or formats, such as lab reports or five-paragraph essays, are more likely to be flagged as AI-generated.[310] That leaves open the possibility that memoranda that follow a set style and format would be more likely to be flagged erroneously. AI detectors also look for words or phrases that are less common in human-generated text and are repeated throughout a document.[311] They look for “burstiness”—when a text overuses certain words and phrases and lacks variation.[312] Thus, false positives can come from repetitive words (e.g., repeating buzzwords from a case or the text of a statute), repetitive lists (e.g., repeating lists of factors and elements), and paraphrased wording that repeats ideas (e.g., conclusions required in headings and subsections),[313] all of which are widespread in legal writing. Many human-written legal documents repeat legal words and phrases, and law students are taught not to use elegant variation in legal documents, both of which could alert an AI detector.
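A toy repetition metric (an illustration of the general idea, not any vendor’s actual detector) shows why rule-bound legal prose can look “bursty”:

```python
from collections import Counter

def repetition_score(text):
    """Fraction of two-word phrases taken up by the single most repeated phrase."""
    words = text.lower().split()
    bigrams = [" ".join(words[i:i + 2]) for i in range(len(words) - 1)]
    if not bigrams:
        return 0.0
    most_repeated = Counter(bigrams).most_common(1)[0][1]
    return most_repeated / len(bigrams)

# Memo-style prose that deliberately repeats rule language scores higher ...
memo_style = ("the plaintiff must show actual malice because the statute requires "
              "actual malice and the court found actual malice")
print(round(repetition_score(memo_style), 2))  # 0.18

# ... than varied prose that never repeats a phrase.
varied = "the defendant argued several theories and the judge rejected each one in turn"
print(round(repetition_score(varied), 2))  # 0.08
```

Real detectors rely on far more sophisticated statistical signals, but the same basic logic explains why repetition that is perfectly appropriate in a memorandum can look machine-generated to them.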

One of the biggest problems facing AI detectors is essentially a “technological arms race.”[314] GenAI technology is rapidly evolving and detectors are, and for the foreseeable future will be, trained on older LLM versions, which makes them less effective over time.[315] Detectors will improve as text-generating AI improves, but the detectors can never get ahead of the generators.[316] Moreover, writers can easily evade detectors by revising, rearranging, or replacing some words or sentences in AI-generated text.[317] Tools such as Quillbot and Undetectable.ai help students evade AI detectors.[318]

Watermarking—“embedding a secret code of sorts into AI-generated text”—could help detectors identify AI-generated text, but it is not available yet.[319] Some scientists are skeptical that watermarking will ever be a workable tool for detectors to identify AI-generated text,[320] especially because watermarking relies on open-source code, which anyone can use to strip out the watermark.[321] And watermarking is subject to the same problem that plagues AI detectors now: users can easily get around detection by replacing words with synonyms or otherwise slightly revising AI-generated text.[322]
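
For readers curious about the mechanics, the sketch below illustrates the general idea behind one proposed watermarking scheme: the generator quietly favors words from a “green list” keyed to the preceding word, and a detector holding the same key counts how often that preference appears. Everything here is a conceptual assumption for illustration; actual proposals operate on model tokens and output probabilities rather than whole words, and, as the sources above note, light paraphrasing or synonym swaps defeat the signal.

```python
import hashlib
import re

KEY = "shared-secret"  # assumed key the generator and detector would share

def is_green(prev_word: str, word: str) -> bool:
    """Does `word` fall in the pseudo-random 'green' half of the vocabulary
    for this context? (Toy stand-in for a token-level green list.)"""
    digest = hashlib.sha256(f"{KEY}:{prev_word}:{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    """Fraction of word pairs that land on the green list. Unwatermarked text
    should hover near 0.5; a watermarking generator would push it well above
    0.5, and synonym swaps drag it back down toward 0.5."""
    words = re.findall(r"[a-z']+", text.lower())
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    return sum(is_green(prev, word) for prev, word in pairs) / len(pairs)

print(green_fraction("The defendant moved to dismiss the negligence claim for failure to state a claim."))
```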

GenAI cannot tell a user if it wrote specific text either. ChatGPT cannot recognize its own work and has claimed authorship of classics such as Crime and Punishment.[323] One Texas A&M professor put his students’ papers through ChatGPT, asking if it wrote them, and if ChatGPT said it wrote a paper, he ran that paper through again.[324] If ChatGPT answered a second time that it wrote the paper, the professor gave the student author a zero on the assignment.[325] But the students had written the essays themselves.[326]

Because of AI detectors’ unreliability, some institutions of higher education are opting out of using them altogether.[327]

As professors struggle to identify text generated by AI chatbots, the NCBE is quickly rolling out the NextGen bar exam, which will require both drafting and revision skills, as explained in the next section.

4. Drafting vs. Revision Skills Needed for the NextGen Bar Exam

Scholars are already urging law schools to understand the changes coming with the NextGen bar exam and to adjust their curriculum and bar-passage programming in advance of it.[328] This article’s impetus is not the view that legal writing assessments should change simply because of the NextGen bar exam. Instead, it arises from the realization that GenAI may trick professors who use traditional assessment measures into thinking students have mastered the skills needed to pass the NextGen bar exam. Effective LRW professors will not only understand what GenAI chatbots can do and anticipate that students will use them but will also know what the NextGen bar exam will require students to know and do. Such professors can then revise both formative and summative legal writing assessments, and combine them in a mix, to measure the skills and knowledge needed to master learning objectives and pass the bar, rather than inadvertently grading a product that does not require students to demonstrate those skills and understandings.

It comes as no surprise that the NextGen bar exam will require students to draft original sections of legal documents when given relevant legal sources.[329] Thus, LRW professors should keep assessments in the mix that require students to draft full office memos,[330] trial motions, appellate briefs, settlement memoranda, and/or client letters. Additionally, because the majority of the skills tested by the NextGen bar exam require students to revise text, LRW professors should consider adding assessments to the mix that test students’ abilities to revise drafts they did not write or that let students demonstrate those revision skills without drafting a complete document.

Part IV of this article introduces tweaks the professor can make to directly assess the knowledge and skills a student learns or assess students’ writing processes rather than simply assessing drafts of the students’ written products.

IV. Proposed Assessments That Consider the Effect of GenAI and Prepare Students for the NextGen Bar Exam

LRW professors should select a mix of assessments to ensure that they measure students’ mastery of course learning outcomes.[331] The following sub-sections introduce assessments that can be combined in various ways to acknowledge the abilities and limitations of GenAI and that consider the skills and knowledge needed to pass the NextGen bar exam. Most of these suggestions are not new types of assessments and would be useful in legal writing classrooms with or without GenAI. The novelty lies in using a broader mix of summative assessments than before. Assessment literature has identified incorporating evidence from “multiple methods and sources of information” as a best practice for evaluating student learning, because doing so is more likely to result in accurate evaluations[332] and will enhance student learning.[333] This Part explains how to select the best mix. As explained earlier, utilizing a mix of formative and summative assessments is essential.[334]

In addition, the mix should include both traditional and performance measurements.[335] Traditional measurements represent the standard forms of testing in an educational context; they consist of multiple-choice and true/false questions and short-answer or essay exams.[336] Performance measurements require a “student’s active generation of a response that is observable either directly or indirectly via a permanent product.”[337] These include a variety of products that are highly discipline-specific: architectural drawings, research proposals, multimedia presentations, and memoranda of law are all examples of performance measurements.[338] Currently, legal writing courses rely heavily on performance measurements in the form of written legal documents and not as much on traditional measurements.[339] In addition to adding more traditional measurements, professors should also evaluate the process of writing, and not simply the end product, when using performance assessments.[340]

The following proposed assessments do not require either a wholesale adoption of GenAI chatbots or a prohibition on them in legal writing classrooms. On the one hand, if professors simply allow students to use GenAI chatbots for all their writing, such a practice would neglect the value of writing for developing critical thinking and legal analysis skills.[341] On the other hand, prohibiting the use of GenAI chatbots altogether is unrealistic, and using them wisely can help students learn the revision skills tested on the NextGen bar exam. Instead of either extreme, professors should choose a mix that includes some assessments encouraging the use of GenAI and others where GenAI assistance cannot meaningfully help.[342]

The proposed assessments below serve three distinct functions and are organized accordingly.[343] Professors should select assessments for their mix from all three functions and possibly from a fourth category of hybrid assessments that serve more than one of the functions at once. First, section A offers some suggestions for assessments that examine specific skills students need to draft legal documents but do not rely on long legal drafts to do so. Next, section B offers some alternatives for assessments that aim to evaluate students’ skills incrementally throughout the production of a legal document, rather than by merely judging a finished draft. This approach goes beyond the traditional method of providing feedback on a preliminary draft and assessing a final version with those comments integrated. That traditional method does not necessarily ensure that students are taking the correct steps or gaining the underlying knowledge to independently write and revise the initial draft. Third, section C describes proposed assessments that still involve students drafting a legal document; however, rather than evaluating steps along the way, a professor incorporates components into the assessment that either prevent reliance on GenAI or permit its use in a way that serves pedagogical purposes. Finally, section D explains hybrid assessments—those that could serve two or more of the three distinct functions depending on how the professor uses them.

A. Assessing Student Learning Independent of Long Drafts

Legal writing courses may already incorporate some traditional measures to test discrete skills such as citation and grammar. This section challenges professors to rethink those measures and to focus them on assessing critical reading, critical thinking, and legal analysis.

1. Multiple-Choice Quizzes or Tests

One way to assess students’ knowledge, critical thinking, critical reading, and legal reasoning skills is through the use of multiple-choice or short-answer quizzes or tests. The NextGen bar exam will use multiple-choice and short-answer questions in its item sets to test foundational skills.[344] Consequently, assessing students using multiple-choice or short-answer quizzes gives them a chance to practice these types of questions for legal writing.

There is a high correlation between students’ performance on multiple-choice exams and essay examinations.[345] A recent empirical study on the use of multiple-choice questions in law school found that “multiple-choice questions, when properly constructed using identified guiding principles, are an efficient and effective way to assess legal reasoning abilities.”[346] This is not a new discovery; past research has also shown that multiple-choice questions can assess “higher-order thinking skills.”[347] Legal reasoning is an internal cognitive process, but some students face challenges when trying to express this reasoning in written form.[348] Thus, multiple-choice questions focused on assessing students’ legal reasoning or critical reading skills may test that knowledge more directly than the proxy of a legal document. Multiple-choice questions can also help students showcase the skills needed for revising longer texts; when students compare passages, identify problems with a text, or rhetorically analyze a text through multiple-choice questions, professors are isolating and assessing the mental processes needed for revision.

To draft test items—“the basic unit of observation of any test,” also known as individual test questions[349]—a professor must understand the content, determine the cognitive behavior to be tested, and write with originality and clarity.[350] As long as professors understand the content, though, they can quickly learn item-writing concepts[351] by following established guidelines for creating multiple-choice exams.[352]

LRW professors who teach using team-based learning (TBL) have been giving multiple-choice quizzes for years.[353] As one of the steps in TBL,[354] students take a readiness assurance quiz (RAQ), which is a multiple-choice quiz that assesses their ability to apply the foundational material in their current course unit.[355] The students take the RAQ independently first, and, before they know their score, they take the same RAQ as part of a team with five to seven other students.[356] During the team RAQ, students debate answers, discuss principles, and review the course material together to come to a consensus on the right answer.[357] Both the individual and team RAQs account for a small percentage of the students’ overall grades.[358]

These multiple-choice quizzes are application quizzes, not comprehension quizzes.[359] In the legal writing context, RAQs ask students to do such things as compare case explanations and select the best one, along with the reason for that choice; choose the best thesis sentence after reviewing an argument paragraph; choose the best large-scale organization for a specific memorandum; select the best synthesized rule; recognize specific tones in a passage and understand how to change the tone; or select the best theory of the case for a passage in a motion. In essence, students do what professors do when they grade papers: they review a passage and judge it according to the foundational doctrines of legal writing. This evaluation of text is also what students will need to do when revising passages on the NextGen bar exam.

Some scholars dismiss multiple-choice questions as a measurement for writing skills.[360] But LRW professors teach more than writing skills, and learning outcomes for their courses include more than just how to write. They include competency in applying legal reasoning, examining the hierarchy of authority to properly use law in legal analysis, synthesizing rules and explaining legal tests, recognizing and using canons of statutory construction, composing legal citations, and much more.[361] Often, the legal documents students write serve as proxies for assessing students’ knowledge of those concepts, but that knowledge could be tested independently through a multiple-choice or short-answer quiz.

Professors who worry that multiple-choice questions promote rote memorization instead of critical thinking can always allow challenges to multiple-choice questions.[362] Students then have the option of challenging questions they perceive as unfair, ambiguous, illogical, or having some other flaw.[363] If their challenge successfully persuades the professor that a particular question is flawed in some way, then the entire class is awarded credit for that question.[364] A similar RAQ appeal process is built into TBL portions of courses.[365] Challenging questions involves evaluating them for quality, which necessarily requires a deeper understanding of material and higher-order critical thinking skills.[366] It also offers instructors an understanding of students’ thought processes during multiple-choice exams.[367] The process of writing the challenge furthers learning as well because students must review course material to prepare the challenge.[368]

2. Final Exams that Mimic the NextGen Bar Exam

LRW professors may also choose to include a final exam that mimics how the NextGen bar exam will test legal writing skills. Testing improves learning.[369] Research shows practice tests are an effective way to prepare for a final test,[370] and students who take practice tests perform better than those who do not.[371] The act of practicing a test teaches students how best to take the test and “can enhance retention [of material] by triggering elaborate retrieval processes.”[372] Moreover, practice tests help ease test anxiety and increase test takers’ mental stamina.[373]

Using a simulated part of the bar exam in legal writing classrooms is not a new idea.[374] Professors Alexa Chew and Kaci Bishop have explained how the current MPT is a teaching tool for legal writing that includes a decent writing assignment.[375] They posit that “[t]eaching the MPT can also help illuminate for bar takers their own work styles and their processes or preferences for research, analysis, and writing, thereby helping them understand how to make the best use of those styles, processes, and preferences, both on the bar exam and in practice.”[376] They then explain how to improve performance on the MPT through practice.[377] Despite the benefits of using performance tests like the MPT, their use is not common as a summative assessment tool in legal writing courses.[378] With the rise of GenAI chatbots, this should change.

Instead of having the final assessment at the end of the semester be a final draft of a memorandum, motion, or brief, LRW professors could give a final exam that includes an MPT-like longer writing task or that takes the form of an item set, both of which will appear on the NextGen bar exam. The final exam could include multiple-choice and short-answer questions as well as a directive to draft a legal document using the legal sources provided. Many law school final exams utilize software that prevents students from accessing the internet or other programs, and they are proctored and administered in person. These elements of the law school final exam format would also allow professors to test their students’ writing skills knowing GenAI chatbots were not helping.

B. Assessing Skills at Various Stages Before the First Draft

Suppose a legal writing professor wants to assign a complete office memorandum, a trial motion, an appellate brief, or some other long piece of legal writing. The suggestions below contemplate professors assessing the steps of that process in ways other than giving feedback on a first draft.

1. Evaluating Organizational Tools

LRW professors often teach students to create tools to organize their research and analysis before writing. Why not assess those as the students move through the process of writing? If professors create research projects with short-answer questions to begin a large writing assignment, or have students keep journals or research trails throughout the process of research and writing, professors could assess those. Throughout the process, professors could ask students to create case briefs or case charts to organize case law and turn them in for review.[379] Students could take notes directly on electronic or hard copies of opinions from important cases and turn them in for professors to assess critical reading skills.[380] As students begin to analyze the law, professors could have students turn their notes into concept maps or diagrams to visually represent legal concepts, relationships, or arguments.[381] GenAI chatbots can create outlines, so a better way to assess that skill might be to give students an empty or partially completed outline of a memorandum or motion and have them complete it in class,[382] or to provide students with an illogically ordered outline and have them revise its organization in class.[383] All of these assessments look at the students’ process and ability to analyze complex ideas, identify connections, and present information in a visually organized manner. They are also difficult for ChatGPT to mimic.

When assessing steps in a larger written product, it may be helpful to grade the steps on an all-or-nothing basis. Under this technique, if the student gives a good-faith effort (as determined solely by the professor) towards completing a written product, the student earns a grade of 100%; if the student does not, they get a zero. This is not a pass/fail grade in the sense that the professor simply checks to make sure the work is complete: the students actually receive a score of 100% that is calculated into the other numeric grades assigned during the course.

This approach is helpful if a professor wants students to try their best without worrying whether the product is perfect. The more students practice skills, the lower their anxiety will be and the higher their performance will be.[384] Because law students are intent on getting the best grade they can, when practice experiences are graded, students often ignore the learning opportunity those experiences bring and simply concentrate on the final grade.[385] To encourage approaching such experiences as opportunities to learn, professors should incorporate many low-risk formative assessments in law school courses.[386] As long as professors emphasize that they are looking to reward effort, not the product, students perceive the assignment as low-risk and are more apt to try it themselves instead of relying on ChatGPT and risking a zero.

Having an all-or-nothing grade solves a few problems with low-risk formative assessments. First, having no grade associated with a project often results in students thinking, “I am not putting much effort into this.”[387] This is not because students do not care about their learning. But putting the most work and attention into the assessment that accounts for the largest percentage of their final grade is a strategy students employ to handle the high stakes of law school.[388] Additionally, students are afraid of failing.[389] Their highly competitive nature leads them to put in a good faith effort so as not to get a failing grade, even if the assignment is not worth a lot of points, because they want every point they can get.[390] Students graded on a curve will not risk being the one student who puts forth so little effort in comparison to the others that they get a 0%.[391] Finally, giving an all-or-nothing grade speeds up the grading for professors.

2. Incorporating More Oral Communication Skills Assessment

Many LRW professors already incorporate oral arguments into their 1L spring semester writing courses.[392] “[O]ral argument reinforces the basic analytical skills that students are learning in the classroom,”[393] and gives professors the perfect opportunity to assess those analytical skills in a manner beyond students’ written work. LRW professors should think about increasing the percentage of students’ overall grades for which oral arguments count and incorporating more oral skills into the fall 1L semester.[394]

LRW professors should consider adding alternative oral communication assessments, too. During an objective legal writing course, for example, professors could assign client interaction simulations where students engage in mock client interviews to demonstrate their ability to elicit relevant information and to recognize key facts.[395] Or students could lead client counseling sessions to provide legal advice, address client concerns, and summarize complex legal tests and concepts.[396] Simulating client interactions will also help students practice other skills that will be tested on the NextGen bar exam, such as client counseling and advising, and client relationship and management.[397] In the spring, students could engage in settlement conferences or mediations in addition to oral arguments.[398]

3. Live Grading and Student Conferences

Setting up meetings allows professors to ask questions about legal analysis and reasoning before students communicate them in writing, and it will reduce student reliance on GenAI, or at least tip off the professor to its use. Holding conferences before the first draft is due discourages students from relying on ChatGPT to draft legal documents because students will anticipate the need to orally explain the law and their analysis.

If LRW professors choose to hold conferences after the first draft’s deadline, they could revise the format of their student conferences to be more of an assessment than a chance for students to ask questions about the professor’s comments. Asking probing, open-ended questions would ensure that even if students have used GenAI to help them draft their final product, they still understand the law and are able to analyze issues on their own. Asking questions like, “What do you think is the most important point of this case as it applies to our client’s facts?,” “Do you think you are missing any key facts from this case explanation?,” or “Talk me through how you arrived at this synthesized rule” will reveal whether the student understands the material in the paper while also revealing what the student’s writing and thinking process was. This dialogue “gives instructors clues regarding the underlying reasons for students’ performance.”[399] This type of dialogue can also reveal other problems students are having that would not be evident from their text alone, such as trouble accessing videos on the classroom platform.[400]

Redefining the format of conferences can prevent reliance on GenAI, too. Whether the conference is held before or after turning in a paper, LRW professors could restructure conferences to be more student-led by having students simulate a meeting with another lawyer.[401] For example, students could be required to email their professor an agenda of substantive questions and issues a day before the simulated meeting.[402]

Holding live-grading sessions with students has all the benefits of student conferences on steroids.[403] Live grading or live critiquing is a feedback method wherein the professor gives oral feedback to students on their work while the professor reviews the written work for the first time.[404] The student and professor meet in person or online in real time (such as through Zoom).[405] The professor has a hard copy or electronic copy of the student’s work available for both to see and the professor reads, reacts to, and sometimes marks on the work with the student present.[406] Instead of using the text to guess what the student meant or where the student’s confusion was, the professor and student have a dialogue about the text.[407] In a live meeting, the professor can ask students the reasoning behind their organization or word choices, or the professor can ask them to articulate what they meant by unclear passages.[408] In this way, the professor can address the specific issue at hand, not simply the written words.[409]

C. Designing Assessments with Components that Render GenAI Less Helpful

Below are strategies that reduce students’ desire or ability to rely on GenAI and that teach students important skills even if they use GenAI chatbots.

1. Collaborative Assignments

Professors could allow students to draft documents as a team or group. While collaborative drafting is not as popular an assessment in law school as drafting documents individually, lawyers in practice collaboratively draft documents all the time.[410] Most matriculating law students are digital natives[411] who gravitate towards collaborative work, and thus enjoy assessments in which they collaborate with their peers.[412] Students will still get practice drafting legal documents, but group work helps to eliminate cheating.[413] This is especially true for small groups.[414] Working as a team makes students accountable to one another,[415] which may lower their desire to use GenAI chatbots if they were instructed not to. Group drafting also incorporates the social construct theory of writing[416] in which “writing is not a ‘solitary cognitive activity’ [but] is instead a social process.”[417]

Hundreds of studies prove that cooperative learning groups[418] are superior to other teaching methods,[419] although there has admittedly been little research specifically focusing on the use of cooperative learning in graduate-level education.[420] “[C]ompared to competitive learning and individualist learning, cooperative learning can enhance student achievement, promote critical thinking, foster positive attitudes towards the subject area, increase interpersonal skills, decrease attrition rates, and improve students’ self-esteem.”[421] Studies show that students engaged in collaborative assignments achieve higher test scores than those who do not[422] and have increased motivation and retention.[423] Students in diverse collaborative groups—those with different races, genders, etc.—“tend to have a deeper understanding of the material and remember more than those in homogeneous groups.”[424] Additionally, students given cooperative-learning opportunities are more inclined to ask the professor questions both in office hours and during class than their counterparts in traditional teaching settings.[425]

When students team up to analyze, think critically, and solve problems, they usually end up with better results than when they work independently.[426] Students learn as they hear content repeated in different ways, as they summarize content into more familiar words, and as others correct their application of the content.[427] The analysis in a collaboratively written document is often more thorough than in one written by a single student. And where one student working alone may miss a point in the material and not realize it until the professor gives feedback (e.g., “I forgot a roadmap!”), someone else in the group may remember that same point and teach it to the others. Studies have shown that the deepest learning of material occurs when students have to explain the material to a peer.[428]

If group assignments are designed mindfully and administered in the right setting and conditions, professors can reduce the likelihood that law students will disengage from their group. First, if groups are going to be permanent throughout the semester, it would be best to have the group develop a contract together outlining their expectations of each other.[429] Second, if law students understand what skill or knowledge a group assignment is designed to teach them and how that skill or knowledge will connect to their legal career outside of school or other individual assessments, students are less likely to “free-ride” in a group.[430] Showing that the power of the group dynamic helps students produce high-quality work also combats free-riders.[431] This can be done by having students attempt an assessment on their own and then attempt the same task as a group. Third, when assigning collaborative assessments, faculty should take the opportunity to teach law students about the collaborative nature of law practice and the benefits of learning to work with a group now.[432] And finally, professors should consider reserving a percentage of the final grade on an assignment for students to grade their teammates. Each student could report to the professor privately their estimate of the work done by each teammate.[433] Or each student could complete an assessment worksheet discussing each teammate’s contributions, which the professor then passes on to the evaluated teammate.[434]

2. Self-Assessment and Reflection

Self-assessments encompass a wide range of activities, but at their core, self-assessments (also referred to as reflections or self-evaluations) are guided exercises that ask students to critically gauge their understanding of key concepts and skills, evaluate their performance, and set goals for further improvement.[435] Self-assessment promotes metacognitive skills[436] and self-regulated learning,[437] which are important skills for life-long learners such as lawyers.[438] Developing metacognitive skills also helps students as they prepare for the NextGen Bar exam.[439] After all, when students can understand and monitor how they learn and then “control[] and adjust[ their] thinking for the purpose of learning” the knowledge and skills needed to pass the bar exam, it will be easier for them to do so.[440] Moreover, self-assessment promotes academic integrity because students must self-report on their learning progress.[441] If students know they will have to reflect on their writing process, they are less likely to use GenAI chatbots to draft documents without their professor’s express permission.

For students to successfully assess their own strengths and weaknesses, “the professor must first have provided the students with a clear understanding of what criteria students should use to gauge their performance against in order to determine what makes a product good or poor.”[442] Once students are provided guidelines, self-assessment can take many forms. Students could create a self-assessment portfolio, completing a series of self-assessments that a professor assigns throughout a course.[443] A less-regimented self-assessment could have students maintain reflective journals throughout the course, wherein they document their learning experiences, insights gained, and challenges faced. Professor Joi Montiel advocates for a method known as “Self-Assessment by Comparative Analysis.”[444] To use this technique, professors create a “good” example of a finished product, which they release after students complete a writing project along with instructions that methodically guide students through the process of evaluating their work product against the “good” example.[445] Students compare their own work to the “good” example, analyze the differences, identify areas for improvement, and strategize on how to enhance both their output and process.[446] Some self-assessments also ask students to reflect on their writing processes to provide the professor with insights not apparent from written text alone.[447] This reflection on process would be especially useful to determine whether students relied on GenAI in unhelpful or unauthorized ways.

3. Peer Evaluation

Peer evaluation, also known as peer review, peer editing, or peer assessment, encompasses various activities wherein students learn from one another by providing and receiving feedback on each other’s work.[448] Some evidence suggests that the act of crafting feedback is possibly even more advantageous than merely receiving it.[449] In addition to peer evaluation being a learning tool for the peer reviewer, professors can assess each reviewer’s comments to see how well the reviewer understands legal writing concepts. This assessment of the reviewers is additionally valuable because once reviewers get into practice and use GenAI chatbots, their skill at recognizing something amiss in a draft and revising work they did not draft will be in high demand. And revising a draft they did not write is part of how the NextGen bar exam will test their legal writing skills. Although a peer reviewer could in theory run another student’s paper through a GenAI chatbot and ask it to critique the paper generally, the chatbot would offer mostly mechanical revisions and suggestions pertaining to flow and organization. If professors specifically required reviewers to analyze the substance of the paper (i.e., critiquing explanations of case law or rule synthesis), students would be more likely to do it themselves. And even if they did not, students would need a more sophisticated grasp of legal writing skills to craft helpful prompts for a GenAI chatbot.

Incorporating peer evaluation has some benefits that self-assessments and teacher-assessments do not. First, the act of identifying issues in a peer’s work can enhance a student’s ability to recognize similar deficiencies in the student’s own work.[450] Some researchers argue that students develop an understanding of assessment standards by replicating the grading of their professors.[451] This makes students better at understanding how to be successful on their own future assessments,[452] and prepares them to review AI-generated writing in practice. Second, law students need to see examples of a range of their peers’ work to gain an understanding of standards in writing they will be doing in practice,[453] which will also help them recognize when GenAI chatbots stray from those standards and where NextGen bar exam passages need revision. Third, peer-based assessment aids students in building collaborative relationships with their peers and teaches them to be open to receiving constructive feedback on their written work from colleagues.[454] Fourth, when reviewers get a rare opportunity to see a peer’s unique and personal writing style, it broadens their perspectives by exposing them to varied approaches towards and analyses of the same problem or task.[455] Fifth, peer editing drives home the professor’s message about writing with the audience in mind.[456] Students start to appreciate readers’ frustrations when they attempt to understand a document that is disorganized, unclear, or filled with grammatical errors.[457]

Before assigning peer assessments, professors should train students on what to look for based on predetermined criteria and instruct them how to give feedback to maximize consistency.[458]

The format of a peer evaluation assessment can vary.[459] Professors can also choose to assign multiple reviewers to each document.[460] Students can work on one paper, assess multiple papers, work in groups to review papers, or work on their own.[461] If professors plan to provide peer feedback to student authors, or want to prevent friendship marking—where peers assign higher grades to their friends, regardless of their actual performance, rather than providing objective, thoughtful feedback[462]—they should make sure the review is double-blind: neither the authors nor the reviewers know each other’s names. To achieve this, professors can use online tools like a Moodle workshop[463] or Eli Review[464] to appoint peer reviewers. However, professors do not need to pass peer feedback on to student writers. The act of giving good feedback helps peer reviewers develop revision skills, and professors can assess reviewers’ understanding by looking at their comments.[465] In other words, a professor can comment and respond to peer feedback, essentially assessing a peer reviewer’s knowledge through the peer reviewer’s feedback.[466]
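
As an illustration of the double-blind mechanics, the short sketch below shows one way reviewer assignments could be scripted: shuffle the roster, have each student review the next paper in the shuffled ring so no one reviews their own work, and label each paper with an anonymous code. The roster, seed, and helper name are hypothetical; learning-management tools like those cited above handle this automatically.

```python
import random

def assign_double_blind(authors: list[str], seed: int = 2024) -> dict[str, str]:
    """Hypothetical helper: map each reviewer to an anonymous paper code,
    guaranteeing no one reviews their own paper and no names are revealed."""
    rng = random.Random(seed)  # fixed seed so the professor can reproduce the assignment
    ring = authors[:]
    rng.shuffle(ring)
    codes = {name: f"Paper-{i + 1:02d}" for i, name in enumerate(ring)}
    # Each student reviews the next author's paper in the shuffled ring.
    return {ring[i]: codes[ring[(i + 1) % len(ring)]] for i in range(len(ring))}

# The professor keeps the author-to-code key; students see only the codes.
print(assign_double_blind(["Arendt", "Brandeis", "Cardozo", "Douglas"]))
```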

D. Assessments that Serve Two or More Distinct Functions

The proposed assessments below could be used in more than one way. They would be appropriate as stand-alone assessments or in conjunction with an assignment requiring a longer written document.

1. Incorporating GenAI into Writing Assignments

Rather than eschewing GenAI chatbots entirely, some assessments could incorporate them. There are many benefits to doing so. First, incorporating technology into the assessment of student learning equips students with essential technical skills necessary for the contemporary landscape of law practice.[467] Because they are digital natives, matriculating law students today “don’t find digital tools a challenge”[468] and “may be more comfortable with technology than their predecessors, and thus more likely to rely on non-transparent computer processes for decision-making.”[469] At the same time, that comfort with technology does not always mean that they are prepared to use the types of technology they will need in the practice of law.[470] Incorporating GenAI into the legal writing curriculum will help to fill this gap.[471]

Second, if done correctly, using successive prompts to walk generative AI chatbots through the process of writing a first draft could help students practice what they will be doing on the NextGen bar exam. After the first prompt, students will have to recognize what is missing from a chatbot’s response to craft the next prompt for the chatbot to either revise what it wrote or add to it. Students will have to “identify language that should be changed . . . consistent with the facts, the relevant legal rules and standards, and the client’s objectives, interests, and constraints.”[472] And students will need to identify if the responses meet the needs of the legal document’s audience.

Third, by incorporating GenAI chatbots, professors can better meet expectations held by law students and lawyers that they incorporate real-world technology into the legal writing classroom. As of April 2023, 61% of lawyers and 44% of law students expect GenAI to change the way law is taught and studied.[473] According to a recent LexisNexis survey, some law students surveyed would like to see professors using a hybrid approach: talking about the uses for GenAI, permitting it for certain tasks, and forbidding it in other instances.[474] Digital natives are at ease with technology and expect professors to integrate it into the curriculum.[475] When professors choose to use technology to assess students, it “sends a message to the students that their professors are invested in their success.”[476]

Fourth, professors can introduce the strengths and weaknesses of GenAI chatbots, as well as ethical concerns with using them in practice. This education will help students put GenAI’s use in perspective. Just because students are digital natives does not mean they are information literate, and professors may need to increase instruction on information literacy in light of GenAI chatbots.[477] Professors could also introduce GenAI issues facing attorneys, such as ownership and copyright issues,[478] security and privacy issues,[479] and bias issues.[480]

This article cannot cover every specific assessment that could incorporate GenAI chatbots into the learning process. The suggestions that follow provide just a few ideas to get LRW professors started. Professors could begin with exercises designed to show students the limitations of ChatGPT due to hallucinations. For example, students could be instructed to prompt ChatGPT to do legal research and analysis on a client problem. Then students would be required to search for the legal sources ChatGPT cited to determine whether the sources are real and whether they say what ChatGPT claims they do. The students’ critical evaluation of how well ChatGPT performed would be the product professors assess.[481]

Once students have seen for themselves the limitations of ChatGPT, professors could incorporate training and assessment on prompt engineering into legal writing courses.[482] Students could be trained to specify the tone of the responses they want generated,[483] to specify the length of responses,[484] and to instruct ChatGPT not to fabricate legal sources.[485] They could learn how to improve ChatGPT’s responses by using chain-of-thought prompts.[486] ChatGPT saves the string of prompts inputted by a user, which is referred to as the chat history.[487] The professor could assess both a student’s chat history as it pertains to a writing assignment and the final written product.
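
The sketch below shows what an assessable prompt chain and exported chat history might look like when scripted rather than typed into the chatbot’s web interface. It assumes the OpenAI Python client (version 1 or later), an API key in the environment, and an illustrative model name; the prompts, word limit, and transcript file are likewise assumptions for this example, not a prescribed workflow.

```python
# A minimal sketch, assuming the OpenAI Python client (v1+) and an API key in
# the OPENAI_API_KEY environment variable. Model name and prompts are illustrative.
import json
from openai import OpenAI

client = OpenAI()
history = [{
    "role": "system",
    "content": ("You are assisting a first-year law student. Use a formal, objective "
                "tone, keep responses under 300 words, and do not cite or invent any "
                "legal authority beyond the sources the student supplies."),
}]

def ask(prompt: str) -> str:
    """Send the next prompt in the chain and record it in the chat history."""
    history.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

ask("Summarize this adverse-possession rule in two sentences: <rule text supplied by student>.")
# A chain-of-thought style follow-up: ask for the reasoning steps before the conclusion.
ask("Now apply that rule, step by step, to a claimant who used the land only seasonally "
    "for twelve years, and state a conclusion last.")

# Export the full prompt chain so the professor can assess the process, not just the product.
with open("chat_history.json", "w") as f:
    json.dump(history, f, indent=2)
```

A student could then submit both the exported transcript and the final memorandum, letting the professor evaluate how the prompts evolved alongside the finished product.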

After students have done some legal research on an issue, professors could teach students how to prompt ChatGPT through preparing a first draft of a memorandum, motion, client letter, or contract or through drafting discrete portions of them (e.g., have ChatGPT draft case summaries[488] or analogize or distinguish precedent opinions from client facts[489]). After ChatGPT produces a first draft, students could be instructed to revise the draft themselves, and professors would then assess both the first AI-written draft and students’ revised drafts, in which students would have included commentary on how they adapted the AI-written text.

Finally, professors could ask students to take a memorandum they initially wrote and run it through ChatGPT to revise it in some specific way. A GenAI chatbot can edit grammar and punctuation, create thesis sentences, improve flow, and revise for conciseness.[490] It can also help students change the format and tone of their memoranda into trial motions or client advice letters. The professor could then assess the way the student prompted ChatGPT through the revisions or only assess the final product after the revisions.

2. Multimodal Assessments

Professors should consider using at least one assessment that allows students to showcase their creativity as they demonstrate their knowledge of legal analysis or writing. This could include creating a podcast,[491] inventing or playing a game,[492] posting on or responding to a video discussion forum,[493] or recording a screencast.[494] Encouraging atypical or unconventional tasks within the law school classroom fosters creativity.[495] The conventional law school classroom often overlooks the significance of creativity in the process of problem solving.[496] And students enthusiastically respond to assessments that are out of the ordinary.[497]

Multimodal assessments could be used to assess discrete subjects such as citation or grammar. These types of assessments would fall under those described in Part IV(A) that are not tied to a larger writing project. But professors should think more broadly. If students do not understand the law, they cannot write about it. So professors can have students create something that demonstrates they understand the law needed to address their clients’ issues in a memorandum or brief. These assessments would be similar to those in Part IV(B). A few ideas include:

  • Have students create videos explaining a rule and the key facts, holding, and reasoning of a case they will use to illustrate that rule in a memo or brief and upload the videos to a video discussion forum. Allow other students to comment on them, and, as a class, choose the one that most succinctly explains the rule and case while including all the key facts and accurately stating the rule and holding.

  • Have students create a short screencast where they walk through the rule statements they will use in a memorandum or brief and show where in precedent cases they found the explicit rule, how they synthesized a rule, or how they derived an implicit rule.

  • Have students create a podcast or cartoon storyboard that tells the client’s story in a persuasive way, emphasizing the theme they will use in their motion or brief.

  • Have students create a video or podcast summarizing the law and explaining the arguments they will include for one issue in a memorandum or brief. For arguments in a brief, this assignment could be a precursor to their oral argument.

  • Professors could create an escape room experience using clues that require students to apply the legal research and writing skills they learned to “escape” with their team.[498] Students learn through the process of creating problems for others to solve and also through the act of solving problems.[499]

  • Professors could create Jeopardy or trivia games to test students’ knowledge of concepts like hierarchy of authority, sources of law, or information literacy. Better yet, have teams of students create games for each other.[500]

Although students may use generative AI chatbots for a portion of the project, they cannot use them for the entire project. Students will still have to grapple with the material, and “asking students to place concepts in a non-traditional format . . . requires a grappling with the materials that results in a deeper understanding of it.”[501] And depending on the focus of the creative work, the assignment could increase the knowledge and skills students need for the NextGen bar exam. For example, if the substance of a video covered components of a complaint or answer, the student would be better prepared, when provided a complaint on the NextGen bar exam, to “identify language that should be changed, and make suggestions for how that language should change.”[502]

3. In-Class Writing Assignments or Short-Answer Quizzes

In addition to multiple-choice quizzes, LRW professors should consider in-class writing assignments. Having students write in class gives professors more control over the writing process and some assurance that students are not using GenAI chatbots during that time. It also gives students practice writing under time constraints—something they will need to do on the NextGen bar.[503]

LRW professors can take two different approaches to assessments using in-class writing time. First, professors can have students begin drafts of larger documents in class to ensure that students at least begin the process on their own.[504] For example, professors can have students begin to draft the statement of facts for a motion in class. If part of a larger writing project, professors could assess the in-class writing separately or assess it as part of the larger project, but with the caveat that students cannot change it for the final draft or, if they do, that they must explain why they made the changes.

Alternatively, professors can assign a short writing assignment designed to be fully completed by the end of class. The short in-class writing assignment could be a stand-alone assessment of a certain skill a professor is teaching. For example, the professor could ask students to draft synthesized rules or case explanations that won’t be used in a final memorandum but will be assessed on their own.

Additionally, in-class writing assessments need not take up the whole class time. Professors could assign a “minute paper,” which is a timed writing exercise that students complete during class in response to a professor’s prompt.[505] The writing time is usually only one to five minutes long, forcing students to prioritize the points they wish to make, but still giving them a chance to demonstrate their knowledge through writing.[506] Professors can customize legal writing prompts to focus either on the research and writing process or theory itself, or on the subject matter of the client hypothetical the class is working on. The professor can use the minute paper to determine the extent to which students understand legal writing doctrine or the issues in the case without waiting for the final product.

LRW professors should consider whether a short in-class writing assignment could be a graded quiz. Student performance improves when students are given several short tests or quizzes rather than one long test or writing assignment.[507] Studies also show that students learn better when quizzes they take count towards their grade.[508] And if quizzes are graded, studies show students prefer quizzes that are not weighted too heavily in their overall grade (defined as 10 to 25% of the total grade).[509] Promptly assessing what students can write in one class period could help reduce the grading a professor does at the end of the semester. It could also help isolate assessment of specific learning outcomes.

Conclusion

GenAI chatbots are changing what it means to write a legal document. This shift will occur even more rapidly when Lexis and Westlaw each release a GenAI chatbot product whose LLM will be trained on their respective, up-to-date legal databases. Concurrently, the pathway to becoming an attorney is changing drastically in most jurisdictions. Bar examinees will need to understand and demonstrate the skills necessary to independently draft and revise legal documents. Like it or not, the convergence of these two phenomena will profoundly impact what law students need to learn in legal writing classrooms. GenAI is changing how writing is assessed in higher education.[510] LRW professors should understand GenAI’s capabilities and limitations and reevaluate assessments in legal writing courses to ensure students are actually learning the skills they will need to pass the NextGen bar exam and practice law.

Changing assessment methods effectively changes how and what students learn.[511] The assessments LRW professors use will not be effective unless professors choose a mix that tells them, at the end of a legal writing course, whether their students have the capacity to be capable lawyers once the full force of GenAI and the NextGen bar exam emerges.[512] The best mix will include assessments that serve different functions. Some should assess skills without requiring the production of long written legal documents such as memoranda or motions. Some should assess students’ skills at different steps of the writing process, including steps before drafting begins. And others should incorporate GenAI into the drafting and editing of longer documents or incorporate elements that discourage students from using GenAI for those purposes.

This article is meant more as a triage for LRW professors’ classrooms than a comprehensive guidebook on all GenAI issues. There are many questions that it leaves unexplored for future scholarly consideration.

  • What does it mean to write now?

  • What exactly is the problem with letting GenAI help write legal documents in practice as long as lawyers are fulfilling their ethical obligations?

  • What GenAI policies should LRW professors or law schools adopt?

  • Should LRW professors rethink their learning outcomes in light of GenAI? What would revised learning outcomes look like?

  • Should students cite to or acknowledge the use of GenAI in their writing? Should lawyers? Should scholars?

  • Is it fair to allow or require GenAI use in law school? Are students who are better at prompt engineering or who can afford more resources given an unfair advantage?

  • How do writers account for the inherent bias in GenAI?

  • What ethical concerns should lawyers have about using GenAI? What confidentiality concerns need to be resolved?

This article is simply a first step in helping professors adjust to the rapid acceleration of new technology and revisions to attorney licensing requirements.


  1. All the information on GenAI in this article is current as of September 2023, due to the Journal of the Legal Writing Institute’s publishing schedule.

  2. See infra Part II(A)(2). The difference in scenarios arguably depends on students’ compliance with the GenAI prohibition, too. The first problem with relying solely on compliance is that many professors are not technologically savvy enough to word such a policy in a manner consistent with their intent because they are not as aware as students of how many tools and products have this technology. Ria Bharadwaj, Catherine Shaw, Louis NeJame, Sterling Martin, Natasha Janson & K. Fox, Time for Class: Bridging Student and Faculty Perspectives on Digital Learning 5 (2023) (e-book) (“Faculty and administrators lag students in tool usage and thus cannot form effective policies to address the use of AI in courses.”). The second is that there is no way to check whether a student complies with the policy, even if they certify they did. See infra Part III(C)(3). Finally, as one student put it, “Look at any student academic-integrity policy, and you’ll find the same message: Submit work that reflects your own thinking or face discipline. A year ago, this was just about the most common-sense rule on Earth. Today, it’s laughably naïve.” Owen Kichizo Terry, I’m a Student. You Have No Idea How Much We’re Using ChatGPT, Chron. Higher Educ. (May 12, 2023), https://www.chronicle.com/article/im-a-student-you-have-no-idea-how-much-were-using-chatgpt? [https://perma.cc/NFD9-HR7B].

  3. Nancy E. Millar, The Science of Successful Teaching: Incorporating Mind, Brain, and Education Research into the Legal Writing Course, 63 St. Louis Univ. L.J. 373, 377 (2019) (“A lawyer’s ability to write well is highly prized.”).

  4. Id. at 377 & n.25 (collecting articles that discuss writing as essential to competent representation).

  5. Suzanne Ehrenberg, Embracing the Writing-Centered Legal Process, 89 Iowa L. Rev. 1159, 1187 (2004).

  6. See Kirsten K. Davis, “The Reports of My Death Are Greatly Exaggerated”: Reading and Writing Objective Legal Memoranda in a Mobile Computing Age, 92 Or. L. Rev. 471, 476 (2014).

  7. Ehrenberg, supra note 5, at 1187.

  8. See Davis, supra note 6, at 497.

  9. Ehrenberg, supra note 5, at 1186; see Davis, supra note 6, at 476 (“Critical thinking and careful reasoning are still at the core of a lawyer’s intellectual duties, and exploring legal analysis in writing is an essential component of meeting that duty.”).

  10. Gregory S. Munro, Outcomes Assessment for Law Schools 15 (2000).

  11. ChatGPT is short for chatbot generative pre-trained transformer. Lucas Mearian, What Are LLMs, and How Are They Used in Generative AI? Computerworld (May 30, 2023), https://www.computerworld.com/article/3697649/what-are-large-language-models-and-how-are-they-used-in-generative-ai.html [https://perma.cc/6ZGC-XF54].

  12. Introducing ChatGPT, OpenAI (Nov. 30, 2022), https://openai.com/blog/chatgpt [https://perma.cc/7BBK-JZZN].

  13. Alex Hughes, ChatGPT: Everything You Need to Know About OpenAI’s GPT-4 Tool, BBC Sci. Focus (Sept. 25, 2023), https://www.sciencefocus.com/future-technology/gpt-3/ [https://perma.cc/8BYP-J9FM] (“A free version of ChatGPT (GPT-3.5) is available for anyone to use on the ChatGPT website. All you have to do is sign up to get a login, and you can be mining the depth of the AI model in seconds.”).

  14. Generative Artificial Intelligence, Wikipedia (Nov. 4, 2023), https://en.wikipedia.org/wiki/Generative_artificial_intelligence [https://perma.cc/72VQ-MNVL]; accord George Lawton, What Is Generative AI? Everything You Need to Know, TechTarget, https://www.techtarget.com/searchenterpriseai/definition/generative-AI [https://perma.cc/5G4S-6Z5G] (last visited July 1, 2023).

  15. ChatGPT, OpenAI, https://openai.com/chatgpt [https://perma.cc/YSY4-KMJL] (last visited July 4, 2023).

  16. AI chatbots are also referred to as AI content tools or AI writing tools. Honorlock, Preventing AI Content Tools in Higher Ed: Instructional Design Tips & Technology to Stay Ahead of ChatGPT & Other AI Content Tools 5 (2023) (e-book). This paper uses the term “generative AI chatbots” or “GenAI chatbots” to distinguish generative AI that relies on large language models to produce text as opposed to generative AI that produces images or other media. ChatGPT is an example of a generative AI chatbot.

  17. Mearian, supra note 11; Maria Diaz, How to Use ChatGPT: Everything You Need to Know, ZDNet (Oct. 3, 2023), https://www.zdnet.com/article/how-to-use-chatgpt. [https://perma.cc/K5SP-4E5N].

  18. Some of the alternative chatbots that utilize generative AI are Google Bard, HuggingChat, Bing AI, Sparrow by DeepMind, YouChat, Chatsonic, and OpenAI Playground. Krissy Davis, The Best AI Chatbots: ChatGPT and Other Alternatives, We Are Developers (May 30, 2023), https://www.wearedevelopers.com/magazine/best-ai-chatbots-chatgpt-and-other-alternatives [https://perma.cc/652M-FM8D]. Additionally, DALL-E is “an AI system that can create realistic images and art from a description in natural language.” DALL-E2, OpenAI, https://openai.com/dall-e-2 [https://perma.cc/SW7U-FB32] (last visited July 1, 2023).

  19. What Is Generative AI?, McKinsey & Co. (Jan. 19, 2023), https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-generative-ai [https://perma.cc/6J7E-YAAV].

  20. GPT-4, OpenAI (Mar. 14, 2023), https://openai.com/research/gpt-4 [https://perma.cc/SR2A-EGD5].

  21. Hughes, supra note 13.

  22. ChatGPT Impacts and Warnings, Lexis+ (Mar. 23, 2023), https://plus.lexis.com/document/lpadocument?pddocfullpath=%2Fshared%2Fdocument%2Fanalytical-materials%2Furn%3AcontentItem%3A67VF-N1C1-F361-M034-00000-00&pdsourcegroupingtype=&pdcontentcomponentid=500750&pdisurlapi=true&pdmfid=1530671&crid=e4d2e0a8-3209-40de-94d8-ce3b96ebdfdf.

  23. Hughes, supra note 13.

  24. Id.; Kevin Roose, How Should I Use A.I. Chatbots Like ChatGPT?, N.Y. Times (Mar. 30, 2023), https://www.nytimes.com/2023/03/30/technology/ai-chatbot-chatgpt-uses-work-life.html [https://perma.cc/BH86-9RJB].

  25. See Francesca Paris & Larry Buchanan, 35 Ways People Are Using A.I. Right Now, N.Y. Times (Apr. 14, 2023), https://www.nytimes.com/interactive/2023/04/14/upshot/up-ai-uses.html [https://perma.cc/VYJ6-P7JV].

  26. Mearian, supra note 11.

  27. See infra Part III(C)(2).

  28. See infra Part III(A)(3).

  29. See infra Part III(C)(3).

  30. See generally Nat’l Conf. Bar Exam’rs, About the NextGen Bar Exam, Next Gen, https://nextgenbarexam.ncbex.org/ [https://perma.cc/25KK-HPLK] (last visited July 2, 2023).

  31. See infra Part III(B)(2).

  32. See NCBE Testing Task Force, Overview of Recommendations for the Next Generation of the Bar Examination 4 (2021), https://nextgenbarexam.ncbex.org/themencode-pdf-viewer/?file=https://nextgenbarexam.ncbex.org/wp-content/uploads/TTF-Next-Gen-Bar-Exam-Recommendations.pdf#zoom=auto&pagemode=none [https://perma.cc/F8V3-QAMQ] [hereinafter NCBE TTF Overview of Recommendations].

  33. See Nat’l Conf. Bar Exam’rs, Bar Exam Content Scope 4 (May 2023), https://nextgenbarexam.ncbex.org/pdfviewer/ncbe-nextgen-content-scope-may-24-2023/ [https://perma.cc/7G7L-43JB] [hereinafter NCBE Bar Exam Content Scope].

  34. Roy Stuckey et al., Best Practices for Legal Education: A Vision and a Road Map 175 (2007).

  35. Andrea Susnir Funk, The Art of Assessment: Making Outcomes Assessment Accessible, Sustainable, and Meaningful 16 (2017).

  36. Rogelio Lasso, A Blueprint for Using Assessments to Achieve Learning Outcomes and Improve Students’ Learning, 12 Elon L. Rev. 1, 4 & n.3 (2020) (collecting citations of scholarship that do this).

  37. Funk, supra note 35, at 28. Institutional assessment “examines whether a school’s graduates as a whole have achieved competency in broad areas of knowledge, skills, and values that the faculty have deemed essential to the practice of law.” Kelly Terry, Gerald Hess, Emily Grant & Sandra Simpson, Assessment of Teaching and Learning 1 (2021). Program assessment refers to evaluating how students are meeting “learning objectives in multiple courses with cohesive program learning objectives, such as specialization programs and skills training.” Ruth Jones, Assessment in Legal Education: What Is Assessment, and What the *# Does It Have to Do with the Challenges Facing Legal Education?, 45 McGeorge L. Rev. 85, 88 (2013).

  38. Michael Hunter Schwartz, Sophie Sparrow & Gerald Hess, Teaching Law by Design: Engaging Students from the Syllabus to the Final Exam 156 (2d ed. 2017).

  39. Id. at 157–58; see Munro, supra note 10, at 17. For a discussion on how to create learning outcomes for a legal writing course, see Victoria L. VanZandt, Creating Assessment Plans for Introductory Legal Research and Writing Courses, 16 Legal Writing 321, 330–44 (2010).

  40. Schwartz, Sparrow & Hess, supra note 38, at 158–62.

  41. Id. at 162–68.

  42. Id. at 168.

  43. See Am. Bar Ass’n, Standards and Rules of Procedure for Approval of Law Schools, at 26 (2022–23), https://www.americanbar.org/content/dam/aba/administrative/legal_education_and_admissions_to_the_bar/standards/2022-2023/22-23-standard-ch3.pdf [hereinafter ABA Standards]; David I. C. Thomson, What We Do: The Life and Work of a Legal Writing Professor, 50 J. L. & Educ. 170, 175–81 (2021).

  44. The history of learning outcomes and assessment in legal education and the criticisms of legal education’s deficiencies in this area are detailed elsewhere. See, e.g., Melissa N. Henke, When Your Plate Is Already Full: Efficient and Meaningful Outcomes Assessment for Busy Law Schools, 71 Mercer L. Rev. 529, 532–44 (2020); Olympia Duhart, The “F” Word: The Top Five Complaints (and Solutions) About Formative Assessment, 67 J. Legal Educ. 531, 532–36 (2018); Abigail Loftus DeBlasis, Building Legal Competencies: The Montessori Method as a Unifying Approach to Outcomes-Based Assessment in Law Schools, 42 Ohio N. Univ. L. Rev. 1, 17–21 (2015); Denitsa R. Mavrova Heinrich, Teaching and Assessing Professional Communication Skills in Law School, 91 N.D. L. Rev. 99, 100–07 (2015); Samantha A. Moppett, Control-Alt-Incomplete? Using Technology to Assess “Digital Natives”, 12 Chi.-Kent J. Intell. Prop. 77, 83–88 (2013); Anthony Niedwiecki, Teaching for Lifelong Learning: Improving the Metacognitive Skills of Law Students Through More Effective Formative Assessment Techniques, 40 Cap. Univ. L. Rev. 149, 149–52 (2012).

  45. For more information on giving feedback, see generally Elizabeth M. Bloom, A Law School Game Changer: (Trans)formative Feedback, 41 Ohio N. Univ. L. Rev. 227 (2015); Teresa McConlogue, Assessment and Feedback in Higher Education: A Guide for Teachers 118–34 (2020); Daniel Schwarz & Dion Farganis, The Impact of Individualized Feedback on Law Student Performance, 67 J. Legal Educ. 139 (2017).

  46. For more information on rubrics, see generally Sophie M. Sparrow, Describing the Ball: Improve Teaching by Using Rubrics—Explicit Grading Criteria, 2004 Mich. St. L. Rev. 1 (2004); Dannelle D. Stevens & Antonia J. Levi, Introduction to Rubrics: An Assessment Tool to Save Grading Time, Convey Effective Feedback, and Promote Student Learning (2005).

  47. Kristin B. Gerdy, Teacher, Coach, Cheerleader, and Judge: Promoting Learning Through Learner-Centered Assessment, 94 Law Libr. J. 59, 69 (2002); Rogelio A. Lasso, Is Our Students Learning? Using Assessments to Measure and Improve Law School Learning and Performance, 15 Barry L. Rev. 73, 76 (2010).

  48. Norm-referenced assessments evaluate student performance in relation to a predetermined set of other students’ performances and are designed to rank each student in relation to the median, usually based on a curve. Stuckey et al., supra note 34, at 243; Terry, Grant, Hess & Simpson, supra note 37, at 33; McConlogue, supra note 45, at viii.

  49. Criterion-referenced assessments are those designed to measure students’ knowledge against a predetermined standard or criteria. Stuckey et al., supra note 34, at 244. The teacher sets the goals that students must meet and the criterion-referenced assessments measure how close a student is to mastering them. Id. at 244–45.

  50. Direct assessment measures require students to demonstrate their acquired knowledge, such as through exams, clinical performances, or capstone performances. See Stuckey et al., supra note 34, at 267; Terry, Grant, Hess & Simpson, supra note 37, at 17.

  51. Indirect assessment measures involve gathering opinions from students themselves, their peers, or other observers. See Stuckey et al., supra note 34, at 267; Terry, Grant, Hess & Simpson, supra note 37, at 17. Some examples of indirect assessments are surveys, interviews, and focus groups. Id. at 17–18.

  52. “Performance assessments measure students’ ability to perform a task; They measure whether students are able to use previously learned concepts to resolve new legal problems.” Lasso, supra note 47, at 87.

  53. “[C]ognitive assessments test the acquisition of applicable knowledge of the substantive law.” Id. at 87.

  54. See infra Part II(A)(1).

  55. See infra Part II(A)(1).

  56. See infra Part II(A)(2).

  57. Stuckey et al., supra note 34, at 255.

  58. ABA Standards, supra note 43, at 26.

  59. Id.

  60. Munro, supra note 10, at 72–73; Lasso, supra note 47, at 77.

  61. Judith Welch Wegner, Reframing Legal Education’s “Wicked Problems”, 61 Rutgers L. Rev. 867, 886 (2009); Lasso, supra note 47, at 77.

  62. ABA Standards, supra note 43, at 26.

  63. Stuckey et al., supra note 34, at 255.

  64. See Am. Bar Ass’n, Legal Writing Sourcebook 227 (J. Lyn Entrikin & Mary B. Trevor eds., 3d ed. 2020) (“Summative assessment evaluates how much learning has occurred in the class at a particular moment in time.”) [hereinafter Legal Writing Sourcebook].

  65. Lasso, supra note 47, at 77.

  66. Schwartz, Sparrow & Hess, supra note 38, at 174.

  67. Lasso, supra note 47, at 79.

  68. See infra Part II(B).

  69. Lasso, supra note 36, at 48; Schwartz, Sparrow & Hess, supra note 38, at 174.

  70. Schwartz, Sparrow & Hess, supra note 38, at 174.

  71. Lasso, supra note 36, at 39–40.

  72. Jones, supra note 37, at 101; Moppett, supra note 44, at 93. Incorporating multiple summative assessments throughout the semester also minimizes students’ stress, boosts their enthusiasm, and encourages their efforts. Id. at 93.

  73. Schwartz, Sparrow & Hess, supra note 38, at 174, 177.

  74. Stuckey et al., supra note 34, at 260–61.

  75. Lasso, supra note 47, at 78.

  76. Jones, supra note 37, at 107.

  77. Id.

  78. Moppett, supra note 44, at 93; see Jones, supra note 37, at 107.

  79. Stuckey et al., supra note 34, at 239; Terry, Grant, Hess & Simpson, supra note 37, at 22.

  80. Terry, Grant, Hess & Simpson, supra note 37, at 22; see McConlogue, supra note 45, at 24.

  81. McConlogue, supra note 45, at 25.

  82. Munro, supra note 10, at 109.

  83. See infra Part III(C)(3).

  84. Stuckey et al., supra note 34, at 239.

  85. Munro, supra note 10, at 107; Lasso, supra note 47, at 91.

  86. Alison Bone, Ensuring Successful Assessment 6 (Roger Burridge & Tracey Varnava eds., 1999) (e-book).

  87. Stuckey et al., supra note 34, at 235. There are other minor purposes as well: assessments are “used to grade and rank students, to motivate students, to help employers more easily select employees, to provide feedback to students about their progress, to teachers about their effectiveness . . . [and] to enhance the learning experience and improve student performance.” Lasso, supra note 47, at 78.

  88. John A. Lynch, The New Legal Writing Pedagogy: Is Our Pride and Joy a Hobble?, 61 J. Legal Educ. 231, 233 (2011); see Elizabeth Fajans & Mary R. Falk, Against the Tyranny of Paraphrase: Talking Back to Texts, 78 Cornell L. Rev. 163, 174 (1993); Ellie Margolis & Susan L. DeJarnatt, Moving Beyond Product to Process: Building a Better LRW Program, 46 Santa Clara L. Rev. 93, 98 (2005). For information on the beginning of process theory infiltrating the legal writing realm, see generally J. Christopher Rideout & Jill J. Ramsfield, Legal Writing: A Revised View, 69 Wash. L. Rev. 35, 51–56 (1994); Nancy Soonpaa, Using Composition Theory and Scholarship to Teach Legal Writing More Effectively, 3 Legal Writing 81 (1997); Jo Anne Durako, Kathryn M. Stanchi, Diane Penneys Edelman, Brett M. Amdur, Lorray S.C. Brown & Rebecca L. Connelly, From Product to Process: Evolution of a Legal Writing Program, 58 U. Pitt. L. Rev. 719 (1997).

  89. See Legal Writing Sourcebook, supra note 64, at 239–40 (listing possible formative assessments in legal writing courses).

  90. Thomson, supra note 43, at 187 (“The final memo is where most of the grade weight comes in the fall semester of [legal writing].”). Ellie Margolis and Susan DeJarnatt detail Temple Law School’s approach to teaching 1L legal writing, which is typical of most legal writing courses. Margolis & DeJarnatt, supra note 88, at 100–07. In both the fall and spring, students’ grades rely solely on the fall memo and the spring brief. Id. at 105, 106.

  91. See Thomson, supra note 43, at 187; Margolis & DeJarnatt, supra note 88, at 105–06 (stating that 1L students’ fall grades in their school’s legal writing program were based exclusively on the final draft of a memo); see Legal Writing Sourcebook, supra note 64, at 134 (stating that in the first semester, most LRW professors “assign little weight” to early writing assignments, but may assign one or two longer memorandum assignments later in the semester). According to the 2020–21 ALWD/LWI Survey, in courses focusing principally on objective legal analysis and writing (typically a 1L course taught in the fall), about 37% of professors did not assign a grade that counted towards the final course grade to a first draft of a major assignment, but did assign a grade to the final draft, and about 35% assigned grades to both drafts. ALWD & LWI, ALWD/LWI Legal Writing Survey, 2020–2021: Report of the Individual Survey 34 (2021), available at https://www.lwionline.org/sites/default/files/2020-2021-ALWD-and-LWI-Individual-Survey-report-FINAL.pdf [https://perma.cc/68NQ-UMWR] [hereinafter ALWD/LWI Individual Survey]; see also Legal Writing Sourcebook, supra note 64, at 446–47 (stating that some LRW professors “give interim draft grades with little or no value” but suggesting that LRW professors grade the interim drafts).

  92. Thomson, supra note 43, at 203 (stating a typical slate of assessments would be “six citation exercises, three email memos, and two major briefs, one addressed to a trial court and one addressed to an appellate court”); Legal Writing Sourcebook, supra note 64, at 135 (stating most LRW programs require one or two trial or appellate briefs and an oral argument in the spring semester). Thirty-three percent of professors responding to the 2020–21 ALWD/LWI Individual Survey assigned a grade only to the final draft of their large writing assignments, and 39% assigned a grade to both drafts. ALWD/LWI Individual Survey, supra note 91, at 34; see Margolis & DeJarnatt, supra note 88, at 105–06 (stating that 1L students’ spring grades in their school’s legal writing program were based exclusively on a final draft of a brief). During the spring, professors may also grade oral arguments. See generally Rachel G. Stabler, All Rise: Pursuing Equity in Oral Argument Evaluation, 101 Neb. L. Rev. 438, 465–67 (2023) (discussing various approaches to grading oral arguments).

  93. See Legal Writing Sourcebook, supra note 64, at 133–35, 239–40; Margolis & DeJarnatt, supra note 88, at 100–07 (detailing all the assessments and feedback given to students over the entire year in their 1L legal writing courses). Thirty-three percent of surveyed professors indicated that they assigned additional types of writing assignments; they reported that these additional assessments were not major writing assignments (or pieces of them), but were designed to let the teachers see the students’ research and writing process and their critical thinking apart from memo writing (e.g., billing entries; performance tests; case briefs or charts; outlines; client interviews; oral reports; reflection assignments; research charts, reports, or journals; citation exercises; multiple-choice or short-answer assessments). ALWD/LWI Individual Survey, supra note 91, at 37. It is unclear from the data if these assignments were graded.

  94. See Niedwiecki, supra note 44, at 169.

  95. Id. at 182 (“If students successfully complete an assignment but arrive at the end product improperly, the students are not likely to correct the learning process that led to successful completion. Without understanding the internal thinking of the students, the professor is unable to correct any process errors.”).

  96. See Margolis & DeJarnatt, supra note 88, at 107–31.

  97. See Maria Diaz, ChatGPT v. Bing Chat v. Google Bard: Which Is the Best AI Chatbot?, ZDNet (Sept. 29, 2023), https://www.zdnet.com/article/chatgpt-vs-bing-chat-vs-google-bard-which-is-the-best-ai-chatbot/ [https://perma.cc/58HM-ZB6R].

  98. GPT-4, supra note 20.

  99. Mearian, supra note 11.

  100. Kurt Muehmel, What Is a Large Language Model, the Tech Behind ChatGPT?, Dataiku (June 7, 2023), https://blog.dataiku.com/large-language-model-chatgpt [https://perma.cc/B3ER-LALP]; How Should AI Systems Behave, and Who Should Decide?, OpenAI (Feb. 16, 2023), https://openai.com/blog/how-should-ai-systems-behave [https://perma.cc/4ZTK-CC6V] [hereinafter How Should AI Systems Behave].

  101. Kevin Roose & Cade Metz, How to Become an Expert on A.I., N.Y. Times (Apr. 4, 2023), https://www.nytimes.com/article/ai-artificial-intelligence-chatbot.html [https://perma.cc/KB4B-EDUW].

  102. Muehmel, supra note 100.

  103. Id.

  104. Id.

  105. Id.

  106. Id.

  107. See id.

  108. How Should AI Systems Behave, supra note 100.

  109. Muehmel, supra note 100.

  110. Hughes, supra note 13.

  111. See Adrian Tam, What Are Large Language Models, Machine Learning Mastery (July 20, 2023), https://machinelearningmastery.com/what-are-large-language-models [https://perma.cc/5JN8-6YLL]; Muehmel, supra note 100.

  112. How Should AI Systems Behave, supra note 100; Hughes, supra note 13.

  113. How Should AI Systems Behave, supra note 100.

  114. Muehmel, supra note 100; Kevin Roose, How Does ChatGPT Really Work?, N.Y. Times (Mar. 28, 2023), https://www.nytimes.com/2023/03/28/technology/ai-chatbots-chatgpt-bing-bard-llm.html [https://perma.cc/74VK-MEUQ].

  115. See Muehmel, supra note 100; Roose, supra note 114.

  116. Roose, supra note 114.

  117. Hughes, supra note 13; How Should AI Systems Behave, supra note 100.

  118. How Should AI Systems Behave, supra note 100.

  119. Hughes, supra note 13.

  120. How Should AI Systems Behave, supra note 100.

  121. Id.

  122. Id.

  123. Hughes, supra note 13.

  124. Tanay Varshney & Annie Surla, An Introduction to Large Language Models: Prompt Engineering and P-Tuning, Nvidia Developer (Apr. 26, 2023), https://developer.nvidia.com/blog/an-introduction-to-large-language-models-prompt-engineering-and-p-tuning/#:~:text=Prompting LLMs,intended use of the model [https://perma.cc/8883-FXLT].

  125. Muehmel, supra note 100.

  126. Mearian, supra note 11.

  127. Muehmel, supra note 100.

  128. Id.

  129. See id.

  130. Id.

  131. Id. To learn how to use ChatGPT, see Diaz, supra note 17 and Jessica Lau, How to Use ChatGPT, Zapier (Oct. 17, 2023), https://zapier.com/blog/how-to-use-chatgpt/#manage-chatgpt-data [https://perma.cc/69X9-556V].

  132. GPT-4, supra note 20; Introducing ChatGPT, supra note 12; Introducing ChatGPT Enterprise, OpenAI (Aug. 28, 2023), https://openai.com/blog/introducing-chatgpt-enterprise [https://perma.cc/V54D-TSHE].

  133. Introducing ChatGPT Enterprise, supra note 132.

  134. Tam, supra note 111.

  135. Muehmel, supra note 100.

  136. Hughes, supra note 13.

  137. Jonathan H. Choi, Kristin E. Hickman, Amy B. Monahan & Daniel Schwarcz, ChatGPT Goes to Law School, 71 J. Legal Educ. 387, 391 (2023), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4335905 [https://perma.cc/XN4N-U25P].

  138. See Roose, supra note 24; see Lea Bishop, A Computer Wrote This Paper: What ChatGPT Means for Education, Research, and Writing (Jan. 26, 2023) (unpublished), https://ssrn.com/abstract=4338981 [https://perma.cc/8VMY-LHEG] (showing prompts and results for questions such as, “Can you make that sound more like I wrote that myself?,” “Can you say that sounding like a lawyer?,” and “Can you explain what it means to be facetious in the way a first-grader might explain it?”).

  139. See Roose, supra note 24.

  140. See Bishop, supra note 138, at 13–14.

  141. Roose, supra note 24.

  142. See infra Part III(A)(3).

  143. Choi, Hickman, Monahan & Schwarcz, supra note 137, at 9.

  144. GPT-4, supra note 20.

  145. Choi, Hickman, Monahan & Schwarcz, supra note 137, at 5.

  146. Id. (“ChatGPT received a B in Constitutional Law (36th out of 40 students), a B- in Employee Benefits (18th out of 19 students), a C- in Taxation (66th out of 67 students), and a C- in Torts (75th out of 75 students).”).

  147. Id.

  148. Id. at 8.

  149. Antonio Pequeño IV, Major ChatGPT Update: AI Program No Longer Restricted to Sept. 2021 Knowledge Cutoff After Internet Browser Revamp, Forbes (Sept. 27, 2023), https://www.forbes.com/sites/antoniopequenoiv/2023/09/27/major-chatgpt-update-ai-program-no-longer-restricted-to-sept-2021-knowledge-cutoff-after-internet-browser-revamp/?sh=6016cc046e01 [https://perma.cc/CK3Z-T2NY].

  150. GPT-4, supra note 20. Google launched its AI chatbot, Google Bard, on May 21, 2023, and it has up-to-date information in its database. Hughes, supra note 13.

  151. GPT-4, supra note 20.

  152. Pequeño, supra note 149. Google Bard and Bing Chat also search through internet pages to provide more up-to-date answers to prompts. See Bing AI: Exploring Bing Chat, an AI-powered Search Engine, Semrush Blog (Aug. 31, 2023), https://www.semrush.com/blog/bing-ai [https://perma.cc/2FMK-QMUP]; Scott Clark, Is Microsoft’s AI-Driven Bing Really Better than ChatGPT?, CMS Wire (Feb. 13, 2023), https://www.cmswire.com/digital-experience/is-microsofts-ai-driven-bing-really-better-than-chatgpt [https://perma.cc/Z67U-RR6G]. When prompted for a search, for example, Bing Chat will respond not just with a list of applicable websites but with an answer to the prompt synthesized from internet sources with citations to URLs. Bing AI: Exploring Bing Chat, an AI-powered Search Engine, supra.

  153. Pequeño, supra note 149.

  154. See Introducing ChatGPT, supra note 12; GPT-4, supra note 20.

  155. GPT-4, supra note 20.

  156. Adam Pasick, Artificial Intelligence Glossary: Neural Networks and Other Terms Explained, N.Y. Times (Mar. 27, 2023), https://www.nytimes.com/article/ai-artificial-intelligence-glossary.html [https://perma.cc/DLR6-ZB7Z].

  157. See generally Ashley Binetti Armstrong, Who’s Afraid of ChatGPT? An Examination of ChatGPT’s Implications for Legal Writing (undated) (unpublished), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4336929 [https://perma.cc/RVG7-YXN9].

  158. Id. at 4–6.

  159. See Opinion & Order on Sanctions, Mata v. Avianca, Inc., No. 1:22-cv-01461-PKC (S.D.N.Y. June 22, 2023), ECF No. 54, https://law.justia.com/cases/federal/district-courts/new-york/nysdce/1:2022cv01461/575368/54 [https://perma.cc/KA29-S5A6].

  160. See Introducing ChatGPT, supra note 12; Muehmel, supra note 100.

  161. See No, You Can’t Cross-check if ChatGPT Is Telling the Truth! Here’s Why, Econ. Times (Feb. 1, 2023, 7:11 PM IST), https://economictimes.indiatimes.com/magazines/panache/no-you-cant-cross-check-if-chatgpt-is-telling-the-truth-heres-why/articleshow/97527446.cms?utm_source=contentofinterest&utm_medium=text&utm_campaign=cpps [https://perma.cc/C6UT-L3D4].

  162. Pranshu Verma & Will Oremus, ChatGPT Invented a Sexual Harassment Scandal and Named a Real Law Prof as the Accused, Wash. Post (Apr. 5, 2023, 2:07 PM EDT), https://www.washingtonpost.com/technology/2023/04/05/chatgpt-lies [https://perma.cc/J798-NSQS].

  163. Choi, Hickman, Monahan & Schwarcz, supra note 137, at 9–10. The empirical study undertaken in that article used the GPT-3 version of ChatGPT. See id. at 3 n.15. GPT-4 is markedly better and may not struggle with these tasks as much.

  164. Id. at 10.

  165. Id. at 11.

  166. See Ludwig Makhyan, ChatGPT vs. Bard vs. Bing: What Are the Differences?, Search Engine J. (Apr. 4, 2023), https://www.searchenginejournal.com/chatgpt-vs-bard-vs-bing/483690/#close [https://perma.cc/Q93F-MDCP].

  167. Steven Vaughan-Nichols, How to Access, Install, and Use AI ChatGPT-4 Plugins (and Why You Should), ZDNet (June 16, 2023, 1:00 p.m. PT), https://www.zdnet.com/article/how-to-access-install-and-use-ai-chatgpt-4-plugins [https://perma.cc/CZW2-YZEW].

  168. ChatGPT Plugins, OpenAI (Mar. 23, 2023), https://openai.com/blog/chatgpt-plugins [https://perma.cc/M2Q5-TK9P].

  169. See id.

  170. AskYourPDF, https://www.askyourpdf.com/?ref=producthunt [https://perma.cc/C52Q-HXFP] (last visited July 26, 2023).

  171. Muehmel, supra note 100.

  172. E.g., Guru, getguru.com (last visited Oct. 21, 2023).

  173. Thomson Reuters Completes Acquisition of Casetext, Inc., Thomson Reuters (Aug. 17, 2023), https://www.thomsonreuters.com/en/press-releases/2023/august/thomson-reuters-completes-acquisition-of-casetext-inc.html [https://perma.cc/FSF9-4JAB].

  174. Meet Your New AI Legal Assistant, Casetext, https://casetext.com/?utm_source=google&utm_medium=paidsearch&utm_campaign=brand-research&utm_content=_&utm_term=casetext&hsa_acc=1447382923&hsa_cam=9936516883&hsa_grp=103587268714&hsa_ad=649626319897&hsa_src=g&hsa_tgt=kwd-319546385514&hsa_kw=casetext&hsa_mt+e&hsa_net=adwords&hsa_ver=3&gad=1&gclid=Cj0KCQjwnf-kBhCnARIsAFlg493rxYwSQGbc3QbHeR9zoxzHksD4oUtqvg52FVcVrqvC5GY_HSjXIYYaAlcpEALw_wcB [https://perma.cc/F5L4-6NAF] (last visited July 1, 2023).

  175. The Legal AI You’ve Been Waiting For, Casetext, https://casetext.com/cocounsel/ [https://perma.cc/V3XK-RJ4U] (last visited July 1, 2023); Thomson Reuters to Acquire Legal AI Firm Casetext for $650 Million, Reuters (June 27, 2023), https://www.reuters.com/markets/deals/thomson-reuters-acquire-legal-tech-provider-casetext-650-mln-2023-06-27 [https://perma.cc/BNY7-CBQT].

  176. The Legal AI You’ve Been Waiting For, supra note 175.

  177. Westlaw Precision Now Has Generative AI, Thomson Reuters, https://legal.thomsonreuters.com/en/c/westlaw/gen-ai-precision-waitlist?gclid=Cj0KCQjwwISlBhD6ARIsAESAmp6QQmWwNNMbv3e6fZtd-ITO0nhGXg8g9iTGaf37fLPQmr9FgU0I20AaAmQZEALw_wcB&searchid=TRPPCSOL/Google/LegalUS_RS_Pan-GTM_AI_Search_NonBrand-All_US/NonBrandAI-All&chl=ppc&cid=3492865&sfdccampaignid=7014O000001BkfQQAS&ef_id=Cj0KCQjwwISlBhD6ARIsAESAmp6QQmWwNNMbv3e6fZtd-ITO0nhGXg8g9iTGaf37fLPQmr9FgU0I20AaAmQZEALw_wcB:G:s&s_kwcid=AL!7944!3!659786024830!e!!g!!artificial intelligence legal research [https://perma.cc/BBB8-94HD] (last visited Nov. 16, 2023).

  178. LexisNexis Announces Launch of Lexis+ AI Commercial Preview, Most Comprehensive Global Legal Generative AI Platform, LexisNexis (May 4, 2023), https://www.lexisnexis.com/community/pressroom/b/news/posts/lexisnexis-announces-launch-of-lexis-ai-commercial-preview-most-comprehensive-global-legal-generative-ai-platform; see Lexis+ AI: Transform Your Legal Work, LexisNexis, https://www.lexisnexis.com/en-us/products/lexis-plus-ai.page?utm_source=google&utm_medium=ppc&utm_term=legal ai&utm_campaign=&gad=1&gclid=Cj0KCQjwho-lBhC_ARIsAMpgModJjvAnUAUqK0A8OHn1RxrTMDs8UxCCDgfFbSioF7CLoV63BmlaiVgaAjc_EALw_wcB [https://perma.cc/WP8E-5YAA] (last visited July 4, 2023).

  179. LexisNexis Announces Launch of Lexis+ AI Commercial Preview, supra note 178.

  180. Lexis+ AI: Transform Your Legal Work, supra note 178.

  181. Uniform Bar Examination, Nat’l Conf. of Bar Exam’rs, https://www.ncbex.org/exams/ube [https://perma.cc/65ZE-CJG7] (last visited June 10, 2023). The jurisdictions that have not adopted the UBE wholesale are California, Delaware, Florida, Georgia, Hawaii, Louisiana, Mississippi, Nevada, South Dakota, Virginia, Wisconsin, and several U.S. territories. Id.

  182. Id.

  183. Id. All but one state also require that applicants for the bar complete the Multistate Professional Responsibility Exam (MPRE), although that test is not affected by the NextGen bar exam and will remain a stand-alone exam that is administered separately. Testing Task Force Final Update, The Bar Exam’r (Spring 2021), https://thebarexaminer.ncbex.org/article/spring-2021/testing-task-force-final-update/ [https://perma.cc/9LFH-4FZ2]. Some jurisdictions that have adopted the UBE also require applicants to take an additional jurisdiction-specific bar examination component. UBE Local Components: UBE Pre-Admission Jurisdiction-Specific Law Component Requirements, Nat’l Conf. of Bar Exam’rs, https://www.ncbex.org/exams/ube/score-portability/local-components/ [https://perma.cc/5P9N-GVQE] (last visited June 10, 2023).

  184. Nat’l Conf. of Bar Exam’rs Testing Task Force, Testing Task Force Phase 1 Listening Sessions Executive Summary, in Your Voice: Stakeholder Thoughts About the Bar Exam: Phase 1 Report of the Testing Task Force 1 (Aug. 2019), https://nextgenbarexam.ncbex.org/themencode-pdf-viewer/?file=https://nextgenbarexam.ncbex.org/wp-content/uploads/FINAL-Listening-Session-Executive-Summary-with-Appendices-2.pdf#zoom=auto&pagemode=none [https://perma.cc/M6A5-2KT7] [hereinafter Phase 1 Report]; Testing Task Force Quarterly Update, The Bar Exam’r (Fall 2018), https://thebarexaminer.ncbex.org/article/fall-2018/testing-task-force-quarterly-update/ [https://perma.cc/NL5X-UWXS].

  185. Testing Task Force Quarterly Update, The Bar Exam’r (Summer 2018), https://thebarexaminer.ncbex.org/article/summer-2018/testing-task-force [https://perma.cc/9BXP-BEVX].

  186. See Implementing the Next Generation of the Bar Exam, 2022–2026, NextGen Bar Exam Future (Dec. 2022), https://nextgenbarexam.ncbex.org/about/implementation-timeline [https://perma.cc/EQ9D-AQZR]. The information on the NextGen bar exam in this article is current as of September 2023 due to the Journal of the Legal Writing Institute’s publishing schedule.

  187. Nat’l Conf. of Bar Exam’rs, Understanding the Uniform Bar Examination 5 (2023), www.ncbex.org/sites/default/files/2023-09/NCBE_Understanding_the_UBE.pdf [https://perma.cc/R84M-YJ6B].

  188. UBE Scores, Nat’l Conf. of Bar Exam’rs, https://www.ncbex.org/exams/ube/scores/ [https://perma.cc/239Y-8AMU] (last visited June 10, 2023).

  189. Preparing for the MBE: Test Format, Nat’l Conf. of Bar Exam’rs, https://www.ncbex.org/exams/mbe/preparing/ [https://perma.cc/2BHX-KMK9] (last visited June 10, 2023).

  190. Preparing for the MEE: Test Format, Nat’l Conf. of Bar Exam’rs, https://www.ncbex.org/exams/mee/preparing/ [https://perma.cc/73UV-5YHS] (last visited June 10, 2023).

  191. MEE Subject Matter Outline, Nat’l Conf. of Bar Exam’rs, https://www.ncbex.org/sites/default/files/2023-01/MEE_Subject_Matter_Outline_2023.pdf [https://perma.cc/Q2R4-WKAP] (last visited Nov. 16, 2023).

  192. Preparing for the MPT: Test Format, Nat’l Conf. of Bar Exam’rs, https://www.ncbex.org/exams/mpt/preparing-mpt [https://perma.cc/73D5-5VCA] (last visited Nov. 16, 2023).

  193. Id.

  194. Id.

  195. Id.

  196. Id.

  197. MPT Skills Needed, Nat’l Conf. of Bar Exam’rs, https://www.ncbex.org/sites/default/files/2023-01/MPT_Skills_Tested_2023.pdf [https://perma.cc/5JJ4-YPQG] (last visited Nov. 16, 2023).

  198. Phase 1 Report, supra note 184, at 1.

  199. Nat’l Conf. of Bar Exam’rs Testing Task Force, Final Report of the Testing Task Force 2 (Apr. 2021), https://nextgenbarexam.ncbex.org/wp-content/uploads/TTF-Final-Report-April-2021.pdf#zoom=auto&pagemode=none [https://perma.cc/Q76P-2RER] [hereinafter Final Report].

  200. Nat’l Conf. of Bar Exam’rs Testing Task Force, Testing Task Force Phase 2 Report: 2019 Practice Analysis 1 (Mar. 2020), https://nextgenbarexam.ncbex.org/themencode-pdf-viewer/?file=https://nextgenbarexam.ncbex.org/wp-content/uploads/TestingTaskForce_Phase_2_Report_031020.pdf#zoom=auto&pagemode=none [https://perma.cc/AP5W-JFA2].

  201. Id. at 4.

  202. Id.

  203. Id. at 5.

  204. See id.

  205. Nat’l Conf. of Bar Exam’rs Testing Task Force, Phase 3 Report of the Testing Task Force: Blueprint Development Committee and Test Design Committee Meetings 1 (Nov. 2020), https://nextgenbarexam.ncbex.org/themencode-pdf-viewer/?file=https://nextgenbarexam.ncbex.org/wp-content/uploads/TTF-Phase-3-Report-110420.pdf#zoom=auto&pagemode=none [https://perma.cc/Y9RE-RLYR].

  206. Id. at 4.

  207. Id. at 5.

  208. NCBE Board of Trustees Votes to Approve Testing Task Force Recommendations, The Bar Exam’r (Jan. 28, 2021), https://ncbex.org/news-resources/ncbe-board-trustees-votes-approve-testing-task-force-recommendations [https://perma.cc/8L8V-WBZZ].

  209. NCBE TTF Overview of Recommendations, supra note 32, at 3.

  210. Id. at 24. Originally, the NCBE said it would test only Civil Procedure, Contract Law, Evidence, Torts, Business Associations, Constitutional Law, Criminal Law and Constitutional Protections Impacting Criminal Proceedings, and Real Property. Final Report, supra note 199, at 21. The NCBE later announced that it would add Family Law back as a subject matter beginning in July 2028. NCBE Announces Update to NextGen Exam Content, Extends Availability of Current Bar Exam, Nat’l Conf. Bar Exam’rs (Oct. 25, 2023), https://www.ncbex.org/news-resources/update-nextgen-exam-content-extends-availability [https://perma.cc/VQ63-LCYB] [hereinafter NCBE Subject Matter and UBE Updates].

  211. NCBE TTF Overview of Recommendations, supra note 32, at 4.

  212. Id.

  213. Id. at 3.

  214. NextGen Bar Exam Sample Questions, NextGen Bar Exam Future, https://nextgenbarexam.ncbex.org/nextgen-sample-questions/#itemset [https://perma.cc/L2EN-SN8J] (last visited July 26, 2023).

  215. NCBE TTF Overview of Recommendations, supra note 32, at 3.

  216. See id.

  217. See NextGen Bar Exam Sample Questions, supra note 214.

  218. Id.

  219. FAQs About Recommendations, NextGen Bar Exam Future, https://nextgenbarexam.ncbex.org/faqs/ [https://perma.cc/8JKT-NHKE] (last visited June 11, 2023); NCBE Subject Matter and UBE Updates, supra note 210.

  220. See Bar Exams, Am. Bar Ass’n, https://www.americanbar.org/groups/legal_education/resources/bar-admissions/bar-exams/ [https://perma.cc/5ZUN-YN8F] (last visited June 11, 2023); Judith Gundersen, Danette McKinley, Marilyn Wellington & D. Benjamin Barros, Nat’l Conf. Bar Exam’rs, The Next Generation of the Bar Exam (Jan. 5, 2023) (presentation attended by author; PowerPoint slides on file with author).

  221. See Christine Charnosky, Current Bar Exam Will Sunset After July 2027 Administration, Law.com (Sept. 11, 2023, 6:00 PM), https://www.law.com/2023/09/11/current-bar-exam-will-sunset-after-july-2027-administration//?slreturn=20230921195912 [https://perma.cc/8XW3-3SKQ].

  222. NCBE Subject Matter and UBE Updates, supra note 210.

  223. FAQs About Recommendations, supra note 219.

  224. NCBE Subject Matter and UBE Updates, supra note 210.

  225. NCBE Bar Exam Content Scope, supra note 33.

  226. Final Report, supra note 199, at 21.

  227. NCBE Bar Exam Content Scope, supra note 33, at 4 (emphases added).

  228. Id.

  229. See id.

  230. Id. (emphasis added).

  231. Id.

  232. Id.

  233. See Latest Updates, OpenAI, https://openai.com/blog [https://perma.cc/D6HM-YKWB] (last visited Sept. 4, 2023).

  234. See ChatGPT Plugins, supra note 168.

  235. Introducing ChatGPT Enterprise, OpenAI (Aug. 28, 2023), https://openai.com/blog/introducing-chatgpt-enterprise [https://perma.cc/MT5R-UEFS]; New Ways to Manage Your Data in ChatGPT, OpenAI (Apr. 25, 2023), https://openai.com/blog/new-ways-to-manage-your-data-in-chatgpt [https://perma.cc/U5YX-RRPD] (“We are also working on a new ChatGPT Business subscription for professionals who need more control over their data as well as enterprises seeking to manage their end users. ChatGPT Business will follow our API’s data usage policies, which means that end users’ data won’t be used to train our models by default. We plan to make ChatGPT Business available in the coming months.”).

  236. Pequeño, supra note 149.

  237. Implementation Timeline, Nat’l Conf. Bar Exam’rs (Dec. 2022), https://nextgenbarexam.ncbex.org/about/implementation-timeline [https://perma.cc/6H5W-7G6J].

  238. NextGen Bar Exam Sample Questions, supra note 214.

  239. Marilyn J. Wellington, The Next Generation of the Bar Exam: Quarterly Update, The Bar Exam’r (Summer 2023), https://thebarexaminer.ncbex.org/article/summer-2023/next-generation-of-the-bar-exam-sum23 [https://perma.cc/4ZPW-YLWE].

  240. Id.; Implementation Timeline, supra note 237.

  241. See FAQs About Recommendations, supra note 219.

  242. See Terry, supra note 2.

  243. Kalley Huang, Alarmed by A.I. Chatbots, Universities Start Revamping How They Teach, N.Y. Times (Jan. 16, 2023), https://www.nytimes.com/2023/01/16/technology/chatgpt-artificial-intelligence-universities.html [https://perma.cc/KUD8-9SFE].

  244. Jessica Blake, Student and Faculty Perspectives on Digital Learning Differ, Inside Higher Ed (June 21, 2023), https://www.insidehighered.com/news/tech-innovation/teaching-learning/2023/06/21/student-and-faculty-perspectives-digitallearning?utm_source=Inside+Higher+Ed&utm_campaign=aa152acb58-DNU_2021_COPY_02&utm_medium=email&utm_term=0_1fcbc04421-aa152acb58-237252373&mc_cid=aa152acb58&mc_eid=10505b9795 [https://perma.cc/AN3F-MEK3]; see Bharadwaj, Shaw, NeJame, Martin, Janson & Fox, supra note 2, at 20.

  245. Bharadwaj, Shaw, NeJame, Martin, Janson & Fox, supra note 2, at 19.

  246. Id. at 20 (quoting Dr. Andy Pennock, Associate Professor of Public Policy and Co-chair of University of Virginia’s Generative AI Teaching and Learning Taskforce).

  247. Id. at 22–23.

  248. Mary Jo Madda, What Higher Ed Gets Wrong About AI Chatbots—From the Student Perspective, Edsurge (May 15, 2023), https://www.edsurge.com/news/2023-05-15-what-higher-ed-gets-wrong-about-ai-chatbots-from-the-student-perspective [https://perma.cc/S9KH-G8XH].

  249. See id.

  250. Id.

  251. Bharadwaj, Shaw, NeJame, Martin, Janson & Fox, supra note 2, at 23.

  252. LexisNexis, Generative AI & the Legal Profession: 2023 Survey Report 3 (Apr. 2023).

  253. Id. at 5.

  254. Serena Wellen, Learning the Law with AI: Why Law School Students Are Tentative About Using ChatGPT, LawNext (June 2, 2023), https://directory.lawnext.com/library/learning-the-law-with-ai-why-law-school-students-are-tentative-about-using-chat-gpt [https://perma.cc/W37H-RMU6].

  255. See Huang, supra note 243.

  256. Wellen, supra note 254.

  257. See Terry, supra note 2.

  258. Kirsten K. Davis, Prompt Engineering for ChatGPT Can Improve Your Legal Writing—Even if You Never Use ChatGPT, Appellate Advocacy Blog (Apr. 6, 2023), https://lawprofessors.typepad.com/appellate_advocacy/2023/04/prompt-engineering-for-chatgpt-can-improve-your-legal-writingeven-if-you-never-use-chatgpt.html.

  259. Id.

  260. See Huang, supra note 243.

  261. Terry, supra note 2.

  262. Id.

  263. Id.

  264. Id.

  265. See Daniel Schwarcz & Jonathan H. Choi, AI Tools for Lawyers: A Practical Guide, 108 Minn. L. Rev. __ (forthcoming 2023) (manuscript at 21–30), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4404017 [https://perma.cc/6A8Y-NWDR].

  266. See id.

  267. See Andrew Perlman, The Implications of ChatGPT for Legal Services and Society 7–8 (Mar. 10, 2023) (unpublished manuscript), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4294197.

  268. See id. at 8–10; Schwarcz & Choi, supra note 265, at 32–35.

  269. See Perlman, supra note 267, at 10–12.

  270. See generally Armstrong, supra note 157.

  271. See generally id.

  272. See generally id.

  273. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le & Denny Zhou, Chain-of-Thought Prompting Elicits Reasoning in Large Language Models 8–9 (2023), available at https://arxiv.org/pdf/2201.11903.pdf [https://perma.cc/6MV4-JTTQ]. But see Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo & Yusuke Iwasawa, Large Language Models Are Zero-Shot Reasoners (Jan. 29, 2023), available at https://arxiv.org/pdf/2205.11916.pdf [https://perma.cc/BR5F-W224] (reviewing various ways to construct prompts to elicit the best reasoning from LLMs).

  274. Wei, Wang, Schuurmans, Bosma, Ichter, Xia, Chi, Le & Zhou, supra note 273, at 9.

  275. Perlman, supra note 267, at 12–17.

  276. See id.

  277. Schwarcz & Choi, supra note 265 (manuscript at 8–11).

  278. See Choi, Hickman, Monahan & Schwarcz, supra note 137, at 393–94.

  279. Id. at 394.

  280. Hughes, supra note 13.

  281. See Michael A. Kaplan, Raymond S. Cooper & Ruth Fong Zimmerman, Lawyers and ChatGPT: Best Practices, Lexis+ (Apr. 30, 2023), https://plus.lexis.com/document/lpadocument?pddocfullpath=%2Fshared%2Fdocument%2Fanalytical-materials%2Furn%3AcontentItem%3A684F-0F01-F528-G091-00000-00&pdsourcegroupingtype=&pdcontentcomponentid=500749&pdisurlapi=true&pdmfid=1530671&crid=c5381d20-3d6a-4779-be59-e269fa1e3206.

  282. Kaplan, supra note 281; Schwarcz & Choi, supra note 265, at 10–18.

  283. Perlman, supra note 267, at 5–6.

  284. Schwarcz & Choi, supra note 265, at 24–27.

  285. Id. at 28–33.

  286. Id. at 34.

  287. See id. at 35.

  288. Tiffany Hsu & Steven Lee Myers, Another Side of the A.I. Boom: Detecting What A.I. Makes, N.Y. Times (May 19, 2023), https://www.nytimes.com/2023/05/18/technology/ai-chat-gpt-detection-tools.html [https://perma.cc/DNG4-2DDX].

  289. Terry, supra note 2.

  290. Honorlock, supra note 16, at 9–15.

  291. Hsu & Myers, supra note 288.

  292. Damir Mujezinovic, AI Content Detectors Don’t Work, and That’s a Big Problem, Make Use Of (May 11, 2023), https://www.makeuseof.com/ai-content-detectors-dont-work.

  293. Id.

  294. Ian Sample, Programs to Detect AI Discriminate Against Non-native English Speakers, Shows Study, Guardian (July 10, 2023, 11:00 AM EDT), https://www.theguardian.com/technology/2023/jul/10/programs-to-detect-ai-discriminate-against-non-native-english-speakers-shows-study [https://perma.cc/MQ78-6A58].

  295. See What Is GPTZero?, GPTZero, https://gptzero.me/faq [https://perma.cc/XN3Y-BE9U] (last visited Nov. 27, 2023) (“GPTZero is a classification model that predicts whether a document was written by a large language model such as ChatGPT, providing predictions on a sentence, paragraph, and document level. GPTZero was trained on a large, diverse corpus of human-written and AI-generated text, with a focus on English prose.”). Other programs include ChatZero, developed by a Princeton University student, and an AI-text detector developed by Turnitin. Kyle Wiggers, Most Sites Claiming to Catch AI-Written Text Fail Spectacularly, TechCrunch (Feb. 16, 2023, 1:59 PM CST), https://techcrunch.com/2023/02/16/most-sites-claiming-to-catch-ai-written-text-fail-spectacularly/ [https://perma.cc/KU37-AMM7].

  296. What Is GPTZero?, supra note 295.

  297. James LePage, The Best Tools to Detect AI Content in 2023—Tested for Accuracy, Isotropic Blog (Aug. 6, 2023), https://isotropic.co/best-tools-to-detect-ai/#:~:text=Originality AI&text=In August 2023%2C they released,even quicker and more accurate.&text=Originality AI’s accuracy is industry,scores it as 100%25 AI. [https://perma.cc/998A-8RSS].

  298. Wiggers, supra note 295.

  299. Hsu & Myers, supra note 288.

  300. Jason Nelson, OpenAI Quietly Shuts Down Its AI Detection Tool, Decrypt (July 24, 2023), https://decrypt.co/149826/openai-quietly-shutters-its-ai-detection-tool.

  301. Wiggers, supra note 295.

  302. Id.

  303. Anna Rubenstein, Turnitin’s New AI Detection Causes Issues for BU Students, Daily Free Press (Boston U., Boston, MA) (Apr. 25, 2023, 10:11 PM), https://dailyfreepress.com/2023/04/25/turnitins-new-ai-detection-causes-issues-for-bu-students [https://perma.cc/5PES-JFHJ].

  304. Vivek Naskar, AI Detectors Failing to Differentiate Between AI Generated and Human Generated Content, Git Connected (June 13, 2023), https://levelup.gitconnected.com/ai-detectors-failing-to-differentiate-between-ai-generated-and-human-generated-content-ae878baa69c3 [https://perma.cc/6CK2-AL8F].

  305. Id.

  306. See Geoffrey A. Fowler, We Tested a New ChatGPT-Detector for Teachers. It Flagged an Innocent Student., Wash. Post (Apr. 3, 2023, 9:47 AM EDT), https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-turnitin [https://perma.cc/8NQ9-JWXW].

  307. Annie Chechitelli, AI Writing Detection Update from Turnitin’s Chief Production Officer, Turnitin (May 23, 2023), https://www.turnitin.com/blog/ai-writing-detection-update-from-turnitins-chief-product-officer [https://perma.cc/CY7D-XFXY] (“As a result of this additional testing, we’ve determined that in cases where we detect less than 20% of AI writing in a document, there is a higher incidence of false positives.”).

  308. See id.

  309. Id.

  310. See Geoffrey A. Fowler, What to Do When You’re Accused of AI Cheating, Wash. Post (Aug. 14, 2023, 6:00 AM EDT), https://www.washingtonpost.com/technology/2023/08/14/prove-false-positive-ai-detection-turnitin-gptzero/ [https://perma.cc/N23E-52U3].

  311. AI Content Detection: How ChatGPT and AI-Generated Text Is Found, SEO.ai (May 16, 2023), https://seo.ai/blog/ai-content-detection [https://perma.cc/55R3-XTTK].

  312. Id.

  313. Luigi Oppido, The Risks of Using ChatGPT for School: Could You Be Caught?, wikiHow (Oct. 13, 2023), https://www.wikihow.com/Can-You-Be-Caught-Using-Chat-Gpt#:~:text=Yes%2C you can get caught,OriginalityAI being the most common [https://perma.cc/VP7R-LUCX].

  314. Stuart A. Thompson & Tiffany Hsu, How Easy Is It to Fool A.I. Detection Tools?, N.Y. Times (June 28, 2023), https://www.nytimes.com/interactive/2023/06/28/technology/ai-detection-midjourney-stable-diffusion-dalle.html?searchResultPosition=1 [https://perma.cc/R8TV-7D79].

  315. Wiggers, supra note 295.

  316. See id.

  317. See id.; Mujezinovic, supra note 292 (“[M]aking minor tweaks to AI-generated text is enough to pass with flying colors.”).

  318. See Justin Gluska, Can ChatGPT + Quillbot Create Undetectable AI Content?, Gold Penguin (June 4, 2023), https://goldpenguin.org/blog/can-chatgpt-and-quillbot-create-undetectable-ai-content/#:~:text=To20use it%2C visit their,to make it completely undetectable [https://perma.cc/V86L-ZJ6C].

  319. Mujezinovic, supra note 292; Kyle Wiggers, OpenAI’s Attempts to Watermark AI Text Hits Limits, TechCrunch (Dec. 10, 2022, 7:15 AM CST), https://techcrunch.com/2022/12/10/openais-attempts-to-watermark-ai-text-hit-limits [https://perma.cc/EFP8-PM7S].

  320. Wiggers, supra note 319.

  321. Id.

  322. Id.

  323. Natalie O’Neill, Texas Professor Flunked Whole Class After ChatGPT Wrongly Claimed It Wrote Their Papers, N.Y. Post (May 18, 2023), https://nypost.com/2023/05/18/texas-professor-flunked-whole-class-after-chatgpt-wrongly-claimed-it-wrote-their-papers [https://perma.cc/DUB4-RV2B].

  324. Id.

  325. Id.

  326. Id.

  327. E.g., Teaching Center Doesn’t Endorse Any Generative AI Detection Tools, Univ. Times (June 22, 2023), https://www.utimes.pitt.edu/news/teaching-center-doesn-t [https://perma.cc/5AFM-MMSB] (University of Pittsburgh); Michael Coley, Guidance on AI Detection and Why We’re Disabling Turnitin’s AI Detector, Vanderbilt Univ. (Aug. 16, 2023), https://www.vanderbilt.edu/brightspace/2023/08/16/guidance-on-ai-detection-and-why-were-disabling-turnitins-ai-detector [https://perma.cc/DY3X-F7T5]; Kristin Palm, “The Only Winning Move Is Not to Play,” Univ. Mich.-Dearborn (June 5, 2023), https://umdearborn.edu/news/only-winning-move-not-play [https://perma.cc/2GZ3-TLJG].

  328. See generally Melissa Bezanson Shultz, Professor, Please Help Me Pass the Bar Exam: #NEXTGENBAR2026, 71 J. Legal Educ. 141, 169–73 (2021) (urging law schools to consider curricular and assessment changes ahead of the NextGen bar exam); O.J. Salinas, Secondary Courses Taught by Secondary Faculty: A (Personal) Call to Fully Integrate Skills Faculty and Skills Courses into the Law School Curriculum Ahead of the NextGen Bar Exam, 107 Minn. L. Rev. 2663 (2023) (urging law schools to increase skills training and experiential learning in advance of the NextGen bar exam); Wanda M. Temm, Legal Education and NextGen: Recommendations for Transitioning to a New Assessment Model, The Bar Exam’r (Summer 2023), https://thebarexaminer.ncbex.org/article/summer-2023/my-perspective-summer-2023/ [https://perma.cc/9SHG-4T9T] (urging each 1L faculty member to incorporate at least one NextGen bar exam-style assessment into their exam).

  329. NCBE Bar Exam Content Scope, supra note 33, at 4; NextGen Bar Exam Sample Questions, supra note 214.

  330. For a summary of the debate on whether to stop teaching objective writing through the use of a formal objective office memo, see Davis, supra note 6, at 473–74 & nn.5–13, and Thomson, supra note 43, at 188–89 & n.32.

  331. See Bone, supra note 86, at 6; Bharadwaj, Shaw, NeJame, Martin, Janson & Fox, supra note 2, at 24 (“As student use of generative AI tools increases and assessment evolves, institutions will need to support educators in adjusting how writing and other assignments are designed, completed, and evaluated in and out of class.”).

  332. Jones, supra note 37, at 101.

  333. See, e.g., Lasso, supra note 47, at 91–93.

  334. See supra Part II(A)(1).

  335. See Semire Dikli, Assessment at a Distance: Traditional v. Alternative Assessments, 2 Turkish Online J. of Educ. Tech. 13, 18 (2003), www.tojet.net/articles/v2i3/232.pdf [https://perma.cc/QQ73-AC8T]; VanZandt, supra note 39, at 347.

  336. Dikli, supra note 335, at 13–14.

  337. Id. at 14.

  338. See id. at 14–15.

  339. See supra Part II(B).

  340. Mike Hart & Tim Friesner, Plagiarism and Poor Academic Performance—A Threat to the Extension of E-Learning in Higher Education?, 2 Elec. J. e-Learning 89, 94 (2004).

  341. Terry, supra note 2.

  342. Id.

  343. This article does not include all possible assessment tools. See Munro, supra note 10, at 134–35 (listing a variety of small-scale assessments that can be used to gauge student learning).

  344. See NextGen Bar Exam Sample Questions, supra note 214.

  345. Susan M. Case & Beth E. Donahue, Developing High-Quality Multiple-Choice Questions for Assessment in Legal Education, 58 J. Legal Educ. 372, 374 (2008) (“Research has consistently demonstrated a high correlation between scores on essays and multiple-choice questions: examinees who do well on a well-crafted essay exam in a particular field also tend to do well on a well-crafted multiple-choice exam in that field, and those who do poorly on essays likewise tend to do poorly on multiple-choice questions.”).

  346. Danielle Bozin, Felicity Deane & James Duffy, Can Multiple Choice Exams Be Used to Assess Legal Reasoning? An Empirical Study of Law Student Performance and Attitudes, 30 Legal Educ. Rev. 1, 3 (2020) (emphasis omitted). For key concepts regarding multiple-choice exam questions to test legal reasoning, see id. at 7–10 and Case & Donahue, supra note 345, at 374–86.

  347. See Cynthia B. Schmeiser & Catherine J. Welch, Test Development, in Educational Measurement 307, 318 (Robert L. Brennan ed., 4th ed. 2006).

  348. Bozin, Deane & Duffy, supra note 346, at 17. See generally Carolyn V. Williams, The Silent Scream: How to Help Law Students with Dysgraphia, the “Invisible Disability” Few People Know About, __ UMKC L. Rev. ___ (2024) (forthcoming) (describing challenges for students with writing disabilities).

  349. Thomas M. Haladyna, Developing and Validating Multiple-Choice Test Items 3–4 (3d ed. 2004).

  350. See id. at 2.

  351. Id. at 187.

  352. See id. at 97–127. “The typical law school multiple-choice item begins with a fact problem (called the root), followed by a question (called the stem) and then the answer choices (called the options).” Janet W. Fisher, Multiple-Choice: Choosing the Best Options for More Effective and Less Frustrating Law School Testing, 37 Cap. U. L. Rev. 119, 126 (2008). For more detail on these elements, see id. at 126–31. For more in-depth instruction on creating multiple-choice questions, see generally Patti Shank, Write Better Multiple-choice Questions to Assess Learning (2021) and Haladyna, supra note 349.

  353. See Joy E. Herr-Cardillo & Melissa H. Weresh, Team-Based Learning in an Online Teaching Environment, in Law Teaching Strategies for a New Era: Beyond the Physical Classroom 121 & n.**, 124–25 (Tessa L. Dysart & Tracy L.M. Norton eds., 2021). The author of this article has been using TBL in her classroom since 2018.

  354. For a thorough discussion of all the steps of TBL that might be used in legal writing classrooms, see generally Sophie M. Sparrow & Margaret Sova McCabe, Team-Based Learning in Law, 18 Legal Writing 153 (2012) and Herr-Cardillo & Weresh, supra note 353. There is also a helpful diagram of the TBL process in Larry K. Michaelson & Michael Sweet, Team-Based Learning, New Directions for Teaching and Learning, no. 128, Winter 2011, at 41, 42.

  355. Sparrow & McCabe, supra note 354, at 158.

  356. Id.

  357. Id.

  358. See Herr-Cardillo & Weresh, supra note 353, at 126 & n.13.

  359. The following examples are taken from RAQs written by Professor Joy Herr-Cardillo of the University of Arizona James E. Rogers College of Law and that are on file with the author.

  360. Case & Donahue, supra note 345, at 373.

  361. See generally Thomson, supra note 43, at 176–77.

  362. See Dennis D. Kerkman & Andrew T. Johnson, Challenging Multiple-Choice Questions to Engage Critical Thinking, 9 Insight: J. Scholarly Teaching 92, 92–93 (2014); 2 Michael Josephson, Learning & Evaluation in Law School: Test Construction; Scoring, Grading and Ranking; Institutional Policies 382 (1984); David K. Dodd & Linda Leal, Answer Justification: Removing the “Trick” from Multiple-Choice Questions, 15 Teaching Psychol. 37, 37 (1988); see also Bozin, Deane & Duffy, supra note 346, at 18 (“Many students are critical of multiple-choice exams, where they perceive that they could justify their answers to questions if they could just write a few words to explain their answer, rather than selecting one answer amongst viable alternatives.”).

  363. See Kerkman & Johnson, supra note 362, at 93.

  364. Id.

  365. Herr-Cardillo & Weresh, supra note 353, at 125.

  366. Kerkman & Johnson, supra note 362, at 95.

  367. Id.

  368. Id.

  369. See John Dunlosky, Katherine A. Rawson, Elizabeth J. Marsh, Mitchell J. Nathan & Daniel T. Willingham, Improving Students’ Learning with Effective Learning Techniques: Promising Directions from Cognitive and Educational Psychology, 14 Psych. Sci. Public Interest 4, 29 (2013), available at https://journals.sagepub.com/doi/epdf/10.1177/1529100612453266 [https://perma.cc/XC3Y-9864].

  370. Id. at 35 (“On the basis of the evidence described above, we rate practice testing as having high utility.”).

  371. Jaclyn Walsh, Why You Should Take Practice Tests Before an Exam, Peterson’s (Jan. 7, 2020), https://www.petersons.com/blog/why-you-should-take-practice-tests-before-an-exam [https://perma.cc/HWN3-HN7B]; Dunlosky, Rawson, Marsh, Nathan & Willingham, supra note 369, at 29–30.

  372. Dunlosky, Rawson, Marsh, Nathan & Willingham, supra note 369, at 30.

  373. Walsh, supra note 371.

  374. Sara J. Berman, Integrating Performance Tests into Doctrinal Courses, Skills Courses, and Institutional Benchmark Testing: A Simple Way to Enhance Student Engagement While Furthering Assessment, Bar Passage, and Other ABA Accreditation Objectives, 42 J. Legal Pro. 147, 152 & n.21 (2018) (compiling sources arguing for using performance tests as a law school teaching tool).

  375. Kaci Bishop & Alexa Z. Chew, Turducken Legal Writing: Deconstructing the Multi-State Performance Test Genre, 26 Legal Writing 113, 117 (2022).

  376. Id. at 157.

  377. See id. at 159–61.

  378. See ALWD/LWI Individual Survey, supra note 91, at 32–54. The ALWD/LWI Individual Survey does not include assessment options such as a final exam, test, or MPT, and only a handful of the professors’ comments note MPT or quizzes as an “other” assessment. See id. at 52.

  379. See Christine Coughlin, Joan Malmud Rocklin & Sandy Patrick, A Lawyer Writes 74–76, 82 (3d ed. 2018).

  380. See id. at 76–82.

  381. Id. at 85–90.

  382. See Herbert N. Ramy, Moving Students from Hearing and Forgetting to Doing and Understanding: A Manual for Assessment in Law School, 41 Cap. U. L. Rev. 837, 877 (2013).

  383. Id. at 877.

  384. Olympia Duhart, “It’s Not for a Grade”: The Rewards and Risks of Low-Risk Assessment in the High-Stakes Law School Classroom, 7 Elon L. Rev. 491, 493–94 (2015).

  385. Id. at 492.

  386. Id. at 493. For examples of low-risk formative assessments, see id. at 517–26.

  387. Id. at 504.

  388. See id. at 505.

  389. Id.

  390. Id. at 506.

  391. See id.

  392. Legal Writing Sourcebook, supra note 64, at 135 (stating most LRW programs require one or two trial or appellate briefs and an oral argument in the spring semester).

  393. Stabler, supra note 92, at 445.

  394. Of course, as part of this decision, professors should understand the challenges of evaluating oral arguments, such as bias that may affect the evaluation; traditional expectations of oral advocacy that favor “the dress, demeanor, and delivery associated with white men” and thus burden women and people of color; stereotype threat experienced by some marginalized students; and the lack of consistency in the oral argument experience for each student. Stabler, supra note 92, at 448–64.

  395. See generally O.J. Salinas, Beyond the Socratic Class: Helping Prepare Practice-Ready Students by Incorporating Client Interviews in the 1L LRW Class, The Learning Curve at 7 (Summer 2015) (explaining a simulated 1L client interviewing experience in a LRW course).

  396. See generally Cheryl Bratt, Livening Up 1L Year: Moving Beyond Simulations to Engage 1L Students in Live-Client Work, 33 The Second Draft 21 (Sept. 1, 2020) (explaining a 1L client interviewing experience wherein the students work with lawyers to interview clients, present the client’s issue during case rounds discussions, and summarize the interviews in emails to the legal team handling the case).

  397. See NCBE Bar Exam Content Scope, supra note 33, at 2–3.

  398. See generally Kay Elkins-Elliott & Frank W. Elliott, Settlement Advocacy, 11 Tex. Wesleyan L. Rev. 7, 8 (2004) (advocating for teaching settlement negotiations to law students).

  399. See Lasso, supra note 36, at 35.

  400. See Amanda L. Sholtis, Say What?: A How-to Guide on Providing Formative Assessment to Law Students Through Live Critique, 49 Stetson L. Rev. 1, 7 (2019).

  401. Katrina Robinson, Let Them Lead: Professional Identity Formation in Student Conferences, 30 Persps.: Teaching Legal Rsch. & Writing 8, 11 (Spring 2023).

  402. See id. at 11.

  403. For detailed how-to guides to live critique students, see generally Sholtis, supra note 400, at 12–24; Patricia G. Montana, Live and Learn: Live Critiquing and Student Learning, 27 Persps.: Teaching Legal Rsch. & Writing 22 (2019); and Anna P. Hemingway & Amanda L. Smith, Best Practices in Legal Education: How Live Critiquing and Cooperative Work Lead to Happy Students and Happy Professors, 29 The Second Draft 7 (Fall 2016).

  404. Sholtis, supra note 400, at 1.

  405. See Daniel L. Barnett, “Form Ever Follows Function”: Using Technology to Improve Feedback on Student Writing in Law School, 42 Val. U. L. Rev. 755, 765 (2008).

  406. See id.

  407. Id. at 766.

  408. See id.

  409. Id.

  410. See, e.g., Jack Shepherd, Is Document Collaboration in Law Firms Just Too “Old School”?, Medium (Sept. 17, 2020), https://jackwshepherd.medium.com/is-document-collaboration-in-law-firms-just-too-old-school-3371c024ddbf [https://perma.cc/3THC-7J8D] (describing the ways lawyers collaborate on writing).

  411. See Corey Seemiller & Meghan Grace, Generation Z Goes to College 1, 6 (2016); Laura P. Graham, Generation Z Goes to Law School: Teaching and Reaching Law Students in the Post-Millennial Generation, 41 U. Ark. Little Rock L. Rev. 29, 37 & n.59 (2018). Digital natives are people who have used multiple forms of technology from a young age and have never known a world without such technology. See Iantha M. Haight, Digital Natives, Techno-transplants: Framing Minimum Technology Standards for Law School Graduates, 44 J. Legal Pro. 175, 192 (2020) (proposing a list of technological competencies to prepare law students for practice).

  412. Moppett, supra note 44, at 99.

  413. Nathan Polk, Group Assignments Solve Cheating, Crimson White (Feb. 20, 2020), https://thecrimsonwhite.com/62249/opinion/group-assignments-solve-cheating/ [https://perma.cc/9L8L-3DPD].

  414. See Celia Chui, Maryam Kouchaki & Francesca Gino, “Many Others Are Doing It, So Why Shouldn’t I?”: How Being in Larger Competitions Leads to More Cheating, 164 Org. Behav. & Human Decision Processes 102, 111–12 (2021).

  415. Eberly Ctr., What Are the Benefits of Group Work?, Carnegie Mellon Univ., https://www.cmu.edu/teaching/designteach/design/instructionalstrategies/groupprojects/benefits.html [https://perma.cc/J3UD-BDC3] (last visited July 5, 2023).

  416. See J. Christopher Rideout & Jill J. Ramsfield, Legal Writing: The View from Within, 61 Mercer L. Rev. 705, 710–11 (2010).

  417. Kristen E. Murray, Peer Tutoring and the Law School Writing Center: Theory and Practice, 17 Legal Writing 161, 171 (2011) (quoting Lisa Ede, Writing as a Social Process: A Theoretical Foundation for Writing Centers?, 9 Writing Ctr. J. 3, 7 (Spring–Summer 1989)); see also Carol McCrehan Parker, The Signature Pedagogy of Legal Writing, 16 Legal Writing 477, 477 (2010) (“[W]riting is a process and it is social.”); Rideout & Ramsfield, supra note 88, at 56–61.

  418. Katharine Janzen, The Perceived Efficacy of Cooperative Group Learning in a Graduate Program, 12 Canadian J. Scholarship Teaching & Learning, Art. 61, at 1, 1 (2021), https://ojs.lib.uwo.ca/index.php/cjsotl_rcacea/article/view/14206/11415 [https://perma.cc/E5NN-A7XT] (citing Karl A. Smith, Shari D. Sheppard, David W. Johnson & Roger T. Johnson, Pedagogies of Engagement: Classroom-Based Practices, 94 J. Eng’g Educ. 87, 91–92 (2005)). “Cooperative learning generally refers to learning approaches in which peer interaction plays a significant role.” Id. at 2 (quoting Darlene Bay & Parunchana Pacharn, Impact of Group Exams in a Graduate Intermediate Accounting Class, 26 Accounting Educ. 316, 318 (2017)).

  419. Schwartz, Sparrow & Hess, supra note 38, at 6; Vernellia R. Randall, Increasing Retention and Improving Performance: Practical Advice on Using Cooperative Learning in Law Schools, 16 T.M. Cooley L. Rev. 201, 204 (1999); Clifford S. Zimmerman, “Thinking Beyond My Own Interpretation”: Reflections on Collaborative and Cooperative Theory in the Law School Curriculum, 31 Ariz. St. L.J. 957, 988–93 (1999) (collecting and tracing studies).

  420. Janzen, supra note 418, at 1.

  421. Xigui Yang, A Historical Review of Collaborative Learning and Cooperative Learning, 67 TechTrends 718, 723 (2023), available at https://doi.org/10.1007/s11528-022-00823-9.

  422. Why Use Cooperative Learning?, Starting Point: Teaching Entry Level Geoscience (Jan. 23, 2023), https://serc.carleton.edu/introgeo/cooperative/whyuse.html [https://perma.cc/ASF6-FCWW]; see Zimmerman, supra note 419, at 994 (“The cooperative model ‘results in higher achievement than does a competitive goal structure.’”) (quoting Marla Beth Resnick, A Review of Classroom Goal Structures 2 (1981)).

  423. Duhart, supra note 44, at 540; see Robert S. Adler & Ed Neal, Cooperative Learning Groups in Undergraduate and Graduate Contexts, 9 J. Legal Studies Educ. 427, 428–29 (1991); Zimmerman, supra note 419, at 999.

  424. Why Use Cooperative Learning?, supra note 422; see Janzen, supra note 418, at 7.

  425. Why Use Cooperative Learning?, supra note 422.

  426. Munro, supra note 10, at 126.

  427. See id. at 126, 148.

  428. James M. Lang, Small Teaching 142 (2d ed. 2021).

  429. See Rachel Dunn & Richard Glancey, Using Legal Policy and Law Reform as Assessment, in 1 Critical Perspectives on the Scholarship of Assessment and Learning in Law 139, 156 (Alison Bone & Paul Maharg eds., 2019).

  430. See id. at 154.

  431. See id.

  432. Duhart, supra note 384, at 512.

  433. Munro, supra note 10, at 126–27; see Dunn & Glancey, supra note 429, at 155.

  434. Duhart, supra note 384, at 512.

  435. Julie Ross & Diana Donahoe, Lighting the Fires of Learning in Law School: Implementing ABA Standard 314 by Incorporating Effective Formative Assessment Techniques Across the Curriculum, 81 U. Pitt. L. Rev. 657, 675 (2020); McConlogue, supra note 45, at 43–44; James H. McMillan & Jessica Hearn, Student Self-Assessment: The Key to Stronger Student Motivation and Higher Achievement, 87 Educ. Horizons 40, 40–41 (2008).

  436. “[M]etacognition involves actively engaging oneself in examining one’s learning process to understand what strategies are working or not working, what could be done to improve one’s learning, and what adjustments are needed to support learning, all the while showing a willingness and ability to make successful change.” Jennifer A. Gundlach & Jessica R. Santangelo, Teaching and Assessing Metacognition in Law School, 69 J. Legal Educ. 156, 159–60 (2019).

  437. “Self-regulated learning refers to one’s ability to understand and control one’s learning environment,” including the ability to set learning goals, self-monitor, self-instruct, and self-reinforce. Tanya Shuy et al., Ofc. Vocational & Adult Educ., Teaching Excellence in Adult Literacy, TEAL Center Fact Sheet No. 3: Self-Regulated Learning 1 (2010), https://lincs.ed.gov/sites/default/files/3_TEAL_Self%20Reg%20Learning.pdf [https://perma.cc/M8X3-SWCW].

  438. Joi Montiel, Empower the Student, Liberate the Professor: Self-Assessment by Comparative Analysis, 39 S. Ill. U.L.J. 249, 258 (2015); Niedwiecki, supra note 44, at 175, 181–83.

  439. See Gundlach & Santangelo, supra note 436, at 196.

  440. See id. at 156–57.

  441. Self-Assessment, Cornell Univ. Ctr. for Teaching & Innovation, https://teaching.cornell.edu/teaching-resources/assessment-evaluation/self-assessment [https://perma.cc/C5MZ-7LVT] (last visited June 26, 2023).

  442. Katherine A. Gustafson, From Self-Assessment to Professional Effectiveness: Five Steps to Teaching Students to Effectively Self-Assess, 88 Miss. L.J. Supra 49, 60 (2019); see also Setiawan Adi Prasetyo & Aninda Nidhommil Hima, Student-Based Assessment vs. Teacher-Based Assessment: Is There Any Consistency?, 4 ELT-Echo 158, 164 (2019), https://syekhnurjati.ac.id/jurnal/index.php/eltecho/article/view/4948 [https://perma.cc/66LS-W8J7].

  443. Niedwiecki, supra note 44, at 185–86. Dean Anthony Niedwiecki suggests that portfolios consist of four types of self-assessment tools and explains each in depth. Id. at 184–93.

  444. Montiel, supra note 438, at 251.

  445. Id. at 253. Prof. Montiel explains that the use of the term “good” memo is important to distinguish it from a “model” memo or “sample” memo so that students will not view a “good” memo as a template to always copy. Id. at 253 n.19.

  446. Id. at 251.

  447. Id. at 253.

  448. Nanine A.E. van Gennip, Mien S.R. Segers & Harm H. Tillema, Peer Assessment as a Collaborative Learning Activity: The Role of Interpersonal Variables and Conceptions, 20 Learning & Instr. 280, 280 (2010); Ross & Donahoe, supra note 435, at 676; Cassandra L. Hill, Peer Editing: A Comprehensive Pedagogical Approach to Maximize Assessment Opportunities, Integrate Collaborative Learning, and Achieve Desired Outcomes, 11 Nev. L.J. 667, 671 (2011).

  449. McConlogue, supra note 45, at 107.

  450. Ross & Donahoe, supra note 435, at 676; see also van Gennip, Segers & Tillema, supra note 448, at 280.

  451. McConlogue, supra note 45, at 99–100.

  452. See id. at 100–01; van Gennip, Segers & Tillema, supra note 448, at 280.

  453. See McConlogue, supra note 45, at 100, 104.

  454. Hill, supra note 448, at 669, 671–72.

  455. Id. at 673.

  456. Id. at 674.

  457. Id.

  458. Prasetyo & Hima, supra note 442, at 164; see McConlogue, supra note 45, at 102–03; Ross & Donahoe, supra note 435, at 678. For an explanation of how to train students to give peer feedback, see Ross & Donahoe, supra note 435, at 680–83 and McConlogue, supra note 45, at 104–07.

  459. For a more detailed explanation of how to design an effective peer evaluation exercise, see generally Hill, supra note 448.

  460. See McConlogue, supra note 45, at 110.

  461. Hill, supra note 448, at 684–85.

  462. See van Gennip, Segers & Tillema, supra note 448, at 281; see also McConlogue, supra note 45, at 102–03.

  463. Workshop Activity, Moodle.org (May 31, 2023, 10:24), https://docs.moodle.org/402/en/Workshop_activity [https://perma.cc/YVK4-RAGL].

  464. Learn More about Eli Review, Eli Review, https://elireview.com/more_eli/ [https://perma.cc/Q3AY-5BU8] (last visited June 24, 2023).

  465. McConlogue, supra note 45, at 108.

  466. Id.

  467. Moppett, supra note 44, at 100–02; see Haight, supra note 411, at 195–219 (proposing a list of technological competencies to prepare law students for practice).

  468. Generation Z Lawyers—Digital Natives Who Embrace Visual Tools, Artificial Lawyer (Nov. 25, 2019), https://www.artificiallawyer.com/2019/11/25/generation-z-lawyers-digital-natives-who-embrace-visual-tools [https://perma.cc/U2R4-2WQB].

  469. Haight, supra note 411, at 195.

  470. Id.

  471. See id. at 212–14.

  472. See NCBE Bar Exam Content Scope, supra note 33, at 4.

  473. LexisNexis, supra note 252, at 11.

  474. Wellen, supra note 254.

  475. Moppett, supra note 44, at 96.

  476. Id. at 100.

  477. See Melissa Heikkilä, AI Literacy Might Be ChatGPT’s Biggest Lesson for Schools, MIT Tech. Rev. (Apr. 12, 2023), https://www.technologyreview.com/2023/04/12/1071397/ai-literacy-might-be-chatgpts-biggest-lesson-for-schools [https://perma.cc/6PLW-HMMJ].

  478. Shawn Helms & Jason Kieser, Copyright Chaos: Legal Implications of Generative AI, Bloomberg L. (Mar. 2023), https://www.bloomberglaw.com/external/document/XDDQ1PNK000000/copyrights-professional-perspective-copyright-chaos-legal-implic [https://perma.cc/ANT4-9SUU].

  479. Mearian, supra note 11.

  480. Luca CM Melchionna, Bias and Fairness in Artificial Intelligence, N.Y. State Bar Ass’n (June 29, 2023), https://nysba.org/bias-and-fairness-in-artificial-intelligence [https://perma.cc/C86D-L6EW].

  481. When Westlaw and Lexis release GenAI chatbots with LLMs trained on their respective databases, this assessment may need to be adapted. The cost of those proprietary tools may be so expensive, however, that, after law school, not all graduates will have access to them and may turn to products with LLMs that are not trained on legal databases, and thus, this training would still have value.

  482. See Davis, supra note 258; Choi, Hickman, Monahan & Schwarcz, supra note 137, at 398–400.

  483. Choi, Hickman, Monahan & Schwarcz, supra note 137, at 398.

  484. Id. at 398–99.

  485. Id. at 399.

  486. See generally Wei, Wang, Schuurmans, Bosma, Ichter, Xia, Chi, Le & Zhou, supra note 273; Kojima, Gu, Reid, Matsuo & Iwasawa, supra note 273.

  487. Lau, supra note 131 (explaining how to manage an account user’s data and chat history).

  488. Schwarcz & Choi, supra note 265, at 10–18.

  489. Id. at 28–30.

  490. See id. at 30–31.

  491. A podcast is an audio program in digital format that people can download over the internet and listen to on personal devices. Podcast, Wikipedia (last modified Nov. 15, 2023, 23:38 UTC), https://en.wikipedia.org/wiki/Podcast [https://perma.cc/C52Y-DEW5]. Students can create a podcast by recording an audio file using a digital voice recorder or free digital audio editor and recording programs. Id.

  492. For a discussion on how games can help law students’ learning, see generally Daniel M. Ferguson, The Gamification of Legal Education: Why Games Transcend the Langdellian Model and How They Can Revolutionize Law School, 19 Chapman L. Rev. 629 (2016).

  493. Video discussion forums work much the same way as discussion boards in online learning platforms. Students upload a video of themselves discussing topics or answering a prompt. See Using Video Discussion Boards to Increase Student Engagement, Teaching Online Pedagogical Repository (last modified Aug. 10, 2020, 18:48), https://topr.online.ucf.edu/using-video-discussion-boards-to-increase-student-engagement [https://perma.cc/5P4U-TLN3]. Depending on the parameters, other students and the professor can respond with their own video comments. Id. Some popular video discussion forums are Flipgrid on Canvas and VoiceThread on Blackboard.

  494. “A screencast is a digital recording of computer screen output, also known as a video screen capture or a screen recording, often containing audio narration.” Screencast, Wikipedia (last modified Sept. 16, 2023, 4:58 UTC), https://en.wikipedia.org/wiki/Screencast [https://perma.cc/ZT8K-BTHV]. A screencast can capture anything a user can put on a computer screen: a PowerPoint, a website, a document, videos, etc. See id.

  495. Karen J. Sneddon, Square Pegs and Round Holes: Differentiated Instruction and the Law Classroom, 48 Mitchell Hamline L. Rev. 1095, 1131 (2022).

  496. Id.

  497. Duhart, supra note 384, at 514.

  498. Joy E. Herr-Cardillo, Escape the Ordinary: How to Close Out Your Semester with a Challenging “Escape Room” Competition, 33 The Second Draft 34, 34 (Spring 2020).

  499. See id. at 35.

  500. See Jeopardy Labs, https://jeopardylabs.com [https://perma.cc/A6NE-9RYH] (last visited Oct. 22, 2023).

  501. Bryan Adamson, Lisa Brodoff, Marilyn Berger, Anne Enquist, Paula Lustbader & John B. Mitchell, Can the Professor Come Out and Play?—Scholarship, Teaching, and Theories of Play, 58 J. Legal Educ. 481, 498 (2008).

  502. See NCBE Bar Exam Content Scope, supra note 33, at 4.

  503. This article does not address concerns about in-class quizzes or writing assignments and students’ disability accommodations, such as extended time to complete quizzes or tests. See generally Suzanne E. Rowe, Reasonable Accommodations for Unreasonable Requests: The Americans with Disabilities Act in Legal Writing Courses, 12 Legal Writing 3, 8–12 (2006) (discussing accommodation requests in legal writing classrooms).

  504. For a discussion of a writing course that had students research outside of class but write inside class to combat plagiarism, see Hart & Friesner, supra note 340, at 93.

  505. Renee Nicole Allen & Alicia R. Jackson, Contemporary Teaching Strategies: Effectively Engaging Millennials Across the Curriculum, 95 U. Det. Mercy L. Rev. 1, 11 (2017) (advocating for the one-minute paper); Barbara Tyler, Active Learning Benefits All Learning Styles: 10 Easy Ways to Improve Your Teaching, 11 Persps.: Teaching Legal Rsch. & Writing 106, 107–08 (2003) (listing active learning strategies that include the one-minute paper).

  506. See Sneddon, supra note 495, at 1120; Allen & Jackson, supra note 505, at 11. For examples of minute paper questions and answers, see Ramy, supra note 382, at 874–76, and Sneddon, supra note 495, at 1120.

  507. Paul Black & Dylan Wiliam, Assessment and Classroom Learning, 5 Assessment Educ.: Principles, Pol’y & Prac. 7, 35 (1998).

  508. Peter C. Brown, Henry L. Roediger III & Mark A. McDaniel, Make It Stick: The Science of Successful Learning 227 (2014).

  509. Id.

  510. Bharadwaj, Shaw, NeJame, Martin, Janson & Fox, supra note 2, at 28.

  511. Bone, supra note 86, at 4.

  512. See id.