With the advent of new mediums and technologies, writing adapts. Whether clay, parchment, or screen, a writer’s materials affect word choice, prose, and style. These linguistic shifts are natural, making language more straightforward and accessible.
Such a shift is occurring in legal practice.
Email is now a practicing lawyer’s primary means of communicating legal analysis. Rejecting the cost, inefficiencies, and formalities of the twentieth-century memorandum, clients and supervisors now demand documents that were once ten-plus pages be concentrated into emails consisting of just a few paragraphs. The traditional legal memorandum has been succeeded by a quicker, leaner, and cheaper medium: the e-memo.
Despite some scholarly criticism, the rise of e-memos has not made fundamental legal writing skills obsolete or called for less rigorous legal analysis. Quite the opposite. E-memos have heightened the need for attorneys to write crisp, clear, and concise prose quickly. They have crystallized the importance of precise language while underscoring the difficulty of mastering fundamental writing skills. After all, when a writer has even less space to make a point, every word must matter.
As e-memos have grown in prominence, leaders in higher education simultaneously have pushed for law schools to create “practice-ready” graduates. While it is unclear how effective law schools have been at preparing students, legal writing courses have been at the forefront of the internal drive to ensure students can practice law. Legal writing professors are uniquely positioned among first-year professors to create meaningful assignments based on tested, continually refined pedagogy and backed by a national repository of collaborative professionals.
As legal writing continues to lead the first-year curriculum in preparing students for practice, most legal writing programs have made e-memo assignments a perennial exercise. Moreover, scholars have recognized the importance of making e-memo assignments sufficiently rigorous and realistic. It is imperative, therefore, that as we continue to assign e-memos to our students, our pedagogy remains dedicated to merging theory with practice.
Following Kristen Tiscione’s groundbreaking 2006 study describing the significance of e-memos in practice, this Article uses empirical research to contribute to the growing scholarship about e-memos. Further, this Article provides concrete suggestions that promote a sound pedagogical approach for teaching e-memo drafting. Part 1 reviews past scholarship on how to write e-memos and recognizes discrepancies in scholarly advice and textbook samples. Part 2 details a 2018-2019 study that asked over 100 attorneys to rank and evaluate sample e-memos. Part 3 provides the study’s results and statistical analyses. These data reveal a surprising and stark break in which sample e-memos attorneys preferred based on their age, the number of e-memos they write per month, and the size of their practice. Part 4 discusses additional research findings about attorneys’ e-memo preferences and suggests what these results mean for the legal writing classroom. Part 5 then recommends professors adopt substantive e-memo assignments that require in-depth analysis presented with pinpoint concision—what this Article terms “iceberg e-memos.”
1. E-Memos 1.0: A Literature Review
When e-memos first emerged, attorneys were left to build new structures with outdated blueprints. But as law practice embraced the e-memo, textbooks, articles, and websites laid a foundation of advice for working with this new medium. Most of this guidance remains consistent, with scholars agreeing the e-memo format must be “flexible.” Rightly, scholars have emphasized that an organic writing process must reign over a prescribed product. After all, legal writing pedagogy targets the current and future writing process, not redlining a product ex post facto. There is no boilerplate memorandum that attorneys can fill out like a Mad Libs. Still, however organic, the writing process must lead to a final product, and that product should fit within a familiar genre that meets a reader’s needs and expectations.
Therefore, while each e-memo should be flexible enough to fit its unique question presented, scholars have identified the following best practices for writing e-memos:
Start by restating the issue.
Include an analysis of the law and its application to the facts.
Rely more extensively on explanatory parentheticals than case illustrations.
Conclude with the writer’s recommendations.
Omit repeating the given facts.
Keep the product to approximately one to two traditional pages.
With the above points as a foundation, textbook authors frequently supplement their advice with samples illustrating their suggestions. These samples are vital in showing the expectations of law practice, demonstrating logical organization, and turning abstract concepts into physical specimens students can dissect and internalize. But for samples to be beneficial to the inquiring learner, they must follow best practices and be realistic. Unfortunately, as e-memo pedagogy evolved, academic advice branched in diverse directions. Many samples reveal scholarly inconsistencies in how to write e-memos, with samples presenting differing levels of analysis for ostensibly comparable research questions. Whereas some scholars advise a paradigmatic e-memo might include case illustrations and counter-analyses, others bluntly compare an e-memo to a “telegram” and provide samples answering substantive legal questions with no reference to legal authorities.
Every reader, every law firm, and every question is unique, but the contradictory samples are not merely a similar format stretching to meet distinctive needs; the differences in analytical depth from one text to another make deciphering a pattern among samples untenable. Crucially, the substantive e-memo samples vary in how they apply facts to the law—“the magic moment of law practice” for which attorneys are paid. Most texts only provide the cursory advice that there should be an application if needed.
The texts and samples further disagree about when and how to cite authority. One article recommends that e-memos should “name important statutes, refer to important cases by shorthand, and mention the jurisdiction” but not be “clutter[ed] with formal, full-form citations.” The article continues by recommending writers consider listing full citations at the end of the e-memo. The samples from several other texts echo this advice, with some samples providing no citations and others omitting pinpoint citations or id. citations. One text explicitly espouses this sentiment, suggesting writers include citations for “important authority” but “less frequently.” In contrast, other scholars observe the importance of formal citations in their samples by citing after each sentence needing support.
Samples even disagree on how and when to provide conclusions—with some samples giving the reader answers with reasoning while others offer brusque responses. More importantly, although most texts agree a conclusion should be stated at the beginning of an e-memo, one prominent text contradicts this point, urging writers to question whether to place “bad” news at an e-memo’s conclusion. Other texts do not overtly state this point but offer samples that hold off on answering the research question until the e-memo ends.
Some discrepancies among scholars are to be expected in this new field of study. Moreover, each e-memo is unique to its research question and reader. But all legal analysis should be appropriately unique—including traditional memoranda. When it comes to the widely employed e-memo, students and practitioners need more than varied samples; they need samples that follow best practices. Sound pedagogy depends upon it: Without numerous realistic samples, asking students and new practitioners to write a flexible e-memo is making “an impossible request.” The next stage in teaching e-memos then requires building on our pedagogical foundations with empirical evidence.
2. Brief Overview of Findings and Study Methodology
To continue the ongoing process of updating and formalizing best practices for drafting substantive e-memos, the study set out to discover how practicing attorneys actually write. While I had anecdotal evidence and could continue to call friends in practice for advice, I wanted more concrete findings to build upon the existing foundation. Therefore, I designed this study with three objectives:
To provide attorneys with differing sample e-memos and ask which samples they preferred.
To find any correlations between respondent demographics and sample e-memo preferences.
To collect general information about e-memo habits.
As fully explained in Part 3, the results show that respondents generally favored the e-memo samples that elected depth over brevity. Additionally, respondents disliked the samples with short answers and little reasoning. Thus, it is not terse conclusions with few citations that best represent modern practice. Whether the sample answered a complex client question or a simple statutory issue, respondents wanted e-memos with clear reasoning and robust recommendations.
Yet, despite some unity in preferring the more detailed samples, respondents split in their predilections toward concision. Importantly, the study reveals a surprising division between attorneys who favor longer e-memo samples mirroring traditional memoranda and attorneys who favor concision in an e-memo that still provides a complete and well-reasoned answer. Attorneys who more often value e-memos that follow a traditional format include (1) those over 40, (2) those who write fewer than four e-memos a month, and (3) those who work in private firms of 50 or fewer attorneys. By contrast, the attorneys who value concise prose and organization include (1) those 40 and under, (2) those who write four or more e-memos a month, and (3) those who work in private firms with over 50 attorneys.
As detailed in Part 4, the study’s findings further provide information about respondents’ general preferences and habits regarding e-memos. Of particular interest for those teaching e-memos in the classroom, the responses show writing an effective e-memo means:
Including upfront answers with detailed reasoning;
Having crisp applications with clear recommendations;
Incorporating explanatory parentheticals;
Following proper citation formatting;
Relying on citation signals;
Attaching applicable authority;
Hyperlinking to authority; and
Producing polished documents within 48 hours.
Additionally, the study’s results demonstrate that even with the rise of e-memos, legal writing courses must continue to incorporate traditional memoranda assignments. Scholars have repeatedly remarked on the pedagogical benefits of teaching learners to write traditional memoranda, regardless of whether memoranda are still heavily used by attorneys. The study confirms the need to teach traditional memoranda—not only because these writing assignments are valuable teaching tools but because traditional memoranda are indeed still part of private practice.
Before providing more specifics about the results, however, it is necessary to discuss the study’s methodology. The next portion of this Article overviews the respondent recruitment process and respondent demographic information. The Article then details my methodology when creating the sample e-memos used in the study and compares and contrasts the samples.
2.1 The Respondents
Respondents were a sampling of graduates from Indiana University Robert H. McKinney School of Law and the University of Missouri School of Law, plus a handful of practitioners from other schools who expressed interest in the survey. The first participants identified included attorneys I personally know. The pool of respondents then expanded to attorneys in recent contact with IU McKinney’s Office of Professional Development, those who responded to a general call through the Indianapolis Bar Association, and those who contacted me after being notified about the survey by other respondents. One hundred thirteen attorneys participated.
One hundred out of the 113 respondents identified where they were employed, with the majority of attorneys—64%—working in private practice. Sixteen percent worked in city, county, state, or federal government; 10% worked in the judiciary; 5% worked for a corporation; and 5% marked “other.” Of those 64% of respondents working in private firms, the highest percentage—approximately 36%—worked in firms of over 150 attorneys.
Of the 99 attorneys who indicated their age, 26% were 30 years old or younger; 32% were 31-40; 11% were 41-50; 14% were 51-60; and 16% were over 60.
2.2 The Samples
Creating sample e-memos for attorneys to review first required me to write sample research questions. To ensure better data, I drafted two questions of differing complexity. The first question—Question “x”—was more complicated and asked an associate to write an e-memo analyzing whether a dog qualifies as a “service animal” under the Americans with Disabilities Act (ADA). Based on a problem I created with Anne Alexander, the partner told the associate that the client has Post-Traumatic Stress Disorder and that petting and holding his dog alleviates the client’s symptoms. In addition to the specific facts, to fully answer the partner’s question, a writer needed to rely on a federal regulation and federal cases.
The second question—Question “y”—was more straightforward and required analyzing a basic statutory issue and applying the law to the client’s facts.  Based on a problem created by Melody Daily, the supervisor asked the associate to research and report whether a landowner could bulldoze tombstones located on his property or sell a wrought-iron fence surrounding the cemetery, and, if not, whether the client would be criminally penalized for such actions. The associate needed only to find and analyze statutory authority.
To make the survey manageable for participants, I created four sample e-memos to answer each question. For data purposes, the samples answering the more complex ADA research question are the “x samples”—A-x, B-x, C-x, and D-x. The samples answering the simpler statutory question are the “y samples”—A-y, B-y, C-y, and D-y.
While the four samples in each dataset were substantively similar and contained the same overall conclusions, each sample varied in length, analytical depth, use of citations, and organization. Because the survey’s goal was to test substantive discrepancies in the literature review, the samples did not investigate non-substantive, agreed-upon best practices, such as typography and document design.
To test the discrepancies highlighted in the literature review, each sample roughly reflected one scholar’s or group of scholars’ advice and samples—which are denoted in the chart below. For example, the A Samples (both A-x and A-y) were the longest and most “traditional.” Mirroring a traditional memorandum’s question presented and brief answer, these samples included a bolded heading for the “issue” and the “answer.” The A Samples also had the lengthiest discussions of legal authority: Sample A-x had two case illustrations and an explanatory parenthetical to a third case before applying the facts to the law; Sample A-y block quoted relevant statutory language while the other y-issue samples merely summarized the statute.
Following the advice of, among others, Kristen Tiscione, Charles Calleros, and Kimberly Holst, I created the C Samples in an attempt to balance a comprehensive legal analysis with concise prose and formatting. In this way, the C Samples best model what the Article later describes in-depth as “iceberg e-memos”—e-memos that have analytical depth but tactically omit information the reader already knows or does not need. And, as discussed in Part 3, attorneys generally favored the C Samples because of this strategic concision.
I created all of the samples to be viable options. Each is meant to be an authentic, acceptable e-memo, not a caricature of scholarly advice. Sample A-x, for example, is not overly verbose, pedantic, or mechanical.
The eight samples are attached as appendices, but below is a synopsis of each.
The 113 respondents who agreed to participate were randomly assigned to review either the four more complex e-memos (the ADA or “x-issue” samples) or the four simpler e-memos (the cemetery or “y-issue” samples). I then sent each respondent the appropriate mock research question and its corresponding sample e-memos. After they reviewed the materials, respondents proceeded to an online survey, which began by asking respondents to rank the samples and provide comments about what they liked and disliked about each.
3. Respondent E-Memo Rankings
Whether reviewing the x-issue or y-issue datasets, respondents most often gave the highest rankings to the “iceberg” C Samples and “traditional” A Samples. Directly refuting the numerous textbook samples that do not include explanation of or citation to the law, the results reveal attorneys prefer substantive e-memos to be just that—substantive. Indeed, the results from both datasets show the vast majority of respondents judged the short D Samples as the worst e-memos.
Crucially, although many respondents favored the longer, more traditional A Samples, others criticized the A Samples’ length and formality. Only the D Samples were ranked fourth more often. In fact, higher percentages of respondents gave the A Samples a fourth-place ranking than the B and C Samples combined. The “traditional” A Samples, therefore, more than any other samples, show a schism in respondents’ preferences.
To better understand the data, it was thus necessary to do more than review how many respondents gave a particular sample the highest or lowest rank; each sample needed an aggregate score. This was achieved by assigning each ranking a numerical weight inverse to its rank—the higher the ranking, the more weight it received. Each weight was then multiplied by the number of respondents who gave a sample that ranking; these products were summed and divided by the number of respondents to yield a final score for each sample.
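The scoring method just described can be sketched in a few lines of Python. With four samples, a first-place rank earns a weight of 4, second place 3, and so on; the counts in the example are hypothetical and do not reflect the study’s actual responses:

```python
# Sketch of the inverse-weight aggregate score described above.
# With four possible ranks, rank 1 (best) gets weight 4, rank 2 gets 3,
# rank 3 gets 2, and rank 4 gets 1.

def aggregate_score(rank_counts):
    """rank_counts[r] = number of respondents who gave the sample rank r + 1."""
    n_ranks = len(rank_counts)                      # 4 possible ranks
    total = sum(rank_counts)                        # number of respondents
    weighted = sum((n_ranks - r) * count            # weight inverse to rank
                   for r, count in enumerate(rank_counts))
    return weighted / total

# Hypothetical rankings for one sample: 20 first-place votes,
# 15 second, 10 third, and 5 fourth (50 respondents total).
score = aggregate_score([20, 15, 10, 5])
print(round(score, 2))  # 3.0
```

A sample every respondent ranked first would score 4.0; a sample every respondent ranked last would score 1.0, so higher aggregate scores indicate stronger overall preference.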
The final calculations established that respondents most preferred the C Samples. The final scores between A-x and B-x were virtually tied because although many respondents preferred A-x to B-x, A-x’s aggregate score was reduced by the many attorneys who gave it a low ranking. Yet, the more traditional A-y outranked the more informal B-y despite the seemingly simple legal question answered in the y-issue samples and despite the number of attorneys who gave A-y a fourth-place ranking. Thus, even with simple issues, it appears many attorneys prefer detailed answers to quick responses.
Comments by attorneys explaining their rankings elucidate these results. When answering what they liked and did not like about the C Samples (the iceberg e-memos), respondents generally appreciated that the C Samples balanced conciseness with ample reasoning:
“This memo had the best treatment of the case law by far for an email memo. There was no superfluous language.”
“C was the most streamlined and was the easiest to get a sense of quickly and skim.”
“Sample C was very well written and seemed to have all of the information necessary. I liked that it gave the basic information necessary but didn’t include too many analogous case facts, as it seemed from the prompt that the answer could be explained succinctly without a ton of extra information.”
“I liked the structure of C and that it struck a good balance between giving me detail and not giving me too much detail. I felt like I could read only this email and not have to read the cases to advise the client. I also liked that it did not use headings for the issue and answer, and the answer was bolded at the start of the email.”
“This was my preferred memo because it has the right balance of clearly answering the question asked and summarizing the research in a concise, organized fashion.”
In writing about the A Samples, many respondents praised the A Samples’ thoroughness:
“I appreciated the substantially self-contained analysis and support provided in Sample A, which was in my view clearly the best. I also appreciated the visually reinforced structure in that memo.”
“A is concise but provides sufficient detail to get a handle on the law.”
“Sample A was complete in its analysis and conclusion . . . .”
“A was most complete and lawyer-like.”
“Memo A was substantially better because it provided a brief, yet thorough, analysis of the relevant law and cases associated with the same. Furthermore, memo A included headings that focused my attention on what I needed to know.”
Other attorneys, however, disliked the A Samples’ formality and length:
“Memo A seemed to be clearly the worst, because of how long and formal it was. I shuddered at the thought of reading it on my phone on a break at a deposition.”
“Memo A was clearly the worst because it was too long and wordy—overkill.”
“[T]oo much information for the project. It takes more time for me to read and digest, so I have to bill the client more time than necessary. It took the associate more time than I can likely justify billing the client.”
“[F]ar too long-winded for an email memo. The analysis, application, conclusion/recommendation was overkill. I’d be concerned whether the partner would read the entire email. This sample is better suited for a traditional memo . . . .”
“A reads like a law school or traditional memo. Email is not the best form for this style.”
Many respondents enjoyed the B Samples’ simplicity and succinct conclusions, but they frequently criticized the B Samples’ lack of clear explanations. These samples, in other words, are a model of a concise memo gone wrong—one where the writer discloses too little analysis and leaves the reader floundering. As one respondent wrote, “Memo C was an improved version of Memo B. It was exactly what Memo B was missing.” Further comments echoed this sentiment:
“Concise. But not enough case support.”
“Sample B was maybe just a little bit too brief in its discussion of precedent.”
“I don’t like the string citations of case law without more of an explanation.”
“Succinct but missing key information.”
“[L]acks a thorough analysis . . . and lacks citations to credible sources.”
“There also doesn’t seem to be enough analysis and case support in this email. It’s too short. I would question this associate’s conclusion because there’s not enough support provided.”
In reviewing the analyses in the D Samples, respondents lamented that the explanations required readers to do more work to understand the writers’ answers:
“D is too cursory and does not give comfort that the associate has thoroughly researched the issue.”
“D correctly answered the research assignment but left it [to] the requesting attorney to read the attached C.F.R. and cases for a complete understanding of the legal reasoning.”
“D did not provide context for the answer or organization to help the partner understand the answer after a quick read.”
“Borders on appearing as though the associate didn’t spend enough time or take the project seriously.”
Respondents further wrote that they disliked the D Samples’ lack of more formal, in-line citations and distrusted the authors’ propositions:
“The list of citations is useless.”
“[I]ncluding legal citations at the bottom is unhelpful without more context. In practice, legal citations are mostly after sentences, and there is not an emphasis on footnote citations. Therefore, the email is unlike what lawyers see on a day-to-day basis.”
“[I]t doesn’t really give me any indication of the proposition(s) for which each authority stands.”
Finally, respondents loathed that the D Samples did not provide upfront conclusions. Conclusions, whether “good” or “bad,” must be given up front; readers can handle bad news, but they do not tolerate waiting:
“Waiting until the end to give the conclusion was a killer for this sample. It really needed to say it in the first two sentences.”
“Disliked: - No answer right off the bat.”
“D didn’t have the conclusion up front which is the FIRST thing I teach new associates. Busy partners need to be hit with the bottom line first. So do clients.”
The narrative responses provide color to the rankings, but correlating the data by demographic group exposed a wide gap in preferences between groups. Specifically, further analysis explains the respondents’ divergent views over the merits of the A and C Samples. The next three sub-sections of the Article therefore review the striking schism between attorneys based on their age, the number of e-memos they write, and the size of their law firm.
3.1 Rankings According to Age Group
In ranking either the x-issue or y-issue samples, those over 40 preferred the more traditional A Samples; respondents 40 and below favored the concise but substantive iceberg e-memos—the C Samples. As Tables 3 and 4 illustrate, more than half of respondents over 40 gave A-x and A-y the highest rank.
By contrast, respondents 40 and under significantly preferred the C Samples. Additionally, it was these “younger” attorneys who were divided over the A Samples. In fact, approximately 30% of respondents under 40 gave A-x the lowest possible rank.
This is not to state that those 40 and under are a monolith. Though respondents 40 and younger gave the C Samples the highest scores, respondents 30 and younger preferred the traditional A Samples more often than respondents age 31-40. Those age 31-40 even favored the succinct B Samples to the traditional A Samples.
The distaste for the traditional A Samples thus mostly came from those age 31-40, with the youngest attorneys exhibiting greater acceptance and tolerance for formalism. One explanation for these results is that the youngest practitioners are less confident and more likely to emulate the writing styles of senior practitioners. Similarly, they may be less prone to rebel against the traditional format instilled in them in law school.
Notably, many of these data are statistically significant, making the results more than a fluke arising from a random sampling of attorneys. To test for statistical significance, the data for the x-issue samples were run through two hypothesis tests: the Chi-Square Test for independence and Fisher’s Exact Test for independence.
These tests are useful in determining whether the data indicate a trend, such as whether different age groups are generally inclined to prefer certain samples. Below is a breakdown of the statistically significant age data for the x-issue samples:
Attorneys over 40 tend to give Sample A-x the best rank more often than those 40 and under.
Following this trend, attorneys over 50 tend to give Sample A-x the best rank more often than those 50 and under.
Those 40 and under tend to rank Sample A-x the worst and Sample C-x the best more often than those over 40.
Those 30 and under tend to give Sample A-x the second-best rank more often than attorneys 31-40.
Those 31-40 tend to rank Sample A-x as the worst sample more often than attorneys 30 and under and more often than attorneys over 40.
Those 31-40 tend to rank Sample A-x the worst and Sample C-x the best more often than those 30 and under and those over 40. Furthermore, those 30 and under and those over 40 tend to rank Sample A-x two levels higher than Sample C-x more often than those 31-40.
Those over 40 tend to give Sample D-x the worst ranking more often than those 31-40 and those 30 and under. Those 31-40 tend to assign D-x a better rank than the other groups.
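The two independence tests named above can be sketched with SciPy on a 2×2 contingency table. The counts below are invented for illustration and are not the study’s data; the table cross-tabulates age group against whether a respondent gave Sample A-x the best rank:

```python
# Sketch of the Chi-Square Test and Fisher's Exact Test for independence,
# run on a hypothetical 2x2 contingency table (invented counts).
from scipy.stats import chi2_contingency, fisher_exact

table = [[12, 8],    # over 40:      gave A-x best rank / did not
         [9, 24]]    # 40 and under: gave A-x best rank / did not

chi2, chi2_p, dof, expected = chi2_contingency(table)
odds_ratio, fisher_p = fisher_exact(table)

# A p-value below the conventional 0.05 threshold would suggest that
# ranking behavior is not independent of age group.
print(f"chi-square p = {chi2_p:.3f}, Fisher's exact p = {fisher_p:.3f}")
```

Fisher’s Exact Test is typically preferred when cell counts are small, as they often are once roughly 100 respondents are split across age brackets and four rank positions.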
3.2 Rankings According to Number of E-Memos Written Per Month
Similar to the results based on the respondents’ ages, respondents ranked the samples differently based on the number of e-memos they write per month. When comparing those who write fewer than four e-memos per month with those who write four or more, there was again a break in how respondents viewed the A and C Samples. While the C Samples received the highest overall score regardless of the number of e-memos respondents write per month, respondents who write fewer than four e-memos per month have a much more favorable view of the A Samples than their more prolific counterparts.
Showing their predilection for the traditional format, 64% of respondents who write fewer than four e-memos per month gave Sample A-x a first- or second-place ranking. By contrast, fewer than 40% of respondents who write four or more e-memos per month gave A-x such a high score. Regarding the y-issue samples, over 83% of attorneys who write fewer than four e-memos per month gave Sample A-y a first- or second-place ranking, with 0% giving A-y a fourth-place ranking. Attorneys who write four or more e-memos per month had a more favorable view of A-y than they did of A-x, but over 20% of y-issue respondents still gave A-y a fourth-place ranking. Therefore, respondents who write more e-memos have a dissonant, if not negative, view of the A Samples—even more so than attorneys 40 and under.
The x-issue data again show statistical significance, further demonstrating that those who spend their time writing e-memos prefer the conciseness of the C Samples to the formality of the A Samples. Below is a breakdown of the statistically significant e-memos-per-month data for the x-issue samples:
Attorneys who write fewer than four e-memos per month are more likely to give Sample A-x a higher rank than attorneys who write four or more e-memos per month.
Attorneys who write four or more e-memos per month rank C-x higher than A-x more often than attorneys who write fewer than four e-memos per month.
As to be expected, the data further show that those who write four or more e-memos per month are disproportionately likely to be 40 and under. This raised the question of whether the statistically significant results above arose not from the number of e-memos respondents write per month but from the general preference of respondents 40 and under for the C Samples. Put another way, it was necessary to analyze whether the findings about preferences and the number of e-memos written per month hold regardless of whether respondents are 40 and under or over 40. A secondary analysis was thus completed and confirmed that the above statistical findings hold regardless of respondents’ age.
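The Article does not specify how the secondary analysis was run, but one plausible way to check that the e-memos-per-month effect is not merely an age effect is to stratify by age group and test within each stratum. The sketch below runs Fisher’s Exact Test separately for each age group on invented counts (not the study’s data):

```python
# One plausible confounding check (an assumption, not the study's stated
# method): test the e-memo-frequency effect within each age stratum.
from scipy.stats import fisher_exact

# Within each stratum: rows = writes 4+ e-memos/month vs. fewer than 4;
# columns = ranked C-x above A-x vs. did not. All counts are invented.
strata = {
    "40 and under": [[14, 4], [6, 8]],
    "over 40":      [[5, 2], [4, 11]],
}

for age_group, table in strata.items():
    _, p = fisher_exact(table)
    print(f"{age_group}: Fisher's exact p = {p:.3f}")
```

If the association between writing frequency and preferring C-x remains in the same direction within both age strata, age alone cannot explain the frequency-based split.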
While the results in this section are useful in understanding preferences based on how many e-memos attorneys write, future studies should be done to identify the preferences of those who read and supervise e-memos. This will help determine if there is a gap between the preferences of writers and their audience.
3.3 Rankings According to Private Firm Size
There is not a significant difference in sample preferences between those who work in private practice versus those in public practice, with both groups reflecting the overall rankings expressed above. There is a difference, however, among respondents at private law firms of different sizes. Yet again, the respondents’ preferences split between the A Samples and the C Samples.
Respondents who work at firms of over 50 attorneys favored the C Samples, demonstrating a “big law” preference for substantive brevity. Respondents took a more conservative view at “smaller firms,” where the traditional A Samples received the highest ranks.
The data for the x-issue samples initially did not offer statistically significant information indicating that the rank distributions for Sample A-x or Sample C-x differ between practitioners at firms with 50 or fewer attorneys and practitioners at firms with over 50 attorneys. This is likely because the sample size was too small.
Therefore, to further analyze the data for statistical significance, it was necessary to compare specifically whether respondents at firms of different sizes ranked Sample A-x higher than C-x or, instead, Sample C-x higher than A-x. This more tailored comparison shows the following statistically significant data:
Practitioners at a private firm with over 50 attorneys are more likely to rank Sample C-x better than Sample A-x than practitioners at a private firm with 50 or fewer attorneys.
4. Respondent E-Memo Preferences and Habits
After ranking the samples, respondents answered questions about their preferences for writing substantive e-memos, with a focus on discrepancies from the literature review. This included asking respondents which sample e-memos provided the “best” analytical depth and how much reasoning a writer should include in an upfront conclusion. The study then asked respondents about workplace habits, such as the typical turnaround time for an e-memo and what mediums supervisors use to read memoranda.
4.1 Analytical Depth Matters
When asked to rank the samples “simply [for] the depth of legal analysis and nothing else,” respondents greatly favored the A Samples: 83.33% of respondents gave A-x the top rank, while 76.19% gave C-x the second-highest rank. Again, the D Samples were plainly last.
Remarkably, though respondents overwhelmingly acknowledged that the A Samples had more apparent analytical “depth,” that acknowledgment alone did not translate into higher overall ranks, as seen in the results above. The C Samples, which balanced depth with concision, remained the overall favorite because of the number of respondents who gave the A Samples a third- or fourth-place ranking. Therefore, while depth is valued and recognized, respondents—especially those 40 and under, those who write four or more e-memos per month, and those working at private firms of over 50 attorneys—value it more when the analysis is also concise.
4.2 Upfront Conclusions Need Reasoning
The study next asked respondents to review sample conclusions. As seen in Appendix 11, respondents who reviewed the x-issue samples could choose between two sample conclusions with differing levels of reasoning or indicate that no upfront answer was necessary. Those responding to the y-issue samples reviewed two samples with differing levels of reasoning and a third sample that provided no reasoning at all.
Conflicting with most textbook samples, the results show that respondents not only preferred conclusions with reasoning but also preferred that the reasoning be detailed. Not one respondent stated an initial answer was unnecessary in the x-issue samples, and 90.91% of the attorneys responding to the y-issue samples gave the lowest rank to the most conclusory answer with no reasoning.
One scholar predicted such results, stating the upfront conclusion should be an “answer with reasons.” This can be a “single, short paragraph,” or “you can write the answer and give the reasons in bullet points.” This advice follows what we so often teach: Legal reasoning is the meeting of law and facts. Only when the two are combined does the reader have a true answer.
Legal analysis requires constantly answering “why” and responding with an instructive “because.” This point is particularly true for the upfront answer in an e-memo, which is read by an impatient reader who needs the major and minor premises of the writer’s conclusion articulated from the start. If a writer, especially a novice, is to be believed, they must earn credibility immediately with clear reasoning. After all, because the upfront conclusion in an e-memo is an adaptation of the traditional memorandum’s brief answer, such initial reasoning is customary.
4.3 Applications Are Necessary
Part 1 explained there is currently little scholarship regarding how to apply facts to law in e-memos. To partially address this gap, the study asked respondents to rate the importance of the x-issue and y-issue samples’ applications. Respondents could choose “important,” “somewhat important,” “not important,” or “other.”
As expected, a significant majority of respondents who reviewed the more complex x-issue samples—75%—stated the application was “important.” Not one respondent stated the application was “not important.” With the simpler y-issue samples, the application might seem a bit perfunctory; the writer’s sole contribution was that the client’s facts met a statutory definition. Even so, as seen in Table 24, over 83% of y-issue respondents deemed the application “important.” Again, not one respondent stated the application was “not important.”
When assigning students e-memo problems, therefore, no matter the research question’s complexity, professors must ensure students are including an application of facts to the law. Even in e-memos answering basic, procedural research questions, if the question references a specific (or even hypothetical) client, applications matter.
4.4 Attorneys Use Explanatory Parentheticals
Supporting the advice from the literature review and respondents’ general preference for the C Samples, attorneys rely on explanatory parentheticals when providing caselaw within e-memos. As shown in Table 25, over 42% of the 96 respondents answering this question stated explanatory parentheticals are used “very frequently” or “frequently” in e-memos.
Legal writing professors must teach the art of writing explanatory parentheticals. Having students draft and redraft parentheticals can be done side-by-side with asking students to draft case illustrations, as parentheticals are a condensed form of case illustrations.
4.5 Preferences Towards Formal Citations Are Complicated
With traditional memoranda, it is general knowledge that authority must back each sentence needing support. There is no such consensus about when and how a writer should cite in an e-memo. Not only are there discrepancies in scholarly advice, but respondents also lacked unity about the importance of formal citations in e-memos.
When asked “[h]ow important” “formal Bluebook citations” are in e-memos, just 4% of respondents selected “very important.” Seventeen percent chose “important,” 35% responded “somewhat important,” and 35% answered “not important.”
The relative lack of concern about, or agreement on, proper citations might rest in the wording of the question and the term “formal Bluebook citations.” It also could rest with the question existing in a vacuum and not being tied to a sample document. Attorneys often view proper legal citation as a “necessary evil”—a tedious exercise in memorizing arcane rules. In practice, this view is unhelpful to writers, readers, and law students; legal citation is a core convention of the law.
When respondents earlier commented about the e-memo samples, many criticized the D Samples, and to a lesser extent, the B Samples, for missing citations or crucial citation information. In contrast, not one survey respondent disparaged the A or C Samples for citing per The Bluebook’s or the ALWD Guide to Legal Citation’s rules and including pinpoints to specific pages. True, there is a clear delineation between perfect citations and unusable ones, but telling students that citations are “somewhat important,” that e-memos do not need to use “formal, full-form citations,” or that e-memo writers can cite “less frequently” creates an amorphous and unworkable standard. It is better for new writers to assume a high degree of formality in their citations.
As compared to Bryan Garner’s preferred footnoted citations, in-line citations are particularly advantageous in e-memos and other documents read on a screen. Readers using a computer, tablet, or phone should not be forced to scroll down to the bottom of a document to verify the support for a proposition and then back up again to continue reading. In-line citations are not, as Garner claims, “speed bumps” interrupting a reader’s thought process. Rather, properly formatted in-line citations within e-memos give skeptical and busy readers the proof they need and the conciseness they desire.
Respondents’ use of citation signals further highlights the importance of fashioning proper citations in e-memos. Over 60% of respondents indicated they or those they supervise use citation signals in e-memos “moderately,” “frequently,” or “very frequently.”
Citation signals might be even more important in e-memos than in traditional memoranda. Because e-memos must be exceedingly concise, squeezing more information into each sentence—and each citation—is critical. Signals are especially beneficial in e-memos then; they quickly give meaning and context to a citation, telling a reader precisely how the source supports or contradicts the writer’s proposition.
These data also underscore the importance of explanatory parentheticals, which are often needed in conjunction with signals to explain a source’s relevance. Professors should therefore consider teaching rudimentary signals (e.g., see, see also) and explanatory parentheticals together.
4.6 Attachments and Hyperlinks Are Useful
As seen in Table 28, the majority of respondents—53%—reported “it is common practice to electronically attach” cases and statutes to e-memos. An additional 11% agreed that although attaching applicable law is not yet common practice at their workplace, it “should” become one.
In addition to attaching applicable law, one suggestion professors might make to students is to highlight the portions of an opinion they directly quote or paraphrase in their e-memo. The highlights will save a skeptical reader time and better direct the reader than a pinpoint citation alone.
When it comes to hyperlinking citations, about half of the 76 respondents answering this question agreed that hyperlinking is or “should be” common practice. This information supports the argument that hyperlinking is becoming a general trend in legal writing. Some courts explicitly state they prefer hyperlinking in briefs and other court filings because hyperlinking allows readers to immediately verify an attorney’s argument.
4.7 Readers Rely on Computer Screens
One of the last survey questions asked how supervising attorneys read office memoranda of any kind—not just e-memos. Respondents could choose as many options as applied.
Of the 88 attorneys who responded, 88.64% stated supervisors read memos on their computers. Almost 60% of respondents stated supervisors still read memoranda in print. Phones were the third most-used medium, with nearly 40% of supervisors sometimes reading memoranda on the small screen.
Respondents next ranked the methods supervising attorneys use “the most for reading legal memoranda” and were asked to skip the question if they did not know the answer. Seventy-five attorneys responded. Sixty percent of respondents stated computer screens are the most used method for reading memoranda. Print came in second, with 32%. Only 8% of respondents said tablets or phones were the primary methods used by supervisors.
The results are much closer when reviewing the answers of the 75 respondents who indicated supervisors’ secondary means for reading memoranda. While computer screens again took the top spot (32%), print (28%) and phones (26.67%) were close behind.
4.8 E-Memos Are Quick-Turnaround Projects
Even as e-memos are rigorous writing projects, the general expectation is that an associate will research, draft, edit, and complete an e-memo within one to two days. About half of respondents—49.48%—stated the typical turnaround time for an e-memo is within 24 hours. Nearly all respondents—91.75%—said the typical turnaround time is within 48 hours.
4.9 Traditional Memoranda Are Not Dead in Private Practice
Much of the scholarly debate over e-memos has rightly focused on whether the e-memo explosion has eviscerated the traditional memorandum. According to Tiscione’s study, around 40% of her respondents wrote no traditional memoranda in a year. An overwhelming majority—75%—wrote no more than three traditional memoranda per year, and around 87% wrote no more than six. Only a dismal 4% wrote more than 20. Tiscione, therefore, posits that traditional memoranda are “all but dead.” My survey instead suggests that the demands of modern practice have merely hobbled traditional memoranda.
As seen in Table 33, of the 100 respondents answering this series of questions in my survey, just 17% stated they do not write traditional memoranda. And just a slight majority—54%—stated they write no more than five traditional memoranda per year. Further, 10% actually write 31 or more traditional memoranda per year.
In addition to writing more traditional memoranda, respondents to my survey also wrote far more non-traditional memoranda—including e-memos—than respondents to Tiscione’s survey. Thirty-five percent of my respondents wrote or assigned more than 20 non-traditional memoranda per year. In Tiscione’s survey, by comparison, approximately 25% wrote more than 20 informal memoranda, and just around 19% assigned more than 20. There is an even starker distinction when looking at those who wrote or assigned the fewest non-traditional memoranda. Only 19% of my respondents wrote or assigned five or fewer informal memoranda per year. Approximately 34% of Tiscione’s respondents, however, wrote six or fewer non-traditional memoranda per year, and approximately 47% assigned six or fewer per year.
A number of possibilities might explain the differing results between Tiscione’s survey and my own, including disparities in respondent demographics and the more than a decade between when we conducted the studies. Still, respondents in both studies share much in common: 69% of Tiscione’s respondents were in private practice, as are 64% of mine. Similarly, the greatest percentage of Tiscione’s respondents worked at firms of over 200 attorneys, and the greatest percentage of my respondents in private practice worked at firms of over 150 attorneys.
And even when singling out my respondents at firms of over 150 attorneys, over 40% stated they wrote or supervised more than five traditional memoranda per year. Further, although traditional memoranda can be expensive, over 40% of my respondents at firms of 50 or fewer attorneys likewise wrote or assigned more than five per year.
5. Curricular Implications
Adding any single e-memo assignment to first-year legal writing courses is not enough. E-memos come in a variety of forms, including summaries of traditional memoranda and “procedural” e-memos. Ideally, students will complete each of these types of e-memo assignments during their studies.