Legal Prompting 101
Using generative AI chatbots in legal practice
Large language models (LLMs) offer a new kind of technical support for many types of intellectual work - especially the work of lawyers - in the form of artificial intelligence that can generate human-language text on command. By skillfully using the right commands, or prompts, lawyers can save time on numerous tasks in their legal work - that is the promise of this new technology. In this article, we explore what makes good legal prompting.
Introduction: What are Large Language Models (LLMs)?
Large language models are artificial intelligence (AI) systems that can generate texts in human language. They learn from large amounts of text, identify patterns and dependencies in them and, after a thorough training phase, are able to generate texts based on given instructions. The model determines the “most probable” answer to the instruction it receives but does not apply classical logical reasoning. Within certain limits, it can take into account not only the most recent instruction but also the previous conversation and any other background information provided.
This enables a variety of uses, such as answering questions, drafting texts of all kinds and even revising existing texts, e.g. summarizing or editing them.
Known weaknesses of LLMs
Despite their impressive performance, there are also certain known weaknesses of LLMs that need to be taken into account when using them:
- False statements and bias due to incorrect training data: LLMs learn from the data with which they are trained and reflect this learned information. If the training data is flawed or biased, the model may unintentionally reflect these flaws in its output - despite the considerable effort put in by the manufacturers to make the models as accurate and neutral as possible. The well-known rule applies: garbage in, garbage out.
- Hallucinations: Hallucinations are a phenomenon in which the model generates information that does not correspond to reality. This “hallucinated” content can be misleading or outright false, without the model pointing this out to the user.
- No application of logical laws: Even though the textual output of LLMs often appears highly structured and logical, current models do not apply logical laws in the conventional sense when generating text. The models do not produce the “correct” answer, but only the “most probable” answer according to their training data.
- Errors in calculations: Particular caution is therefore required with any output involving calculations. Due to the way they work, LLMs - unlike the vast majority of other software tools - are not always reliable when performing mathematical operations.
Given these potential difficulties, any text generated by an LLM should always be thoroughly checked for accuracy.
Good legal prompting
The manufacturers of the models are aware of these limitations, and the accuracy of the models is constantly being improved. However, users can also positively influence the output of LLMs through good prompting.
A prompt is a compilation of text or keywords that is sent to a large language model as a query. Well-formulated prompts increase the likelihood of obtaining precise and useful results. Some trial and error may be required, which makes prompting an LLM something of an art form rather than an exact science.
Legal prompting is the efficient posing of questions to an LLM in a legal context. Correct structuring and clear, precise formulation of the prompt are essential to the quality of the responses generated. Without properly structured prompts, there is a high risk that the language model will provide irrelevant information or overlook important details.
A prompt can be structured with the following elements, which should ideally be queried in the order listed:
1. Role
At the beginning, the role of the large language model should be defined, e.g. that of a lawyer. This determines how the large language model will interact and what kind of information it can provide.
Example: You are an experienced lawyer and specialist in German commercial law.
2. Target group
This refers to the person or group for whom the generated text is intended. The target group can vary depending on age, interests, level of knowledge or other characteristics. It essentially determines the language, format and context of the output.
Example: You are writing an email to your client.
3. Topic or question
This is the central content of the prompt. This can be a specific question, a topic for discussion or a task. Depending on the question, it can be helpful to explicitly ask the model to present the logical steps that lead to the result.
Example: Please briefly summarize the following statement by the opposing party: ### Text from the opposing party’s statement ###
4. Format
The format determines the form in which the content is presented. This can be an open question, a request, a statement or another format.
Example: The email should be written in an understandable and structured manner and in a factual tone.
5. Context
These details include specific situations, events, locations, times or conditions. The context helps to place the original prompt within a relevant framework. Be sure to enclose examples in delimiters (e.g. three hash signs (###) or three dashes (---)) to signal to the LLM where the example begins and ends.
Example 1: Base the output on this example: ### Example ###
Example 2: Take this text as an example --- Example text ---
6. Review
At the end, you can instruct the LLM to critically review its input and output and to ask follow-up questions if necessary.
Example: Check the output for contradictions and inconsistencies.
Complete example prompt
You are an experienced lawyer and specialist in German commercial law. You are writing an email to your client. Please briefly summarize the following statement from the opposing party: ### Text from the opposing party’s statement ###. The email should be written in an understandable and structured manner and in a factual tone. Base the email on the following example: ### Example email ###. Check the output for contradictions and inconsistencies.
By including these elements in a prompt, ideally in the order listed, you can create a structured, targeted and relevant prompt that enables the LLM to generate a helpful response.
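For readers who prefer to assemble such prompts systematically, the structure above can also be expressed in a few lines of code. The following Python sketch is purely illustrative: the function name build_legal_prompt and the way the pieces are joined are our own assumptions, not part of any particular chatbot or product.

```python
# Illustrative sketch only: combines the six prompt elements described above,
# in the recommended order, into a single prompt string. The element texts are
# the placeholders from the article's example; no specific chatbot API is assumed.

def build_legal_prompt(role, target_group, task, output_format, context, review):
    """Join the six prompt elements into one prompt, in the recommended order."""
    return "\n".join([role, target_group, task, output_format, context, review])


prompt = build_legal_prompt(
    role="You are an experienced lawyer and specialist in German commercial law.",
    target_group="You are writing an email to your client.",
    task=(
        "Please briefly summarize the following statement from the opposing party: "
        "### Text from the opposing party's statement ###"
    ),
    output_format=(
        "The email should be written in an understandable and structured manner "
        "and in a factual tone."
    ),
    context="Base the email on the following example: ### Example email ###",
    review="Check the output for contradictions and inconsistencies.",
)

print(prompt)  # The assembled prompt can then be pasted into or sent to a chatbot.
```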
Further tips
When generating a response, current chatbots based on LLMs take into account not only the prompt they have just received, but also the previous conversation. Answers can therefore be improved and adapted over several steps to refine the result. Finally, to keep track of a long chat, you can ask for a summary of the conversation.
However, including the previous conversation can also lead to errors, especially if completely different topics are addressed in the course of a conversation. If a chat has not led to the expected result, starting a new chat can give the LLM the chance to start from scratch.
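To picture what this means in practice, the conversation history can be thought of as a growing list of messages that is sent along with each new prompt - a common convention in chatbot interfaces, sketched below with made-up role names and texts.

```python
# Illustrative sketch: the previous conversation as a list of messages that a
# chatbot carries along with every new prompt. The role names and texts here
# are assumptions for the example, not tied to any specific provider.

conversation = [
    {"role": "user", "content": "Please summarize the opposing party's statement."},
    {"role": "assistant", "content": "Summary: ..."},
]

# Each refinement step simply adds another message to the same conversation:
conversation.append(
    {"role": "user", "content": "Please shorten the summary to three sentences."}
)

# If the chat has drifted off topic, starting a new chat amounts to starting
# over with an empty history, so no irrelevant context is carried along:
new_conversation = []
```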
Furthermore, complex topics can be broken down into manageable sub-questions in order to obtain more targeted answers.
Finally, it is often beneficial to ask the chatbot itself what factors to consider in order to obtain the most suitable results for a query.
Legal prompting - a new core legal skill?
At first glance, it may seem that legal prompting has become a core skill for the modern lawyer overnight, and the proliferation of specialized LLMs supports this impression. Large language models developed for legal applications, such as Harvey, are well on their way to becoming an integral part of legal practice. Thanks to their increasing user-friendliness, these tools often add supplementary prompts behind the scenes, which simplifies daily use.
Despite such aids, it remains essential for lawyers to familiarize themselves with the basics of this technology in order to use it effectively. This requires not only a basic understanding of how the models work and how to use them, but also the ability to create well-structured prompts.
Regular practice plays a crucial role; as with many skills, practice makes perfect.
The more often lawyers work with LLMs, the more intuitive their handling becomes. It is therefore not only technological progress that determines the effectiveness of an AI system, but also how skillfully and consciously users apply it. However, the weaknesses of LLMs described at the beginning of this article, and the resulting need for constant professional review of the results, remain.
Authors:
Mai Anh Ma
Klaus Gresbrand
Published: May 2024