Reviewing: introduction

Updated by Maarten Truyens

Through the power of Large Language Models (LLMs), ClauseBuddy allows you to review legal documents against pre-specified rules.

Why should rules be explicitly specified?

Given the knowledge already baked into LLMs such as GPT, you may be wondering why you would still need to specify reviewing rules.

After all, aren't there competing products on the market that do not require you to go through this hassle, and instead perform a simple review for you?

1. Limited information available publicly

A first reason is that LLMs are trained on information found on the public internet. Through months of training (digesting millions of web pages), they acquire knowledge about various domains of life, from biology and politics to sports, celebrities and legal information. Unfortunately, the legal information acquired by an LLM primarily consists of theoretical information, such as:

  • legislation, which is by definition public
  • case law, which is public in most jurisdictions, even though a lot also sits behind publishers' paywalls
  • limited legal doctrine, mostly in the form of blogs and newsletters from law firms, with only a limited number of in-depth articles publicly available.

What LLMs lack, however, is practical information on how to review contracts. As every legal expert knows, relatively little practical information on this topic is available in written form, let alone publicly available online. While a decent amount of tips & tricks are available for common contracts (such as NDAs), most practical information is:

  • orally communicated, learned "on the job" and taught by experienced lawyers
  • found in small nuggets of wisdom spread across legal articles and books on specific types of contracts, almost always behind publisher paywalls
  • individually acquired through years of experience

LLMs have no access to this information, and will therefore have to be explicitly instructed on how to review contracts.

2. Internal rules

A second reason why the rules must be explicitly specified is that LLMs obviously have no access to your internal rules, i.e. mostly the written "playbooks" of legal departments in large organisations. However, in both law firms and in-house legal departments, there are also many unwritten rules on what you always or conditionally accept or reject.

Theoretically it would be possible to "feed" artificial intelligence a vast number of example documents (e.g., contracts with markup from both counterparties and internal experts). However, in order for the artificial intelligence to automatically deduce the internal rules on the basis of these examples, it would require hundreds of relatively "clean" examples, which most legal teams simply do not have available. Furthermore, in practice, most examples are not "clean", in the sense that internal rules are frequently implicitly ignored in specific deals, for various non-obvious reasons (e.g., specific deal size, ignorance of the expert reviewing it, deviating instructions from management, etc.).

Even though many reviewing rules will be shared between legal teams, you will be surprised how many different approaches exist, particularly for the hotly debated areas (e.g., the liability limitation or notice period for a commercial deal). For law firms, many rules will obviously also differ between clients, types of clients and types of deals. For example, when negotiating with a large incumbent with significant market power in a certain sector, the position taken will be completely different than when negotiating with a small vendor.

3. Deal-specific information

A third reason why the rules must be explicitly conveyed to an LLM is that it cannot read your mind. Similar to how a client or internal business user would explain various types of information to a legal expert, you must instruct the LLM on the type of deal, how management feels about this deal, how much bargaining power you have, what type of counterparty you are dealing with, which specific risks exist, etc.

This is the reason why "questions" can be asked in ClauseBuddy's reviewing conditions. These questions are put to the end-user, and the answers are submitted to the LLM so it can do a better reviewing job.
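
Purely as an illustration, such a rule, with its condition and question, could conceptually be represented as follows. The field names and structure are assumptions made for the sake of the example, not ClauseBuddy's actual data model:

  # Hypothetical sketch of a reviewing rule; all field names are
  # illustrative assumptions, not ClauseBuddy's actual data model.
  rule = {
      "name": "Liability cap",
      "condition": "We are the customer",  # the rule only applies if this holds
      "question": "How much bargaining power do we have in this deal?",
      "instruction": (
          "Flag any liability cap below 12 months of fees; if our "
          "bargaining power is low, a cap of 6 months is acceptable."
      ),
  }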

4. Building your knowledge

A last reason why it's a good idea to store internal rules in ClauseBuddy, is that it can serve as an alternative form of "playbook" for legal teams. ClauseBuddy is not intended to replace your formal playbook (so don't throw it out yet), but introducing automated legal reviews through ClauseBuddy can be a good moment to reflect on your internal rules.

You may be surprised how subtle some rules are, and how different colleagues will give different answers to the same legal question. Any information you store in ClauseBuddy's reviewing rules will therefore help you manage your internal knowledge: it improves uniformity across your team, accelerates the onboarding of new colleagues, and prevents knowledge drain when legal experts leave your team.

ClauseBuddy's reviewing module is primarily targeted at contracts, so in this manual we also talk primarily about contracts. However, there is no hard constraint that prevents it from also being applied to various other documents, such as memos, letters and submission forms in DOCX format.

How does the reviewing module technically work?

Even though LLMs often feel like magic, we want to demystify the way the reviewing module operates behind the scenes. It is important to understand this, because it will allow you to write better rules.

When you ask ClauseBuddy to review a document, it will first split your Word document into individual pieces (clauses). Next, ClauseBuddy will compile all your rules into optimised textual content, through clever prompt engineering.

The clauses and compiled rules are then fed together to the LLM, with the request to go through the document and cross-check each applicable rule (i.e., a rule whose condition is met). The LLM will then send back its findings, together with references to the relevant clauses, and ClauseBuddy will present these findings to the end-user.
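
The following sketch is purely illustrative of that flow. The function names, the prompt wording and the call_llm helper are assumptions, not ClauseBuddy's actual implementation:

  from typing import Callable

  def condition_is_met(rule: dict, answers: dict) -> bool:
      # Placeholder: evaluate the rule's condition against the end-user's
      # answers; here the condition is simply looked up as a boolean flag.
      return answers.get(rule["condition"], False)

  def review_document(
      clauses: list[str],              # the Word document, split into clauses
      rules: list[dict],               # reviewing rules, as sketched earlier
      answers: dict[str, bool],        # the end-user's answers to the questions
      call_llm: Callable[[str], str],  # stateless call: prompt in, text out
  ) -> str:
      # 1. Keep only the rules whose condition is met for this deal.
      applicable = [r for r in rules if condition_is_met(r, answers)]

      # 2. Compile the applicable rules and the numbered clauses into one
      #    prompt, so the LLM can point its findings back at specific clauses.
      prompt = "Cross-check every clause against each rule below.\n\nRules:\n"
      prompt += "\n".join(f"- {r['instruction']}" for r in applicable)
      prompt += "\n\nClauses:\n"
      prompt += "\n".join(f"[{i}] {c}" for i, c in enumerate(clauses, 1))
      prompt += "\n\nReport each finding with the clause number it concerns."

      # 3. A single stateless request: the LLM sees the rules and clauses only
      #    while generating its answer, and retains nothing afterwards.
      return call_llm(prompt)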

The LLM is not "trained" on your reviewing rules, in the sense that it would store your rules in its permanent memory. Instead, the LLM only temporarily holds these rules (and the contents of your documents) in its memory during the few seconds that it is processing your request. Afterwards, it immediately and completely forgets what you fed it.

So, as explained elsewhere, you do not need to worry about the confidentiality implications of using LLMs.

Word of warning

You should be aware that document review through LLMs is very new territory, and that it really stretches the limits of what today's Large Language Models are capable of.

At this moment, you should consider it to be mere reviewing support, to help you get a first impression of your document, and automate some mundane aspects of the reviewing process.

In other words: do not blindly trust the output of the document reviewing process, no matter how detailed you make your rules.

It is probably also not a good idea to make the reviewing module available to users who cannot assess the legal merits of the output (such as most business users). Except for documents where the stakes are very low (e.g., a standard NDA for a low-value, low-risk contract), the risks associated with mistakes by the LLM are probably too high for most legal experts' appetite.

The good news is that this performance will improve automatically over time, as improved engines appear on the market.
