

SVAMC Issues Guidelines on Use of AI in Arbitration


By David Allgeyer

In an earlier article, I noted that SVAMC was leading the way by providing draft guidelines on the use of AI in arbitration. Those draft guidelines were circulated and discussed, and the final Guidelines have now been issued.

What is SVAMC?

SVAMC is the Silicon Valley Arbitration and Mediation Center. It is, in many ways, perfectly positioned to formulate guidelines for the use of AI in arbitration: a large percentage of AI companies are located in Northern California, as is SVAMC itself, although it has members around the world.

SVAMC does not administer cases. Rather, it collaborates with leading ADR providers, technology companies, law firms, neutrals, and universities to promote the merits of arbitration and mediation in resolving technology and technology-related disputes.

SVAMC also publishes the annual List of the World’s Leading Technology Neutrals known as “The Tech List®.” The Tech List is a peer-vetted list of exceptionally qualified arbitrators and mediators in the US and globally, all of whom have experience and skill in the technology sector. Members of the list were involved in drafting and reviewing the Guidelines, which you can read in full at SVAMC.org.

In the meantime, here is a summary of the Guidelines. Each Guideline includes commentary that provides further background and observations useful to arbitrators and counsel dealing with the use of AI in arbitration proceedings.

The Guidelines

Defining AI

AI is ubiquitous. Microsoft Word uses it to check and correct your spelling. Your phone uses AI to give you directions, to correct—or, in some cases, ruin—your spelling, to work your camera, to recognize your voice, and the like. That’s not what the Guidelines are worried about. Instead, in the Guidelines, “AI refers to computer systems that perform tasks commonly associated with human cognition, such as understanding natural language, recognizing complex semantic patterns, and generating human-like outputs.”

SVAMC provides seven Guidelines for use of this type of AI.

Guideline 1: Understanding the uses, limitations, and risks of AI applications

Key risks of AI use include: (1) the “black box problem,” (2) quality and representation of the training data, (3) errors or hallucinations, and (4) augmentation of biases.

Risk 1: The black box problem

The “black box problem” arises, as the commentary explains, because AI’s “outputs are a product of infinitely complex probabilistic calculations rather than intelligible ‘reasoning’... Despite any appearance otherwise, currently available AI tools lack self-awareness or the ability to explain their own algorithms.” Thus, arbitration participants are encouraged to use “explainable AI” to the extent possible. “Explainable AI” is “a set of processes and methods that allows human users to comprehend how an AI system arrives at a certain output based on specific inputs.”

Risk 2: Training data

Even in the brave new world of AI, the old adage “garbage in, garbage out” still applies. The output of an AI tool is only as good as its inputs. Participants need to understand what data was used to train a generative AI tool and, where necessary, seek out a tool trained on a more appropriate data set.

Risk 3: Errors and hallucinations

As the commentary explains, hallucinations arise because “models use mathematical probabilities (derived from linguistic and semantic patterns in their training data) to generate a fluent and coherent response to any question. However, they typically cannot assess the accuracy of the resulting output.” In other words, AI-generated material can sound great, but it may be dead wrong.

By now, we are all familiar with cases in which lawyers used AI to generate briefs and the AI simply made up the cases cited. Judges didn’t like that, and some imposed sanctions. Train yourself to use AI with an applicable data set and to check its output for accuracy. “Prompt engineering,” that is, carefully formulating the query in a way that will elicit an accurate response, can also help with—but not eliminate—this problem.

Risk 4: Historic biases

The training of an AI tool may augment biases. Historic discrimination may, for example, be carried into searches for individuals to perform important roles in arbitrations, including arbitrators, experts, and counsel. Users of AI need to be alert to possible bias, particularly when they do not know what data the system was trained on or how its algorithm works.

The Guideline

Recognizing the possible problems with use of AI, Guideline 1 requires that: “All participants using AI tools in connection with an arbitration should make reasonable efforts to understand each AI tool’s relevant limitations, biases, and risks and, to the extent possible, mitigate them.”  

Guideline 2: Safeguarding confidentiality

Arbitrators generally have obligations to maintain the confidentiality of arbitration proceedings. Lawyers generally have confidentiality obligations to their clients. Protective orders may also be in place requiring that information be kept confidential. But many AI systems are public and use the data submitted to them to train the system for the benefit of other users, so using such systems can compromise confidentiality. Other AI systems have been developed to safeguard confidentiality.

Recognizing this, the Guidelines say that participants “should not submit confidential information to any AI tool without appropriate vetting and authorization.” The commentary advises that, “before using an AI tool, participants should assess the confidentiality policies, features, and limitations of the tool, engaging technical experts as appropriate.”

Guideline 3: Disclosure

The draft Guidelines provided alternative approaches to disclosure. The first required disclosure of use of AI when “(i) the output of an AI tool is to be relied upon in lieu of primary source material, (ii) the use of the AI tool could have a material impact on the proceeding, and (iii) the AI tool is used in a non-obvious and unexpected manner.”

The alternative approach required disclosure whenever AI was used to prepare material documents or when the use of AI could have a material impact on the outcome of the proceedings. After receiving comments on the draft, SVAMC issued a single disclosure Guideline. It does not mandate disclosure of every use of AI; instead, it calls for a case-by-case analysis. It also defines what needs to be disclosed when disclosure is required. It reads:

Disclosure that AI tools were used in connection with an arbitration is not necessary as a general matter. Decisions regarding disclosure of the use of AI tools shall be made on a case-by-case basis taking account of the relevant circumstances, including due process and any applicable privilege. Where appropriate, the following details may help reproduce and evaluate the output of an AI tool: 

  1. The name, version, and relevant settings of the tool used;
  2. A short description of how the tool was used; and
  3. The complete prompt (including any template, additional context, and conversation thread) and associated output.

Guideline 4: Duty of competence or diligence in the use of AI

Of course, counsel must follow all applicable laws and rules on the use of AI. They must also be sure that all AI-generated material is accurate, and they are responsible for any uncorrected errors in their submissions.

The commentary notes that “[t]he tribunal and opposing counsel may legitimately question a party, witness, or expert as to the extent to which [an] AI tool has been used in the preparation of a submission and the review process applied to ensure the accuracy of the output.”

Guideline 5: Respect for the integrity of the proceedings and the evidence

This Guideline is short but wide-reaching. It says:

Parties, party representatives, and experts shall not use any forms of AI in ways that affect the integrity of the arbitration or otherwise disrupt the conduct of the proceedings.

The commentary specifically references the dangers of deepfakes, including the expense and difficulty in detecting them.

If arbitrators determine this Guideline has been violated, they can take appropriate action, including “striking the evidence from the record (or deeming it inadmissible), deriving adverse inferences, and taking the infringing party representatives’ conduct into account in its allocation of the costs of the arbitration.”

Guideline 6: Non-delegation of decision-making responsibilities

AI can be helpful in gathering and analyzing information, but arbitrators must not delegate actual decision-making to AI. If they decide to use AI, they must verify the accuracy of its output. And arbitrators must use their own judgment in making decisions.

Guideline 7: Respect for due process

This Guideline is also directed to arbitrators. It says:

An arbitrator shall not rely on AI-generated information outside the record without making appropriate disclosures to the parties beforehand and, as far as practical, allowing the parties to comment on it.

Where an AI tool cannot cite sources that can be independently verified, an arbitrator shall not assume that such sources exist or are characterized accurately by the AI tool.  

This Guideline promotes transparency through disclosure and reminds arbitrators to critically evaluate information derived from AI to ensure its accuracy.

Incorporating the Guidelines in your next arbitration

You may want to adopt SVAMC’s AI Guidelines to govern the use of AI in your arbitration. The Guidelines include suggested language for doing so:

The Tribunal and the parties agree that the Silicon Valley Arbitration & Mediation Center Guidelines on the Use of Artificial Intelligence in Arbitration (SVAMC AI Guidelines) shall apply as guiding principles to all participants in this arbitration proceeding.  

Summing up

The SVAMC AI Guidelines are well worth reviewing in depth. They explain the potential dangers of using AI in arbitration, provide guidance on how to avoid those dangers, and spell out participants’ responsibility for doing so.


David Allgeyer has served as arbitrator in over 100 commercial and intellectual property disputes. In 2018, he formed Allgeyer ADR, devoted to serving as an arbitrator and mediator. David is a Fellow of the American College of Commercial Arbitrators and is included on the Silicon Valley Arbitration and Mediation Center’s list of leading technology neutrals. A frequent lecturer and panelist on ADR and intellectual property matters, he is the author of the ABA book Arbitrating Patent Cases: A Practical Guide, available at shopaba.org and Amazon.com. His recent chapter, “Mediating Intellectual Property Cases,” is included in the ABA book Mediating Legal Disputes: Effective Techniques to Resolve Cases.

Managing Editor
Elsa Cournoyer

Executive Editor
Joseph Satter