
Article 28 - How UNIONE™ Rules v3.0 Address AI-Generated Evidence in International Arbitration

AI-generated evidence is already appearing in arbitration proceedings. No major institution has codified specific rules to govern it - until now. A clause-by-clause analysis of Article 28 and what it means in practice.

UNIONE™ Legal Team · Rules Commentary · 2 August 2025 · 12 min read · AI in arbitration

AI-generated content is arriving in international arbitration whether the institutions are ready or not. In proceedings filed in 2024, we have seen AI-assisted expert reports, AI-generated financial projections used as quantum evidence, AI-drafted witness statements reviewed and signed by witnesses, and - in at least one case - AI-produced document sets submitted as part of a claimant's evidentiary bundle.

The arbitration community's response has been largely ad hoc. Tribunals have dealt with AI evidence issues through general evidence principles, applying IBA Rules on the Taking of Evidence by analogy, and relying on counsel's good faith. No major institution - not ICC, not SIAC, not LCIA - has codified specific rules governing AI-generated or AI-assisted evidence.

UNIONE™ Rules v3.0 changes this. Article 28 is the first codified AI evidence framework in any international arbitration institution's rules. This article analyses what it contains, why each element matters, and what it means in practice for parties, counsel, and tribunals.

The Problem Article 28 Solves

Before examining Article 28 itself, it helps to understand the problem it addresses. AI-generated evidence creates three distinct challenges that general evidence principles handle poorly.

Authentication. Traditional evidence authentication asks: is this document what it claims to be? AI-generated output raises a prior question: what process generated this output, what were the inputs, and how reliable is the methodology? A financial model produced by a trained AI system may be internally consistent and numerically sophisticated - but if the inputs were wrong, or the AI's training data was biased toward a particular outcome, the output is unreliable regardless of how convincing it appears.

The expert responsibility gap. When an expert uses AI tools to assist with analysis, questions arise about the scope of the expert's independent judgment. Has the expert verified the AI output? Has the expert disclosed the AI's role? Is the expert's signature on the report a genuine attestation of the expert's own views - or is it a countersignature on AI-generated content the expert has not independently verified?

The disclosure gap. Without a disclosure obligation, a party can present AI-assisted analysis as if it were purely human expert work. The tribunal has no way of knowing that the analysis was AI-assisted, cannot assess the AI methodology, and cannot form a view on whether the AI's limitations affect the reliability of the output.

"The question is not whether AI will appear in arbitration proceedings. It already has. The question is whether there are rules to govern it - or whether parties and tribunals are left to improvise."

What Article 28 Actually Says

Article 28 has six operative provisions. Each addresses a distinct aspect of the AI evidence problem.

28.1 - Tribunal Authority

The first clause preserves tribunal authority over AI evidence in all its dimensions: admissibility, authentication, reliability, and weight. This is deliberate. The rules do not attempt to pre-determine whether AI-generated evidence is admissible - that would be both overreaching and impractical, given how rapidly AI technology is developing. Instead, they give the tribunal the full toolkit to deal with whatever AI-related evidence issues arise.

Rules v3.0
Article 28.1
"The Tribunal shall have full authority to determine the admissibility, authentication, reliability, and weight of AI-generated or AI-assisted evidence."

28.2 - Disclosure Obligations for AI-Generated Evidence

This is the core provision - and the one with the most immediate practical consequence. A party seeking to rely on AI-generated evidence must satisfy four requirements: disclose that the evidence was generated or materially assisted by an AI tool; identify the AI system used; provide sufficient information about inputs, methodology, and known limitations to enable reliability assessment; and, where the AI output is central to the claim or defence, provide expert verification of its reliability.

The "central to the claim or defence" threshold for expert verification is important. Not every use of AI tools in evidence preparation requires expert verification - that would be both impractical and disproportionate. But where the AI output is the foundation of a key claim - a damages calculation, a technical feasibility analysis, a market valuation - the party relying on it must have that output independently verified before presenting it as evidence.

28.3 - Expert Reports Using AI

Article 28.3 addresses the expert responsibility gap directly. Expert reports that have been materially assisted by AI tools must disclose this - and must confirm that the expert has independently reviewed and verified all AI-assisted content. The expert's signature remains a genuine attestation of the expert's own independent judgment, not merely a countersignature on AI output.

⚠️ What "Materially Assisted" Means
Article 28.3 uses the phrase "materially assisted by AI tools." UNIONE™'s interpretation is that this covers any use of AI that shaped the substance of the analysis - financial modelling, statistical analysis, document review, legal research synthesis. It does not cover AI grammar or formatting tools that do not affect the substance. The distinction matters: an expert who used AI to format their report has no disclosure obligation. An expert whose quantum methodology was generated by an AI model has a full disclosure obligation under Article 28.3.

28.4 - Counsel and Party Use of AI

Article 28.4 explicitly permits counsel, parties, and the tribunal to use AI tools for legal research, document review, and case preparation - without a disclosure requirement, provided the AI use does not relate directly to the substance of evidence being adduced. This provision resolves a question that was generating significant uncertainty in the arbitration community: whether AI-assisted legal research needed to be disclosed. The answer under UNIONE™ Rules is no - unless that research directly forms the basis of evidence.

28.5 - Technical Expert on AI Evidence

Where the reliability of AI-generated evidence is genuinely in dispute - and neither party's expert has the technical credentials to address the AI methodology - Article 28.5 allows the tribunal to appoint a technical expert specifically to assess the AI evidence. This is a proportionate measure: it will rarely be needed, but its availability ensures the tribunal is never stuck with AI evidence it cannot properly evaluate.

28.6 - Living Guidelines

The final provision may be the most forward-looking. UNIONE™ commits to publishing and periodically updating Guidelines on the Use of Artificial Intelligence in UNIONE™ Proceedings. AI technology is developing faster than any rulebook can capture. Article 28.6 acknowledges this - and commits to an ongoing institutional response rather than a single fixed framework.

What This Means in Practice

For counsel preparing for UNIONE™ arbitration, Article 28 creates several practical obligations that should be built into the case preparation process from the start.

First, conduct an AI audit of all expert reports before they are filed. Counsel should specifically ask each expert: was any part of this analysis generated or materially assisted by an AI tool? If yes - disclose and verify, as required by Articles 28.2 and 28.3.

Second, maintain AI methodology documentation for any AI-generated evidence you intend to rely on. Article 28.2 requires you to provide "sufficient information about the inputs, methodology, and known limitations." If you don't have that documentation, you may find your AI evidence challenged or excluded.

Third, anticipate AI challenges to an opponent's evidence. Article 28 gives opposing counsel and the tribunal the basis to challenge AI evidence on methodology grounds. Where an opponent presents sophisticated quantitative analysis, it is now appropriate to ask directly: was this AI-generated? If so, what system, what inputs, and what were the known limitations?
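The three steps above amount to a pre-filing checklist, and teams that track filings programmatically could sketch it as below. Everything here is illustrative - the obligations come from the Rules, not from any particular tooling, and the dictionary keys are our own invention.

```python
def prefiling_ai_audit(reports):
    """Flag expert reports needing Article 28 disclosure work before filing.
    `reports` is a list of dicts with illustrative keys of our own choosing."""
    issues = []
    for r in reports:
        if not r.get("ai_assisted"):
            continue  # no AI involvement: no Article 28.2/28.3 disclosure arises
        if not r.get("disclosed"):
            issues.append(f"{r['name']}: AI assistance not disclosed (Arts. 28.2/28.3)")
        if not r.get("methodology_docs"):
            issues.append(f"{r['name']}: no inputs/methodology/limitations "
                          f"documentation (Art. 28.2)")
        if r.get("central") and not r.get("expert_verified"):
            issues.append(f"{r['name']}: central AI output lacks expert "
                          f"verification (Art. 28.2)")
    return issues

# A central AI-assisted quantum report with no disclosure work done yet
# raises all three flags; a purely human report raises none.
reports = [
    {"name": "Quantum report", "ai_assisted": True, "disclosed": False,
     "methodology_docs": False, "central": True, "expert_verified": False},
    {"name": "Delay report", "ai_assisted": False},
]
for issue in prefiling_ai_audit(reports):
    print(issue)
```

The point of the sketch is the ordering: the audit question ("ai_assisted") comes first, and the documentation and verification checks only arise once it is answered yes.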

The Competitive Context

ICC, SIAC, and LCIA have not codified AI evidence rules. Their approach - applying general evidence principles and tribunal discretion - worked adequately when AI evidence was rare. As AI-generated content becomes routine in commercial disputes, the absence of a framework creates uncertainty, asymmetry between sophisticated and unsophisticated parties, and litigation risk around admissibility challenges.

Article 28 is not the last word on AI evidence in arbitration. It is the first. As AI technology continues to develop - and as AI-generated evidence becomes more sophisticated and harder to distinguish from human expert work - the rules governing it will need to evolve. UNIONE™'s commitment to periodic guideline updates is a recognition that Article 28 is a foundation, not a ceiling.

For parties and counsel who expect AI to play a role in their disputes - and in 2025, that means almost everyone - the presence of a codified AI evidence framework is not a minor procedural detail. It is a structural advantage that reduces uncertainty, protects against unfair surprise, and gives the tribunal the tools to do its job properly.