"AI" is creating fake legal cases and making its way into real courtrooms with disastrous results

Michael Legg and Vicki McNamara review a series of examples from around the world in which court submissions prepared with so-called "generative AI" have included fictitious authorities invented by the software itself.


We’ve seen deepfake explicit images of celebrities created by "artificial intelligence" ("AI"). "AI" has also played a hand in creating music, driverless racing cars and spreading misinformation, among other things.

It’s hardly surprising, then, that "AI" also has a strong impact on our legal systems.

It’s well known that courts must decide disputes based on the law, which is presented by lawyers to the court as part of a client’s case. It’s therefore a great cause for concern that fake law, invented by "AI", is being used in legal disputes.

Not only does this pose issues of legality and ethics, it also threatens to undermine faith and trust in global legal systems.

How do fake laws come about?

There is little doubt that "Generative AI" is a powerful tool with transformative potential for society, including many aspects of the legal system. But its use comes with responsibilities and risks.

Lawyers are trained to carefully apply professional knowledge and experience, and are generally not big risk-takers. However, some unwary lawyers (and self-represented litigants) have been caught out by "artificial intelligence".

"AI" models are trained on massive data sets. When prompted by a user, they can create new content (both text and audiovisual).

Although content generated this way can look very convincing, it can also be inaccurate. This is, in part, the result of the "AI" model attempting to “fill in the gaps” when its training data is inadequate or flawed, and is commonly referred to as “hallucination”.

In some contexts, "Generative AI" "hallucination" is not a problem. Indeed, some claim it as an example of creativity.

But if "AI" "hallucinated" or created inaccurate content that is then used in legal processes, that’s a problem – particularly when combined with time pressures on lawyers and a lack of access to legal services for many.

This potent combination can result in carelessness and shortcuts in legal research and document preparation, potentially creating reputational issues for the legal profession and a lack of public trust in the administration of justice.

It’s happening already

The best known "Generative AI" “fake case” is the 2023 US case Mata v Avianca, in which lawyers submitted a brief containing fictitious extracts and case citations to a New York court. The brief was researched using ChatGPT.

The lawyers, unaware that ChatGPT can hallucinate, failed to check that the cases actually existed. The consequences were disastrous. Once the error was uncovered, the court dismissed their client’s case, sanctioned the lawyers for acting in bad faith, fined them and their firm, and exposed their actions to public scrutiny.

Despite adverse publicity, other fake case examples continue to surface. Michael Cohen, Donald Trump’s former lawyer, gave his own lawyer cases generated by Google Bard, another "Generative AI" chatbot. He believed they were real (they were not) and that his lawyer would fact check them (he did not). His lawyer included the cases in a brief filed with the US Federal Court.

Fictitious cases have also surfaced in recent matters in Canada and the United Kingdom.

If this trend goes unchecked, how can we ensure that the careless use of generative "AI" does not undermine the public’s trust in the legal system? Consistent failures by lawyers to exercise due care when using these tools have the potential to mislead and congest the courts, harm clients’ interests, and generally undermine the rule of law.


What’s being done about it?

Around the world, legal regulators and courts have responded in various ways.

Several US state bars and courts have issued guidance, opinions or orders on "Generative AI" use, ranging from responsible adoption to an outright ban.

Law societies in the UK and British Columbia, and the courts of New Zealand, have also developed guidelines.

In Australia, the NSW Bar Association has a generative "AI" guide for barristers. The Law Society of NSW and the Law Institute of Victoria have released articles on responsible use in line with solicitors’ conduct rules.

Many lawyers and judges, like the public, will have some understanding of "Generative AI" and can recognise both its limits and benefits. But there are others who may not be as aware. Guidance undoubtedly helps.

But a mandatory approach is needed. Lawyers who use "Generative AI" tools cannot treat them as a substitute for exercising their own judgement and diligence, and must check the accuracy and reliability of the information they receive.

In Australia, courts should adopt practice notes or rules that set out expectations when "Generative AI" is used in litigation. Court rules can also guide self-represented litigants and would communicate to the public that our courts are aware of the problem and are addressing it.

The legal profession could also adopt formal guidance to promote the responsible use of "AI" by lawyers. At the very least, technological competence should become a requirement of lawyers’ continuing legal education in Australia.

Setting clear requirements for the responsible and ethical use of "Generative AI" by lawyers in Australia will encourage appropriate adoption and shore up public confidence in our lawyers, our courts, and the overall administration of justice in this country.

The authors:

Michael Legg Professor of Law, University of New South Wales, Sydney, Australia
Vicki McNamara Senior Research Associate, Centre for the Future of the Legal Profession, University of New South Wales, Sydney, Australia

References:

B.C. lawyer reprimanded for citing fake cases invented by ChatGPT
https://www.cbc.ca/news/canada/british-columbia/lawyer-chatgpt-fake-pre…

Litigant unwittingly put fake cases generated by AI before tribunal
https://www.legalfutures.co.uk/latest-news/litigant-unwittingly-put-fak…

A solicitor’s guide to responsible use of artificial intelligence
https://lsj.com.au/articles/a-solicitors-guide-to-responsible-use-of-ar…

Issues Arising from the Use of AI Language Models (including ChatGPT) in Legal Practice
https://inbrief.nswbar.asn.au/posts/9e292ee2fc90581f795ff1df0105692d/at…

How lawyers are using generative AI
https://www.liv.asn.au/Web/Law_Institute_Journal_and_News/Web/LIJ/Year/…

First published in The Conversation, 13 March 2024. Slight edits not affecting the import of the content have been made for clarity.

