David J. Marshall

July 14, 2023

David Marshall is an associate with JSS Barristers, and chair of JSS Barristers’ committee for the investigation of artificial intelligence in the practice of law.

Overview

The legal sector is grappling with the increasing use of generative artificial intelligence (GenAI) tools such as ChatGPT. Several courts, including the Manitoba Court of King’s Bench and the United States District Court for the Northern District of Texas, have issued directives governing the use of GenAI in court submissions. While these directives highlight the potential problems of GenAI and raise awareness, they may be imprecise and may conflict with lawyers’ existing ethical obligations. As GenAI evolves and its usage broadens, the challenges it presents – such as generating misleading or inaccurate information – will only grow. It is crucial for legal practitioners, courts and regulators to understand the capabilities, risks and limitations of GenAI to ensure its responsible and effective use.

Recent Developments and Courts’ Responses

Amidst the ongoing hubbub over the impact of ChatGPT and other generative artificial intelligence (“GenAI”) on the legal profession, Courts and regulators are beginning to react. Several Courts, including the Manitoba Court of King’s Bench, have now enacted directives regarding the use of GenAI in written Court materials.

The impact of GenAI on lawyers is a subject of increasing discussion. Recent stories have achieved particular notoriety and have seemingly catapulted that discussion out of the legal profession and into the general public’s awareness. In particular, the story of Mr. Schwartz and Mr. LoDuca’s ill-fated use of ChatGPT for a legal filing in New York Federal Court made international headlines when the GenAI tool generated and cited non-existent cases.

The New York Court recently issued sanctions against Mr. Schwartz and Mr. LoDuca, directing that they pay $5,000 in fines and notify each of the judges who were listed as the authors of the non-existent cases. These relatively light sanctions (in the opinion of this writer) no doubt reflect the harsh punishment the lawyers received in the court of public opinion.

A Texas federal court was one of the first to issue a directive regarding the use of GenAI in court submissions. Judge Brantley Starr of the United States District Court for the Northern District of Texas ordered that:

All attorneys appearing before the Court must file on the docket a certificate attesting either that no portion of the filing was drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being.

The Manitoba Court of King’s Bench has followed suit, issuing a practice direction stating:

With the still novel but rapid development of artificial intelligence, it is apparent that artificial intelligence might be used in court submissions.  While it is impossible at this time to completely and accurately predict how artificial intelligence may develop or how to exactly define the responsible use of artificial intelligence in court cases, there are legitimate concerns about the reliability and accuracy of the information generated from the use of artificial intelligence.  To address these concerns, when artificial intelligence has been used in the preparation of materials filed with the court, the materials must indicate how artificial intelligence was used.

While the concerns of the Courts reflected in these orders and directives are well-founded, the directives suffer from an imprecision that is potentially problematic. Further, they may be duplicative of lawyers’ existing ethical obligations to supervise delegated work, and in tension with other ethical duties.

Judge Starr’s order is notably restricted to “generative artificial intelligence (such as ChatGPT …)”. Justice Joyal’s directive in Manitoba is more general, requiring that any use of artificial intelligence be disclosed. Respectfully, of the two, Judge Starr’s approach should be preferred. The term “artificial intelligence” is extremely broad and could reasonably encompass the use of Google search, full-text Westlaw or CanLII searches, and any program that assists a writer with predictive text. It could even include Microsoft Word’s suggestions for grammar or sentence structure. It is not likely that Courts are concerned about these latter uses of “artificial intelligence”, as lawyers have used such tools for many years without apparent incident. Many lawyers may not even realize that such tools rely on embedded artificial intelligence, potentially putting them offside directives like that of the Manitoba Court of King’s Bench. It is unlikely that this was the Court’s intention in enacting its direction.

Imprecision: What Exactly Is “Artificial Intelligence”?

However, the imprecision as to what exactly artificial intelligence is, and what it does, reveals something deeper about the interaction between lawyers and technology that will have growing significance as artificial intelligence tools proliferate through the profession. The concern underlying these directives is that lawyers will be unknowingly misled by artificial intelligence, in particular through the well-known “hallucination” problem: GenAI has a well-documented penchant for simply “making things up”, as Mr. Schwartz and Mr. LoDuca unfortunately discovered.

But how different, really, is a full-text search on Westlaw from hallucinated content? Lawyers have relied on such searches in their legal research for decades without understanding exactly how they function. While Westlaw or CanLII will not return cases that do not exist, there is no guarantee that such searches will provide a lawyer with an accurate cross-section of the law. There is less daylight than one might first think between confidently but unknowingly citing a case that does not exist and confidently, unknowingly, and incorrectly describing a state of the law that a lawyer has come to understand purely through non-generative artificial intelligence like full-text searches and Google. As specialized GenAI tools for the legal profession become more commonplace, this daylight will shrink further. The issue is that one does not know what one does not know, and the apparent credibility, persuasiveness, and increasing specialization of these tools will increasingly fool lawyers into believing that they know what they do not.

Potential Ethical Conflicts

Such orders are also arguably duplicative of lawyers’ existing obligations. Lawyers are already subject to ethical obligations to:

  • Supervise those to whom work is delegated (for example, Alberta’s Code of Conduct, section 6.1);
  • Practise competently (for example, Alberta’s Code of Conduct, section 3.1);
  • In particular, have competence regarding the use of technology in practice (for example, Alberta’s Code of Conduct, section 3.1-1(j) and (k) and associated commentary);
  • Communicate with a client about what they are doing (for example, Alberta’s Code of Conduct, section 3.2-1 and associated commentary); and
  • Act as an officer of the Court, including disclosing relevant authority and fairly representing the evidence and law (for example, Alberta’s Code of Conduct, sections 5.1-1 and 5.1-2).

As an upside, the certification and disclosure requirements in Texas and Manitoba shine a broader light on the interaction between GenAI and legal practice. There are no doubt many practitioners who have never used ChatGPT for any purpose, and perhaps have no intention of doing so. It certainly does not hurt to continue to highlight this rapidly developing issue through such Court directives, broadening the conversation to include non-adopters of the technology.

On the flip side, however, these directives may be merely duplicative of, or potentially in conflict with, existing ethical obligations. The Manitoba Court of King’s Bench direction amounts to little more than “lawyers must check their work for accuracy”, which is already required. It is also in tension with lawyers’ ethical obligation to understand and adopt new technologies where they can assist in providing better or more efficient work to clients.

What about the use of GenAI for legal work that does not involve court submissions? The very real issues with the use of this nascent technology arise whether a lawyer is drafting a contract, a will, or a court brief. The risks to clients from the use of this technology for out-of-court legal work cannot be ameliorated through Court directives.

Regulatory Response Likely Warranted 

Of course, Courts are freer to act through their inherent jurisdiction to control their own process than regulators or legislatures are to revise or enact conduct guidelines. However, Courts likely have no jurisdiction beyond supervising the legal materials filed with them and the conduct of the lawyers appearing before them, so their ability to have any wider-ranging effect is limited to whatever notoriety their directives may achieve. Further, with no disrespect to the Courts, they may not be best positioned to understand and assess the legal profession’s use of GenAI and its risks, or to attempt to regulate that use. Justice Joyal’s directive humbly notes that very thing.

As Justice Joyal also notes, attempts to address these concerns engage another very real problem: this technology moves far more quickly than those charged with supervising its potential effects. While action from regulators of the legal profession is likely warranted, the task of predicting the future effects of a rapidly developing problem is daunting.

In this vein, it is also interesting to contrast the Texas and Manitoba directives in their differing application to self-represented litigants (SRLs). Judge Starr’s order applies only to attorneys; Manitoba’s directive applies to all litigants. While the implications of SRLs’ use of GenAI are outside the scope of this article, it is a certainty that the number of SRLs preparing court materials through ChatGPT and similar tools will increase. Those tasked with addressing the use of GenAI in law should carefully weigh the ameliorating effect that GenAI may have on access to justice for SRLs against the very real problems occasioned by its use. Those risks are very likely magnified when these tools are used by non-lawyers.

The Future of GenAI in Legal Practice

These tools will only get better, and their use will only become more widespread; the cautionary tale of Mr. Schwartz and Mr. LoDuca will be far from the last. We will no doubt soon see a similar story of a Court using GenAI to write its decisions, with similar effects. “May you live in interesting times”, indeed.