
The Pitfalls of Artificial Intelligence

This article discusses the case of Harber v HMRC, which underscores the importance of checking authorities against authentic legal sources and warns of the risks of relying solely on AI-generated content, particularly for unrepresented litigants who may lack the means to verify legal information independently.

The Case: Harber v HMRC

In the case of Harber v HMRC, Mrs Harber was appealing against a “failure to notify” penalty of £3,265.11. Judge Anne Redston, presiding over the hearing, held that:

“Mrs Harber provided the Tribunal with the names, dates and summaries of nine First-tier Tribunal decisions in which the appellant had been successful… However, none of those authorities were genuine; they had instead been generated by artificial intelligence.”

In dismissing Mrs Harber’s appeal, the Judge noted that providing authorities that are not genuine and asking a court or Tribunal to rely on them is a serious and important issue. The Tribunal went on to make the following observations:

  • Mrs Harber accepted that it was “possible” that the cases in the Response had been generated by an AI system.
  • Mrs Harber did not know the AI cases were not genuine.
  • The authorities were plausible but not genuine: the fabricated cases bore similarities to real ones, such as a shared surname or the same area of law.
  • The use of AI generated judgments causes the Tribunal and HMRC to waste time and public money, and this reduces the resources available to progress the cases of other court users who are waiting for their appeals to be determined.
  • The Tribunal adopted the dicta of Judge Castel in Mata v Avianca 22-cv-1461 (PKC), who said that the practice of using fake judgments “promotes cynicism” about judicial precedents. This matters because the use of precedent is “a cornerstone of our legal system” and “an indispensable foundation upon which to decide what is the law and its application to individual cases”, as Lord Bingham said in Kay v LB of Lambeth [2006] UKHL 10 at [42].
  • Mrs Harber was fortunate that her appeal concerned reasonable excuse, an area of law that is settled and routinely traversed by the Tribunals, so the use of AI-generated authorities was likely to have less impact on the outcome than in many other types of litigation.

Unrepresented litigants are not the only Tribunal users who have been caught out by AI. In the US case of Mata v Avianca 22-cv-1461 (PKC), two attorneys sought to rely on fake cases generated by ChatGPT.

This case demonstrates the risks of relying on AI to produce legal arguments without cross-referencing authorities against legitimate sources such as the FTT website. Content generated by AI may appear well written and correct but, on closer inspection, is prone to errors.

Unrepresented litigants are at particular risk because AI may be their only source of assistance or advice, leaving them unable to independently verify the legal information it provides.

The AI Judicial Guidance (December 2023) advises judges to be aware that Tribunal users may have used AI tools. If it appears that AI may have been used to prepare submissions or other documents, judges may inquire about this and ask what checks for accuracy have been undertaken (if any).

In Conclusion

The case of Harber v HMRC serves as a stark reminder of the risks and consequences associated with the use of artificial intelligence in legal proceedings. While AI technology undoubtedly has the potential to aid legal research and argumentation, its misuse, as demonstrated in this case, can undermine the integrity of the judicial process.

As we move forward, it is imperative for both legal practitioners and judges to exercise caution and diligence when encountering AI-generated content. The reliance on authentic legal sources and the rigorous verification of evidence must remain paramount to uphold the principles of justice and fairness.

Furthermore, this case underscores the need for ongoing dialogue and guidance within the legal community regarding the ethical and practical considerations surrounding AI in law. By fostering transparency, accountability, and responsible usage, we can harness the benefits of AI while mitigating its associated risks.

Ultimately, the Harber case offers a valuable lesson for all stakeholders in the legal system, highlighting the importance of maintaining trust, credibility and adherence to established legal standards in an increasingly technology-driven world.

How can Morr & Co help?

If you have any questions or would like any further information on the content of this article, please do not hesitate to contact Morr & Co’s Dispute Resolution team, who will be happy to help.

Contact us today on 01737 854 500 or email [email protected] to make an appointment to find out more.

Disclaimer

Although correct at the time of publication, the contents of this newsletter/blog are intended for general information purposes only and shall not be deemed to be, or constitute, legal advice. We cannot accept responsibility for any loss as a result of acts or omissions taken in respect of this article. Please contact us for the latest legal position.
