In a January 2025 judgment, the High Court offers a stark reminder of the ethical obligations that underpin legal practice when placing information before a court. What began as a routine application for leave to appeal escalated into a significant cautionary tale, culminating in the referral of the legal practitioners to the Legal Practice Council for investigation. The judgment highlights the perils of unverified research and the nuanced role of artificial intelligence in legal work.
At the heart of the case was a supplementary notice of appeal drafted by a candidate attorney, which contained multiple citations of case law. These references were verified neither by the principal attorney nor by counsel before they were presented to the court. While drafting the judgment, the presiding judge attempted to locate the cases, only to find that most of them did not exist. Of the nine cases cited, only two could be verified as existing; one of those two carried an incorrect citation, and neither was relevant to the matter.
This discovery prompted the judge to invite explanations from all involved. Counsel initially claimed that she had relied on references provided by the candidate attorney, and that she had not had sight of the cases herself because she was ‘overbooked’ and working under considerable pressure. The candidate attorney, in turn, stated that the citations were sourced from law journals accessed via her university portal, but was unable to provide specific details. When explicitly asked whether she had used an AI tool such as ChatGPT for her research, she denied it. The firm’s principal later appeared before the court, offered inconsistent explanations, and failed to substantiate the cited authorities. Despite several opportunities to produce the referenced cases, none of the practitioners provided credible information. Ultimately, the court concluded that the citations were likely fabricated.
The judge’s commentary was unequivocal in condemning this conduct. Legal practitioners, the judgment emphasises, bear an unyielding duty to ensure the accuracy of all materials presented to the court. This duty cannot be delegated—not to junior colleagues, not to technological tools, and certainly not to artificial intelligence. While the judgment stops short of definitively attributing the fabrication to AI, the pattern of inaccuracies mirrors the phenomenon of AI “hallucinations,” wherein generative tools produce plausible-sounding but entirely fictitious information.
The judge asserts that “relying on AI technologies when doing legal research is irresponsible and downright unprofessional”. However, this oversimplifies the issue. While this case vividly demonstrates the dangers of treating AI as a primary source of legal knowledge, it should not preclude the appropriate and responsible use of such tools in legal practice. When employed thoughtfully, AI can augment practitioners’ capabilities: streamlining research, identifying patterns, and, in appropriate circumstances and with the appropriate tools, even generating initial drafts. But these outputs must always be rigorously reviewed and verified against authoritative sources. This, of course, requires investment by practitioners, and by the law firms within which they operate, in education and in the more advanced tools available for these purposes.
We must be cognisant of the dual imperatives of innovation and accountability that these new technologies place on all of us. AI is not a substitute for the judgment and expertise that define professionals. It is a tool that can enhance those qualities when used correctly. The lesson here is not to reject AI outright but to integrate it responsibly by investing in training and fostering a culture of ethical vigilance.