At the end of June 2025, the High Court once again delivered a judgment reminding us that while technology can undoubtedly make our work quicker and more efficient, it cannot replace the critical duty lawyers owe to the courts to ensure the integrity of their submissions.
The issue arose during an urgent application, when it came to light that two authorities cited by the plaintiff’s counsel in their heads of argument were entirely fictitious. On closer scrutiny, it emerged that the references had been generated by an AI legal research tool. Under the pressure typical of urgent matters, counsel had inserted the citations without independently verifying their authenticity. To counsel’s credit, they candidly acknowledged the error, accepted full responsibility, and apologised unreservedly to the court, a response that distinguishes this matter from similar judgments in South Africa and England.
In addressing this incident, the judge was unequivocal: even negligent reliance on fictitious case law poses serious risks, potentially undermining confidence in the legal system itself. Referring specifically to the recent English decision in Ayinde v London Borough of Haringey, the court emphasised that practitioners are now well aware, or certainly should be, of the pitfalls of generative AI. Such technology, while impressive, can produce persuasive yet entirely fabricated authorities. Crucially, the English judgment underscored that there is simply no longer any justification for failing to cross-check AI-generated material against reliable, authoritative sources, a position with which this judgment strongly agreed.
The judge directed that the matter be investigated by the Legal Practice Council.
This growing trend underscores my view that while generative AI tools are increasingly prevalent and genuinely helpful, they must remain exactly that: tools. The responsibility for ensuring accuracy and integrity in our legal arguments remains firmly with us as practitioners. It is becoming clear that courts will not tolerate shortcuts that compromise that trust.
Discipline alone, however, will not solve the problem; we also need clear guidelines and educational support from the LPC. Lawyers need practical rules governing AI use and targeted training that clearly sets out the benefits, risks, and ethical obligations involved. Regulation on its own will not be enough: it is education and awareness that will embed responsible use into our daily practice.