In the rapidly evolving world of legal practice, artificial intelligence (AI) has begun to play an instrumental role, offering promising opportunities to streamline research, facilitate case analysis, and improve efficiency. However, a recent magistrates' court case, Parker v Forsyth N.O. & Others, serves as a pivotal reminder that these tools must be used with careful discretion and a healthy dose of traditional due diligence.
The plaintiff’s attorneys used ChatGPT for legal research, but unfortunately, they accepted the generated results without verifying their accuracy. This incident, rather than discouraging the use of AI in legal practice, underscores the importance of understanding how to use it properly.
The court’s commentary on this issue, as set out in paragraph 90 of the judgment, is particularly instructive:
“In this age of instant gratification, this incident serves as a timely reminder to, at least, the lawyers involved in this matter that when it comes to legal research, the efficiency of modern technology still needs to be infused with a dose of good old-fashioned independent reading. Courts expect lawyers to bring a legally-independent and questioning mind to bear on, especially, novel legal matters, and certainly not to merely repeat in parrot-fashion, the unverified research of a chatbot.”
AI tools like ChatGPT can be incredibly useful for, among other things, exploring legal concepts and possible legal arguments or replies at a high level. They provide a starting point for further research and analysis. However, they should not be the sole or final source of legal research. All information and sources, AI-generated or not, must be independently cross-verified.
Furthermore, client confidentiality remains paramount. AI tools should not be given access to any confidential client data, as the security of that data cannot be guaranteed.
Banning the use of AI in one's practice is unlikely to be a practical or beneficial solution. These tools, when used correctly, can be a valuable asset for legal professionals and, even with a ban, staff are likely to use them regardless. Instead, we should focus on learning how to use AI correctly and safely, and on understanding its potential and limitations, so as to prevent further incidents like this one.