One of the major issues facing the law in an age of autonomous technology is who is liable for damage caused by a machine or artificial intelligence, whether through physical injury or the failure of a contract. Insurers will have a major role to play.

If a machine such as a driverless car causes damage or injury to third parties because of a defect that the vehicle owner could not possibly have known about (so that the owner is not negligent), who is liable? The third party cannot be left with only a claim against the designer or manufacturer of the machine, because product liability can be impossible or prohibitively expensive to prove. The only practical solution is for the person operating the machine, or relying on the artificial intelligence, to have compulsory insurance, with personal liability if the insurance is not in place.

Similar issues arise if an autonomously generated contract turns out to be void, or if there is a breach of contract by the technology (hardware or software) that sits behind the transaction. It is not possible to have compulsory insurance for every contract entered into. Liability will have to be based on principles similar to vicarious liability, under which a person is liable for the actions of employees or agents who act on their behalf, in the course of their employment, or within their mandate from the contracting party.

In the meantime, insurers should be designing policies that cover injury to persons or damage to property caused by malfunctioning machines or algorithms.

Legislation will be needed to deal with these issues. Unfortunately, most legislatures lag far behind the advances in technology.

These are some of the issues discussed by Lord Hodge in the first Edinburgh FinTech law lecture in March 2019, which is worth reading.