Artificial intelligence

Growing litigation risk

Businesses are increasingly integrating AI into their working practices. McKinsey recently reported that 65% of respondents to its “State of AI” survey now regularly use generative AI in their organisations, nearly double the figure from the equivalent survey last year. Given the pace at which AI is developing, legislators across the globe are racing to catch up. The EU AI Act became law in August and, in the UK, an Artificial Intelligence Bill is expected. The EU has also taken steps to make it easier for consumers to bring claims against companies, including when they are harmed by AI. The revised EU Product Liability Directive (to be implemented in Member States by December 2026) reverses the burden of proof in some circumstances, so that the burden falls on the defendant to show that the relevant product (which can include AI) was not defective. 

Exponential growth in AI litigation very possible 

However, to many, AI still remains a “black box”: do we really know how powerful it is? What does the AI model really know or do? Could it be hallucinating or biased? With the ever-increasing use of AI, its growing complexity and expanding legislation, there is clear potential for exponential growth in litigation arising out of the manufacture and use of AI. 

Litigation relating to AI has to date primarily focussed on the development of the relevant AI, with a number of claims commenced against manufacturers on the basis of alleged breaches of intellectual property rights. However, as businesses increasingly integrate AI into their working practices, claims relating to the use of AI have also arisen under both contract and tort law. Before the English courts, in Leeway Services Ltd v Amazon, Leeway Services alleged that Amazon’s use of AI systems resulted in its wrongful suspension from trading on Amazon’s online marketplace, and in Tyndaris SAM v MMWWVWM Limited (VWM), VWM argued that Tyndaris had misrepresented the capabilities of an AI-powered system. Neither of these cases has reached trial, but in the recent Canadian decision Moffatt v Air Canada, Air Canada was found to have failed to take reasonable care to ensure the accuracy of responses provided to customers by its chatbot. 

The regulators are taking notice

Regulators are also increasingly active in respect of AI. For example, in the UK the Information Commissioner’s Office has published a strategic approach to AI, and the Financial Conduct Authority is seeking to develop its understanding of the risks and opportunities AI presents to the financial services sector. We may also be at the start of a period of increased regulatory focus on whether companies have made false or misleading public statements regarding their use of AI: in March 2024, the US Securities and Exchange Commission announced that it had reached settlements with two investment advisers, Delphia (USA) Inc. and Global Predictions Inc., over so-called “AI washing”. With greater regulatory focus, the chances of private claims piggybacking on adverse regulatory findings increase. 

Mass claims a risk

Group litigation, in particular, is a key area of risk for both manufacturers of AI and businesses relying on it. Given the characteristics of AI, it is easy to see how a group claim could arise: for example, given the speed at which AI operates, an error could affect a large group of people before it is even spotted. Whilst the alleged loss suffered by each individual claimant could be small, the aggregated harm across the group could be very large. Whilst the UK Supreme Court’s 2021 decision in Lloyd v Google may have given businesses some comfort that England is a jurisdiction in which it is difficult to pursue group claims, there are a number of ways in which such claims can be structured before the English courts, for example: 

  • The Supreme Court indicated in Lloyd v Google that a group claim could be structured such that issues common across claimants are considered during a first stage heard on a representative basis, with claimants then individually pursuing any losses they suffered in reliance on that first representative decision. Whether there are common issues is a fact-specific question, and the Court of Appeal has considered it in a number of recent cases. The court approved this approach in Commission Recovery Limited v Marks and Clerk LLP, whilst rejecting it in Prismall v Google UK Ltd and DeepMind Technologies Ltd, having found that the proposed group of claimants did not share a common interest in respect of the alleged misuse of their medical data. We are currently awaiting the Court of Appeal’s decision on a proposal to structure a securities law claim, in which investors are seeking to recover losses suffered as a result of allegedly untrue statements contained in various public documents, in a similar manner (Wirral Council v Indivior PLC / Reckitt Benckiser Group). Given the potential for an increase in regulatory decisions regarding “AI washing”, the Court of Appeal’s decision in this case may be pivotal to how future group claims concerning alleged untrue public statements about AI could be structured.
  • Whilst cumbersome and potentially expensive, claimants can pursue a group claim by obtaining a group litigation order (GLO), which provides for the joint case management of claims giving rise to common or related issues of fact or law. However, a GLO is not strictly required, and a large number of claimants could instead pursue their claims as individual parties in one set of multiparty proceedings. This is the approach adopted by around 620,000 claimants in the ongoing Município de Mariana v BHP Group proceedings, in which we act for BHP. 
  • We also continue to see a large number of claims being commenced under the collective proceedings regime in the UK’s Competition Appeal Tribunal (CAT). Whilst such claims must be pursued on the basis of a breach of competition law, there is an ongoing trend of parties seeking to frame what are in effect consumer-protection actions as claims for anti-competitive conduct in order to benefit from this regime, and it is easy to see how this could also arise in respect of claims regarding AI. We are acting on a number of these claims, ranging from the defence of a train operator in respect of historical sales of a certain train ticket type, to a water company in the first environmental collective proceedings brought before the CAT. It is noteworthy that tech giants are also increasingly a target of such claims, with cases currently being pursued against Microsoft, Meta, Alphabet / Google and Apple, and a new high-profile claim having recently been commenced by the UK consumer champion Which?

Whichever route is adopted, it seems only a matter of time before a group claim arising out of the development or use of AI is commenced before the English courts. 

Who to contact
Rob Sumroy, Partner
Natalie Donovan, PSL Counsel and Head of Knowledge Tech and Digital
Rob Brittain, PSL Counsel and Head of DI Knowledge

This material is provided for general information only. It does not constitute legal or other professional advice.