As the EU’s Artificial Intelligence (AI) Act comes into force, early indications suggest that the UK could miss out on a golden opportunity to follow suit, write Steven Farmer, Scott Morton, and Mark Booth.
Steven Farmer is a partner at international law firm Pillsbury Winthrop Shaw Pittman and a member of the firm’s UK/EU AI Taskforce. Scott Morton and Mark Booth are the firm’s counsel and associates, respectively, who advise global businesses on compliance with the EU’s AI Act.
All eyes have once again turned to AI, with the EU’s landmark regulation coming into force last month, starting the countdown to compliance. The first impact will be felt on 2 February 2025, with the ban on so-called ‘prohibited’ AI practices, followed by a phased implementation of its other provisions largely from August 2026.
Greeted with much fanfare, the EU AI Act has been broadly hailed as the world’s first comprehensive regulation of artificial intelligence. There is a perception that it will act as a framework for other countries grappling with regulating the rapidly advancing technology, contributing to the ‘Brussels effect.’
After all, where the EU moved first, others have sought to follow, typified by the publication of Taiwan’s comprehensive draft AI Act. The UK, notably, has not followed suit. The previous government adopted a pro-innovation position and rejected introducing comprehensive AI regulation, fearing it could stifle innovation.
Expectations that the UK would change course may, briefly, have been raised by the result of July’s general election.
Not long after the vote, it was reported that Keir Starmer would be introducing an AI Bill, expected to be announced in the King’s Speech earlier this year.
But the bill was conspicuously absent from the list of 40 new laws set out, with the speech instead simply referring to strengthening AI “safety frameworks” and establishing “appropriate legislation: placing requirements on those developing the most powerful artificial intelligence models.”
While this could indicate a future alignment with the specific section of the EU AI Act targeting general purpose AI models, including those with “systemic risk” – a later addition to the Act motivated by the explosion in popularity of ChatGPT and other large language models – at this point, little detail has been provided.
Fighting to the front of the pack
With AI technology and the AI industry landscape evolving so rapidly, it is understandable why the UK may be reluctant to introduce comprehensive legislation at this stage. Many countries are currently jostling for position at the forefront of the AI pack, and a pro-innovation legislative approach could win the hearts of AI developers, deployers and users alike.
But businesses also favour certainty. By setting out its stall early, the EU has provided a level of certainty as to the guardrails it will expect to see in the development and deployment of AI. This contrasts with the uncertainty businesses arguably face in the UK and elsewhere.
What is more, the EU AI Act is seen by many as striking the right balance between fostering innovation and ensuring safety. Only the most extreme AI use cases—social scoring and predictive policing, for instance—will fall within the ‘prohibited category.’ The overarching requirements relating to bias, transparency, and AI literacy are also to be welcomed.
Or at risk of falling behind?
It is widely expected that the rhetoric of unlocking a Brexit dividend by cutting EU red tape will be softened as the new government looks to “reset” its relationship with the EU.
Despite this, the current UK government seems to be continuing the course set by its predecessor and leaving AI regulation to respective industry regulators in the UK. This approach risks creating a complex regulatory mosaic for businesses to grapple with and raises concerns about whether legacy regulators can effectively handle cutting-edge technology regulation.
In addition, given the scale and influence of the EU’s vast market of over 450 million people, UK AI companies may find it pragmatic to comply with the EU AI Act regardless of the UK’s stance.
The EU’s market size significantly incentivises businesses to align their practices with the Union’s standards. In other words, the sheer weight of numbers across the Channel could undermine the UK’s attempt to chart a new course.
This cuts to the very heart of the issue. Even if the UK adopted a different regulatory framework, it might have limited impact. Companies seeking to remain competitive on a global scale are likely to prioritise compliance with the more stringent EU regulations, diminishing the practical effect of any divergent UK rules.
By not being quicker off the mark to propose a solid regulatory framework, the UK has arguably missed an opportunity to advance itself in the global AI regulatory race.