Whatever happened to the AI Bill?
- Author: Monica Horten
- Published: 24 July 2024
What has happened to the government’s commitment to regulate artificial intelligence (AI)? After being trailed in the pre-election manifesto and in the King’s Speech, the AI Bill disappeared without warning. We examine ongoing developments which may provide at least a partial explanation, including reports that the new DSIT Secretary of State, Peter Kyle, met with tech companies last month.
In the run-up to the election, Labour said that AI would form part of its new industrial strategy. It made a manifesto promise to “ensure the safe development and use of AI models by introducing binding regulation on the handful of companies developing the most powerful AI models”. It stressed that it would “ensure our industrial strategy supports the development of the Artificial Intelligence (AI) sector”.
This was followed up in the official government briefing, introduced by the Prime Minister: “we will harness the power of artificial intelligence as we look to strengthen safety frameworks”.
As pronounced in the King’s Speech, the government undertook to “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”.
It was therefore a surprise that an AI Bill was missing from the list of Bills provided in the same official briefing. The absence of a Bill leaves a legislative gap, and arguably puts the UK behind on the global stage. Notably, Europe passed its AI Act earlier this year.
The new European law provides a framework of accountability for so-called high-risk AI systems. High risk includes AI systems that could be used by public authorities to take decisions on, for example, immigration, educational places, access to public services, employment, and justice. Also on the ‘high risk’ list are biometrics, critical infrastructure and law enforcement.
It could just be that the government has realised that an AI Bill is complex and will need time to draft. In particular, there may be some crossover with other Bills the government is working on, such as the Smart Data Bill, the Product Safety and Metrology Bill and the Cybersecurity Bill.
The Financial Times reported that Peter Kyle, the newly-appointed Secretary of State for Science, Innovation and Technology, held a meeting with Google, Microsoft and Apple at the end of July in which a prospective AI Bill was discussed. The Chancellor of the Exchequer, Rachel Reeves, was also present, according to the FT report. The report indicated that the government is planning to have an AI Bill ready by the end of this year. The Bill will focus on generative AI models.
The Information Commissioner’s Office has just closed a consultation on generative AI, and is actually to be complimented for its clarity of thought. Generative AI is the technology that is capable of generating text and images on its own, having ‘learned’ how to do it by trawling through extensive datasets. Generative AI was not addressed in the EU legislation, which was a mistake, because it is highly controversial and raises different issues from the high-risk systems.
EU regulators are already taking action on generative AI, using data protection laws. On 14th June this year, they asked Meta to pause training its large language models on publicly available social media content posted by users in the EU. The representation to Meta was made by the Irish Data Protection Authority on behalf of all European data protection bodies. In a public statement on the matter, Meta hints that it is expecting the UK to take a softer line, and permit Meta to begin training its AI on publicly available content in the UK: “This delay will also enable us to address specific requests we have received from the Information Commissioner’s Office (ICO), our UK regulator, ahead of starting the training.”
However, a statement on the same day from Stephen Almond, the ICO’s Executive Director, Regulatory Risk, suggests there will be a UK pause too. He says (in full): “We are pleased that Meta has reflected on the concerns we shared from users of their service in the UK, and responded to our request to pause and review plans to use Facebook and Instagram user data to train generative AI. In order to get the most out of generative AI and the opportunities it brings, it is crucial that the public can trust that their privacy rights will be respected from the outset. We will continue to monitor major developers of generative AI, including Meta, to review the safeguards they have put in place and ensure the information rights of UK users are protected”.
Generative AI also raises issues around copyright enforcement, notably the right to use copyrighted works for training AI models. The crux of the problem is that copyright is actually about the economics of the creative industries. It is the basis of the system for the distribution of creative works and underpins the way that money is obtained for authors and publishers. For the long tail of authors who earn no money from copyright, Generative AI raises the issue of plagiarism.
Publishers are already launching lawsuits against AI companies regarding the use of their data, including one from Mumsnet against OpenAI. This might sound trivial, but Mumsnet holds a lot of data about the opinions and concerns of British women, in the form of posts by its users, which may raise data protection as well as copyright issues. Copyright was a last-minute addition to the EU AI Act, as outlined by Andres Guadamuz here.
There are many other vested interests seeking changes to UK law in the AI policy space. Some are targeting data protection law, seeking changes that potentially weaken it, in order to further their own interests.
We hope this government will do rather better than the last one, which, according to the UK Press Gazette, made a bit of a mess of it.
It may be just as well that the UK government takes a more cautious approach. It must be hoped that it will use the time wisely to understand the complexities around AI systems, before sitting down to draft the new Bill.
----
You might also like: AI and tech: Asks for the new government
If you cite this article, kindly acknowledge Dr Monica Horten as the author and provide a link back.
I provide independent advice on policy issues related to online content. Please get in touch via the contact page.