
Whatever happened to the AI Bill?

What has happened to the government’s commitment to regulate artificial intelligence (AI)? After being trailed in the pre-election manifesto and in the King’s Speech, the AI Bill disappeared without warning. We examine ongoing developments which may provide at least a partial explanation, including reports that the new DSIT Secretary of State, Peter Kyle, met with tech companies last month.

In the run-up to the election, Labour said that AI would form part of its new industrial strategy. It made a manifesto promise to “ensure the safe development and use of AI models by introducing binding regulation on the handful of companies developing the most powerful AI models”. It stressed that it would “ensure our industrial strategy supports the development of the Artificial Intelligence (AI) sector”.

This was followed up in the official government briefing, introduced by the Prime Minister: “we will harness the power of artificial intelligence as we look to strengthen safety frameworks”.

As pronounced in the King’s Speech, the government undertook to “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”.

It was therefore a surprise that AI was missing from the list of Bills provided in the same official briefing. The absence of a Bill leaves a legislative gap, and arguably puts the UK behind on the global stage. Notably, Europe passed its AI Act earlier this year.

The new European law provides a framework of accountability for so-called high-risk AI systems. High risk includes AI systems that could be used by public authorities to take decisions on, for example, immigration, educational places, access to public services, employment and justice. Also on the ‘high-risk’ list are biometrics, critical infrastructure and law enforcement.

It could just be that the government has realised that an AI Bill is complex and will need time to draft. In particular, there may be some cross-over with other Bills the government is working on, such as the Smart Data Bill, the Product Safety and Metrology Bill and the Cybersecurity Bill.

The Financial Times reported that Peter Kyle, newly-appointed Secretary of State for Science, Innovation and Technology, held a meeting with Google, Microsoft and Apple at the end of July in which a prospective AI Bill was discussed. The Chancellor of the Exchequer, Rachel Reeves, was also present, according to the FT report. The report indicated that the government is planning to have an AI Bill ready by the end of this year. The Bill will focus on generative AI models.

The Information Commissioner’s Office has just closed a consultation on generative AI, and is to be complimented for its clarity of thought. Generative AI is the technology that is capable of generating text and images on its own, having ‘learned’ how to do it by trawling through extensive datasets. Generative AI was not addressed in the EU legislation, which was a mistake, because it is highly controversial and raises different issues from the high-risk systems.

EU regulators are already taking action on generative AI, using data protection laws. On 14th June this year, they asked Meta to pause training its large language models on publicly available social media content posted by users in the EU. The representation to Meta was made by the Irish Data Protection Authority on behalf of all European data protection bodies. In a public statement on the matter, Meta hints that it is expecting the UK to take a softer line, and permit Meta to begin training its AI on publicly available content in the UK: “This delay will also enable us to address specific requests we have received from the Information Commissioner’s Office (ICO), our UK regulator, ahead of starting the training.”

However, a statement on the same day from Stephen Almond, the ICO’s Executive Director, Regulatory Risk, suggests a UK pause too. He says (in full): “We are pleased that Meta has reflected on the concerns we shared from users of their service in the UK, and responded to our request to pause and review plans to use Facebook and Instagram user data to train generative AI. In order to get the most out of generative AI and the opportunities it brings, it is crucial that the public can trust that their privacy rights will be respected from the outset. We will continue to monitor major developers of generative AI, including Meta, to review the safeguards they have put in place and ensure the information rights of UK users are protected”.

Generative AI also raises issues around copyright enforcement, notably the right to use copyrighted works for training AI models. The crux of the problem is that copyright is really about the economics of the creative industries. It is the basis of the system for the distribution of creative works, and underpins the way that money is obtained for authors and publishers. For the long tail of authors who earn no money from copyright, generative AI raises the issue of plagiarism.

Publishers are already launching lawsuits against AI companies regarding the use of their data, including one from Mumsnet against OpenAI. This might sound trivial, but Mumsnet holds a lot of data about the opinions and concerns of British women, in the form of posts by its users, which may raise data protection as well as copyright issues. Copyright was a last-minute addition to the EU AI Act, as outlined by Andres Guadamuz here.

There are many other vested interests seeking changes to UK law in the AI policy space. Some are targeting data protection law, seeking changes that would potentially weaken it, in order to further their own interests.

We hope this government will do rather better than the last one, which, according to the UK Press Gazette, made a bit of a mess of it.

It may be just as well that the UK government takes a more cautious approach. It must be hoped that it will use the time wisely to understand the complexities around AI systems, before sitting down to draft the new Bill. 

----


If you cite this article, kindly acknowledge Dr Monica Horten as the author and provide a link back.

I provide independent advice on policy issues related to online content. Please get in touch via the contact page.

