
AI and tech: Asks for the new government

Artificial intelligence will spearhead an ambitious industrial strategy for Keir Starmer’s government. What are the key asks for tech policy?

Artificial intelligence is top of the list for Peter Kyle, the new Secretary of State for Science, Innovation and Technology. It will drive policy, following the innovation thread that ran through the Labour Party election manifesto. In its first few days in office, Labour announced a plan to co-ordinate AI across government, and an AI Bill is expected in the King’s Speech. Upbeat rhetoric calls for Britain to embrace the revolutionary potential of AI, to drive industrial change and efficiency in public services.

All of this is reminiscent of a speech by another Labour Prime Minister over 60 years ago, Harold Wilson, who famously called for the “white heat of technology” to forge Britain’s new prosperity. However, the 21st-century heat is arguably not white but red, and fraught with dangers unthought of back then. It is the duty of government to safeguard our rights, guard against malpractice and ensure the public can trust systems deployed by private or State actors. Public policy should seek to avoid a digital Liz Truss moment, or a new Horizon scandal.

AI is not a single thing, but a complex set of technologies with multiple purposes, aims and issues. Government needs to take it on with a long-term plan. It may be a key plank of industrial strategy, but policy needs to be multi-functional, taking on board the rights and interests of all stakeholders. Ultimately, AI systems must serve the people.

An Ethical AI   
In the run-up to the election, Labour made a manifesto commitment to regulate AI, promising to ensure the safe development and use of AI models by introducing binding regulation on the handful of companies developing the most powerful AI models. It sounds good, but building trust will mean:

1. Promote ethical AI development. Ethical development, put simply, refers to a “do no harm” principle and a precautionary approach. Transparency, accountability and systematic record-keeping throughout the lifecycle of the model are important grounding principles.
2. Safeguard our rights: ensure that artificial intelligence systems function in a way that is consistent with the UK’s obligations to protect human rights, including privacy and freedom of expression, religious beliefs, and rights to work and education.
3. Protect in law against management failures and cover-ups: learn the lessons from Horizon, the discredited Post Office system. AI multiplies the challenges and pitfalls.
4. Legislate for a strong, independent regulator with effective enforcement powers.

A Rights-based AI 
AI can interfere with (violate) our fundamental rights. It can also facilitate interference with those rights by others. The potential intrusion on privacy and free expression is high, not to mention the challenges of systems taking decisions about individual rights to employment or benefits. It is the government’s duty to ensure that citizens are protected from such violations and that safeguards are in place. Strong data protection rules are critical if AI is to build public trust and thrive:

1. Regulate all artificial intelligence systems to protect against arbitrary surveillance practices that would interfere with privacy and freedom of expression.
2. Ensure strong privacy safeguards in the face of AI-driven bulk surveillance practices that collect, track and analyse personal data, and which lie at the heart of many AI models. No longer the sole province of the intelligence services, these techniques are now within the capability of corporate data teams.
3. Ensure compliance with human rights standards for the AI-driven measures that will be required under the Online Safety Act. Safeguard against misuse when large numbers of data points are gathered for the purpose of determining illegal content, user conduct or intent. Mandate rigorous transparency of databases used for content screening.
4. Regulate for procedural safeguards against over-moderation and errors, in particular for AI-based content moderation, age assurance, and any AI system taking decisions about individuals.
5. Prohibit the development of AI models that present a very high risk to privacy, such as facial recognition and biometric surveillance.

A trustworthy AI 

For AI to succeed, trust will be paramount:
1. Explicitly prohibit a general monitoring obligation.
2. Explicitly prohibit AI-driven screening of content on end-to-end encrypted platforms (which Ofcom acknowledges is not technically feasible, and which would introduce systemic vulnerabilities and weaknesses).
3. Mandate regulatory safeguards and accountability for age assurance systems, including for third-party suppliers, and especially where biometrics are used to assess a child’s age, or where the systems are linked to permissions for content access.
4. Amend the Online Safety Act to provide for judicial oversight, as proposed by Labour peers during its passage through the Lords.
5. Ensure the UK preserves its adequacy status on data protection and stays in line with Europe. The DPDI Bill (currently shelved) would have put the UK’s adequacy status with the EU at risk and caused problems for British businesses trading in Europe.
6. Regulate generative AI, which is arguably high risk but in a different way from the systems above, because of the possibility that it could learn and spread disinformation, hate or extremist ideas, or plagiarise books and knowledge without source or citation.

---

If you cite this article, kindly acknowledge Dr Monica Horten as the author and provide a link back.

I provide independent advice on policy issues related to online content. Please get in touch via the contact page.


About Iptegrity

Iptegrity.com is the website of Dr Monica Horten, independent policy advisor on online safety, technology and human rights, advocating to protect the rights of the majority of law-abiding citizens online. She is an independent expert on the Council of Europe Committee of Experts on online safety and empowerment of content creators and users. Published author and post-doctoral scholar, with a PhD from the University of Westminster and a DipM from the Chartered Institute of Marketing. Former telecoms journalist, experienced panellist and Chair, cited in the media, e.g. BBC, iNews, Times, Guardian and Politico.