AI and tech: Asks for the new government
- Author: Monica Horten
- Published: 11 July 2024
Artificial intelligence will spearhead an ambitious industrial strategy for Keir Starmer’s government. What are the key asks for tech policy?
Artificial intelligence is top of the list for Peter Kyle, the new Secretary of State for Science, Innovation and Technology. It will drive policy, following the innovation thread that ran through the Labour Party election manifesto. In its first few days in office, Labour announced a plan to co-ordinate AI across government, and an AI Bill is expected in the King’s Speech. Upbeat rhetoric calls for Britain to embrace the revolutionary potential of AI, to drive industrial change and efficiency in public services.
All of this is reminiscent of a speech made over 60 years ago by another Labour Prime Minister, Harold Wilson, who famously called for the “white heat of technology” to forge Britain’s new prosperity. However, 21st-century heat is arguably not white but red, and fraught with dangers unimagined back then. It is the duty of government to safeguard our rights, guard against malpractice and ensure the public can trust systems deployed by private or State actors. Public policy should seek to avoid a digital Liz Truss moment, or a new Horizon scandal.
AI is not a single thing, but a complex set of technologies with multiple purposes, aims and issues. Government needs to take it on with a long-term plan. It may be a key plank of industrial strategy, but policy needs to be multi-functional, taking on board the rights and interests of all stakeholders. Ultimately, AI systems must serve the people.
An Ethical AI
In the run-up to the election, Labour made a manifesto commitment to regulate AI, promising to “ensure the safe development and use of AI models by introducing binding regulation on the handful of companies developing the most powerful AI models”. It sounds good, but building trust will mean:
1. Promote ethical AI development. Ethical development, put simply, refers to a “do no harm” principle and a precautionary approach. Transparency, accountability and systematic record-keeping throughout the lifecycle of the model are important grounding principles (see the illustrative sketch after this list).
2. Safeguard our rights: ensure that artificial intelligence systems function in a way that is consistent with the UK’s obligations to protect human rights, including privacy and freedom of expression, religious beliefs, rights to work and education.
3. Protect in law against management failures and cover-ups: learn the lessons from Horizon, the discredited Post Office system. AI multiplies the challenges and pitfalls.
4. Legislate for a strong independent regulator with effective enforcement powers.
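To make the record-keeping principle in point 1 concrete, here is a minimal sketch of what a lifecycle audit record for an AI model might look like. This is an illustrative assumption, not a prescribed regulatory standard: the class names, fields and stages are hypothetical, chosen only to show how each step in a model’s lifecycle could leave a traceable, accountable entry.

```python
# Illustrative sketch only: a hypothetical lifecycle audit record for an AI model.
# Field names and structure are assumptions, not a prescribed regulatory standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LifecycleEvent:
    """One auditable event in a model's lifecycle."""
    stage: str                 # e.g. "data-collection", "training", "deployment"
    description: str           # what was done and why
    responsible_party: str     # who is accountable for this step
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class ModelAuditLog:
    """Append-only record supporting transparency and accountability."""
    def __init__(self, model_name: str):
        self.model_name = model_name
        self.events: list[LifecycleEvent] = []

    def record(self, stage: str, description: str, responsible_party: str) -> None:
        self.events.append(LifecycleEvent(stage, description, responsible_party))

# Usage: each lifecycle step leaves a traceable entry.
log = ModelAuditLog("example-model")
log.record("data-collection", "Licensed dataset ingested; provenance documented", "Data team")
log.record("evaluation", "Bias audit run against demographic benchmarks", "Ethics board")
```

The point of such a structure is that accountability questions (“who approved this training run, and on what data?”) can be answered from the record rather than reconstructed after the fact.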
A Rights-based AI
AI can interfere with (violate) our fundamental rights. It can also facilitate interference with those rights by others. The potential intrusion on privacy and free expression is high, not to mention the challenges of systems taking decisions about individual rights to employment or benefits. It is the government’s duty to ensure that citizens are protected from such violations and that safeguards are in place. Strong data protection rules are critical if AI is to build public trust and thrive:
1. Regulate all artificial intelligence systems to protect against arbitrary surveillance practices that would interfere with privacy and freedom of expression.
2. Ensure strong privacy safeguards in the face of AI-driven bulk surveillance techniques that collect, track and analyse personal data, and which sit at the heart of many AI models. No longer the sole province of the intelligence services, these techniques are within the capability of corporate data teams.
3. Ensure compliance with human rights standards for the AI-driven measures that will be required under the Online Safety Act. Safeguard against misuse when large numbers of data points are gathered for the purpose of determining illegal content, user conduct or intent. Mandate rigorous transparency of databases used for content screening.
4. Regulate for procedural safeguards against over-moderation and errors, in particular for AI-based content moderation, age assurance, and any AI system taking decisions about individuals.
5. Prohibit the development of AI models that present a very high risk to privacy, such as facial recognition and biometric surveillance.
A Trustworthy AI
For AI to succeed, trust will be paramount:
1. Explicitly prohibit a general monitoring obligation.
2. Explicitly prohibit AI-driven screening of content on end-to-end encrypted platforms (which Ofcom acknowledges is not technically feasible, and which would introduce systemic vulnerabilities and weaknesses).
3. Mandate regulatory safeguards and accountability for age assurance systems, including for third-party suppliers, and especially where biometrics are used to assess a child’s age, or where the systems are linked to permissions for content access.
4. Amend the Online Safety Act with provision for judicial oversight, as proposed by Labour peers during the Act’s passage through the Lords.
5. Ensure the UK preserves its adequacy status on data protection and stays in line with Europe. The DPDI Bill (currently shelved) would have put the UK’s adequacy status with the EU at risk, and caused problems for British businesses trading in Europe.
6. Regulate Generative AI, which is arguably high-risk in a different way from the systems mentioned above: it could learn and spread disinformation, hate or extremist ideas, or plagiarise books and knowledge without source or citation.
---
If you cite this article, kindly acknowledge Dr Monica Horten as the author and provide a link back.
I provide independent advice on policy issues related to online content. Please get in touch via the contact page.