
What's influencing tech policy in 2025?

2025 will be a pivotal year for tech policy as lawmakers seek the benefits of AI but struggle to regulate its dark side. Looking through the lens of political economy, we sketch out how big tech is pressing for weaker rules and using soft-power plays to grow its influence within government.

The power of money 

The UK government has appointed a senior Microsoft executive to a key role in shaping Britain's industrial strategy. Was it wise to bring a US trillion-dollar corporation so close to key areas of tech policy? Power relations do matter.

Clare Barclay, President of Microsoft Enterprise and Industry, Europe, Middle East and Africa, has been appointed Chair of the Industrial Strategy Advisory Council (ISAC). In that role, she will help frame strategic policy directions and will hold a pivotal position liaising between industry, the Chancellor of the Exchequer, and the Secretary of State.

Like the other big global tech companies, Microsoft is benefitting from a 10-year high in its share price, driven by AI fever. Microsoft's market capitalisation ($3.13 trillion) is not far off the UK's GDP (in USD, $3.34 trillion), but this is not a marriage of equals. In a world in which big tech is well experienced in leveraging its economic power on a global scale to manoeuvre national governments, Microsoft has a lobbying reach across international borders that the UK government may well envy.

Microsoft is a grand master of anti-competitiveness. Its sharp business practices are legendary, as when it got the better of IBM in the contract for the early personal computer operating system known as MS-DOS [see the story here]. Microsoft has been taken to task by the EU more than once for anti-competitively bundling software, most recently over the bundling of its Teams video-calling software with its Office package.

Many people might be surprised at Microsoft's size, because it does not have the profile in the public consciousness that smaller and more flamboyant tech companies enjoy. In fact, it is second only to Apple, which has a market capitalisation of $3.79 trillion. Contrary to perception, it is larger than either Google [$2.32 trillion] or Meta [$1.48 trillion]. The social media platform X, formerly Twitter, is tiny by comparison: it is currently valued at a mere $12.5 billion, a 71.5 per cent drop, according to a note from the investment firm Fidelity [as reported in The Guardian].

At the very moment when the government wants to implement the Online Safety Act, having big tech so close may put it in an uncomfortable position, entrenching big tech's power and making life harder for the regulators whose role is to demand accountability.

The geo-politics of tech 

The UK government wants to re-set its relationship with Europe. The move is a response to changes in the geo-political context, such as the war in Ukraine and the ascendancy of Donald Trump to the US Presidency. A key objective is a Security Co-operation Agreement. This was originally in the Brexit divorce agreement – correctly known as the EU-UK Trade and Co-operation Agreement – agreed by Theresa May, but was knocked out by Boris Johnson. It should be relatively straightforward to revive, as European governments would welcome Britain as a security partner.

However, the ease or difficulty of achieving a Security Co-operation Agreement will be determined by UK government choices on tech law. Alignment on tech policy will matter.

A Security Co-operation Agreement will embed requirements around technology. It will of course include AI, which is increasingly becoming a component of military as well as civilian government systems. It is also likely to include database sharing by law enforcement authorities, which immediately raises the matter of alignment with the EU GDPR.

In fact, GDPR alignment is already in the EU-UK Trade and Co-operation Agreement. However, AI raises new ethical questions, which include privacy but go beyond it. The EU has already put in place the AI Act, which is a starting point for the governance of AI systems. In the absence of a domestic AI framework, the UK government is apparently advising UK businesses to follow the EU law.

Trade is of course the other area to watch. The UK sits on a wobbly see-saw between the EU and the US. The UK is keen not to rock the boat with the incoming Trump administration and, as we have seen with the Australia and New Zealand trade agreements, it is a weak trade negotiator. A US trade agreement could try to force our hand on legal changes that diverge from EU tech law, in order to protect the interests of tech companies.

US tech industries are a major force not only in Washington but also in Brussels, and they are supported by large intra- and inter-company networks reaching across borders and around the globe in a way that national governments can never match. The EU is large enough to stand up to US pressure and, as we in the UK know, it is a tough negotiator. When the GDPR was being legislated, the US tech industry lobbied to get the clause governing international data transfers dropped – and almost succeeded – but ultimately the clause went back into the text.

Right now, US corporations are throwing their weight around. They don't like the new EU and UK laws designed to regulate their businesses. Mark Zuckerberg's thinly veiled threat in this video message indicates that Meta will cosy up to President Trump in order to obtain his support internationally. Elon Musk's interference in UK and German politics is likely to stem, at least in part, from a similar motivation. X has already been charged by the EU under the Digital Services Act (as reported by Politico) for allowing the spread of disinformation and hate speech.

Keir Starmer says the UK can set its own path, according to his own article in the Financial Times, but the evidence suggests he should be mindful of geo-political interests and how they are played by corporate actors.

Regulating Code in the AI era

AI is of course an area where the UK government and the tech companies have an interest – but is it a common interest?

The UK government has announced its AI action plan, saying it wants the UK to be an AI sweet spot, with AI "mainlined into the veins" of the country. The action plan was drawn up by a venture capital investor, Matt Clifford. It was welcomed by Microsoft and other tech companies – which is usually code for something they lobbied for.

AI can open up new frontiers. It may well help to run government, but it can also be a blunt instrument. Nothing like it was ever previously envisaged by legislators, and it certainly raises complex ethical and legal dilemmas, such as the spread of online hate speech. Legislators need to get to grips with the underlying legal issues and technologies, with a view to a balanced but also firm and effective legal framework.

All the big tech companies have a significant stake in AI regulation – or rather, in minimising it. AI is a significant technology shift, and it is an opportunity for those who dislike the accepted balance to try to steer legislators away from it. They will either get on the inside, like Microsoft, or throw their toys out of the pram, like Mark Zuckerberg. They don't like the GDPR. They don't like the expense of content moderation, especially where they are required to implement procedural safeguards against their own mistakes. They don't want copyright laws to encroach on what they are doing.

Ofcom has timidly published a letter saying AI systems are within scope of the Online Safety Act, which may actually be correct. However, there is little sign of the UK legislating a framework AI law as the EU has done. Instead, it has backed itself into the corner of copyright and probably does not realise what it is taking on [but if the government wants to learn more about corporate influence on copyright policy, it may wish to read my books]. Of course, we don't know that the UK's decision not to legislate has anything to do with big tech lobbying, but experience tells me that they had a hand in it.

My tip for 2025: keep an eye on the geo-politics of AI.

---

I provide independent advice on policy issues related to tech policy, online content and AI. Please get in touch via the Contact page.

If you cite this article, kindly acknowledge Dr Monica Horten as the author and provide a link back.


About Iptegrity

Iptegrity.com is the website of Dr Monica Horten, independent policy advisor on online safety, technology and human rights, advocating to protect the rights of the majority of law-abiding citizens online. She is an independent expert on the Council of Europe Committee of Experts on online safety and empowerment of content creators and users. Published author and post-doctoral scholar, with a PhD from the University of Westminster and a DipM from the Chartered Institute of Marketing. Former telecoms journalist, experienced panelist and Chair, cited in media including the BBC, iNews, The Times, The Guardian and Politico.