
Why the Online Safety Act is not fit for purpose

The UK’s flagship law has come under fire after viral disinformation led to real-world consequences on Britain’s streets. Notably, the Mayor of London, Sadiq Khan, said it is not fit for purpose. This article examines that question.

In my opinion, it isn’t fit for purpose. It does need fixing, but first we have to open it up, understand what it really does, and be honest about its flaws.

It’s a law that was driven by moral panic and promoted with Hollywood-style choreography. Parliamentary scrutiny of its detail focussed largely on the horrors of the harm done, with little technical understanding of how policy could constructively intervene. While tech companies were endlessly excoriated in public fora and in the media, there was little effort to build the depth of engagement with technology experts that was needed. Most of the time, experts were shut out of the room, and only invited occasionally for ‘balance’. None of this was helpful for good law-making.

A key function of the Online Safety Act is to use technology to arbitrate speech. It asks online platform providers to use their systems to make determinations about the lawfulness of speech. It also contains built-in assumptions about how the technology disseminates speech and contributes to the spread of harmful speech, yet it acts on those assumptions without sufficient understanding of how the underlying code works. As a consequence, the law does not have the right levers to address the issues that policy-makers and stakeholders want addressed.

Code is binary, and as such it struggles to take the complex contextual decisions that the Online Safety Act requires when judging the harmfulness or illegality of speech. For example, providers are required to assess the mens rea, or intention, of the individual posting the content. Code cannot do this without gathering a vast number of additional datapoints, and the process relies on artificial intelligence (AI). This raises issues around data protection and the necessity of collecting so much data: such processing would be classified as high risk under data protection legislation and would require a special impact assessment. It should also be classified as high risk for the purposes of AI regulation, given the potentially serious implications of determining criminal intent. None of this has been recognised in the drafting of the Act.
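To make the point concrete, here is a deliberately simplified, hypothetical sketch of what an automated moderation check reduces to. It is not any platform’s actual system, and the term list and threshold are invented for illustration; the structural point is that the output is a binary flag derived from the content alone, with nothing that represents the poster’s intention.

```python
# Hypothetical illustration only: a stripped-down automated moderation check.
# Real systems use machine-learning classifiers, but the structural point
# holds: the output is a yes/no flag derived from the content alone.

ILLEGAL_TERMS = {"banned_phrase"}  # invented placeholder, not real policy

def flag_content(post_text: str, threshold: float = 0.5) -> bool:
    """Return True if the post should be treated as illegal content."""
    words = post_text.lower().split()
    hits = sum(1 for word in words if word in ILLEGAL_TERMS)
    score = hits / max(len(words), 1)
    # Note what is absent: nothing here captures who posted the content,
    # why, in what context, or whether they intended harm (mens rea).
    # Inferring intent would mean joining in many more datapoints about
    # the user and the circumstances, which is where the data protection
    # and AI-risk questions arise.
    return score >= threshold
```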

Under the Act, platforms are given absolute discretion to make determinations of illegality. Not only is this contrary to principles of British justice, but the definition of illegal content in the Online Safety Act is complex and opaque. The Act sets out a laundry list of offences that reflects the mindset of a criminal lawyer, but which is alien to the expectations of the people who use online platforms. Importantly, it fails to address the genuine concerns of the stakeholders who wanted this law.

The Online Safety Act governs how online content that is deemed illegal should be suppressed or restricted. The measures to suppress content require one form of technology to enforce against another. In order to do their job of policing users’ online posts, providers will use content moderation systems driven by AI. These systems will interact with recommender systems, which are also AI-driven. Conflicts of interest will inevitably occur, and that does not promote good outcomes.

Content moderation systems use techniques such as blocking, filtering, demotion, de-listing, content removal, account termination, and even seemingly innocuous measures such as preventing the sharing of links, which results in content not being seen. Errors do occur, resulting in the suppression of lawful posts.
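Again as a hypothetical sketch, not drawn from any provider’s documentation, the restriction measures listed above can be thought of as a menu of actions triggered by a moderation score, with demotion feeding straight into the recommender system’s ranking. The thresholds and the example score below are invented; the point is how easily a wrongly scored lawful post is quietly suppressed.

```python
# Hypothetical sketch of the restriction measures described above, and of how
# an erroneous classification suppresses a lawful post. All names, thresholds
# and scores are illustrative assumptions, not any platform's real logic.

from enum import Enum, auto
from typing import Optional

class Action(Enum):
    BLOCK = auto()
    FILTER = auto()
    DEMOTE = auto()               # reduces reach via the recommender system
    DELIST = auto()
    REMOVE = auto()
    TERMINATE_ACCOUNT = auto()
    BLOCK_LINK_SHARING = auto()

def choose_action(score: float) -> Optional[Action]:
    """Map a moderation score to a restriction; None means no action."""
    if score >= 0.9:
        return Action.REMOVE
    if score >= 0.7:
        return Action.DEMOTE
    if score >= 0.5:
        return Action.BLOCK_LINK_SHARING
    return None

# A false positive: a lawful post that the classifier has scored too highly
# is demoted, and its author may never know why it stopped being seen.
lawful_post_score = 0.72  # invented value for illustration
print(choose_action(lawful_post_score))  # -> Action.DEMOTE
```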

The Act does not include the necessary safeguards against such errors. These safeguards are governed by human rights law, specifically Article 10, paragraph 2 of the European Convention on Human Rights, which provides the balancing framework for States. Where freedom of expression could be interfered with, there must be a process for the individual user to appeal and obtain redress. The Online Safety Act contains a last-minute amendment that enshrines Article 10, but it does not offer redress to users where AI errors affect individual rights.

Then there are the risk assessments. I don’t think the drafters can have realised how these will serve to protect the interests of the platform providers rather than the users. The Act imposes liability on providers in a way that establishes the wrong incentives. A rights-based perspective would have produced a better governance framework.

Overall, the Act fails to address the underlying structures that support the platforms we use every day. It fails to recognise that the Internet is an inter-connected eco-system, and that it is this inter-connectedness that brings the benefits of a global communications system to all British citizens. The Act ignores the cross-border essence of the Internet’s underlying structures. Rather like Brexit, it is going to put barriers in the way for the UK. To give one example, the Act introduces licence payments to Ofcom, which breaks the most fundamental principle of that eco-system. We can expect to see negative cross-border effects.

Getting back to the starting point of this article, the Online Safety Act does not really address disinformation. This issue was raised in the context of the recent London riots, where it was claimed – and I have no reason to dispute those claims – that the technology deployed by online platforms served to spread false information very quickly, and that this false information served to incite the violence seen on our streets. It was an instance where the correlation and likely causation between online disinformation and offline harmful events could be identified.

Disinformation was deliberately not addressed in the Online Safety Act because, at the time of drafting, the controversy over the Cambridge Analytica scandal was current. The impact of disinformation on the outcome of the 2016 Brexit referendum was fresh in the minds of policy-makers, and the previous Conservative government was afraid to talk about it. The Parliamentary DCMS Select Committee had gathered considerable evidence. As someone who has read quite a bit of it, I can verify that the Committee did a good job. However, disinformation disappeared from the policy agenda.

The Online Safety Act ended up with a weak provision for a committee on disinformation to advise Ofcom and produce periodic reports. The first of these reports is not due until 18 months after the committee has been set up, so it’s unlikely to help in the current situation.

The Act does not effectively address other issues that many of its supporters wanted it to tackle, among them the intimidatory trolling of politicians and political campaigners, and threats of violence against women. There is a need for a policy intervention, but it should be a more rounded and thoughtful one than this law provides.

There is no technical silver bullet that will fix all these issues raised by stakeholders. Ultimately, all that an online platform can do is to suppress and restrict content. Most of the issues that were central to the discourse around this Act require actions in social, healthcare and law enforcement spaces that are outside the online realm.

So in summary, this is a massive piece of law in which each element was only sketched out by Parliament and the details were ignored. Critically, it was the sidelining of the foundational principles of Internet technology, and the lack of understanding of its limitations, that resulted in an Act of Parliament not fit for purpose.

---

I provide independent advice on policy issues related to online content. If you would like a briefing, please contact me to discuss via the Contact page.

If you cite this article, kindly acknowledge Dr Monica Horten as the author and provide a link back.

