- End-to-end encrypted services out of scope
- Targeting of individuals or groups only with “reasonable suspicion” and a judicial warrant, plus judicial oversight of hash lists
- No mandatory age verification for app stores and communications services
The European Parliament's Civil Liberties Committee has today formally adopted a political compromise on its proposed new law to tackle child sexual abuse online. In legislative terms, it is a significant breakthrough on a very sensitive issue that had risked becoming deadlocked. The proposal offers a way forward that protects both children and privacy, and it has the potential to be agreed by a wide range of stakeholders.
**UPDATE 22 NOVEMBER: This position has been formally adopted by the European Parliament. Negotiations with the Council of Ministers can begin.**
New draft rules for Internet platforms have been released this week by Ofcom. The rules tackle a wide range of content that the government now says is illegal, addressing unlawful migration, harassment, online fraud and even public protests.
This is the first in a suite of expected announcements over the next nine months that will tell the Internet and tech industries what content moderation and other actions they are expected to take.
So what is in the announcement?
A political agreement in the European Parliament hopes to resolve the heated discourse over end-to-end encryption and protecting children online. It's a pragmatic way forward that aims both to safeguard against child abuse and to maintain the confidentiality of communications. It's not yet over the line, but from a human rights perspective it's heading in the right direction.
TL;DR The Online Safety Act officially becomes law today. It’s an Act to address some very serious public policy issues that have arisen as the Internet reached maturity, but the divisive politicking employed to get it over the line means that implementation will be a challenge, to say the least. This is not because people don’t want to protect children or tackle abuse – we all do – but because the methods proposed in the Online Safety Act for implementing that protection do not work with the existing global infrastructure.
I was delighted to be a participant in the seminar organised by the European Data Protection Supervisor on the EU CSAM Proposal: The Point of No Return? It was amazing to be with so many luminaries in privacy and free expression policy. I spoke about the UK's Online Safety Bill and how it differs from the approach taken by the EU in the Child Sexual Abuse Regulation.
Image: an empty House of Commons debating the Online Safety Bill on 12 September 2023.
TL;DR On the same day as the UK Parliament approved the Online Safety Bill (soon to be the Online Safety Act), a US court blocked a law to protect children when they access the Internet, on the grounds that it violates the First Amendment of the US Constitution, which protects free speech.
When you view the Online Safety Bill through the lens of this US court ruling, you get a remarkable set of findings. Whilst US free speech law operates differently from UK law, it is very likely that the Online Safety Bill will also be in breach of freedom of expression under the Human Rights Act, which enshrines the European Convention on Human Rights. This article takes a comparative look.
The Online Safety Bill has had a difficult relationship with freedom of expression, as its main premise is to remove content. For that reason it was a pleasant surprise to see the House of Lords amend the Bill with explicit support for free speech as a right under the Human Rights Act and European Convention on Human Rights (ECHR).
Until now, this support has been missing from the Bill. It is therefore a positive outcome from the House of Lords, one that will redress the balance between content removal and free speech.
TL;DR: The government is to establish an £11 million Online Capability Centre to seek out and identify 'small boats' content posted by people smugglers. The centre will be run by the National Crime Agency in co-operation with social media platforms.
In order to protect freedom of expression, the government must precisely identify the specific content to be removed. The examples published by the government on Twitter / X give some clues. This is not a technological silver bullet that will solve the question of people arriving on UK shores in small boats.
And it is concerning for British democracy to have law enforcement working so closely alongside the companies that run our public conversational spaces, with the power to restrain publication and no independent oversight.
I'm delighted that my paper 'Algorithms patrolling content: where's the harm? An empirical examination of Facebook shadow bans and their impact on users' has been published in the International Review of Law, Computers & Technology.
It has been a lot of hard work to get this to publication, but now that it's out, I hope it will inform academics, students and policy-makers about an obscure aspect of content moderation that has a very real impact on individuals who are active on social media. The ghosting of their Pages and accounts by shadow banning is not a soft enforcement option but a significant interference with their freedom of expression.
It is making its way into law and policy with hardly a blink of the eye, as policy-makers adopt the belief that regulating 'behaviour' is a good way to deal with harmful content. Yet, as I argue in the paper, suppressing the dissemination of content on the basis of the account's 'behaviour' can interfere with freedom of expression to almost the same extent as taking it down. It takes no account of whether the content is lawful. And if there is no requirement to notify the user, how are they even going to be able to appeal such a restriction on their rights?
TL;DR They say they do, but the Bill is not clear. The government has been quite shifty in its use of language to obscure a requirement for encrypted messaging services to monitor users' communications. If they comply with this requirement, they will have to break the encryption that protects users' privacy, and users risk being less safe online. They will also be conflicted in their legal duties to protect users' privacy, as will the regulator Ofcom. Private messaging services matter to millions of UK users, and their obligations under the Online Safety Bill need clarification and amendment.
***UPDATE 24 May 2022: Quietly behind the scenes, there is confirmation that this is exactly what the government wants to do.***
TL;DR A puzzling feature of the UK Online Safety Bill is the special protection it gives to 'content of democratic importance'. It asks the large online platforms to give special treatment to such content, in cases where they are taking a decision to remove the content or restrict the user who posted it. However, the term appears to have been coined by the government for the purpose of this Bill, and what it means is not clear. There is no statement of the policy issue that it is trying to address. That makes it very difficult for online platforms to code for this requirement.
TL;DR The UK government's Online Safety Bill creates a double standard for freedom of expression that protects large media empires and leaves ordinary citizens exposed. It grants special treatment to the large news publishers and broadcasters, who get a carve-out from the measures in the Bill so that headlines like the notorious "Enemies of the People" get special protection from the automated content moderation systems. They even get a VIP lane to complain. Foreign disinformation channels would also benefit from this carve-out, including Russia Today. Content posted by ordinary British people could be arbitrarily taken down.
TL;DR Social media companies will be required by the government to police users' posts by removing the content or suspending the account. Instead of a blue-uniformed policeman, it will be a cold, coded algorithm putting its virtual hand on the shoulder of the user. The imprecise wording offers them huge discretion. They have a conflicted role: to interfere with freedom of expression and simultaneously to protect it. Revision is needed to protect the rights of those who are speaking lawfully, and doing no harm, but whose speech is restricted in error.
TL;DR Key decisions will be taken behind Whitehall facades, with no checks and balances. The entire framework of the Bill is loosely defined and propped up by Henry VIII clauses that allow the Secretary of State (DCMS and Home Office) to implement the law using Statutory Instruments. This means that Ministerial decisions will get little or no scrutiny by Parliament. This will include crucial decisions about content to be suppressed and compliance functions required of Internet services. Standards for automated detection of illegal content will be determined by the Home Secretary. The concern is whether these powers could ever be used to block lawful but inconvenient speech.
TL;DR The government's Impact Assessment calculates that this Bill will cost British businesses over £2 billion to implement. By its own admission, 97 per cent of the 24,000 businesses in scope are at low risk of having illegal or harmful content on their systems. Only 700-800 are likely to be high risk, and the real target, the big global platforms, number only around half a dozen. It is hard to see how the draft Bill of May 2021 could be justified on this basis. The Bill should focus on the real aim of tackling the global mega-platforms and the high-risk issues like child sexual abuse. For 97 per cent of the 24,000 small British businesses, there is no evidence that they pose any risk, and the cost and regulatory effort are disproportionate to the aims.