
Why we should not let terror destroy online values

In seeking a response to terrorism, how can we protect our democratic values online as well as offline? Theresa May says there should be no safe spaces for online extremism, but attacking online platforms, and laying the blame entirely at their feet, is at best unhelpful and fundamentally problematic. How should we, as a society, ensure safeguards against the unintended consequences of such measures?

The British people's response to the atrocities of Manchester and London Bridge is typically stoic, underpinned by the shared belief that we must not let terror destroy our values and our way of life. Britain is rightly proud of its democratic heritage and its long-held values of openness, tolerance, pluralism, entrepreneurship and free speech - a tradition that goes back over 300 years to the lapse of the Licensing Act in 1695. These are values which are also shared by Internet users all around the world, and on which the Internet was founded.

However, there is an apparent contradiction between the stoical response of British society offline, and the political attack on the online world. The Prime Minister Theresa May is calling for private policing of cyberspace, and demanding restrictive actions and monitoring of users' content. By clamping down on online intermediaries, we could put in jeopardy those very values that we hold dear.

In this post, I suggest that our response to terrorism should protect our values online as well as offline. Attacking online platforms, and laying the blame entirely at their feet, is at best unhelpful and potentially a retrograde step.

The government argues that online measures are necessary in light of the attacks, because terrorists use Internet-based platforms and services to solicit and recruit for their cause. The policy debate is framed around the nature of the content as the justification for why something needs to be done. However, the root of the policy problem concerns the process for dealing with it. The issue is not whether illegal content should be removed, but rather the unintended consequences of removal by a badly designed process.

The solution lies in a policy that grasps all angles of this problem in their full complexity. It should ensure that content removals are governed by a law that enables the illegal content to be addressed, and that provides guarantees and safeguards for citizens and for the intermediaries that carry or host the content. There are discussions happening internationally, in which I have been involved, to explore the options. But it will not happen by simply bullying the platforms.

The conundrum was expressed plainly by Nick Clegg, speaking on Radio 4's World at One on 6 June, who said: "We cannot expect Mark Zuckerberg to be the great censor in the cloud - if content is illegal that is a matter for lawmakers".

The legal framework that governs Internet intermediaries is the E-commerce Directive and the Telecoms Framework. The E-commerce Directive established that Internet Service Providers (which run the networks) are 'mere conduits': they transmit the content, but they do not know or care what it is. Hosting providers are exempt from liability for content provided that they have no 'actual knowledge' of it, and that on obtaining 'actual knowledge', they expeditiously take it down. The E-commerce Directive also allows for a court or an administrative authority to require an ISP or hosting company to 'terminate or prevent an infringement'. The wording is slightly problematic in the current political context, because 'infringement' is specific to copyright. It has enabled the entertainment industries to bring injunctions against ISPs.

There is no EU procedure specified for the taking down of content, although the Directive would not prevent Member States from establishing one. Previous attempts by the EU to bring in a law on notice and take-down have been fraught with inter-industry disputes that mostly concerned copyright (see Notice and action: the EU Commission's Damocles moment). The EU is now considering the possibility of guidelines for a take-down process (see the DSM Mid-term Review). This might be helpful. The proposed EU guidance recommends a counter-notice so that people whose content is taken down in error can complain and get redress.

The balance established by the E-commerce Directive enabled Internet start-ups and innovation to flourish, at the same time as giving rights-holders a way to enforce their rights. The balance was established in the context of an inter-industry battle between technology companies and copyright owners, and it was done with significant input from the British government. It meant that content could be uploaded without permission and without gatekeepers, and it is this permission-less notion that has led to the enormous growth in Internet-based services. It has the effect of protecting free speech rights for citizens.

However, there have been big changes since the E-commerce Directive was drafted in 2000. Automated systems enable the scaling up of content removals to deal with the millions of pieces of content that are uploaded every day. These mass-scale filtering systems target content using algorithms, which are pieces of computer code.

Asking social media companies to install automated systems to remove content seen by British users raises serious issues concerning transparency. Who are the people writing these take-down algorithms? In which country are they located and what are their values? What criteria are they using?
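To make concrete why those questions matter, here is a minimal, purely illustrative sketch in Python of what a take-down algorithm could look like. It is not any platform's actual system: the blocklist, keywords, weights and threshold are invented for illustration, and real systems are far more elaborate (machine-learning classifiers, shared hash databases, human review queues). The point of the sketch is that the removal criteria are editorial choices baked into code by private engineers, not rules set out in law.

```python
import hashlib

# Hypothetical, illustrative example only: a toy upload filter showing how
# removal criteria end up encoded in code rather than in law.

# Hashes of items that someone, somewhere, has already judged "extremist".
KNOWN_BAD_HASHES = {
    "5f4dcc3b5aa765d61d8327deb882cf99",  # placeholder value
}

# Keywords and weights chosen by the filter's authors -- the "criteria"
# the questions above ask about. Who picks them, and against which law?
FLAGGED_TERMS = {"attack": 0.4, "recruit": 0.3, "martyr": 0.5}
REMOVAL_THRESHOLD = 0.7


def should_remove(content: str) -> bool:
    """Return True if the toy filter would take the content down."""
    # 1. Exact match against a shared blocklist of hashes.
    digest = hashlib.md5(content.encode("utf-8")).hexdigest()
    if digest in KNOWN_BAD_HASHES:
        return True

    # 2. Crude keyword scoring -- no context, no intent, no appeal.
    score = sum(weight for term, weight in FLAGGED_TERMS.items()
                if term in content.lower())
    return score >= REMOVAL_THRESHOLD


if __name__ == "__main__":
    print(should_remove("News report: police recruit officers after attack"))
    # Prints True -- a lawful news report crosses the threshold, illustrating
    # how automated filtering can over-remove legal speech.
```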

There are legal problems too, because automated filtering is contrary to Article 15 of the E-commerce Directive, which forbids the State from imposing a general obligation to monitor. This has been established in the caselaw of the European Court of Justice and in national courts. The Internet companies are correct in saying that any government order telling them to filter would be illegal.

And so governments are seeking ways to carve out exceptions to the E-commerce Directive, and one way is to ask Internet intermediaries to filter on a 'voluntary' basis. However, instead of being a simple response, voluntary or self-regulatory measures raise additional legal questions. When social media companies take down content in accordance with their own terms and conditions, it is not necessarily the same as doing so in accordance with the law. They will remove content that is legal, or leave up content that is illegal, if that action fits with their own terms. This has been illustrated by the recent case of the 'napalm girl' photograph, which was removed by Facebook and then reinstated after a public outcry. It was also illustrated by a case in the French courts concerning a painting called 'L'Origine du Monde'. Facebook suspended the user's account, even though this painting is on display in the Musée d'Orsay.

As we explore the notion of voluntary measures by Internet companies, we raise the prospect of the great censor in the cloud.

The Council of Europe provides some guidelines in its Recommendation on Internet Freedom. From a human rights perspective, the key issue is to safeguard freedom of expression for the majority of citizens who are doing nothing wrong and whose content is legal. This is about necessity and proportionality, as per Article 10.2 of the European Convention on Human Rights. Any measures by States to restrict content must be prescribed by law, they must be necessary in a democratic society in order to meet a legitimate policy aim, and they must be proportionate to that aim. That means measures should be targeted to address the specific problem. Mass monitoring or filtering would not meet the proportionality test.

ECtHR caselaw states that there must be a law in place, so that citizens can know what kind of conduct is legal or illegal and regulate their behaviour accordingly. That tends to knock out the idea of voluntary content removal. Where content is manifestly illegal, it may be taken down, but this requires a process and an authority to determine the illegality. Legal experts say that it's rare to find content that is manifestly illegal at first sight and that's where due process comes in.

In Britain, there is a process for taking down extremist or terrorist content. The Counter-Terrorism Internet Referral Unit assesses content against the Terrorism Acts 2000 and 2006, and operates a process for forwarding take-down requests to the online platforms and Internet Service Providers. The unit was set up by the National Police Chiefs' Council (formerly ACPO). This process is resulting in tens of thousands of pieces of content being taken down. If it is not sufficient to meet the policy aim, then this should be resolved by the government, with proper transparency, accountability and legal safeguards.

There is also a system for oversight of terrorism measures. Why not put measures for content removal under a similar oversight system?

Due process is an important safeguard for users. It protects against content being wrongly removed. It offers a means of appeal and redress. I was presenting at a Council of Europe conference in April (see Balancing freedom of expression online: insights from copyright cases) where there was discussion about how due process could be implemented in this context. The question is how to ensure that the courts have oversight of the process and that there is a procedure to deal with appeals and counter-notices. Various ideas are being put forward in European fora and there will be opportunities for our government to contribute - but it will have to take a more consensus-seeking approach.

We want to protect our British values and our democracy, but we must ensure that any powers to remove or take-down content are operated within the jurisdiction of the State and not of private actors acting alone, and that the measures have rigorous and accountable judicial oversight. This is especially important where automated monitoring, filtering and removal methods are being considered. Otherwise, the risk is that private actors, with no public accountability, will end up as the policemen of our content, with no legal framework to provide safeguards.

Hence, pressuring Internet companies to remove content on the basis of their own terms and conditions is highly problematic. Content removal is undeniably a matter for lawmakers, and to have Mark Zuckerberg or anybody else as the great censor in the cloud is arguably not acceptable in a country with a 300-year heritage of free speech.

---

If you liked this article, you may also like my book The Closing of the Net, which discusses the policy that governs the Internet, including chapters on content restrictions and surveillance.

If you cite this article or its contents, please attribute Iptegrity.com and Monica Horten as the author.

Photo: Monica Horten: Theresa May speaking at Maidenhead count, UK general election, 9 June 2017
