Social media ban for kids: simple message, tough choices
-
Author: Monica Horten
-
Published: 08 April 2026
A “social media ban” is an appealingly simple idea, but a sceptical view suggests it will not be so easy to implement.
The concept is to restrict children under a certain age from holding a social media account. It appeals to parents concerned about their children’s smartphone use. Policy-makers like it because it is clear and easy to communicate. However, this tempting policy solution fails to acknowledge the unintended consequences of the systems that will enforce it.
A social media ban simplifies the message but it is not the policy panacea that governments want. It stands to give even more discretion over our online activity to the tech companies. In the UK, one option currently on the table in the House of Lords could mean that children are banned from platforms like Wikipedia and Spotify. A proposal to age-gate virtual private networks [VPNs] threatens security for businesses as well as consumers.
This article explores the social media ban using a recent Australian law as a mirror for new UK proposals.
The Australian ban – where it all started
The idea is based on a new Australian law that restricts the ability of children below the age of 16 to hold a social media account [the Online Safety Amendment (Social Media Minimum Age) Bill 2024 (Australia)]. It offers a real-life experiment from which data and case studies can be derived. Indeed, the UK government is keen to speak to experts who are involved with the Australian process. However, the Australian law is widely considered to be flawed, having been drafted in a hurry and passed within just a few days of being laid before the Canberra Parliament.
The appeal of the Australian law lies in its simplicity: it requires social media platforms to refuse accounts to anyone under 16. It mandates that information collected for the purpose of identifying under-16s must be subject to privacy protections. A social media platform is broadly defined as a “service that allows end users to interact with some or all of other end users”. The law states that “Providers of certain kinds of social media platforms must take reasonable steps to prevent children who have not reached a minimum age from having accounts.” Children can sign up on their 16th birthday.
There is a short list of social media companies that must comply, published by Australia’s e-Safety Commissioner: Facebook, Instagram, Kick, Reddit, Snapchat, Threads, TikTok, Twitch, X (formerly Twitter), and YouTube. https://www.esafety.gov.au/about-us/industry-regulation/social-media-age-restrictions/which-platforms-are-age-restricted However, all children remain free to use the Internet, including search engines or community sites such as Wikipedia. And that’s it.
It is a bi-partisan policy, with support from both the Labor government led by the Australian Prime Minister, Anthony Albanese, and the Liberal Party opposition, as well as the governments of all Australian States and Territories. Mr Albanese has admitted that this law will not provide an absolute block on children’s social media access, but, he says, it will send a signal to children and back up their parents: “we’ve acknowledged this process won’t be 100 per cent perfect. But the message this law sends will be 100 per cent clear,” he said in an official statement. https://www.pm.gov.au/media/protecting-australian-kids-social-media-harm
The concept is so simple and easy to explain that it has caught the imagination not only of the general public and the media but also of governments around the world. It is a policy gone viral, with not only the UK but also Spain, France, Italy, some American States, and the EU considering it.
That, however, is where the praise ends. The law fails to explain how the age verification systems necessary for compliance – the so-called age-gates – will be implemented. As such, it fails to tackle the complexities of the new technological ecosystem it is ultimately setting up.
Where it’s going – the UK ban
The UK government is consulting on a social media ban, with a deadline of 26th May. Meanwhile, the issue has been escalated in Parliament, in a battle between the House of Commons and the House of Lords. The Lords is urging the government to hurry up, but the government wants to pause for thought via the consultation. It does seem that both are acting under pressure from stakeholders. As in Australia, parents want government back-up to control children’s social media activity, according to Justine Roberts, founder of Mumsnet, speaking on BBC Radio 4 Today [21 March].
The government has paved the way via amendments tabled to the Children’s Wellbeing and Schools Bill [Amendments 37 and 38] that will enable it to restrict children’s access to specified social media services. Restrictions may address all or part of a service, or the amount of time per day that children can be online. It leaves open the age limit.
Peers in the House of Lords have tabled rival amendments seeking a more aggressive timetable, and a much broader scope that will mean children under 16 being banned not only from social media platforms, but from all websites and platforms within the remit of the Online Safety Act [OSA]. LINK Websites and services such as Wikipedia and Spotify, which are within the OSA remit, would fall under the ban. Children’s use of search engines could also be restricted, which, depending on the precise implementation, could limit their ability to access legitimate information online.
This would turn the narrow and specific Australian-style social media ban into a wide-ranging and restrictive measure that touches the very heart of the Internet. It demands in-depth scrutiny. However, that will not be possible under either the government or the Lords amendments, which grant the government sweeping powers to bring in the ban by amending the UK’s Online Safety Act under secondary legislation. These are the so-called “Henry VIII powers” that allow the government to bypass full Parliamentary process. The legislation could be slipped in very quietly. Both sets of amendments state that the affirmative procedure applies but, to be clear, that does not mean proper scrutiny; in fact, it falls well short of that objective.
Both government and the Lords seek to impose mandatory age verification to enforce the ban, but there seems to be a conflict with the measures in the Online Safety Act. The social media ban applies up to the age of 16 [in the Lords amendment], but the Online Safety Act requires children’s access to content to be restricted up to 18 [priority content harmful for children]. Clarification is needed on how the new age limits will work and how platform responsibility will be expected to operate in this context.
Moreover, the Lords want to extend age verification to virtual private networks (VPNs), inserting an amendment for “Child VPN Prohibition”. Providers of VPNs would be liable for monitoring their customers’ age, and for ensuring that no-one under 18 in the UK can use their services. The amendment states that the provider must verify the age of “any person seeking to access its service”. VPN providers are not bound by the Online Safety Act, and the aim is to block circumvention of the Act by users – children or adults.
It is a salient point that age verification does not apply just to children. It touches all users, including adults over 18. Everyone will have to verify their age, including users in business, government and industry, as well as consumers. Both sets of amendments fail to fully recognise the privacy-intrusiveness of the systems that will be built to enforce a social media ban. A dramatic increase in user profiling by tech companies can be predicted. The government proposal amends the UK GDPR with regard to children’s consent, but otherwise fails to address the potential vulnerabilities around cyber-security and privacy.
Finally, neither the government nor the Lords tackles provider regulation. This is a serious failing. Far from reducing our reliance on big tech, it does the opposite, granting procedural discretion to providers. The widespread use of age verification systems will only serve to entrench the power of tech companies, making it even more difficult to regulate them. https://www.eff.org/deeplinks/2025/09/age-verification-windfall-big-tech-and-death-sentence-smaller-platforms
Government has a duty to act for the benefit of society as a whole and, like it or not, consulting is the right thing to do. This apparently simple solution will require uncomfortable trade-offs and tough choices.