Why did X lock my account for not providing my birthday?
- Author: Monica Horten
- Published: 23 July 2024
Recently, my account on X, formerly known as Twitter, was locked without warning because I had not supplied my date of birth. In this article, I investigate some of the limits on access that are coming to our online spaces. Now I've left it for a calmer, friendlier space.
The account was locked on 19 July. A notice like that releases emotions of anger and annoyance at being cut out of the conversation. How will I get my news? Keep up with the expert views? Talk to people in the communities I follow?
I spoke about it with friends. One said that her X account had been closed and she was off social media. Others, ordinary women in different walks of life, suggested entering a fake date. Just do it and get on with my life. However, as a policy advisor on online safety issues, I felt this was not an appropriate response.
I’ve experienced the frustration that any user would feel at seeing that notice. Just as I write this, I heard on BBC Radio that the same thing happened to the British Army account on X. I have to speak up.
It is an opportunity to talk about the impending new rules in the UK, under which online platforms and others will be legally required to verify the age of their users. This law has massive implications and is likely to mean much more widespread blocking of accounts and content. I’m talking about the Online Safety Act, which was passed into law in October 2023 and is now subject to consultation by Ofcom. A consultation on how to treat children within the remit of the Online Safety Act has just concluded.
In this case, I had taken the precaution of downloading my data from Twitter – I did it before it changed to X but after Elon Musk took over [22 October 2022]. Mostly the data is empty, or in javascript – not very helpful to the average user. It provides an online archive with a limited amount of information. The archive does verify that I have held the account since 2011. It states that my age is between 13 and 54, which I would take as evidence that it knows me to be an adult and not a child.
This turns on an important question – what is the requirement that is being asked of the online platform? If the requirement is to establish that I am not a child, then I would argue it has already done that, and the additional information it is asking for – my date of birth – is superfluous to the requirement. X does not state why it needs my date of birth. Under law, I think it should do. If it is not necessary for compliance purposes, the platform has no need to ask me to provide it. Its action to lock my account is disproportionate. It should immediately unlock my account.
X refers me to its enforcement terms which state that it may require an account owner to verify ownership with a phone number or email address. It has both of those and has verified many times that they are genuine. X further states that enforcement actions, such as locking an account, are taken where a user has repeatedly violated policies that cause significant risk or pose a threat to other users. I have not violated its policies and do not pose a threat to other users. I only post about public policy.
Elsewhere in the help section, X suggests an alternative motive for wanting the date of birth. It will be used to "customize your X experience. For example, we will use your birth date to show you more relevant content, including ads." So there we have it – there is no altruistic motive to protect children, but rather a financial motive to get more money from you.
Other concerns that the platform X is not being entirely straight with its users have been expressed by the Digital Legacy Association, which worries that relatives of a deceased person may lose access to their loved-one's account. Interestingly, it cites text extracted from X, which suggests the change is about "helping brands" - in other words, advertising.
Certainly, there is nothing in the Online Safety Act that requires a platform to lock people out of their services for not having given a date of birth. Indeed, a law that did so would immediately engage the right to freedom of expression under Article 10 of the European Convention on Human Rights, because it represents interference with the right. The likely outcome is an immediate chilling effect, if large numbers of smaller accounts are blocked when it is not necessary to do so (because the platform already knows the account is not operated by a child).
In its consultation on protecting children online, Ofcom clarifies that the Online Safety Act does not state that services have a duty to specify minimum age requirements. Ofcom further explains that the Act does not require services to operate any particular processes to enforce any such minimum age requirements. (See Ofcom: Protecting children from harms online: 15.313). Instead, it relies on whatever platforms say in their terms and conditions.
Ofcom seems to be in a bit of a quandary as to what it should mandate with regard to age verification. It is not sure whether self-declaration – which is what X is asking me to do – is the right way to go (See Ofcom: Protecting children from harms online: 15.29 and 15.308). It also expresses doubts about the technology for age verification and age assurance, and it does acknowledge the negative impacts on privacy rights (See Ofcom: Protecting children from harms online: 15.314, 15.318 and 16.59).
There is an outstanding question about the timing. According to Ofcom’s published roadmap, the children's code is due to come into force at the earliest in the fourth quarter of 2025. Ofcom also indicates that a report on age assurance technology is not due until the third or fourth quarter of 2026. A further report by Ofcom on content harmful to children is due in the fourth quarter of 2026. In other words, the actual requirements will not be established for another year to 18 months.
At the precise time my account was locked, the Ofcom consultation on protecting children online was still live. I make this point to underscore that although the Online Safety Act is in force as an Act of law, the measures in it are still being consulted on. This is because the Act is merely a skeleton law, with most of the detail being left to Ofcom. The measures will only come into force when the Codes of Practice have been signed off by the Secretary of State and are published by Ofcom.
There is an enormous amount of political pressure on the online platforms, but is it right for them to jump ahead of the regulatory process? Of course, users can vote with their feet, and many, like me, are exiting to Bluesky. Find me at @iptegrity.bsky.social
---
You might also like my paper on Facebook restrictions and shadow bans: Algorithms Patrolling Content: Where's the Harm?
And please see my article on age verification in the Online Safety Act, published jointly with Electronic Frontier Foundation (EFF)
If you cite this article, kindly acknowledge Dr Monica Horten as the author and provide a link back.
I provide independent advice on policy issues related to online content. Please get in touch via the contact page.