
Grok AI images: can compliance be enforced?

The Grok AI tool on “X”, formerly known as Twitter, is generating non-consensual sexual imagery at scale, potentially at a rate of one image per minute. The shockwaves are rattling governments with more force than new technology developments usually do.

In summary, Grok AI includes image manipulation functionality that can alter photographs of real women, stripping off their clothing and making the image overtly sexual. It does this in response to prompts entered by users, who may, for example, ask for a woman to be shown in a bikini, or in a sexually explicit or violent pose. This is done without her consent. It raises concerns about the use of real women’s images to create pornography, to spread misogynistic messages, and to threaten, hurt or blackmail women and girls.

The trend is very new, having apparently started in December last year, but it has already escalated to the point where AI trackers are seeing streams of sexualised imagery manipulated by Grok AI. According to one research lab, Copyleaks, these images are being generated at a rate of “roughly one per minute”. In one particularly egregious example, cited in The Guardian, Grok was used to undress an image of the woman killed this week in the US by Immigration and Customs Enforcement (ICE) agents.

We should be in no doubt that this is a deeply troubling technical development, one that should offend every moral fibre of all right-thinking individuals. It makes a powerful case for the regulation of generative AI tools, manifestly illustrating to governments and policy makers what they have so far failed to recognise.

The matter is sparking a dynamic media debate. As I write this, I’m listening to James O’Brien on LBC, who is asking why Elon Musk is allowing this software, and the appalling use of it, to continue on his platform. I would suggest the reason is that there is no effective enforcement in place to stop him. Please read on.


Sexual content: law and policy  

In assessing how the law might apply, I would start with the law as applied to the actual content that is created or shared online by these AI models.

The Online Safety Act S.187 and S.188 address the sharing of intimate images. S.187 is the so-called cyberflashing clause, which bans the sharing of images of someone’s genitals with intent to cause “distress, alarm or humiliation”. S.188, also known as the “revenge porn” clause, makes it an offence to share images of someone in “an intimate state” without their consent, knowing that it will cause “distress, alarm or humiliation”, and/or to use the images to threaten the person or generate fear in them. There are exemptions to enable images to be shared with medical professionals.

These provisions are embedded in the Sexual Offences Act, S.66A–D. They are also embedded in the Criminal Justice Act 2003, Schedule 15.

There has already been a conviction for cyberflashing, handed down by Southend Crown Court on 19th March 2024.

The government has also enacted a provision to ban the creation of non-consensual intimate images. This goes further than the previous law, which only criminalised the sharing of material, in that it bans the taking of a photograph or the making of a video. The distinction matters because, for example, the individuals involved in making a video could be different from those actively sharing it. The provision is S.138 of the Data (Use and Access) Act, and it is not yet in force. It would seem to apply in the matter of Grok, because this is about software that is specifically designed and intended to create such images.

Then there is also the Sexual Offences Act S.67, which tackles voyeurism. It is not referenced in the Online Safety Act. Instinctively, though, this type of use of AI seems to be a form of voyeurism, but as I am not a lawyer, I would have to defer to legal experts. It certainly seems to me that questions should be asked.

The law on child sexual abuse would also seem to apply where the AI model is used to create and share nude or explicit images of persons under 18. It is an offence under the law not only to engage, but also to incite, and to possess images. This is covered by various laws that have been passed and updated over the years, including the Obscene Publications Act 1959, the Protection of Children Act 1978, the Criminal Justice Act 1988, the Sexual Offences Act 2003, and the Serious Crime Act 2015. Offences in these laws are plugged into the Online Safety Act, Schedule 6.

In the EU, Member States will be required to enact laws on image-based sexual violence by May 2027. This follows the Directive on combating violence against women and domestic violence of 14 May 2024. The Directive criminalises the production or dissemination of deepfakes and the digital manipulation of images to make it appear as though someone is engaged in sexually explicit activity. Details can be found in this policy paper: Non-consensual sexualising deepfakes – threats and recommendations for legal and societal action, from CEE Digital Democracy Watch.


Who makes the decisions? 

The next thing to consider is what the Online Safety Act actually requires online platform providers to do about this content. It sets out measures requiring them to remove it from the platform. This is in S.10, which mandates online platforms, like X, to remove this material and, significantly, to prevent users from engaging with it.

This is a deeply problematic provision in the law, because it leaves all of the decision-making to the platform providers. So, in this situation, X, the platform, is in effect marking its own homework. It is not merely allowed to do so; it is legally mandated to do so. Under S.10, X has to determine which content on its platform meets the criteria for being illegal under the offences I’ve noted above, and then it has to decide for itself what to do about it.

What about Ofcom, you may ask? Ofcom does not have the authority to determine whether any particular piece of content is illegal or not. It is merely told to kick the platforms into doing something, with very little effective power to back that up. And that really is the heart of the problem.

As an illustration of why this is ineffective: today, as I write, X has withdrawn the functionality from public use, but it is still available on verified “blue tick” accounts. These are subscription-based, paid-for accounts. According to another report, on MSN, the functionality is still available for general use on the xAI standalone app. Far from stopping the trend, X is giving its most prolific users a de facto monopoly, with some of those “blue tick” accounts able to get a revenue-sharing deal.

Action to tackle child sexual abuse material is dealt with in the UK by the Internet Watch Foundation, which does have the ability to remove such material. According to a report by the BBC, it has seen images likely to have been created by the Grok AI tool, and which would meet the criminal threshold, circulating on the dark web.

What’s the enforcement process? 

The enforcement process mandated by the Online Safety Act is part of the reason why Ofcom appears to be sitting on its hands, and why the government’s reliance on Ofcom is misplaced. Ultimately, Ofcom is given powers to seek a court order against non-compliant platforms, but it is told to go through a process mandated in the Act which is unwieldy, cumbersome and slow, and which arguably fails to enable the swift targeting of problems of this complexity.

It would be helpful to study the template offered by the blocking orders obtained for copyright infringement. These orders enabled the filtering of infringing content by broadband providers. I actually sat in the High Court on one of these cases, and I’ve written about the blocking orders in my book The Closing of the Net. As determined by the court, a blocking order must be narrow and specific to the content to be blocked, and the block must be justified by demonstrating that the content is illegal. It does mean that the precise URLs of the content have to be supplied, and that the illegality is determined by the court and not by the platform provider. It’s not ideal, but on balance, I think it offers a more effective and transparent process to address the ethically difficult and legally complex issues around sexual content.

---

I wrote this as a quick response to a dynamic policy question that is in the news today, and will revise the article as and when. I am happy to have feedback. However, I do want to make one thing very clear: I am critiquing the process. The images we are talking about here are abhorrent, and I am in no way seeking to defend the individuals who create them or the platforms that enable and carry this content. What I am attempting to do is raise questions from a public policy perspective, in order to help find a solution.

If you like my work please feel free to browse the website, buy my books or get in touch. 

If you are affected by the issues in this article, help is available by contacting the Revenge Porn Hotline. 

And one more thing - you are welcome to cite this article, just please remember to provide a back-link and cite me as "Dr Monica Horten, Iptegrity.com" 


About Iptegrity

Iptegrity.com is the website of Dr Monica Horten, independent policy analyst: online safety, technology and human rights. Advocating to protect the rights of the majority of law-abiding citizens online. Independent expert on the Council of Europe Committee of Experts on online safety and empowerment of content creators and users. Published author and post-doctoral scholar, with a PhD from the University of Westminster and a DipM from the Chartered Institute of Marketing. Former telecoms journalist, experienced panellist and Chair, cited in the media, e.g. BBC, iNews, The Times, The Guardian and Politico.