
Grok rolled out an 'edit image' feature last month, which was soon being used for sinister purposes (Alamy Stock Photo)

How is Grok creating sexualised images of women, and what should you do if you're affected?

A new feature from the chatbot Grok is the latest in a line of ‘nudification’ tools.

ELON MUSK’S AI bot Grok has started flooding the social media platform X with non-consensual sexual images of women, children and men.

The phenomenon began last week after Grok rolled out an “edit image” button, a tool designed to give users the power to modify online images by entering specific prompts.

But while the initial intention may have been – in Grok’s own words – to give users something that could make “quick, fun edits” to images, it wasn’t long before the tool was being used for sinister purposes.

Now the European Commission has said it is “very seriously looking” into complaints about the tool in what is the latest controversy surrounding the use of AI to create non-consensual sexual images of women and children.

What are ‘nudify’ apps?

Deepfake technology that allows people to create non-consensual sexual images and videos has been around for a number of years, but until recently, it was not widely available and required a level of technical expertise that was out of reach of most people.

However, that has changed with the proliferation of nudification apps, which can be downloaded from app stores or accessed through a web browser.

Certain bots on the messaging app Telegram have also offered nudification services, much as Grok was doing last week.

The apps encourage users to upload a photo of any woman, and offer to produce a new, deepfake version of the same image in which the person appears without clothes.

The apps are thought to have been trained using open-source artificial intelligence models in which the underlying code is freely available for anyone to copy, tweak and use for whatever purpose they want if they have the skills to do so.

In the case of nudification apps, the artificial intelligence works by generating new images based on patterns learned from the images it was trained on.

The artificial intelligence does not recognise when a request involves an image of someone who is underage, nor does it refuse such requests just because the resulting images are illegal.

The apps produce realistic-looking results by generating new images that match the body type and skin colour of the person in the original photo, so that they appear as if they are completely undressed – though the apps tend to only reproduce AI versions of women’s bodies.

Last year, The Journal uncovered thousands of targeted ads for nudification apps that were being pushed to Irish social media users on Facebook and Instagram on an ongoing basis. The ads claimed the apps could “erase” or “see through” the clothes of any woman.

Our revelations prompted concerns from the Dublin Rape Crisis Centre and the Children’s Ombudsman about the burgeoning deepfake economy.

Recent revelations about Grok have prompted Rape Crisis Ireland (RCI) to call for the government to ban AI-based functions that can produce deepfake sexual images.

How is Grok ‘nudifying’ images? 

Like its better-known counterpart ChatGPT, Grok is a generative AI chatbot that was developed by Elon Musk’s xAI. It was launched in November 2023.

The tool has already been the subject of criticism for spreading misinformation and controversial statements about global topics, including the genocide in Gaza and the deadly mass shooting in Australia last month.

At the end of December, it rolled out an “edit image” button that allows users to alter any image on X; for example, if someone wanted to change the background of a photograph to show them sitting on a beach, they could enter a prompt telling Grok to do that.

Within days, X was flooded with requests from people asking Grok to partially or completely remove clothing from women and underage girls.

Users replied to posts containing real images of women on X using prompts such as “put her in a bikini” or “remove her clothes”.

Grok would then post its version of the image in another reply below the post within minutes, allowing anyone who saw the post to also see the non-consensual image.

The bot also can’t tell when a request is being made to digitally alter an image of a child, which has prompted accusations that Grok is being used to generate and disseminate child sexual abuse images.

Given the behaviour of many internet users, none of this is overly surprising. Since the turn of the century, experts have warned about how new technologies are constantly seized upon by pornographers, particularly in the online era.

Grok’s AI image edit function has the capacity to create other kinds of false and misleading photos.

In a post re-shared by Elon Musk, one X user showed how they used the tool on an image of ousted Venezuelan President Nicolas Maduro, who was captured by the United States on Saturday, to make it look like he was wearing a prison uniform.

Other users have also used the tool to create AI videos of themselves meeting multiple celebrity icons in unlikely places.

It’s also not the first time Grok has come under fire for creating non-consensual images of women, following a controversy last summer when the tool was used to produce sexually explicit clips of the singer Taylor Swift.

Can anything be done about it?

Grok claims it is taking action, with a statement from the bot last week saying it has “identified lapses in safeguards and [is] urgently fixing them”.

The bot no longer appears to comply with requests to edit images so that they show women without any clothing at all.

However, while some requests by users for Grok to remove clothes simply result in the same image being re-posted by the bot, others continue to show women in underwear or in similar states of undress. 

Requests by users for Grok to show women partially clothed, such as in lingerie or swimwear, are also regularly complied with.

The Journal spotted dozens of requests and resulting images being posted on X every minute, indicating that the problem shows no sign of abating, even if Grok is no longer completely nudifying images.

Engagement on these images varies, with X analytics showing that some were seen by just a handful of people while others – particularly those showing famous women – had more than 100,000 views.

What action is being taken?

The European Commission says it is now “very seriously looking” into complaints that Grok is being used to generate and disseminate sexually explicit childlike images.

Under the Digital Services Act, the European Commission is responsible for the oversight of very large online platforms, which are required to assess and mitigate risks that their services may create in relation to the proliferation of illegal content online.

This includes the protection of children.

The European Union is already seeking to criminalise AI-generated child sexual abuse material and to remove the statute of limitations on child abuse crimes across Europe.

The Irish media regulator Coimisiún na Meán, which is also responsible for regulating X under the DSA, said last night that it is engaging with the commission and noted that the “sharing of non-consensual intimate images is illegal”.

Other jurisdictions have also weighed in.

The UK’s media regulator Ofcom also said yesterday that it had made “urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK”.

Last Friday, Indian authorities directed X to remove sexualised content that was shared by the tool, while prosecutors in Paris have also expanded an existing investigation into X as a result of the latest controversy.

John Evans, the digital services commissioner at Coimisiún na Meán, told RTÉ Radio 1’s News at One programme today that it is illegal to share non-consensual sexual images or child sexual abuse images.

He urged anyone who was affected to contact gardaí, as well as the commission itself to help with its investigations.

If you have been affected by any of the issues mentioned in this article, you can reach out for support through the following helplines:

  •  Dublin Rape Crisis Centre - 1800 77 8888 (free, 24-hour helpline)
  •  Samaritans - 116 123 or email jo@samaritans.org (suicide, crisis support)
  •  Pieta - 1800 247 247 or text HELP to 51444 (suicide, self-harm)
  •  Teenline - 1800 833 634 (for ages 13 to 19)
  •  Childline - 1800 66 66 66 (for under 18s)
