
Elon Musk has pushed back against criticism of X's Grok 'nudification' tool. Alamy Stock Photo

Hazel Chu: Grok creates fake, sexualised images without consent, and the government is OK with that?

The Green Party councillor says the government’s decision to turn a blind eye to AI generation of sexual imagery without consent is not innovation; it’s negligence.


NO ONE SEEMED all that surprised that Grok, the AI chatbot developed by Elon Musk’s xAI, recently landed itself in controversy, hitting a new low by editing images of individuals to put them in “see-through bikinis”, “cover them in oil” or “strip them down to their underwear”.

With no safeguards in place on the platform, Grok complied with user prompts and so enabled the mass creation of non-consensual, sexually explicit deepfakes, the overwhelming majority of which were of women and, in some cases, children.

Analysts from the Internet Watch Foundation (IWF) charity in the UK found “criminal imagery” of girls aged between 11 and 13 which “appears to have been created” using Grok. CNN found that Grok fulfilled a user’s request to edit an image of the body of Renee Nicole Good, who was shot and killed by an ICE agent in Minneapolis, so that she appeared in a bikini.

The Grok app on an iPhone, against the backdrop of search results displayed on the social media platform X. Alamy Stock Photo

The Rape Crisis Centre and various legal experts have called for an outright ban on AI features that enable the creation of such deepfakes, and on apps that “nudify” individuals. To the everyday public, this seems like a common-sense approach and request.

Yet it is one with which the Government does not seem to entirely agree. On the back of the Grok controversy, Minister for Enterprise Peter Burke stated that the laws were “robust”, while his colleague, Minister for Media Patrick O’Donovan, claimed that the onus lies primarily on users rather than platforms, stating: “Ultimately, at the end of the day, it’s a choice of a person to make these images.” He has since insisted his comments were taken “out of context”, but not before deleting his own account on X.

Patrick O’Donovan was criticised when he said that X is not responsible for making child sexual abuse images. Alamy Stock Photo

At least their Government colleague, Minister for Artificial Intelligence Niamh Smyth, had the good sense to acknowledge that the breach by X was “not legal and wrong”. Is it any surprise that Big Tech thinks it is above the law when those who govern are so confused?

Interestingly, the Minister for Media has also argued that technology is moving too fast for the law to keep pace with. Given that this Government has passed less legislation since its inception than any of its predecessors, it is not surprising that he would take this approach.

‘Blame the user’

Legislation often regulates fast-moving, high-risk industries by setting clear boundaries, and the net effect is safer for the public. Seat belts did not kill the car industry. Food safety laws did not destroy food producers or hospitality. Safeguards do not stifle progress; they simply set a standard for what is acceptable and ensure no harm is done. Allowing AI to generate sexual imagery without consent is not innovation; it’s negligence.

The fact is that AI systems are not neutral tools. Generative AI does not merely “assist” user behaviour; it interprets, elaborates and produces content. And the companies that own these systems decide how the models are trained, what prompts are blocked, what outputs are allowed and what safeguards are omitted. When an AI system readily complies and generates sexualised images of individuals, including minors, that is not just user misconduct. It is a design failure. And sometimes it is an intentional one.

Musk has long advocated against “woke” AI models (those with safeguards) and even insisted on a “spicy mode” for Grok. According to various media outlets, Musk has internally pushed back against guardrails for Grok at xAI. And when the issue with Grok first appeared last week, Musk asked Grok to modify an image of him to put him in a bikini. When a company and those who run it know of an issue and do nothing to fix it, they are complicit.

Legal blind spot

The “blame the user” argument adopted by our Government and other commentators also ignores where the harm actually occurs. With Grok, the harm lay not just in the sharing of the content, the breach of privacy and the harassment of individuals; it lay in the creation of the material without the consent of those appearing in it.

As it stands in Ireland, the Harassment, Harmful Communications and Related Offences Act 2020, aka Coco’s Law, provides penalties for the sharing of harmful communications after the fact. It contains no provisions addressing the creation of such material. Creation is criminalised only where minors are involved, under the Child Trafficking and Pornography Act 1998, which governs child sexual abuse material.

This is something organisations like the Rape Crisis Centre, along with legal experts, have been calling to be remedied for some time now. It is a legal blind spot that can and should be fixed. The proposed Protection of Voice and Image Bill 2025, currently at second stage, is a welcome first step, but the Grok debacle shows that incremental reform is not enough.

There is a reason the UK is bringing into effect a law criminalising the creation of non-consensual intimate images this week, after Ofcom, the UK’s communications regulator, launched an official investigation into X. We in Ireland need explicit legal recognition that the non-consensual creation of deepfakes is a harm in itself, and we need to provide for mandatory safeguards at the point of AI image generation.

Women and children

Finally, and crucially, we must call out the gendered reality of this abuse, since it is so commonly ignored. The majority of sexualised AI deepfakes target women and girls. According to the UN, AI-powered deepfake technology is being weaponised on a large scale: up to 95 per cent of online deepfakes are non-consensual pornographic images, and 99 per cent of those targeted are women. A 2024 UN Women report, meanwhile, found that 300 million children had been affected by online child sexual exploitation and abuse over a 12-month reporting period.

This is not accidental. It is a continuation of gender-based violence, repackaged as “technological innovation”. Digital abuse doesn’t just stay on screens; online attacks quickly spill into real life, escalating in severity. It is not surprising that the recent violations by Grok reflect the deep-seated misogyny and violence against women already reported across social media, especially on X. We need to stop treating these as niche policy issues and face them for what they are: an ongoing public safety concern.

The Grok logo displayed on a mobile phone, with the EU flag visible in the background, in a photo illustration taken in Brussels on 6 January 2026. Alamy Stock Photo

In the immediate term, every politician, political party and state agency needs to come off X. It feels very wrong that Dublin City Council is using the platform to promote library opening times when that very platform is generating and sharing child sexual abuse material, or that the HSE is paying X for a premium account to communicate advice on women’s health when the same platform is stripping women of their mental health and privacy.

It’s why, as a Dublin City Councillor, I and my Green colleagues submitted a motion calling for all Dublin City Council sections to stop using X immediately.

The Government now faces choices. They can continue to work on the margins of outdated laws while insisting that users are the ones responsible. Or they can address the reality that powerful technologies demand equally powerful governance, no matter how deep-pocketed the companies behind them are.

They can also stop choosing the whims of attention-seeking Big Tech owners over the safety of our public. It’s time for the Government to show some leadership and make those right choices.

Hazel Chu is a Dublin City Councillor and the Green Party Spokesperson for Public Expenditure, Digitalisation and Media.
