
Elon Musk's AI tool Grok has hit the headlines over its ability to generate sexual imagery without consent. Alamy Stock Photo

Professor Mary Aiken: Grok is a warning shot, not a one-off scandal

The cyberpsychologist says that Grok’s ‘nudify’ controversy is not just a ‘bad app’ issue — it is a global governance challenge.


HAVING WORKED IN cyber safety for almost two decades, I have often heard colleagues say that if the public truly understood the horrors of child sexual abuse imagery, there would be a widespread outcry. We have now reached that moment.

The public response to Grok’s ‘nudification’ of ordinary photos and the generation of sexualised imagery of real people has been intense and justified. But if we treat this as a single product scandal, we will miss what it is really telling us, which is that the gap between technological capability and governance has widened into a chasm.

When a consumer-facing tool can be repurposed, at scale, into a harassment engine in minutes, harm is no longer an edge case. It is a predictable outcome of deploying powerful systems without sufficiently robust guardrails, accountability and enforcement pathways. 

Online harm: volume, velocity and variety

Ireland already has strong laws against image-based abuse. Sharing an intimate image without consent is criminal. Threatening to do so is criminal. Producing or distributing child sexual abuse material (CSAM) is criminal.

However, online harm has become a “big-data” problem: vast in volume, instantaneous in spread and endlessly variable in form. This avalanche of harmful content has outpaced the capacity of police forces worldwide, making automated detection and intervention systems a necessity rather than a choice.

The Grok controversy, and the broader ‘nudification’ trend, mark the industrialisation of sexual harm: it is fast, scalable, cross-border, and often driven by anonymous or pseudonymous accounts, making investigation difficult. Restricting one tool leads to the emergence of others. When platforms tighten access, content shifts elsewhere. When one jurisdiction enforces penalties, offenders evade them. This is not just a “bad app” issue – it is a global governance challenge.

Nudification apps are trained on vast image datasets. It has been estimated that 99% of sexually explicit deepfakes depict women and girls. Importantly, most jurisdictions are now attempting to tackle AI-generated sexual imagery of children, yet paradoxically permit the continued deployment of AI models capable of producing it. Focusing solely on Grok is not the point – it may be today’s headline, but it is not the only system capable of producing sexualised, degrading or illegal imagery.

The wider ecosystem includes mainstream generative AI tools, smaller “nudify” services, and open-source models that can be modified, redistributed, or subjected to jailbreak techniques that bypass built-in safety controls and can prompt an AI system to generate content it is designed to refuse. Regulation that chases one brand name at a time will always fall short.

Regulatory frameworks still operate as if harm occurs slowly. Traditionally, a complaint is filed, an incident is investigated, and responsibility is assigned. However, internet-scale harm spreads instantly, is endlessly replicated, and can be automated. A single prompt can generate hundreds of variants, and a single upload can reach multiple platforms before victims are even aware.

Jurisdictional differences further complicate the issue. Our recent International State of Safety Tech report notes that regulatory divergence persists. The US generally upholds a free-speech and market-led approach at the national level, while the EU and UK pursue precautionary, risk-based frameworks focused on transparency and age assurance. This divergence is significant because platforms operate globally, yet accountability remains mostly territorial.

The First Amendment is central to this discussion. In the US, freedom of speech is fundamental, and state efforts to regulate speech often face constitutional challenges. Nonetheless, the current US administration is focused on legal measures to address AI-generated deepfakes and non-consensual intimate imagery. Ireland and the EU are also working to prevent and address harms such as sexual exploitation, non-consensual intimate imagery and child abuse. However, many platforms, products and policies are designed within a US legal and cultural context, then exported to jurisdictions with different regulations and responsibilities.

The core issue is not “free speech versus safety.” Instead, we must ask whether we can establish shared minimum safety-by-design standards that apply across borders, especially to protect the most vulnerable.

Anonymity: essential in principle, dangerous by default

As a cyberpsychologist, I am cautious about simplistic views on anonymity. Anonymity is essential in certain contexts, such as whistleblowing, escaping domestic abuse, exploring sexuality, political dissent and reporting wrongdoing. Eliminating anonymity entirely risks silencing those we aim to protect.

However, default anonymity at internet scale predictably accelerates harmful behaviour. While online anonymity is often seen as a fundamental human right, it is neither ancient nor absolute; it is a modern internet feature that must be paired with responsibility and accountability.

It enables the ‘Online Disinhibition Effect’: reduced empathy, reduced accountability and increased risk-taking when identity is untraceable. Recent reports estimate that nearly 40% of accounts engaging in pornographic or sexually explicit content are anonymous. Even if this percentage changes, the behavioural patterns remain consistent: when people feel unaccountable, harmful behaviours can emerge.

A universal “real name policy” is not the solution. Instead, a tiered identity assurance approach is needed, applying the appropriate level of verification to the level of risk. The Safety Tech ecosystem already offers privacy-preserving age and identity assurance methods and technologies that can reduce anonymity without enabling mass surveillance.

AI has changed the threat model – especially for children

The most urgent concern is how generative AI has fundamentally changed the economics of abuse. AI systems can now generate synthetic child sexual abuse material, removing the need for offenders to have direct access to a child in order to create exploitative imagery. This significantly lowers barriers to harm while allowing content to be produced at scale, quickly overwhelming already stretched moderation systems and law-enforcement resources.

Even more concerning is that AI systems optimised for engagement can drift toward increasingly extreme outputs, a pattern seen in the online monetisation of attention visible in everything from disaster imagery to simulated violence. When applied to AI-generated child sexual abuse material, this dynamic risks producing ever more extreme synthetic abuse. This progression is not victimless; such material normalises and escalates harm, and may contribute to real-world offending. Grok should therefore be seen as a critical inflexion point in more ways than one.

The good news – solutions exist, but they’re not being deployed fast enough

I served as an expert reviewer on a UK government-funded project to prevent the sharing of known child sexual abuse material on end-to-end encrypted messaging platforms without compromising user privacy. The project was successful, resulting in the development of Cyacomb Safety, game-changing software in the fight against harmful online content. However, despite its privacy-preserving design, deployment has been hindered by privacy concerns. This highlights a broader societal challenge: balancing privacy, collective security and technological innovation. Elevating any one of these above the others ultimately weakens all three.

A key point sometimes overlooked is that Safety Tech is a vibrant, global and expanding sector. As one of the founding members of the sector, I am committed to providing technology solutions to address problematic, harmful and criminal behaviours facilitated by technology. Ireland is one of Europe’s most active Safety Tech hubs. This is not a niche industry, but an ecosystem that can help platforms and regulators move from aspiration to implementation.

The taxonomy outlined in our Safety Tech report includes concrete capability areas that map directly onto the Grok-style problem set:

  • Countering illegal content and behaviour, including CSAM detection and removal 
  • Identity and age assurance, including privacy-preserving methods 
  • User protection and personal safety, explicitly including support for those affected by intimate image abuse
  • Information integrity and authenticity, including AI-generated content detection and deepfake detection

Equally important, the report defines modern safety as assurance, making risk assessments, safety-by-design, and pre- and post-deployment testing essential when AI systems can cause harm at scale.

Why does this gap persist? Because capability does not guarantee adoption. Platforms implement cyber safety measures only when regulation, liability, procurement and incentives are aligned.

We need ‘air and sea’ thinking for cyberspace

Cyberspace is a shared environment. Like airspace and international waters, it is used by all, crosses borders by default, and cannot be effectively governed by a single state. In aviation and maritime sectors, we accept fundamentals such as common standards, incident reporting, operator obligations, licensing, regulation and cross-border coordination. Safety is not considered optional.
 
The Grok controversy reveals a persistent flaw in online safety governance: over-reliance on voluntary commitments, internal platform controls and reactive enforcement. This approach is no longer credible at the scale of AI. Encouragingly, regulators and multilateral bodies are beginning to align on shared standards, coordinated oversight, and interoperable safety signals to identify and contain harm before it spreads.
 
This is the right direction, but it requires political momentum.

Ireland can do more than react. We can lead.

  • Build cross-party collaboration that frames online harm as a public health and safety issue transcending political divides.
  • Embed subject-matter experts in AI, child protection, safety engineering, trust and safety operations, cyberpsychology and digital forensics in policymaking, rather than consulting them only after scandals.
  • Develop a practical roadmap for Safety Tech adoption, including pilots, secure data-sharing mechanisms, and evidence-based evaluation, so that regulation is implementable, not merely aspirational.
  • Mandate safety-by-design for generative AI, with red-teaming and deployment gating when systems are found to generate illegal or harmful outputs.

The Grok incident should not conclude with a single investigation, temporary restriction, or promises to do better. It should prompt recognition that the governance model inherited from an earlier internet is no longer suitable for an AI-driven era. 
 
If the Grok AI ‘nudify’ controversy teaches us anything, it’s this: reactive fixes after public outrage are not a strategy. Ireland should push at home, through Europe and indeed worldwide for a coherent, shared-space approach to cyberspace governance: minimum global standards, safety-by-design, and enforcement that matches internet speed.

Cyberpsychologist Dr Mary Aiken is Professor and Chair of the Department of Cyberpsychology at Capitol Technology University, Washington DC. She is a Professor of Forensic Cyberpsychology in the Department of Law & Criminology at the University of East London (UEL). Prof. Aiken is a Member of the INTERPOL Global Cybercrime Expert Group and is an Academic Advisor to Europol’s European Cyber Crime Centre (EC3).

