Factfind: Who, if anyone, can be held legally responsible for abuse images on Grok and X?

While the harms of abusive images are clear, the law is far murkier.

A SCANDAL INVOLVING AI-generated non-consensual images depicting real people, including women and children, has led to widespread political condemnation and public outrage.

But statements by authorities on what specific offences were committed and who is legally responsible have been vague and often contradictory.

Following the introduction in December of an edit button on the AI tool Grok, fabricated images of children in bikinis, as well as non-consensual, explicitly sexual and sometimes violent imagery of women, flooded the social media app X.

Both Grok and X are part of xAI, a company founded by Elon Musk, and the chatbot is tightly intertwined with the social media network, where users can generate images and text by interacting with the Grok account.

The Journal spoke with experts in relevant areas of law for their opinions on which (if any) laws were broken, who is legally liable, and whether new legislation is needed.

Minister of State Niamh Smyth and MEP Nina Carberry have both said that if X fails to abide by the law, it should be banned. Carberry also wrote to the EU Commission to ask “why the EU is not currently using all the weapons in its arsenal”, while Smyth met with X executives on Friday to express “serious dismay” at the platform.

However, it is unclear whether the law has been broken, at least in a way that could be prosecuted in Ireland.

Child Sexual Abuse Material

The foremost legal question is whether any of the images produced by Grok meet the definition of child sexual abuse material under Irish law.

A Garda representative told a Dáil committee on Wednesday that the force is investigating 200 reports of suspected child sexual abuse material linked to Grok.

He also said that AI-generated images of children would be treated “exactly the same” as real images.

However, legal experts who spoke to The Journal indicated that it was not clear whether crimes had occurred under strict legal definitions, despite a large number of reports that images of children in bikinis were created.

The Child Trafficking and Pornography Act of 1998 defines visual “child pornography” as showing “sexually explicit activity” or a “genital or anal region”.

(The term “child pornography” is generally considered outdated as it does not “reflect the abhorrence of the sexual abuse in this type of material”, in the words of the Justice Minister. However, it remains the legal term and is used in media when referring to crimes defined in that Act. A bill that will update the term in Irish law is being drafted).

Even when images can be shown to have been generated for “a sexual purpose”, they may not meet the criteria of the legislation, Róisín Costello, a practising barrister who teaches at Trinity College Dublin, told The Journal.

Under the law, the mass creation of images of real children in bikinis by strangers may fall outside the technical definition of a crime.

“Images of children in their underwear does not, of itself, meet the definition of child pornography as it currently stands,” barrister Michael O’Doherty, the author of a leading text on Internet law in Ireland, told The Journal.

“That does not mean, however, that the images are not unlawful – they can be ‘indecent’ or ‘grossly offensive’, without reaching the threshold of being pornographic.”

Other crimes

Although such images likely cannot be prosecuted as “child pornography”, this does not mean they are legal. In addition, not all the images generated were of children.

Legal experts who spoke with The Journal noted that publishing AI-generated images of real people could fall afoul of the Harassment, Harmful Communications and Related Offences Act 2020, also known as Coco’s Law. 

This act, which is often described as outlawing “revenge porn”, makes it a crime to distribute or publish an “intimate image without consent”.

Unlike the stricter “child pornography” definition, the act deals with “intimate images”, which can also include nude pictures, or images of “underwear covering the person’s genitals, buttocks or anal region and, in the case of a female, her breasts.”

“The wording is quite flexible,” Costello said of the law, before noting that the legislation defines such images “in a way that could conceivably capture this kind of blended AI generated/manipulated and real content.”

However, unlike laws against child sexual abuse images, Gardaí are likely to pursue these cases only after being contacted by a victim.

“The difficulty with this act is that for the non-consensual sharing of images to be a prosecutable offence, it appears that the person whose image is used must make the complaint at first instance,” O’Doherty told The Journal.

“The act provides that the publication of the image ‘seriously interferes with that other person’s peace and privacy or causes alarm, distress or harm to that other person.’

“So it is not sufficient for society to be generally outraged by images of children who are not identified for the offence to be completed – the person who is the subject of the image (or their parent or guardian) must complain about it for a prosecution to be possible.”

Sexual Offences

However, O’Doherty also suggested a third law that might be applicable to Grok.

“I’m surprised that no-one has mentioned the possibility of prosecution under the more general offence of section 45(3) of the Criminal Law (Sexual Offences) Act 2017, which provides that ‘A person who intentionally engages in offensive conduct of a sexual nature is guilty of an offence’,” O’Doherty suggested.

“This offence can be completed when ‘any person’ becomes aware of the behaviour, rather than the victim of the behaviour themselves,” he said.

“This was used in the prosecution of a man for the offence of ‘upskirting’ at the Dublin Pride Parade in 2019, where he was prosecuted for simply taking a photograph up a woman’s skirt, without ever having further published the image to a wider audience.

“It strikes me that this offence is flexible enough to capture the behaviour at the centre of this controversy.”

However, other legal professionals were unsure whether prosecutions under that law would be successful.

“The section is designed as a flexible, catch‑all provision to cover a broad range of sexually inappropriate public conduct (for example, “upskirting” or public sexualised acts) that do not fit more specific offences but still harm or disturb others,” barrister Michelle Smith de Bruin told The Journal, noting it was not designed to include online offences.

Smith de Bruin was involved in the prosecution of the first case brought to court under this legislation.

“You could try to prosecute a person under this section for publishing a non-consensual sexualised deepfake image,” Smith de Bruin said. “But in my view, you would be trying to fit a square peg into a round hole.”

Is xAI liable?

Legal experts broadly agree that users who post abusive images should be held responsible for their actions. However, legal liability is murkier regarding the company that enabled them to create and easily share that material with potentially millions of viewers.

Minister for Media Patrick O’Donovan said last week that X was not responsible for making child sexual abuse images, and that responsibility instead lay with the people who use the website to make them.

O’Donovan has since said that his comments were “taken out of context”, though he still refused to apportion blame to Elon Musk’s X for the generation of the images.

Social media companies are given wide-ranging protections over what appears on their sites.

“Millions of people are publishing content on online platforms on a daily basis and it would be impossible for platforms to prevent every harmful, or illegal post being placed online,” Smith de Bruin told The Journal.

The protections afforded these companies are referred to as “hosting”, “caching” and “mere conduit” defences.

In effect, the law does not oblige social media platforms to proactively moderate content, Costello explained. However, they are required to take content down once they become aware of it.

“If the manipulated image does not reach the threshold of an offence but is still bullying, humiliating, or otherwise meets the risk-test for ‘harmful online content’ (for example, it seriously humiliates the person or foreseeably causes significant mental harm), it falls into the ‘harmful but not illegal’ category,” Smith de Bruin said, citing the Online Safety and Media Regulation Act 2022.

Affected users must be given tools to report harmful content and, if they believe the platform is not meeting its obligations, to file a complaint with Coimisiún na Meán. 

This media regulator is in charge of overseeing the Online Safety Code and the Digital Services Act, which is the set of EU-wide laws dealing with large online platforms.

Coimisiún na Meán has powers to compel compliance, including imposing significant administrative fines, in some cases up to 10% of annual worldwide turnover.

In response to The Journal’s queries on whether it was investigating this scandal, Coimisiún na Meán said, ambiguously: “Our current investigations into X, TikTok and LinkedIn which we commenced last year are ongoing.”

Its email noted that it did not have the power to remove illegal or harmful content, and said that people who see such content should report it to the Gardaí, or Hotline.ie, as well as to the platforms themselves.

“If a user has difficulty reporting, or if they are unhappy with a platform’s response, they should contact us,” Coimisiún na Meán said. 

X is designated as a video‐sharing platform service and is bound by the Online Safety Code, which requires such companies to prohibit harmful content and take “appropriate measures” to protect users, particularly children.

(X is currently in court challenging this code. A decision in the case is expected next week).

Despite the protections generally afforded to social media companies, more straightforward prosecutions against X are not necessarily ruled out.

“The issue here is who is responsible for the ‘publication’ of the images,” O’Doherty said.

“An AI machine which, in response to a user’s prompt, creates any response — be it in text or image format — is legally responsible for the publication of that content.

“The user who requested it may also be responsible for any further publication of the content — if they upload it to social media, for example — but the AI platform bears initial responsibility for its publication.

“To suggest otherwise betrays a striking misunderstanding of not just the technology, but also the law.”

O’Doherty noted that while X, the social media platform, had a “general immunity” under the law, this did not apply to the AI tool that it was intertwined with.

“Grok is not ‘hosting’ these images — it is creating them, so the operators of Grok bear primary responsibility as publishers of this material, and I believe would not be able to avail of the hosting defence,” he said.

However, this view was not unanimous among the legal professionals that spoke with The Journal.

“As the law stands, Grok could not reasonably be held liable for every deepfake posted online,” said Smith de Bruin, comparing the situation to someone being caught driving a Ferrari at 200km/h and blaming the car company.

“We choose not to ban car manufacturers from making fast cars that can exceed the speed limits,” she said. “Instead, we make it an offence to drive at reckless speeds.”

New legislation

Many of the legal experts who spoke with The Journal suggested that the ambiguity and confusion over the legal status of the scandal is a sign that new legislation is needed.

In addition, Garda sources who spoke to The Journal said that there were ongoing debates among An Garda Síochána as to which existing legislation could be used to prosecute such cases.

“There is no doubt that our statutes need updating in this area,” John Maher, a barrister and expert in media law, told The Journal, noting that the current “child pornography” legislation does “not envisage the type of images that have led to the current controversy.”

Similarly, while Michelle Smith de Bruin was skeptical of using the law applied in “upskirting” prosecutions for cases like the Grok scandal, she did suggest that small amendments could expand that law to cover intentionally sexualised deepfake images.

“It can be difficult for people affected by harmful online conduct to know which regime applies to their case,” Smith de Bruin said. “It must be nigh-on impossible for the average person to know whether their case falls into the category of illegal content, or ‘harmful but not illegal’. And the remedies are very different.”

All this shows that confusion about whether such cases could be successfully prosecuted under current legislation is not limited to the general public. Evidently, there is no consensus among legal professionals on how the current law should be applied.

“In general, judges are much more up to date with technology than people appreciate,” Maher said, “but a judge’s hands are tied if an offence is defined, in a statute, in a way that does not cover the processes and systems now available for generating images.

“Prosecutors cannot be expected to rely on courts developing novel interpretations of older laws.

“It gives great reassurance to a judge when updated legislation is enacted, so that the judge knows that the Oireachtas has considered the matter, and created an updated version of the relevant offences to meet the new circumstances.”

Whether the law changes or not, there is no sign of a pause in the harms made possible by technological advancement.

“It will not become easier to deal with this issue going forward,” Smith de Bruin warned, “particularly having regard to the speed at which AI is moving.”

The Journal’s FactCheck is a signatory to the International Fact-Checking Network’s Code of Principles.
