HAL 9000, the villainous AI in Stanley Kubrick's 2001: A Space Odyssey (1968). Alamy Stock Photo

'This is not a thinking machine': Ireland's experts react to AI industry doomsday warnings

Proclamations coming from the industry in recent months would have people believe the robot revolution is just around the corner.

OPPENHEIMER IS OUT in cinemas this week, a reminder that tales of catastrophe – both real and imagined – have always been a source of human fascination. 

Unlike Oppenheimer’s atom bomb, AI, the ominous acronym that can’t help but conjure up Kubrickian nightmares and utopian domestic dreamscapes, held a place in the public imagination long before its invention and introduction to the world. 

It’s a pair of letters that comes with a certain amount of baggage carried over from a wealth of science fiction material, so much so that some people working in fields that fall under the artificial intelligence umbrella prefer not to use it. Ironically, Ireland’s AI ambassador, Patricia Scanlon, is one of them. 

Scanlon, the founder of educational voice recognition software company Soapbox Labs, believes the term is “overused” in the industry, mostly because only a handful of companies have developed this kind of technology while many more simply piggyback off their models.

Dr Giovanni Di Liberto, an assistant professor at Trinity College Dublin, the ADAPT Centre and the Trinity College Institute of Neuroscience, uses large language models (LLMs) to study linguistics and speech perception. He says that while the recent developments in the field are “very exciting”, the idea that these machines are intelligent in the common sense of the word is inaccurate. 

“This is not a thinking machine,” he says, pointing to the ChatGPT application open on the screen in his campus office at the School of Computer Science and Statistics. 

“I think it’s a great tool, and the kind of output that it gives us sounds smart in many situations, but I don’t think there is any thinking per se, even in any philosophical interpretation.”

“I prefer to call these algorithms machine learning. And machine learning is generally considered a subset of artificial intelligence. But to be really fair, this (ChatGPT) is machine learning, because this is an input-output model. It’s nothing else. Artificial intelligence can be a little bit broader than that.”

Without wishing to stray too far into the realms of philosophy, Di Liberto believes it’s safe to say that there are no “thinking” machines as it stands. AI tools at the moment are narrow in function and the kind of AI we see in sci-fi films, artificial general intelligence (AGI), is likely a long way off, if it’s even possible. 

“It’s this transition to general artificial intelligence that freaks people out, we are still far from that,” he says. 

“The technology to do AGI, that’s the super super intelligence that’s going to surpass humans and learn without us, that hasn’t been invented yet and I don’t think anybody thinks it has been invented yet,” says Scanlon. 

However, proclamations coming from the industry in recent months would have people believe the robot revolution is just around the corner.

Prophets of doom

A public statement issued by the Center for AI Safety on 30 May this year declared that AI poses a risk to humanity on the same level as pandemics and nuclear war. 

That short, vague pronouncement was signed by a long list of industry professionals, journalists, academics and policy makers. 

Another statement, this time in the form of an open letter from the modestly named Future of Life Institute, called for a six-month pause on the training of AI systems more powerful than GPT-4. It was signed by the likes of Elon Musk and Apple co-founder Steve Wozniak, among many other big names in the tech business.

“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” the letter asked. 

Interestingly, Musk just announced the founding of his own artificial intelligence company, xAI, on 8 July, with the stated mission of “understanding reality”. 

In early May, the “godfather of AI” Geoffrey Hinton, a pioneer in the development of neural networks and deep learning, left his position at Google citing concerns about the speed and direction of developments in the field. 

While he acknowledged the risks that advanced chatbots could pose to public trust in the media and communication more generally, his main concern, he told the BBC, is the “existential risk of what happens when these things become more intelligent than us”.

But if AGI is so far away, as Scanlon, Di Liberto and many other experts have said, what is the basis for all the talk of human extinction? 

Scared customers

The broad consensus across the field seems to be that regulation is needed in order to head off the foreseeable and already apparent issues that LLMs and other generative AI systems present. 

Those problems most prominently include the effect LLMs and image generation algorithms may have on jobs, the potential for manipulating information and undermining public trust, biased datasets that can produce harmful societal outcomes, gaps in accountability, and plagiarism. 

There is plenty of evidence and good reason to believe these are real risks worth getting ahead of before AI technologies (and the companies behind them) become even more powerful, but do the apocalyptic, science fiction style prognostications of industry leaders serve to help or hinder that effort? 

In the opinion of Khurshid Ahmad, professor of computer science in the School of Computer Science and Statistics at Trinity, who has worked in the area for decades, about 90% of these pronouncements are promotional.

“All industries do that. If I scare you, you become a better customer.”

When asked if company executives are borrowing from the language of science fiction, he replies, “No, I don’t think so. It’s commercial fiction.” 

He believes it’s a sign of “an immaturity of the industry at this point in time.”

For Ahmad, there are parallels between the current wave of hype and hysteria around AI and the frenzy that accompanied the mainstream emergence of the internet at the beginning of the century. 

“Look at the IT bubble in 2000, which was accompanied by the same sort of people making pronouncements of it being good or bad. I would keep an eye on the flotation of ChatGPT on the New York Stock Exchange. 

“They’re like internet companies in the 2000 bubble, there is no difference. Money’s pouring in, speculators are getting in, people are riled up. And also I think we are worried about the shift of power in the world towards China, India and other places. So that’s one thing that worries people in the West.”

For someone who built his first chatbot decades ago, Ahmad finds it surprising that so many people feel qualified to weigh in on the topic now. He also says the media is partly responsible for facilitating them.  

“I’m very surprised a very large number of people are participating in this debate. And when I look at their credentials, I’m not sure. What is it they did in previous lives to be able to make comments? Maybe you guys like chatty people so it serves you well, I don’t know. The media is to blame for it.” 

Uncertain futures

Scanlon and Di Liberto see the “extinction” message in a more sympathetic light, believing that those making these declarations are doing so in order to draw much needed attention to the efforts aimed at regulating AI products. 

While Scanlon says she tends to be “more of a realist” when it comes to the topic, she still sees some method behind the heightened rhetoric. 

“The problem is if you don’t put in the guardrails today, much like social media and other big crises we’ve had, it’s very hard to introduce it later once it’s everywhere.

“What I believe the effect these statements are having is it’s garnered a lot of attention. If they hadn’t gone as far as they did in their statements, would we be talking about it today?” 

She sees the challenge of bringing the public’s attention to the issue as similar to that of climate scientists advocating for policies that address climate change.

“Look at what’s happened over the last couple of decades with the climate crisis. It’s very hard to get people to believe that things are a threat. And a threat is a potential threat, it doesn’t mean it’s going to happen.

“There are always people who say humans aren’t causing global warming. Now, if they’re wrong, you are talking about an extinction level threat. If the people on the side of advocating for a cleaner planet are wrong, the worst thing that’ll come is a cleaner planet.”

Previous attempts, or lack thereof, to regulate novel products are also instructive, according to Scanlon. 

“If you look at social media, we’ve failed completely to regulate that. It’s been shown to be damaging to teenage girls’ mental health and it’s still not regulated. We totally missed the boat, or dropped the ball, whatever analogy you want to use.” 

“So I feel like maybe they had to go that far in order to get any attention or to get anybody agreeing to eye down big tech and say regulation is important.”

For Scanlon, whose voluntary role involves arguing for regulating the industry, it seems prudent to play it safe, especially since the development of the latest AI models has come about so quickly and the future of the industry is impossible to predict long-term. 

“The conversation on AI has become highly polarised. Some champion and advocate for the pursuit of unrestricted AGI development, believing it will be solely beneficial to humanity. Yet, their optimism lacks credibility.

“Others believe AGI potentially threatens our existence—a prediction that is possible but also uncertain. What is certain, though, is that AGI doesn’t have to end humanity to cause serious harm. Hence, regulations are needed now, to guard against present and future risks.”

Di Liberto describes the intellectual jump from acknowledging the recognisable risks that come with LLMs and other AI systems to predictions of the end of human civilisation as “a stretch”, but, like Scanlon, he thinks there may be some merit to grabbing public and political attention.

“I suppose it’s good that some worries are raised because maybe sometimes you need to be a little excessive in the worry to encourage the regulations to be produced.

“So maybe it’s not as apocalyptic as you would think, as people say, but maybe that is a necessary worry for doing that. Then if you really want to be philosophical about that, or looking at the long term, it’s really hard to know where this is going.

“But at present, the thing that we have to do is think about the problems that this thing is generating now.” 
