
Is a Terminator-style robot apocalypse a possibility?

And why are scientists afraid to talk about it?

WORKING ROBOTICISTS NEED to indulge the public in sci-fi scenarios.

I thought it’d be a cool story to interview academics and robotics professionals about the popular notion of a robot takeover, but four big names in the area declined to talk to me. A fifth person with robo street cred told me on background that people in the community fear that publicly talking about these topics could hurt their credibility, and that they think the topic has already been explained well enough.


Just how realistic is a 'Terminator' scenario?

Source: Terminator

This is a problem. A good roboticist should have a finger on the pulse of the public’s popular conception of robotics and be able to speak to it. The public doesn’t care about “degrees of freedom” or “state estimation and optimization for mobile robot navigation,” but give a robot a gun and a mission, and they’re enthralled.

More importantly, as I heard from the few roboticists who spoke to me on the record, there are real risks involved going forward, and the time to have a serious discussion about the development and regulation of robots is now.


Robots fight during Japan's Robo-One Championships in Tokyo

Source: Screenshot

Most people agree that the robot revolution will have benefits. People disagree about the risks.

Author and physicist Louis Del Monte told us that the robot uprising "won't be the 'Terminator' scenario, not a war. In the early part of the post-singularity world — after robots become smarter than humans — one scenario is that the machines will seek to turn humans into cyborgs. This is nearly happening now, replacing faulty limbs with artificial parts. We'll see the machines as a useful tool."


Louis Del Monte

Source: Screenshot

But according to Del Monte, the real danger occurs when self-aware machines realize they share the planet with humans. They “might view us the same way we view harmful insects” because humans are a species that “is unstable, creates wars, has weapons to wipe out the world twice over, and makes computer viruses.”

Frank Tobe, editor and publisher of the business-focused Robot Report, subscribes to Google futurist Ray Kurzweil's view of the singularity: that we're close to developing machines that can outperform the human mind, perhaps by 2045. He says we shouldn't take this lightly.

I’ve become concerned that now is the time to set in motion limits, controls, and guidelines for the development and deployment of future robotic-like devices.

“It’s time to decide whether future robots will have superpowers — which themselves will be subject to exponential rates of progress — or be limited to services under man’s control,” Tobe said. “Superman or valet? I choose the latter, but I’m concerned that politicians and governments, particularly their departments of defense and industry lobbyists, will choose the former.”

Kurzweil contends that as various research projects plumb the depths of the human brain with software, humankind itself will be improved by offshoot therapies and implants.

“This seems logical to me,” Tobe said. “Nevertheless, until we choose the valet option, we have to be wary that sociopathic behaviors can be programmed into future bots with unimaginable consequences.”


Ryan Calo

Source: Screenshot

Ryan Calo, assistant professor of law at the University of Washington with an eye on robot ethics and policy, does not see a machine uprising ever happening:

Based on what I read, and on conversations I have had with a wide variety of roboticists and computer scientists, I do not believe machines will surpass human intelligence — in the sense of achieving ‘strong’ or ‘general’ AI — in the foreseeable future.

“Even if processing power continues to advance, we would need an achievement in software on par with the work of Mozart to reproduce consciousness.”

Calo adds, however, that we should watch for warning signs leading up to a potential singularity moment. If robots become more multipurpose and contextually aware, they may be "on their way to strong AI," says Calo. That would be a sign that they're advancing to the point of danger for humans.

Calo has also recently said that robotic capability needs to be regulated.

Andra Keay, managing director of Silicon Valley Robotics, also doesn't foresee a guns-a'-blazin' robot war, but she says there are issues we should confront: "I don't believe in a head-on conflict between humans and machines, but I do think that machines may profoundly change the way we live, and unless we pay attention to the shifting economic and ethical boundaries, we will create a worse world for the future," she said. "It's up to us."


In contrast to this, Jorge Heraud, CEO of agricultural robotics company Blue River Technology, offers a fairly middle-of-the-road point of view: “Yes, someday [robots and machines] will [surpass human intelligence]. Early on, robots/machines will be better at some tasks and (much) worse at others. It’ll take a very long while until a single robot/machine will surpass human intelligence in a broad number of tasks. [It will be] much longer until it’s better in all.”

When asked if the singularity would look like a missing scene from "Terminator" or if it would be more subtle than that, Heraud said, "Much more subtle. Think C-3PO. We don't have anything to worry about for a long while."

Regardless of the risk, it shouldn’t be controversial that we need to discuss and regulate the future of robotics.

Northwestern Law professor John O. McGinnis makes clear how we can win the robot revolution right now in his paper, "Accelerating AI":

“Even a non-anthropomorphic human intelligence still could pose threats to mankind, but they are probably manageable threats. The greatest problem is that such artificial intelligence may be indifferent to human welfare. Thus, for instance, unless otherwise programmed, it could solve problems in ways that could lead to harm against humans.”

“But indifference, rather than innate malevolence, is much more easily cured. Artificial intelligence can be programmed to weigh human values in its decision making. The key will be to assure such programming.”

Long before any battle scenes ripped from science fiction actually take place, the real battle will be in the hands of the people building and designing artificially intelligent systems. Many of the same people who declined to be interviewed for this story are the ones who must stand up as heroes to save humanity from blockbuster science fiction terror in the real world.

Forget the missiles and lasers — the only weapons of consequence here will be algorithms and the human minds creating them.

- Dylan Love

READ: This robot is designed to become part of your family’s day-to-day life

READ: ‘Machines, not humans will be dominant by 2045’


Published with permission from:

Business Insider
Business Insider is a business site with strong financial, media and tech focus.

