
The iCub robot tries to catch a ball during the Innorobo European summit, March 2012. (Image: Laurent Cipriani/AP/Press Association Images)

Artificial intelligence: how close is it to passing the test?

Some scientists believe that we are closer than ever to achieving artificial intelligence in machines – but questions about what intelligence really is remain.

THE QUEST FOR artificial intelligence (AI) began long ago in human history – even earlier than you might think.

The Egyptians were the first people known to discuss the concept of a “thinking machine”, long before the technology we use today existed.

Recently, scientists have said we could be closer than ever to passing the Turing Test – a method of gauging whether humans perceive a machine as intelligent, devised by British mathematician Alan Turing.

In practice, the test involves a panel of human judges who read typed answers to questions put to both a computer and a human. If the judges cannot reliably tell the two sets of answers apart, the machine may be considered “intelligent”.

Turing reasoned that if a computer could impersonate a human being so convincingly that it could not be identified as a machine, that computer could be said to be at least as intelligent as a person.
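To make the test’s structure concrete, here is a minimal Python sketch of the imitation game as described above. Everything in it is a stand-in invented for illustration – the judge and respondents are simple callables, where a real test would involve live, free-form interrogation – but it captures the essential mechanics: a blind comparison in which the machine passes by fooling the judge.

```python
import random

def turing_test(judge, human_respondent, machine_respondent, questions):
    """Run one round of an imitation-game-style test.

    The judge sees only typed answers from two anonymised respondents
    and must guess which one is the machine. Returns True if the judge
    guesses wrong, i.e. the machine "passes" this round.
    """
    # Hide the respondents behind randomly assigned labels.
    respondents = {"A": human_respondent, "B": machine_respondent}
    if random.random() < 0.5:
        respondents = {"A": machine_respondent, "B": human_respondent}

    # Collect each respondent's typed answer to every question.
    transcript = [
        (q, {label: respond(q) for label, respond in respondents.items()})
        for q in questions
    ]

    guess = judge(transcript)  # the judge names "A" or "B" as the machine
    machine_label = next(
        label for label, r in respondents.items() if r is machine_respondent
    )
    return guess != machine_label

# Toy stand-ins: both respondents give the same canned reply, so the
# judge can only guess at random.
human = lambda q: "I'd have to think about that."
machine = lambda q: "I'd have to think about that."
judge = lambda transcript: random.choice(["A", "B"])

rounds = 1000
fooled = sum(
    turing_test(judge, human, machine, ["What is love?"]) for _ in range(rounds)
)
print(f"Machine fooled the judge in {fooled} of {rounds} rounds")
```

With the toy respondents above, both sides give identical answers, so a guessing judge is fooled roughly half the time – the chance-level performance a perfectly convincing machine would achieve.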

A brief history of AI

  • In the 1950s, science fiction writer Isaac Asimov, who devised the Three Laws of Robotics and wrote I, Robot, seriously put forward the idea of pursuing the development of AI (“cybernetics”), saying: “Cybernetics is not just another branch of science. It is an intellectual revolution that rivals in importance the earlier Industrial Revolution.”
  • In 1950, Turing published his landmark paper Computing Machinery and Intelligence, which questioned whether machines could think, or possess intelligence. Turing pointed out that the definition of ‘intelligence’ was debatable, and reasoned that if a machine could appear to think just like a human it could be considered intelligent. The Turing test was born.
  • The Dartmouth Summer Research Project on Artificial Intelligence, held in 1956, is now regarded as a seminal event in the field of AI. Many theories about the human race’s future with robots were ventured during the conference, with some insisting that household robots and teaching machines would become commonplace, and that computers would one day dominate our lives.
  • In the mid-1970s, due to a series of AI failures, both the British and US governments cut funding to AI programmes. The years that followed are referred to as the ‘AI winter’.
  • With the advent of expert systems in the early 1980s – computer systems that emulate a human expert’s decision-making ability – funding was reintroduced to AI researchers. Although Japan’s fifth-generation computer project also spurred more funding, further technological failures soon led to another ‘winter’.
  • The sector picked up again during the 1990s, with milestones such as IBM’s Deep Blue beating reigning world chess champion Garry Kasparov in 1997, and continued to build momentum throughout the new millennium.
  • Today, we use much technology that was developed from AI research – for example, the 3D body-motion interface for the Xbox 360 – although few people would now consider this technology to be ‘intelligent’. As technology becomes increasingly sophisticated, the question of what constitutes intelligence is more hotly debated than ever.

Recent advances

Recently, Robert French, a cognitive scientist at the French National Center for Scientific Research, wrote in the journal Science that two “revolutionary advances” in information technology could bring the Turing test out of retirement.

The first is the ready availability of vast amounts of raw data — from video feeds to complete sound environments, and from casual conversations to technical documents on every conceivable subject. The second is the advent of sophisticated techniques for collecting, organising, and processing this rich collection of data.

The combination of these advances means that machines can now answer questions that previously stumped them by searching the internet for information.
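As a rough illustration of that search-driven approach, the sketch below “answers” a question by nothing more sophisticated than keyword overlap against a handful of sentences. The answer_by_retrieval function and its three-sentence corpus are invented for this example – a toy stand-in for web-scale retrieval, not any real system’s method.

```python
def answer_by_retrieval(question, corpus):
    """Return the corpus sentence sharing the most words with the question.

    A crude stand-in for "search a vast body of text": there is no
    understanding here, only lexical overlap across available data.
    """
    question_words = set(question.lower().split())

    def overlap(sentence):
        return len(question_words & set(sentence.lower().split()))

    return max(corpus, key=overlap)

# A hypothetical mini-corpus standing in for the internet.
corpus = [
    "Alan Turing published Computing Machinery and Intelligence in 1950.",
    "Deep Blue beat Garry Kasparov at chess in 1997.",
    "The Dartmouth conference in 1956 launched AI as a field.",
]

print(answer_by_retrieval("Who beat Kasparov at chess?", corpus))
# -> Deep Blue beat Garry Kasparov at chess in 1997.
```

The scale is what matters here: with enough data, even shallow matching like this can produce answers that look informed, which is why French sees the new abundance of data and processing techniques as reviving the test.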

French says that if a complete record of a person’s life experiences – which help to develop their subcognitive network – were available to a machine, it is possible that it, too, could develop a similar network and pass the Turing Test.

If a machine could be created that not only analysed data but also mulled it over, this could be described as metacognition, French said – a kind of thinking that “helps us build models of the world and manipulate them in our heads”.

French also noted IBM’s recent announcement of experimental “neurosynaptic” microchips, which are based on the computing principles of neurons in the human brain – a move he commends, saying that if we are trying to develop machines that think, “the human brain is certainly a good place to start”.
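The article doesn’t detail how those chips work, but the neuron model that neuron-inspired hardware of this kind typically emulates – a leaky integrate-and-fire unit – can be sketched in a few lines. The simulate_lif function below, along with its threshold and leak values, is an illustrative assumption, not IBM’s published design.

```python
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Simulate one leaky integrate-and-fire neuron over discrete time steps.

    The membrane potential accumulates input current, decays by a leak
    factor each step, and emits a spike (then resets) once it crosses
    the threshold -- the event-driven behaviour that neuron-inspired
    chips implement in silicon.
    """
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

# A steady drip of input charges the neuron until it fires periodically.
print(simulate_lif([0.3] * 15))
# -> [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
```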

