The Titan supercomputer in the US state of Tennessee was named in November 2012 as the world’s most powerful computer. It can process quadrillions of calculations per second – 176,000 times the computing capacity of 20 years ago.
But is the Titan actually that much smarter than its predecessors? This, say researchers investigating artificial intelligence, remains to be seen.
Based at Monash University in Melbourne, Associate Professor David Dowe is part of an international collaboration that has developed the Anytime Universal Intelligence (anYnt) test – the world’s first test of its type, intended to measure progress in artificial intelligence.
Central to this effort, says Associate Professor Dowe, is the question of what intelligence is – that is, what is the test trying to measure? While researchers are still trying to answer this question, they believe that the ability to recognise and respond to patterns is a crucial element.
There is some evidence that machines are becoming smarter at specific tasks. In 1997 IBM’s Deep Blue computer defeated World Chess Champion Garry Kasparov in a game of chess, and in 2011 IBM’s Watson computer won the US game show Jeopardy!, competing against two of the game’s human grand champions. But when it comes to measuring general intelligence and the ability to adapt to unfamiliar situations, progress is less certain.
“We’re trying to come up with some yardstick that could be applied to everyone and everything – to machines, humans, non-human animals and hybrids thereof, and even entities from other planets,” Associate Professor Dowe says. “That could include combinations of any of these: a person with pen and paper, for example, or a person using a computer, or groups or communities of these, such as two people solving a problem.”
Associate Professor Dowe’s main area of research is machine learning and statistical modelling, primarily using minimum message length (MML), which was co-developed in 1968 by Monash University’s Professor Chris Wallace, foundation professor and chair of what was then called information science. MML identifies the most effective patterns in data, allowing information to be compressed as tightly as possible into what researchers call two-part messages. The first part describes the optimal pattern; the second conveys ‘noise’ and variations from the pattern.
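The two-part idea can be sketched in a few lines. In this illustrative example (our own toy construction, not drawn from the researchers’ work), two candidate models for a binary sequence are scored by total message length: part one states the model, part two encodes the data under that model, and the shorter total wins.

```python
import math

def two_part_length(data, p, param_bits=8):
    """Two-part message length, in bits, under a simple Bernoulli model.
    Part 1: the cost of stating the model (the rate p, assumed here to
    be quantised to param_bits bits). Part 2: the cost of encoding the
    data's deviations from that model (its Shannon code length)."""
    part1 = param_bits
    part2 = -sum(math.log2(p if bit else 1.0 - p) for bit in data)
    return part1 + part2

# A mostly-ones sequence: a model that captures the pattern (p = 0.8)
# yields a shorter total message than an uninformative one (p = 0.5).
data = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]
print(two_part_length(data, 0.5), two_part_length(data, 0.8))
```

The better model pays the same price to describe itself but compresses the data more tightly, so it wins on total length – which is the sense in which MML treats finding patterns and compressing data as the same activity.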
From his work on MML, Associate Professor Dowe has proposed that pattern recognition is central to intelligence; other elements include memory, mathematical ability and the ability to plan.
“You observe patterns in the world around you and you use those patterns to deal with the world,” he says. “Those patterns may be facial recognition, or the taste of foods. Sometimes those patterns have been learnt by other people and passed on to us. Other patterns we recognise for ourselves.”
Patterns of intelligence
During the 1990s, at much the same time, Spanish researcher Dr José Hernández-Orallo at the Technical University of Valencia was independently developing a theory that paralleled Associate Professor Dowe’s work, using algorithmic information theory – pioneered in the 1960s by US scientist Ray Solomonoff – together with MML to measure intelligence.
Dr Hernández-Orallo says the ability to quantify progress is at the heart of any discipline, and the quest to create artificial intelligence is no different. The pair have been working in concert on finding a way to measure intelligence since 2004, but their research has accelerated since 2010 when a grant from the Spanish Ministerio de Educación y Ciencia helped them push forward with the anYnt test.
This test attempts to remove as much human bias as possible. It is not unlike a computer game where the agent taking the test – human or computer – has to accumulate rewards and avoid penalties.
This involves working out the pattern in the movement of the ‘good’ element in the game, which leaves behind rewards, and the ‘evil’ element, which leaves behind penalties.
The test-taking agent also needs to work out which element is good and which is evil. The more consistently the agent can accumulate rewards, the more clearly it demonstrates an ability to identify the pattern and to plan its actions.
Using rewards and penalties is an attempt to remove the complexities of language from the test, allowing it to be applied not only to humans but also to machines and eventually to any kind of animal.
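A toy simulation gives the flavour of such a test. This sketch is our own illustration of the reward-and-penalty idea, not the actual anYnt environment: a ‘good’ element moves in a fixed pattern leaving rewards, an ‘evil’ element wanders randomly leaving penalties, and an agent that has worked out the pattern scores far better than one moving blindly.

```python
import random

SIZE = 8  # cells in a small circular space

def run(policy, steps=100, seed=0):
    """Score a test-taking policy over a number of steps."""
    rng = random.Random(seed)
    good, evil, agent, score = 0, 4, 2, 0
    for _ in range(steps):
        good = (good + 2) % SIZE                    # predictable: +2 each step
        evil = (evil + rng.choice([-1, 1])) % SIZE  # unpredictable random walk
        agent = policy(agent, good, rng) % SIZE
        score += (agent == good) - (agent == evil)  # reward +1, penalty -1
    return score

# A policy that has inferred the pattern (handed good's cell here as a
# simplification) versus a blind random walker.
follows_pattern = lambda pos, good, rng: good
random_walk     = lambda pos, good, rng: pos + rng.choice([-1, 1])

print(run(follows_pattern), run(random_walk))
```

The pattern-following agent piles up rewards while the random walker hovers near zero, which is the behaviour the test is designed to separate: consistent reward accumulation as evidence of pattern recognition and planning.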
The prototype is being revised after initial trials with Spanish undergraduate students and a relatively simple machine-learning algorithm, Q-learning. Associate Professor Dowe says the results put the program on a par with the students, although the students were clearly more intelligent.
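Q-learning itself is a standard reinforcement-learning algorithm that builds a table of action values from rewards alone. The minimal sketch below is a generic textbook version on a made-up toy task, not the researchers’ configuration: an agent discovers by trial and error that one of two actions moves it toward a reward.

```python
import random

def q_learning(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy chain of 4 states.
    Action 1 advances along the chain and pays off at the end;
    action 0 stays put and earns nothing."""
    rng = random.Random(seed)
    n_states, n_actions = 4, 2
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            # epsilon-greedy: mostly exploit the table, sometimes explore
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: Q[s][i])
            s2 = s + 1 if a == 1 else s
            r = 1.0 if (a == 1 and s2 == n_states - 1) else 0.0
            # standard Q-learning update rule
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
# After training, action 1 should dominate in every non-terminal state.
print([max(range(2), key=lambda a: Q[s][a]) for s in range(3)])
```

Nothing about the task is told to the agent in advance – it learns purely from rewards, which is what makes such programs natural first candidates for a language-free intelligence test.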
While this indicates that a final test is still a long way off, the process of developing this first test has helped to clarify crucial issues for refinement. How the test is delivered to the agent is critical, particularly if it is to be extended to animals (not many animals can use a keyboard). The researchers also plan to incorporate greater adaptability into the test, to increase or decrease the level of difficulty depending on how the test-taking agent performs.
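One simple way such adaptation can work – a basic staircase rule, offered purely as an illustration of the idea rather than as the researchers’ design – is to raise the difficulty after each success and lower it after each failure, so the test homes in on the level the agent can just handle.

```python
def adapt(results, level=5, step=1, lo=1, hi=10):
    """Staircase difficulty adjustment (illustrative only).
    results: iterable of True (success) / False (failure)."""
    for ok in results:
        level = min(hi, level + step) if ok else max(lo, level - step)
    return level

# Mixed performance keeps the difficulty hovering near the start level.
print(adapt([True, True, False, True, False, False]))
```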
“This is a huge space for exploration,” Dr Hernández-Orallo says. “Artificial intelligence is much better, more practical today than it was 20 or 30 years ago. It can solve more things, but is it really more intelligent? We can’t answer that question yet in a principled way, with some scientific backing. But we’re working on it.”
See the anYnt project website: http://users.dsic.upv.es/proy/anynt