NewScientist.com | Mar 18, 2008
by Celeste Biever
A virtual child controlled by artificially intelligent software has passed a cognitive test regarded as a major milestone in human development. It could lead to smarter computer games able to predict human players’ state of mind.
Children typically master the “false belief test” at age 4 or 5. It tests their ability to realise that the beliefs of others can differ from their own, and from reality.
The creators of the new character – which they called Eddie – say passing the test shows it can reason about the beliefs of others, using a rudimentary “theory of mind”.
“Today’s characters have no genuine autonomy or mental picture of who you are,” researcher Selmer Bringsjord of Rensselaer Polytechnic Institute in Troy, New York, told New Scientist.
He aims to change that with future games and virtual worlds populated by genuinely intelligent computer characters able to predict and understand players’ actions and motives.
Bringsjord’s colleague Andrew Shilliday adds that their work will have applications beyond gaming. For example, a search engine able to reason about a user’s beliefs might better understand their search queries.
In real life, the “false belief test” is used by psychologists to help diagnose disorders such as autism. The subject is shown a scene in which a child puts an object in a drawer and leaves the room. While out of sight, the child’s mother moves the object somewhere else.
Unable to see the world through the eyes of others, young children – and some people with autism – taking the test predict that the child will look for the object where the mother moved it. Only at 4 or 5 years old do they understand that the child falsely believes the object is still in the drawer.
Bringsjord’s team set up a similar scenario inside the virtual world Second Life. A video shows their character, Eddie, taking and passing the test (15 MB, .mov format).
Two avatars controlled by humans stand with Eddie next to one red and one green suitcase. A gun sits in the red suitcase. One human avatar then leaves, and while they are gone the remaining human avatar moves the gun from the red suitcase into the green one.
Eddie is then asked where the character that left would look for the gun. The AI software correctly realises they will look in the red suitcase.
Eddie’s software maintains a constantly updated database of facts – the location of the gun, for example. A reasoning engine uses these facts to make sense of situations.
Eddie can pass the test thanks to a simple logical statement added to the reasoning engine: if someone sees something, they know it; if they don’t see it, they don’t. The program can reason correctly that an avatar will not know the gun has moved unless it was there to see it.
An “immature” version of Eddie without the extra piece of logic cannot pass the test.
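To make the mechanism concrete, here is a minimal Python sketch of that perception rule, assuming a toy belief store per agent. It is not the team’s actual logic-based engine, and the names (Agent, observe, move_object) are purely illustrative:

    # Toy model of the perception rule: agents update their beliefs about
    # an object's location only when present to witness the change.

    class Agent:
        def __init__(self, name):
            self.name = name
            self.present = True   # is the agent in the room?
            self.beliefs = {}     # object -> believed location

        def observe(self, obj, location):
            # "If someone sees something, they know it" -- and only then.
            if self.present:
                self.beliefs[obj] = location

    def move_object(world, agents, obj, new_location):
        world[obj] = new_location              # reality changes...
        for agent in agents:
            agent.observe(obj, new_location)   # ...but only witnesses learn of it

    # Re-create the suitcase scenario from the article.
    world = {"gun": "red suitcase"}
    leaver, mover = Agent("leaver"), Agent("mover")
    agents = [leaver, mover]
    for a in agents:
        a.observe("gun", "red suitcase")       # both see the initial placement

    leaver.present = False                     # one avatar leaves the room
    move_object(world, agents, "gun", "green suitcase")

    # The prediction: where will the absent avatar look?
    print(leaver.beliefs["gun"])   # -> red suitcase (a false belief)
    print(world["gun"])            # -> green suitcase (reality)

Dropping the presence check in observe reproduces the “immature” version: every agent’s beliefs would simply track reality, and the program would wrongly predict a search of the green suitcase.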
John Laird, a researcher in computer games and artificial intelligence at the University of Michigan in Ann Arbor, is not overly impressed. “It’s not that challenging to get an AI system to do theory of mind,” he says.
He points out that last year, Cynthia Breazeal of the Massachusetts Institute of Technology’s Media Lab programmed that ability into a physical robot called Leonardo. A video shows the robot passing the test.
A more impressive demonstration, says Laird, would be a character, initially unable to pass the test, that learned how to do so – just as humans do.
But Bringsjord points out that his is the first computer character to achieve theory of mind, something necessary if characters are to become smarter, better opponents and collaborators. His team are now attempting to make characters that can lie, which also requires reasoning about other people’s mental states.
Shilliday presented the work on Sunday 2 March at the first conference on Artificial General Intelligence in Memphis, Tennessee, US.