By Robin Lloyd
In the future, more and more of us will learn from social robots, especially kids learning pre-school skills and students of all ages studying a new language.
This is just one of the scenarios sketched in a review essay that looks at a “new science of learning,” which brings together recent findings from the fields of psychology, neuroscience, machine learning and education.
The essay, published in the July 17 issue of the journal Science, outlines new insights into how humans learn now and could learn in the future, based on various studies including some that document the amazing amount of brain development that happens in infants and later on in childhood.
The premise for the new thinking: We humans are born immature and naturally curious, and become creatures capable of highly complex cultural achievements — such as the ability to build schools and school systems that can teach us how to create computers that mimic our brains.
With a stronger understanding of how this learning happens, scientists are coming up with new principles for human learning, new educational theories and designs for learning environments that better match how we learn best, says one of the essay’s authors, psychologist Andrew Meltzoff of the University of Washington’s Learning in Informal and Formal Environments (LIFE) Center.
And social robots have a potentially growing role in these future learning environments, he says. The mechanisms behind these sophisticated machines apparently complement some of the mechanisms behind human learning.
One such robot, which looks like the head of Albert Einstein, was unveiled this week; it can show facial expressions and react to real human expressions. The researchers who built the strikingly lifelike yet bodiless ‘bot plan to test it in schools.
In the first five years of life, our learning is “exuberant” and “effortless,” Meltzoff says. We are born learning, he says, and adults are driven to teach infants and children. During those years and up to puberty, our brains exhibit “neural plasticity” — it’s easier to learn languages, including foreign languages. It’s almost magical how we learn our native tongue in the first two or three years we’re alive, Meltzoff said.
Magic aside, our early learning is computational, Meltzoff and his colleagues write.
Children under three, and even infants, have been found to use statistical thinking, such as tracking frequency distributions, probabilities and covariation, to learn the phonetics of their native tongue and to infer cause-and-effect relationships in the physical world.
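A toy sketch can make this kind of statistical learning concrete. The snippet below (an illustration, not code from the essay; the syllable "words" bida and kupa are made up) shows the classic transitional-probability idea: a learner counts how often one syllable follows another, and places word boundaries where the transition probability drops — syllables inside a word predict each other strongly, while syllables across word boundaries do not.

```python
from collections import Counter

def transitional_probs(syllables):
    """Estimate P(next syllable | current syllable) from pair frequencies."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

def segment(syllables, threshold=0.9):
    """Insert a word boundary wherever the transition probability drops."""
    tp = transitional_probs(syllables)
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tp[(a, b)] < threshold:       # weak transition -> word boundary
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# A syllable stream built from two made-up "words", bida and kupa,
# in the order: bida kupa kupa bida bida kupa bida kupa
stream = ["bi", "da", "ku", "pa", "ku", "pa", "bi", "da",
          "bi", "da", "ku", "pa", "bi", "da", "ku", "pa"]
print(segment(stream))
# → ['bida', 'kupa', 'kupa', 'bida', 'bida', 'kupa', 'bida', 'kupa']
```

Within-word transitions (bi→da, ku→pa) occur every time, so their probability is 1.0; transitions across word boundaries are variable and fall below the threshold, which is enough to recover the "words" from the unbroken stream.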
Some of these findings have helped engineers build machines that can learn and develop social skills, such as BabyBot, a baby doll trained to detect human faces.
Meanwhile, our learning is also highly social, so social, in fact, that newborns as young as 42 minutes old have been found to match gestures shown to them, such as someone sticking out her tongue or opening his mouth, Meltzoff and a colleague reported more than a decade ago.
Imitation is a key component of our learning — it’s a faster and safer way to learn than just trying to figure something out on our own, the authors write.
Even as adults, we use imitation when we go to a new setting, such as a dinner party or a foreign country, to try to fit in. Of course, for kids, the learning packed into every day can amount to traveling to a foreign country. In this case, they are “visiting” adult culture and learning how to act like the people in that culture, becoming more like us.
If you roll all these human learning features into the field of robotics, there is a somewhat natural overlap — robots are well-suited to imitate us, learn from us, socialize with us and eventually teach us, the researchers say.
Social robots are being used on an experimental basis already to teach various skills to preschool children, including the names of colors, new vocabulary words and simple songs.
In the future, robots likely will be used to teach only certain skills, such as acquiring a new or foreign language, possibly in playgroups with children or one-on-one with adults. And robot teachers can be cost-effective compared with the expense of paying a human teacher, Meltzoff told LiveScience.
“If we can capture the magic of social interaction and pedagogy, what makes social interaction so effective as a vehicle for learning, we may be able to embody some of those tricks in machines, including computer agents, automatic tutors, and robots,” he said.
Still, children clearly learn best from other people and playgroups of peers, Meltzoff said, and he doesn’t see children in the future being taught entirely by robots.
Terrence Sejnowski of the Temporal Dynamics of Learning Center (TDLC) at the University of California, San Diego, a co-author of the essay with Meltzoff, is working to merge the social with the instructional, bringing technology into classrooms to create personalized teaching that is tailored to individual students and tracks their progress.
“By developing a very sophisticated computational model of a child’s mind, we can help improve that child’s performance,” Sejnowski said.
Overall, the hope, Meltzoff said, is to “figure out how to combine the passion and curiosity for learning that children display with formal schooling. There is no reason why curiosity and passion can’t be fanned at school where there are dedicated professionals, teachers, trying to help children learn.”
The essay is the first published article as part of a collaboration between the TDLC and the LIFE Center, both of which are funded under multimillion-dollar grants from the National Science Foundation. Meltzoff’s other co-authors on the essay are Patricia Kuhl of the University of Washington and Javier Movellan of the TDLC.