November 7, 2015

Tech: Robot Toddler Learns to Stand


Like many toddlers, Darwin sometimes looks a bit unsteady on its feet. But with each clumsy motion, the humanoid robot is demonstrating an important new way for androids to deal with challenging or unfamiliar environments. The robot learns to perform a new task by using a process somewhat similar to the neurological processes that underpin childhood learning.

Darwin lives in the lab of Pieter Abbeel, an associate professor at the University of California, Berkeley. When I saw the robot a few weeks ago, it was suspended from a camera tripod by a piece of rope, looking a bit tragic. A little while earlier, Darwin had been wriggling around on the end of the rope, trying to work out how best to move its limbs in order to stand up without falling over.

Darwin’s motions are controlled by several simulated neural networks—algorithms that mimic the way learning happens in a biological brain as the connections between neurons strengthen and weaken over time in response to input. The approach relies on especially complex neural networks, known as deep-learning networks, which have many layers of simulated neurons.
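For readers who want a concrete picture, a network of this kind can be written down in a few lines. The sketch below uses PyTorch; the layer sizes and the meaning of the inputs and outputs are invented for illustration and are not the dimensions of Darwin’s actual controllers.

```python
import torch
import torch.nn as nn

# A minimal "deep" network: several stacked layers of simulated neurons.
# All sizes here are placeholders, not taken from the Berkeley system.
policy_net = nn.Sequential(
    nn.Linear(24, 64),   # e.g. joint angles and velocities in
    nn.Tanh(),
    nn.Linear(64, 64),
    nn.Tanh(),
    nn.Linear(64, 12),   # e.g. motor commands out
)

# During training, the connection weights strengthen or weaken in response
# to input, loosely analogous to synapses in a biological brain.
observation = torch.randn(1, 24)   # a dummy sensor reading
action = policy_net(observation)
```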

For the robot to learn how to stand and twist its body, for example, it first runs a series of simulations to train a high-level deep-learning network to perform the task—something the researchers compare to an “imaginary process.” This network provides overall guidance for the robot, while a second deep-learning network is trained to carry out the task in response to the dynamics of the robot’s joints and the complexity of the real environment. The second network is needed because when the first tries, for example, to move a leg, the friction experienced at the point of contact with the ground may throw it off completely, causing the robot to fall.
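The article does not spell out how the two networks are wired together, but the general arrangement it describes, with simulated “imaginary” guidance corrected by a network that tracks real dynamics, might look roughly like the sketch below. Everything here, including the names, the shapes, and the additive correction, is a hypothetical illustration rather than the researchers’ implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the two-network idea described above. The
# high-level network is trained offline in simulation and proposes motor
# targets; the low-level network adapts to real dynamics such as friction
# and corrects them. Shapes and the additive combination are invented.
high_level = nn.Sequential(
    nn.Linear(24, 64), nn.Tanh(), nn.Linear(64, 12)
)
low_level = nn.Sequential(
    nn.Linear(24 + 12, 64), nn.Tanh(), nn.Linear(64, 12)
)

def act(state: torch.Tensor) -> torch.Tensor:
    """Combine the simulation-trained plan with a real-world correction."""
    plan = high_level(state)                              # "imaginary" guidance
    correction = low_level(torch.cat([state, plan], dim=-1))
    return plan + correction                              # command sent to motors

command = act(torch.randn(1, 24))                         # dummy sensor reading
```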

The researchers had the robot learn to stand, to move its hand to perform reaching motions, and to stay upright when the ground beneath it tilts.

“It practices in simulation for about an hour,” says Igor Mordatch, a postdoctoral researcher at UC Berkeley who carried out the study. “Then at runtime it’s learning on the fly how not to slip.”
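To give a flavor of what “practicing in simulation” means, here is a schematic trial-and-error loop: the network acts in a simulated environment, receives a reward, and is nudged toward actions that score better. The StandSim class, its reward, and the training numbers are all placeholders, not the team’s actual simulator or objective.

```python
import torch
import torch.nn as nn

# Schematic trial-and-error practice loop. StandSim is a toy stand-in for
# a physics simulator; its dynamics and reward are invented placeholders.
class StandSim:
    """Toy stand-in for a physics simulation of the robot standing."""
    def reset(self) -> torch.Tensor:
        return torch.zeros(24)                      # starting joint state
    def step(self, action: torch.Tensor):
        next_state = torch.randn(24)                # placeholder dynamics
        reward = -(action ** 2).mean()              # placeholder objective
        return next_state, reward

policy = nn.Sequential(nn.Linear(24, 64), nn.Tanh(), nn.Linear(64, 12))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
sim = StandSim()

for episode in range(100):                          # repeated practice runs
    state = sim.reset()
    for t in range(50):
        action = policy(state)
        state, reward = sim.step(action)
        loss = -reward                              # higher reward, lower loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```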
Abbeel’s group has previously shown how deep learning can enable a robot to master a task, such as passing a toy building block through a shaped hole, through a process of trial and error. The new approach is important because it may not always be practical for a robot to spend a long period testing in the real world, and because simulations lack the complexities found in the real world, discrepancies that, on a physical robot, can cascade into a catastrophic failure.
