Cassie can’t dance. At least not yet. But it recently took its first steps. You’ve got to walk before you can run!
Cassie is a bright yellow, two-legged, human-sized robot that recently taught itself to walk with a form of artificial intelligence called reinforcement learning.
Even before Cassie took its first steps, the team of researchers from the University of California, Berkeley, used simulation to see if the robot was ready for its debut in the big, wide world. The researchers shared their work with MIT Technology Review in the article, Forget Boston Dynamics. This robot taught itself to walk. Their research, Reinforcement Learning for Robust Parameterized Locomotion Control of Bipedal Robots, is available here.
Impressive videos from Boston Dynamics make it look easy
Boston Dynamics has been publishing spectacular videos of its robots for years, raising expectations for what robots can do as it goes. Late last year, the company released a video of dancing robots that has now been viewed more than 30 million times.
“These videos may lead some people to believe that this is a solved and easy problem,” Zhongyu Li at the University of California, Berkeley, told MIT Technology Review. “But we still have a long way to go to have humanoid robots reliably operate and live in human environments.”
The amount of code required to program a bipedal robot to walk in a variety of environments is staggering. Uphill motion on a rocky path requires different control and balance than walking on a slick, flat surface. Sidewalks have different coefficients of friction than carpeted hallways.
Robustness and versatility are very hard to achieve. That’s why roboticists are turning to reinforcement learning.
The researchers reported that classic methods for stabilizing bipedal robots tend “to lack the ability to adapt to changes in the environment.” Reinforcement learning, however, enables a robot to teach itself through trial and error, which is exactly how Cassie learned as it stepped and stumbled.
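To make the trial-and-error idea concrete, here is a minimal, hypothetical sketch of tabular Q-learning, one of the simplest reinforcement learning algorithms. It is an illustration only, not the Berkeley team’s method: a toy “robot” on a five-position track learns that stepping forward (action 1) beats stumbling backward (action 0) purely from trial, error, and reward.

```python
import random

# Toy illustration (not the Berkeley team's algorithm): tabular Q-learning.
# The "robot" must cross 5 positions; action 1 steps forward, action 0
# stumbles back. Reaching the last position earns a reward of 1.
N_STATES, ACTIONS = 5, (0, 1)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Deterministic toy dynamics: only the goal position gives a reward."""
    nxt = state + 1 if action == 1 else max(0, state - 1)
    if nxt == N_STATES - 1:
        return nxt, 1.0, True   # reached the goal
    return nxt, 0.0, False

def train(episodes=200, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Trial and error: mostly exploit what was learned, sometimes explore.
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2, r, done = step(s, a)
            # Nudge the value estimate toward reward plus discounted future value.
            best_next = max(q[(s2, act)] for act in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# The learned policy: the best action at each non-goal position.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

After training, the greedy policy chooses to step forward at every position. Real legged locomotion replaces this tiny table with a neural network and the five-state track with high-dimensional joint states, but the learning loop, act, observe, and update from the outcome, is the same.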
Learning to walk virtually, first
Due to their size and instability, two-legged robots can easily trip and tumble with even the tiniest of missteps. So, the Berkeley team let Cassie learn in a virtual environment before hitting the pavement, literally.
A trial-and-error approach includes errors, often lots of them. But a failure of an actual robot can be dangerous, expensive, or both. A physically accurate simulation environment such as Simscape™ Mechanics lets researchers validate autonomous algorithms before deploying them to expensive robot hardware, and that’s precisely what the researchers from Berkeley did. Much like fighter pilots learn to fly in flight simulators before taking the controls of expensive aircraft, Cassie learned to walk in a simulation environment.
The team used two levels of virtual environments. First, a simulated version of Cassie learned to walk by drawing on an extensive existing database of robot movements. The team then transferred this learned model to a second virtual environment, Simscape Mechanics, which replicates real-world physics with a high degree of accuracy.
The robot learned many different movements, such as walking in a crouched position, carrying loads, turning, and squatting. Once Cassie proved its ability in Simscape, the learned walking model was loaded onto the actual robot.
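The two-stage workflow above can be sketched in miniature. Everything below is a hypothetical stand-in (the function names and numbers are invented, and the “simulators” are one-line placeholders, not Simscape): a controller parameter is tuned in a cheap, idealized simulator, then the same parameter must pass a stricter high-fidelity check before it is declared ready for hardware.

```python
import random

# Hypothetical sketch of the two-stage sim-to-real workflow (invented names,
# not the Berkeley code): tune in a low-fidelity simulator, then gate
# deployment on a high-fidelity re-test of the *same* policy.

def low_fidelity_sim(gain, rng):
    # Idealized dynamics: success probability peaks at the ideal gain of 1.0.
    return rng.random() < max(0.0, 1.0 - abs(gain - 1.0))

def high_fidelity_sim(gain, rng):
    # Adds unmodeled effects (e.g. friction variation): the viable band is
    # narrower and even good gains occasionally fail.
    return 0.7 < gain < 1.3 and rng.random() < 0.95

def tune_in_sim(sim, rng, candidates):
    # "Training" stand-in: keep the candidate with the best success rate.
    return max(candidates, key=lambda g: sum(sim(g, rng) for _ in range(100)))

rng = random.Random(0)
policy_gain = tune_in_sim(low_fidelity_sim, rng, [0.2, 0.6, 1.0, 1.4, 1.8])

# Deployment gate: require a high success rate in the accurate simulator
# before the policy ever touches hardware.
trials = [high_fidelity_sim(policy_gain, rng) for _ in range(200)]
ready_for_hardware = sum(trials) / len(trials) > 0.9
```

The design point is the gate itself: because the high-fidelity simulator models effects the cheap one omits, a policy that passes it is far more likely to survive contact with the real world, which is why Cassie’s walking model could be loaded onto the physical robot only after proving itself in Simscape.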
“The real Cassie was able to walk using the model learned in simulation without any extra fine-tuning. It could walk across rough and slippery terrain, carry unexpected loads, and recover from being pushed. During testing, Cassie also damaged two motors in its right leg but was able to adjust its movements to compensate.”
-MIT Technology Review
So, while it’s true that you’ve got to walk before you can run, it turns out that if you’re a robot, it’s wise to confirm in simulation that you’re ready to walk at all.