
New algorithms enable four-legged robots to operate in the wild

A team led by the University of California San Diego has developed a new system of algorithms that enables four-legged robots to walk and run over difficult terrain while avoiding both static and moving obstacles. In testing, the system guided a robot across sandy surfaces, gravel, grass, and bumpy dirt hills covered with branches and fallen leaves without colliding with poles, trees, plants, rocks, benches, or people. The robot also navigated a busy office space without bumping into boxes, desks, or chairs.


Researchers are one step closer to developing robots that can execute search and rescue operations or gather data in situations that are too dangerous or complex for people. The team will discuss their findings at the 2022 International Conference on Intelligent Robots and Systems (IROS), which will be held in Kyoto, Japan, from October 23 to 27.


The system gives a legged robot greater adaptability because it combines the robot's sense of sight with proprioception, another sensory modality that covers the robot's sense of movement, direction, speed, position, and touch, in this case the feel of the ground beneath its feet. According to study lead author Xiaolong Wang, a professor of electrical and computer engineering at the UC San Diego Jacobs School of Engineering, most techniques for training legged robots to walk and navigate rely on either proprioception or vision, but not both at the same time.


Wang compared the first approach to training a blind robot to walk by simply touching and feeling the ground; in the second, the robot plans its leg motions based on sight alone. In neither case does the robot learn both at once. In the new research, Wang explained, the team integrates proprioception with computer vision so that a legged robot can move quickly and gracefully while avoiding obstacles in a wide range of challenging environments, not just well-defined ones.

Wang and his colleagues' approach uses a special set of algorithms to fuse data from a depth camera mounted on the robot's head with data from sensors on the robot's legs. This was no easy feat.
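To give a concrete sense of what such a fusion can look like, the sketch below shows one common way to combine the two modalities in a single policy network: a small convolutional encoder compresses the depth image into a feature vector, which is concatenated with the proprioceptive readings and passed to a fully connected head. This is only an illustrative PyTorch example; the class name, network sizes, and observation dimensions are assumptions and are not taken from the team's released code.

import torch
import torch.nn as nn

class FusionPolicy(nn.Module):
    """Toy policy that fuses a depth image with proprioceptive state.

    A small CNN encodes the depth image; the embedding is concatenated
    with the proprioceptive vector and mapped to joint targets by an MLP.
    (Hypothetical sketch, not the authors' architecture.)
    """
    def __init__(self, proprio_dim=48, action_dim=12):
        super().__init__()
        # Depth-image encoder: 1-channel 64x64 depth map -> 128-d embedding
        self.depth_encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(128), nn.ReLU(),
        )
        # MLP head over the fused (vision + proprioception) features
        self.head = nn.Sequential(
            nn.Linear(128 + proprio_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),
        )

    def forward(self, depth, proprio):
        z = self.depth_encoder(depth)            # (batch, 128)
        fused = torch.cat([z, proprio], dim=-1)  # (batch, 128 + proprio_dim)
        return self.head(fused)                  # joint position targets

# Example: one forward pass with a batch of 4 observations
policy = FusionPolicy()
depth = torch.rand(4, 1, 64, 64)   # depth images from the head camera
proprio = torch.rand(4, 48)        # joint angles, velocities, foot contacts, ...
actions = policy(depth, proprio)
print(actions.shape)               # torch.Size([4, 12])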


The difficulty, according to Wang, is that during real-world operation there is sometimes a slight delay in receiving images from the camera, so data from the two sensing modalities do not always arrive at the same time. The team's solution was to simulate this mismatch during training by randomizing the two sets of inputs, a technique the researchers call multi-modal delay randomization. The fused and randomized inputs were then used to train an end-to-end reinforcement learning policy. This approach enabled the robot to make decisions quickly during navigation and anticipate changes in its surroundings, so it could move and avoid hazards faster on various terrains without the assistance of a human operator.
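A minimal sketch of the delay-randomization idea is shown below: during simulated training, the policy receives a depth frame drawn at random from a short buffer of recent frames, while proprioception is always read at the current step, so the policy learns to tolerate stale images. The class name, buffer size, and training loop here are hypothetical simplifications of the technique described above, not the team's actual implementation.

import random
from collections import deque

class DelayRandomizedCamera:
    """Returns a depth frame with a randomly sampled delay.

    Recent frames are buffered, and at each control step one of the
    last `max_delay` frames is returned instead of the newest one.
    """
    def __init__(self, max_delay=3):
        self.max_delay = max_delay
        self.frames = deque(maxlen=max_delay + 1)

    def observe(self, new_frame):
        self.frames.append(new_frame)
        # Sample a delay of 0..max_delay steps (bounded by buffer length)
        delay = random.randint(0, min(self.max_delay, len(self.frames) - 1))
        return self.frames[-1 - delay]

# Usage inside a (hypothetical) simulated training loop
camera = DelayRandomizedCamera(max_delay=3)
for step in range(5):
    frame = f"depth_frame_{step}"        # stand-in for a rendered depth image
    delayed = camera.observe(frame)      # possibly a few steps old
    proprio = f"proprio_{step}"          # proprioception is always current
    # policy_input = (delayed, proprio)  # fed to the RL policy during training
    print(step, delayed, proprio)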


Wang and his colleagues are now focused on making legged robots more versatile so they can handle even more difficult terrain. Right now, Wang said, the team can train a robot to perform simple motions such as walking, running, and avoiding obstacles; the next goal is a robot that can walk up and down stairs, walk on stones, change direction, and jump over obstacles. The code has been published on GitHub, and the paper is available on the arXiv preprint server.


Journal Information: Chieko Sarah Imai et al., Vision-Guided Quadrupedal Locomotion in the Wild with Multi-Modal Delay Randomization, arXiv (2022). arXiv:2109.14549 [cs.RO], arxiv.org/abs/2109.14549
Conference: iros2022.org/
GitHub: github.com/Mehooz/vision4leg