Fast object handling with Event Camera on ANYmal
Quadrupedal robots equipped with visual sensors such as depth cameras or LiDARs have recently achieved fast and robust locomotion by perceiving the environment [1]. However, current quadruped robots are still far from achieving the same level of agility as animals like cats and dogs. In this project we use an event camera [2] to tackle this gap in the task of high-speed object handling, such as ball catching.
The goal of this project is to make ANYmal catch a high-speed ball using an event stream. The current approach separates the problem into two parts. First, a detection pipeline [3] estimates the trajectory of the ball from the event camera data. Then a control policy, trained to track a target position with the net, moves to catch the ball. However, this separation limits the overall performance, since the policy cannot exploit the characteristics of the visual pipeline or the physics of the environment. This project aims to extend the current approach by jointly training the control policy with the event camera detection pipeline for better performance.
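To make the first stage concrete, the snippet below sketches how a ballistic trajectory could be fit to timestamped ball detections and extrapolated to a catch point. This is a minimal NumPy sketch under simplifying assumptions (noiseless 3D detections, drag-free flight); the function names are illustrative and not part of the actual detection pipeline [3].

```python
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2], acting on the z axis


def fit_ballistic(times, positions):
    """Fit initial position p0 and velocity v0 of a ballistic trajectory
    from timestamped 3D ball detections via linear least squares.

    times: (N,) timestamps [s]; positions: (N, 3) detections [m].
    """
    t = np.asarray(times, dtype=float)
    p = np.asarray(positions, dtype=float)
    # Add back the known gravity term so the model becomes linear:
    # p + [0, 0, g t^2 / 2] = p0 + v0 * t
    p_lin = p.copy()
    p_lin[:, 2] += 0.5 * G * t**2
    A = np.stack([np.ones_like(t), t], axis=1)          # (N, 2) design matrix
    coef, *_ = np.linalg.lstsq(A, p_lin, rcond=None)    # (2, 3): rows p0, v0
    return coef[0], coef[1]


def predict(p0, v0, t):
    """Predicted ball position at time t under the fitted ballistic model."""
    pos = p0 + v0 * t
    pos[2] -= 0.5 * G * t**2
    return pos
```

In a jointly trained system, this hand-designed fit would be replaced or refined by a learned module so that the policy can compensate for detection latency and noise end to end.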
References:
[1] Miki, Takahiro, et al. "Learning robust perceptive locomotion for quadrupedal robots in the wild." Science Robotics 7.62 (2022): eabk2822.
[2] https://rpg.ifi.uzh.ch/research_dvs.html
[3] Falanga, Davide, Kevin Kleber, and Davide Scaramuzza. "Dynamic obstacle avoidance for quadrotors with event cameras." Science Robotics 5.40 (2020). https://doi.org/10.1126/scirobotics.aaz9712
- Literature research
- Set up the training environment
- Hardware experiment
- Coding experience in Python
- Knowledge of deep learning, computer vision, and reinforcement learning
- Experience in the above topics and ROS is a plus
Please send a CV and transcript of records to Takahiro Miki (tamiki@ethz.ch) and Daniel Gehrig (dgehrig@ifi.uzh.ch)