Enable ANYmal to see: Evaluation and Modeling of 3D Depth Sensors
Mobile robot navigation requires a robot to perceive its surroundings while moving through previously unseen environments. This project explores 3D depth sensors that provide the means to spatially capture the terrain and objects around the robot.
With the introduction of 3D depth sensors such as stereo cameras and time-of-flight cameras, low-cost sensors providing fast, high-quality dense depth images became available. Their release had a big impact on robotics, and we have seen a multitude of applications.
This project evaluates and models some of the sensors available in our lab for navigation tasks in rough and outdoor terrain. The video below [1] shows the application of the Kinect v2 depth sensor on our quadrupedal robot StarlETH. The comparison with a rotating laser sensor [2] shows the advantages of depth cameras.
This project analyzes the data quality of the measurements indoors and outdoors, in both overcast and direct-sunlight conditions. To this end, the project introduces empirically derived noise models for our 3D depth sensors in both the axial and lateral directions. The noise models take the measurement distance, the angle of the observed surface, and the sunlight incidence angle into account. These models can be used in post-processing to filter the 3D depth sensor images for a variety of applications. The results would be similar to the work conducted in [3].
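As a rough illustration of what such an empirical noise model might look like, the sketch below combines a distance-dependent term, a surface-angle term that blows up at grazing incidence (similar in spirit to the model in [3]), and a sunlight factor. All coefficients and the sunlight factor are placeholders for illustration only; the actual values would be fitted from the measurement campaigns described above.

```python
import numpy as np

def axial_noise_std(z, surface_angle, sun_angle,
                    a=0.0012, b=0.0019, c=0.0001):
    """Hypothetical axial noise model (standard deviation in meters).

    z             -- measurement distance in meters
    surface_angle -- angle between the viewing ray and the surface normal,
                     in radians (must be < pi/2)
    sun_angle     -- sunlight incidence angle in radians (0 = direct sunlight)

    Coefficients a, b, c are illustrative placeholders, not fitted values.
    """
    base = a + b * (z - 0.4) ** 2                  # noise grows with distance
    tilt = c * surface_angle ** 2 / (np.pi / 2 - surface_angle) ** 2
    sun = 1.0 + 0.5 * np.cos(sun_angle)            # more noise in direct sun
    return (base + tilt) * sun

def filter_points(points, stds, max_std=0.02):
    """Post-processing: keep only points whose predicted noise is acceptable."""
    return points[stds < max_std]
```

A filter like `filter_points` is one way such a model could be used in post-processing: points whose predicted standard deviation exceeds a task-specific threshold are simply discarded before mapping.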
One of the main challenges of 3D depth processing is the computational expense of working with high-resolution point clouds. Currently, the processing is done on the robot's CPU. One possibility to overcome this is to exploit the massive parallelism of GPUs. In this case, the task is to port the software package [4] to CUDA.
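The core of elevation mapping is a per-point operation: each measurement is projected into a 2D grid cell and fused into that cell's height estimate. Because every point is independent, this maps naturally onto one GPU thread per point. The sketch below is not the actual algorithm of [4] (which fuses heights probabilistically); it is a minimal, assumed stand-in that rasterizes a point cloud into a max-height grid using a vectorized scatter, the same access pattern a CUDA kernel would use with `atomicMax`.

```python
import numpy as np

def rasterize_elevation(points, resolution=0.05, size=2.0):
    """Project an (N, 3) point cloud onto a square elevation grid.

    Keeps the maximum z per cell. Each point is processed independently,
    so in a CUDA port each point becomes one thread and the scatter-max
    becomes an atomicMax on the grid cell.
    """
    n = int(size / resolution)
    grid = np.full((n, n), -np.inf)
    # Map x/y coordinates (centered on the robot) to grid indices.
    ix = np.clip(((points[:, 0] + size / 2) / resolution).astype(int), 0, n - 1)
    iy = np.clip(((points[:, 1] + size / 2) / resolution).astype(int), 0, n - 1)
    # Unbuffered scatter: correct even when several points hit one cell.
    np.maximum.at(grid, (ix, iy), points[:, 2])
    return grid
```

The key point for the CUDA port is the write conflict: many points can land in the same cell, so the per-cell update must be atomic (here handled by `np.maximum.at`, on the GPU by atomic operations).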
Depending on the results and progress, the project can be extended to use the 3D depth information to perform path planning of the feet in rough terrain.
[1] https://www.youtube.com/watch?v=I9eP8GrMyNQ
[2] https://www.youtube.com/watch?v=iVMsQPTM65M
[3] Fankhauser, P., et al. "Kinect v2 for Mobile Robot Navigation: Evaluation and Modeling." 2015 International Conference on Advanced Robotics (ICAR), IEEE, 2015.
[4] https://github.com/ethz-asl/elevation_mapping
- Evaluate and model several 3D depth sensors.
- Integrate the sensors on our quadrupedal robot ANYmal.
- Test the performance of the terrain mapping in indoor and outdoor terrain.
- Optional: Path planning of the feet in rough terrain using the 3D depth information.
- Elevation mapping on a GPU using CUDA.
We are looking for an independent and highly motivated student who takes ownership of this project and demonstrates persistence in making their algorithms work on a real system. We require:
- Prior experience with 3D depth sensors, preferably in the robotic domain.
- Strong programming skills in C++ and MATLAB.
- Knowledge of ROS (Robot Operating System) is helpful.
- For the GPU programming, CUDA experience is needed.
Please contact Marko Bjelonic (marko.bjelonic@mavt.ethz.ch). Your application should include a brief statement of motivation, grade transcript, and your CV.