Leveraging Deep Learnt Scene Completion for Fast Autonomous Exploration Planning and Mapping
A central capability of mobile robots is to autonomously navigate an unknown environment in order to build a map representation of the scene. This project aims to leverage deep-learnt scene completions to improve the exploration capabilities of mobile robots.
A central capability of mobile robots is to autonomously navigate an unknown environment in order to build a map representation of the scene. The goal is to explore as quickly as possible while guaranteeing safety of the robot.
Traditional approaches typically build a dense map by integrating a continuous stream of sensor information. Informative paths are then planned by sampling viewpoints or extracting frontiers between known and unknown regions of the map. Complementarily, deep learning approaches have shown increasing performance in leveraging the partial observations available from sensor scans and completing the scene based on learnt priors.
In this project, the goal is to leverage such predictions to improve the exploration capabilities of mobile robots. This primarily includes the integration and fusion of the predicted completions with the observed map, as well as the extension of an informative path planning approach to account for this new information. An infrastructure for photo-realistic simulation, semantic scene completion, dense mapping, and informative path planning is provided. If the project is successful, the student is invited to contribute towards a publication of the work.
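To make the fusion step concrete, here is a minimal sketch of one plausible scheme (not the project's actual pipeline): predicted occupancy probabilities from a completion network fill in only the unobserved cells of a log-odds voxel map, down-weighted by an assumed confidence parameter so that direct sensor evidence always dominates. The function name, the flat grid layout, and the `weight` hyperparameter are illustrative assumptions.

```python
import numpy as np

def logit(p):
    """Convert an occupancy probability to log-odds."""
    return np.log(p / (1.0 - p))

def fuse_prediction(observed_logodds, predicted_prob, weight=0.3,
                    unknown_logodds=0.0):
    """Fuse a learnt scene completion into a volumetric map (sketch).

    observed_logodds: (N,) log-odds from integrated sensor scans; cells
        equal to `unknown_logodds` are treated as never observed.
    predicted_prob: (N,) occupancy probabilities from the network.
    weight: assumed confidence given to predictions relative to sensing.
    """
    fused = observed_logodds.copy()
    unknown = observed_logodds == unknown_logodds
    # Only fill in cells the sensor has not yet observed; clip the
    # probabilities to keep the logit finite.
    p = np.clip(predicted_prob[unknown], 1e-4, 1.0 - 1e-4)
    fused[unknown] = weight * logit(p)
    return fused

# Two unknown cells, one observed-occupied, one observed-free.
grid = np.array([0.0, 2.2, -1.5, 0.0])
pred = np.array([0.9, 0.5, 0.5, 0.1])   # network completion
fused = fuse_prediction(grid, pred)
# Observed cells are untouched; unknown cells take scaled prediction.
```

A scheme like this keeps the map consistent: once a cell is actually measured, later sensor updates simply overwrite the (low-weight) predicted value.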
- Literature Review.
- Development of a method to integrate scene predictions into volumetric maps.
- Development of a planning approach that leverages the developed map.
- Evaluation of the proposed system.
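The planning work package above can be illustrated with a toy example, assuming a classic frontier-based strategy: on a 2D occupancy grid, free cells bordering unknown space are extracted as frontiers and ranked by how many unknown cells a sensor of assumed range `r` could reveal. All names and the grid encoding are hypothetical; the project's actual planner operates on 3D volumetric maps.

```python
import numpy as np

FREE, OCC, UNK = 0, 1, -1  # assumed cell encoding

def frontiers(grid):
    """Return free cells that have at least one unknown 8-neighbour."""
    h, w = grid.shape
    out = []
    for y in range(h):
        for x in range(w):
            if grid[y, x] != FREE:
                continue
            neigh = grid[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
            if (neigh == UNK).any():
                out.append((y, x))
    return out

def information_gain(grid, cell, r=2):
    """Count unknown cells within sensing radius r of a candidate cell."""
    y0, x0 = cell
    h, w = grid.shape
    ys, xs = np.ogrid[:h, :w]
    mask = (ys - y0) ** 2 + (xs - x0) ** 2 <= r * r
    return int(((grid == UNK) & mask).sum())

grid = np.array([
    [0, 0, -1, -1],
    [0, 1, -1, -1],
    [0, 0,  0, -1],
])
cands = frontiers(grid)
best = max(cands, key=lambda c: information_gain(grid, c))
```

Scene completions slot naturally into such a scoring function: predicted-occupied cells can be discounted from the expected gain, steering the robot away from regions the network already considers explained.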
- Highly motivated and independent student.
- Strong interest in Robotics and Computer Vision.
- Programming skills in C++ are mandatory.
- Experience with ROS and/or deep learning frameworks is a plus.
If you are interested in this project, please send an email to Lukas Schmid (schmluk@mavt.ethz.ch) and Victor Reijgwart (victor.reijgwart@mavt.ethz.ch).