Reconstructing Reality for Skill Learning and High-Level Planning
In this project, we would like to develop a system that can use sensor data collected in an unstructured real-world scene to construct an articulated 3D model of the scene. This model will be useful for learning or fine-tuning robot skills and for high-level planning.
Keywords: Mobile manipulation, sim-to-real, physics simulation, perception
Mobile manipulation systems have become increasingly capable over the last decades. As a result, we can set our sights on employing these systems to solve useful tasks in unstructured real-world environments.
However, these tasks have in common that not all details and configurations of the environments the robot might operate in are known at design time. Furthermore, an ideal system would also be able to solve tasks it was not specifically programmed to carry out. To achieve this, the robot needs to adapt its skills (e.g., how to operate a certain mechanism or how to grasp an unusual bottle) as well as its overall strategy for solving the task (i.e., its high-level plan, consisting of the separate steps required to reach the goal). This adaptation could happen directly in the real-world scene the robot wants to act in. Unfortunately, that can be inefficient (if the robot needs multiple tries) or, in the worst case, even damage the environment.
To overcome these issues, our idea is to let the robot run any required exploration in simulation. For this, we need a representation of the scene at hand. In this project, you will develop a system that leverages real-world data collected by a robot to build a 3D reconstruction of a scene that can then be used for learning and planning. Once the geometry of a scene has been established in the simulator, the focus shifts to making the simulation more realistic by adding physical properties and/or kinematic constraints to the objects in it. For this, we would like to leverage data collected during the robot's interactions with its environment.
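To give a flavor of the kinematic-constraint step, here is a minimal, self-contained sketch (not part of the project code; all names and the estimation approach are illustrative assumptions) of how a revolute joint axis might be recovered from tracked rotations of an articulated part, e.g. a cabinet door observed at several opening angles:

```python
import numpy as np

def axis_angle_to_matrix(axis, angle):
    # Rodrigues' formula: rotation matrix from an axis-angle pair.
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def estimate_revolute_axis(rotations):
    """Estimate a shared rotation axis from observed part rotations.

    For a rotation R about axis n by angle a, (R - R.T)/2 = sin(a) * [n]_x,
    so summing the skew-symmetric parts and reading off the off-diagonal
    entries yields a vector proportional to the common axis.
    """
    skew_sum = sum(R - R.T for R in rotations)
    v = np.array([skew_sum[2, 1], skew_sum[0, 2], skew_sum[1, 0]])
    return v / np.linalg.norm(v)

# Synthetic "observations": a door rotating about a vertical axis.
true_axis = np.array([0.0, 0.0, 1.0])
observed = [axis_angle_to_matrix(true_axis, a) for a in (0.2, 0.5, 0.9)]
estimated = estimate_revolute_axis(observed)
print(np.round(estimated, 3))  # close to [0, 0, 1]
```

In the actual project, such rotations would come from tracking object parts in the reconstructed scene during robot interactions, and the estimate would need to cope with noise and with distinguishing revolute from prismatic or fixed joints.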
If you are excited about robots improving our lives in the future, as well as state-of-the-art perception and simulation technology, we would be happy to hear from you.
- Literature review on reconstructing real-world scenes in simulation
- Set up simulation environment
- Develop method for static 3D reconstruction
- Add kinematic constraints
- Add physical properties
- Evaluate the system in simulation and on a real-world platform
- Highly motivated and independent student
- Interest in perception, simulation and system identification
- Good programming skills in Python and/or C++
- Experience with one or several of the following is a plus: ROS, Git
Julian Förster (julian.foerster@mavt.ethz.ch)
Giuseppe Rizzi (grizzi@mavt.ethz.ch)