Object-aware active 3D perception
Keywords: Computer Vision, RGB-D Perception, 3D Reconstruction, Object Segmentation
Robots operating autonomously in real-world environments cannot rely on detailed a priori models of their surroundings but must instead perceive the scene and acquire task-relevant knowledge to guide subsequent interaction planning. An internal scene representation based on higher-level map entities, such as individual object instances, enables the definition of novel metrics that quantify the reconstruction quality of multiple scene elements for applications such as grasping and manipulation. Conversely, such metrics make it possible to direct the exploration of the environment toward acquiring high-quality models of the salient objects rather than of the less interesting background.
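As an illustration only (defining the actual metric is part of the project), one hypothetical form of such an object-level utility is a saliency-weighted sum of per-instance reconstruction incompleteness. The `ObjectInstance` fields and voxel-count bookkeeping below are assumptions, not the interface of the mapping framework in [1]:

```python
from dataclasses import dataclass

@dataclass
class ObjectInstance:
    """Minimal stand-in for an object entry in an instance-aware map."""
    instance_id: int
    observed_surface_voxels: int   # surface voxels observed at sufficient quality
    expected_surface_voxels: int   # estimated total surface voxels of the object
    saliency: float                # task-relevance weight (background close to 0)

def reconstruction_completeness(obj: ObjectInstance) -> float:
    """Fraction of the object's surface reconstructed so far."""
    if obj.expected_surface_voxels == 0:
        return 0.0
    return min(1.0, obj.observed_surface_voxels / obj.expected_surface_voxels)

def scene_exploration_utility(objects: list[ObjectInstance]) -> float:
    """Saliency-weighted incompleteness: high while salient objects are
    still poorly reconstructed, so exploration should continue."""
    return sum(o.saliency * (1.0 - reconstruction_completeness(o))
               for o in objects)
```

A salient, half-reconstructed mug thus contributes more utility than a fully reconstructed background surface, which is exactly the behavior the posting describes.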
The goal of this project is to develop a novel object-aware next-best-view planner for a multi-object scenario. Starting from an existing incremental object-level mapping framework [1], the first task consists of defining metrics that describe the object-aware exploration utility. Next, a planning strategy should be developed within a simulator for selecting the next-best-view that maximizes the previously defined utility [2]. Finally, the developed framework should be validated on a real-world robotics setup, with a robotic arm exploring a tabletop scene containing multiple objects before interacting with them.
References:
[1] M. Grinvald, F. Furrer, T. Novkovic, J. J. Chung, C. Cadena, R. Siegwart, and J. Nieto, “Volumetric Instance-Aware Semantic Mapping and 3D Object Discovery,” IEEE Robotics and Automation Letters, vol. 4, no. 3, pp. 3037–3044, July 2019.
[2] L. Liu, X. Xia, H. Sun, Q. Shen, J. Xu, B. Chen, H. Huang, and K. Xu, “Object-Aware Guidance for Autonomous Scene Reconstruction,” ACM Transactions on Graphics (Proc. SIGGRAPH), vol. 37, no. 4, pp. 104:1–104:12, 2018.
- Review relevant literature and familiarize yourself with an existing simulator environment
- Define metrics quantifying object-level exploration utility
- Implement an efficient object-aware next-best-view planner
- Evaluate the framework in simulation and compare it against existing methods
- Integrate the framework within a real-world robotic arm setup
- Strong interest in computer vision
- Good programming skills in C++/Python
- Interest in or experience with one or more of the following is a plus: 3D vision, ROS, Git
Please apply with your CV and academic transcripts to Julian Foerster (julian.foerster@mavt.ethz.ch) and Margarita Grinvald (margarita.grinvald@mavt.ethz.ch)