Object goal navigation for quadrupedal robots
Imagine asking an assistance robot, “Could you please bring me an apple?” The robot should respond by first walking to the apple, picking it up, walking back to you, and delivering it. More realistically, the robot does not know a priori where the apple is, so it must locate the apple in a cluttered environment through exploration. In this student project, we will investigate this object goal navigation problem, the essential first step toward interacting with objects.
In its simplest form, object goal navigation [1, 2] is the task of navigating to a target object in an unexplored environment that resembles an apartment or an office. Using only onboard perception, the robot should locate the object efficiently (e.g., along a near-shortest path). As humans, we know a toaster is usually found near the fridge in the kitchen and is highly unlikely to be in the bedroom. How can a robot exploit such common sense for a more efficient search? For example, the robot could perform local planning based on the current camera image to maximize its chance of finding the object. At the same time, the robot should not forget its past experience: a global semantic map of previously detected objects could guide the global search. A key ingredient of efficient object goal navigation is object relationships (apples are usually near other fruits) [3, 4, 5, 6], which can also be integrated into this pipeline. Ultimately, we will deploy the resulting object goal navigation algorithm on our mobile manipulator (ALMA) to efficiently find common objects in the lab.
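To make the idea of prior-guided search concrete, here is a minimal sketch of how co-occurrence priors and a global semantic map could be combined into a search heatmap over candidate goals. All names (`COOCCURRENCE`, `score_cell`, `best_goal`) and the prior values are illustrative assumptions, not part of the project or any cited method.

```python
import math

# Hypothetical co-occurrence prior: likelihood of finding the target
# (e.g., an apple) near each detected object class. Values are made up.
COOCCURRENCE = {"fridge": 0.6, "table": 0.3, "bed": 0.05}

def score_cell(cell, detections, decay=0.5):
    """Heatmap value of a candidate search cell: sum of co-occurrence
    priors of all mapped objects, discounted by distance to the cell."""
    score = 0.0
    for obj, (ox, oy) in detections:
        d = math.hypot(cell[0] - ox, cell[1] - oy)
        score += COOCCURRENCE.get(obj, 0.0) * math.exp(-decay * d)
    return score

def best_goal(frontier_cells, detections):
    """Pick the frontier cell with the highest heatmap value."""
    return max(frontier_cells, key=lambda c: score_cell(c, detections))

# Toy global semantic map: objects already seen, with grid coordinates.
detections = [("fridge", (2, 2)), ("bed", (9, 9))]
frontier = [(3, 2), (8, 9), (5, 5)]
print(best_goal(frontier, detections))  # -> (3, 2), the cell near the fridge
```

In a real pipeline the frontier cells would come from an exploration module, the detections from a semantic mapping stack, and the priors could be learned from data rather than hand-specified.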
[1] Batra, Dhruv, et al. "Objectnav revisited: On evaluation of embodied agents navigating to objects." arXiv preprint arXiv:2006.13171 (2020).
[2] Habitat Challenge 2022 (https://aihabitat.org/challenge/2022/)
[3] Qiu, Yiding, Anwesan Pal, and Henrik I. Christensen. "Learning hierarchical relationships for object-goal navigation." arXiv preprint arXiv:2003.06749 (2020).
[4] Chaplot, Devendra Singh, et al. "Object goal navigation using goal-oriented semantic exploration." Advances in Neural Information Processing Systems 33 (2020): 4247-4258.
[5] Mayo, Bar, Tamir Hazan, and Ayellet Tal. "Visual navigation with spatial attention." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.
[6] Du, Heming, Xin Yu, and Liang Zheng. "Learning object relation graph and tentative policy for visual navigation." European Conference on Computer Vision. Springer, Cham, 2020.
- Literature review on relevant topics (visual navigation, semantic mapping, etc.)
- Develop a method for creating a search heatmap from the global semantic map and the local camera image
- Semantic mapping to build a global map of observed objects
- Extensive tests in simulation and on hardware
- Knowledge of machine learning
- Experience with Python and deep learning frameworks
- Highly motivated and research-oriented
Kaixian Qu (kaixqu@ethz.ch) Please include your CV and up-to-date transcript.