Egocentric Video Understanding for Environment Interaction
Motivation ⇾ We want to train robots to interact in everyday home environments, but a robot needs data to learn from:
1. The robot needs recordings of humans naturally interacting with the environment.
2. We need ground truth of the interactions to evaluate methods.
3. The setup needs to be robust and versatile so that we can make many recordings.
Proposal ⇾ We want to develop a 3D ground-truth methodology for environment interactions.
We need a setup that is easier to transport than the classical “camera domes” that record dynamic scenes from every angle. Instead, we combine static scans with egocentric and a few exocentric video cameras.
Our goal is to track the dynamic states of the functional elements to within 1 cm accuracy. With this, we can go into any home and record interactions with high accuracy.
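As a minimal sketch of what the 1 cm target could mean in evaluation, the snippet below compares tracked 3D positions of a functional element against ground-truth positions frame by frame. The function name, the per-frame position representation, and the example data are illustrative assumptions, not part of the project description.

```python
import numpy as np

def within_tolerance(estimated, ground_truth, tol_m=0.01):
    """Per-frame Euclidean error (meters) and a pass/fail against tol_m.

    Hypothetical evaluation sketch: both inputs are (T, 3) arrays of
    3D positions of a tracked functional element over T frames.
    """
    estimated = np.asarray(estimated, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    errors = np.linalg.norm(estimated - ground_truth, axis=1)
    return errors, bool(np.all(errors <= tol_m))

# Example: a track that is offset by 5 mm stays within the 1 cm target.
gt = np.zeros((4, 3))
est = gt + np.array([0.005, 0.0, 0.0])
errors, ok = within_tolerance(est, gt)
```

Per-frame error (rather than an average) is used here because a single large deviation would already break downstream use of the recording as ground truth.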
Keywords: Robotics, Egocentric Video Understanding, Scene interaction
Requirements: experience with a Python deep learning framework, understanding of 3D scene and camera geometry.
Please send us a CV and transcript.
Dr. Hermann Blum (blumh@ethz.ch)
Dr. Zuria Bauer (zbauer@ethz.ch)