Virtual Reality for Teleoperation of a Walking Excavator
The goal of this project is to investigate new virtual reality methods to improve teleoperation of our walking excavator. The excavator is rendered in a virtual environment (Unity) and the environment is displayed as projected images onto 3D meshes.
The Robotic Systems Lab (RSL) at ETH Zurich is working on a fully autonomous walking excavator based on a Menzi Muck M545. On the way towards full automation, teleoperation is an intermediate step.
We are looking for a motivated student to help us improve our existing teleoperation setup. The current setup consists of multiple cameras mounted in the cabin whose streams can be viewed from any computer anywhere in the world. Displaying the camera images on a screen, however, results in a loss of depth perception, which is essential for standard tasks such as excavation or grasping objects. An alternative setup with a stereo camera on a gimbal and a VR headset was also unsatisfactory due to the delay between head motion and gimbal motion.
The idea of this project is to create a virtual replica of the real world. By rendering this virtual world on the VR headset, the operator can move their head without experiencing any delay. Furthermore, the virtual scene can be augmented with additional features to guide the operator, e.g. projections of the excavation work plan.
The key challenges the student has to solve are the following. First, a mesh of the ground around the machine has to be created from lidar scans and updated continuously. Second, a constant stream of camera images has to be projected onto these meshes in Unity. Third, the virtual model of the machine has to move identically to the teleoperated excavator by applying the machine's joint sensor readings to the model. Lastly, if time allows, the student can also investigate augmenting the scene to assist the operator.
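The first challenge, turning lidar scans into a continuously updated ground mesh, can be sketched as a simple height-field triangulation. This is an illustrative Python sketch only — the cell size, grid extent, and function name are assumptions; on the real system this would run in the ROS/Unity pipeline, likely via an elevation-mapping package:

```python
import numpy as np

def heightmap_mesh(points, cell=0.5, grid=8):
    # Bin lidar points (N x 3, world coordinates) into a grid x grid
    # height field centred on the machine; each cell stores the mean z
    # of the points that fall into it.
    half = grid * cell / 2.0
    z_sum = np.zeros((grid, grid))
    count = np.zeros((grid, grid))
    for x, y, z in points:
        i, j = int((x + half) // cell), int((y + half) // cell)
        if 0 <= i < grid and 0 <= j < grid:
            z_sum[i, j] += z
            count[i, j] += 1
    heights = z_sum / np.maximum(count, 1)

    # One vertex per cell centre, two triangles per quad of neighbours --
    # the same vertex/triangle index layout a Unity Mesh expects.
    vertices = [(i * cell - half + cell / 2,
                 j * cell - half + cell / 2,
                 heights[i, j])
                for i in range(grid) for j in range(grid)]
    triangles = []
    for i in range(grid - 1):
        for j in range(grid - 1):
            a, b = i * grid + j, i * grid + j + 1
            c, d = (i + 1) * grid + j, (i + 1) * grid + j + 1
            triangles += [(a, b, c), (b, d, c)]
    return vertices, triangles
```

Re-running this on each incoming scan (or incrementally updating only the touched cells) would give the "continuously updated" behaviour described above.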
- Evaluate the type of perception sensors installed on the machine and their placement
- Render the excavator model in Unity
- Create meshes from the environment using lidar scans
- Render the meshes in Unity
- Project camera images onto the meshes for photorealism
- Investigate augmentation of the scene to make the operator's work more efficient
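The image-projection step above amounts to projective texture mapping: each mesh vertex is projected through the camera's pinhole model to obtain per-vertex texture coordinates. A minimal sketch, assuming a known intrinsics matrix `K` and world-to-camera transform `T_cam_world` (both hypothetical names; in Unity this would typically be done in a projector shader rather than on the CPU):

```python
import numpy as np

def project_to_uv(vertices, K, T_cam_world, width, height):
    # vertices: M x 3 world-space mesh vertices
    # K: 3x3 camera intrinsics, T_cam_world: 4x4 world -> camera transform
    # Returns per-vertex (u, v) in [0, 1] plus a visibility mask.
    v_h = np.hstack([vertices, np.ones((len(vertices), 1))])
    cam = (T_cam_world @ v_h.T).T[:, :3]      # vertices in camera frame
    pix = (K @ cam.T).T
    pix = pix[:, :2] / pix[:, 2:3]            # perspective divide
    uv = pix / np.array([width, height])      # normalise to texture coords
    # A vertex is texturable only if it lies in front of the camera
    # and inside the image bounds (occlusion is ignored in this sketch).
    visible = (cam[:, 2] > 0) & (uv >= 0).all(1) & (uv <= 1).all(1)
    return uv, visible
```

With several cabin cameras, the same projection can be evaluated per camera and the best-facing image chosen per vertex or per triangle.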
- highly motivated student
- good problem solving skills
- some programming experience (C++/C#, ROS)