Advanced User Interfaces for Shared Task Human-Robot Collaboration
This project focuses on developing an innovative user interface for our human-robot collaboration system with a robotic arm. The student will investigate various interface modalities, including speech input, gesture recognition, and eye tracking, using sensors such as cameras, microphones, and IMUs to effectively communicate user intent and control the robot during collaborative tasks.
Keywords: Robotics, User Interface, Multimodal, Data Science, Computer Vision, Human-Robot Collaboration, Deep Learning
We have developed a human-robot collaboration system that allows a robot arm to function as a “third hand” in shared tasks. Currently, the user interface is based on the HoloLens 2 AR glasses and uses gestures and eye tracking to give the robot commands. However, our rudimentary UI implementation is clunky and distracts the user from the task at hand. To develop a more seamless user interface, this project aims to explore a wide range of sensors and input modalities, such as speech, gestures, and full-body, hand, and eye tracking using cameras, microphones, and IMUs. We are looking for a person who is passionate about robotics and wants to work with multimodal data to improve collaboration between humans and robots.
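To give a concrete flavor of what such a multimodal interface could look like, below is a minimal Python sketch that fuses a recognized speech keyword with a simple camera-based hand-gesture check before issuing a robot command. The RobotArm class, the command names, and the keyword set are hypothetical placeholders for illustration, not part of our existing system.

```python
# Illustrative only: fuse a speech keyword with a rough "open hand" gesture check
# before sending a command. RobotArm and the command names are hypothetical.
import cv2                       # pip install opencv-python
import mediapipe as mp           # pip install mediapipe
import speech_recognition as sr  # pip install SpeechRecognition

COMMANDS = {"hold": "HOLD_PART", "release": "RELEASE_PART", "hand over": "HAND_OVER"}

class RobotArm:  # hypothetical stand-in for the real robot API
    def execute(self, command):
        print(f"[robot] executing {command}")

def listen_for_keyword(recognizer, mic):
    """Return a known keyword heard in a short speech snippet, or None."""
    with mic as source:
        audio = recognizer.listen(source, phrase_time_limit=3)
    try:
        text = recognizer.recognize_google(audio).lower()
    except (sr.UnknownValueError, sr.RequestError):
        return None
    return next((kw for kw in COMMANDS if kw in text), None)

def hand_is_open(frame):
    """Very rough 'open hand' check based on MediaPipe hand landmarks."""
    with mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return False
    lm = result.multi_hand_landmarks[0].landmark
    # index (8) and middle (12) fingertips above the wrist (0) in image coordinates
    return lm[8].y < lm[0].y and lm[12].y < lm[0].y

if __name__ == "__main__":
    robot, recognizer, mic, cam = RobotArm(), sr.Recognizer(), sr.Microphone(), cv2.VideoCapture(0)
    keyword = listen_for_keyword(recognizer, mic)
    ok, frame = cam.read()
    if keyword and ok and hand_is_open(frame):  # act only when both modalities agree
        robot.execute(COMMANDS[keyword])
    cam.release()
```

In practice the two modalities would run continuously and in parallel; this single-shot loop only illustrates the idea of requiring agreement between speech and gesture before the robot acts.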
You will be introduced to our robot system and the available sensors (cameras, a microphone, and a smartwatch with an IMU). Your tasks include:
- Exploring potential interactions to control the robot in different scenarios and collaboration modes
- Digitizing the required interactions using techniques from computer vision and data science (see the sketch after this list)
- Integrating the resulting user interface into a demonstrator that can be shown at fairs and conferences
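As an illustration of what digitizing an interaction might involve, the sketch below flattens MediaPipe hand landmarks into a feature vector and feeds them to a small scikit-learn classifier. The gesture labels and the data-collection step are assumptions made only for this example, not a prescribed approach.

```python
# Illustrative sketch: turn one hand pose into a 63-dim feature vector
# (21 landmarks x, y, z) and classify it with a simple scikit-learn model.
import cv2
import mediapipe as mp
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def landmark_features(frame):
    """Return the flattened hand landmarks of one detected hand, or None."""
    with mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    lm = result.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in lm]).flatten()

# X: feature vectors extracted from recorded frames; y: gesture labels
# annotated during data collection (e.g. "point", "grasp", "stop" -- hypothetical).
X, y = [], []
# ... fill X and y from a labelled recording session ...

clf = RandomForestClassifier(n_estimators=100, random_state=0)
# clf.fit(np.stack(X), y)                                  # train once data is collected
# pred = clf.predict(landmark_features(frame)[None, :])    # classify a live frame
```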
- Strong programming skills (Python, C#, C++, …)
- An interest in robotics
- Experience with machine learning, data science, or computer vision
- The ability to take initiative and shape the direction of the project
- Enthusiasm for tackling practical challenges
As part of our research at the AR Lab within the Human Behavior Group, we work on automatically analyzing a user’s interaction with their environment in scenarios such as surgery or human-robot collaboration. By collecting real-world datasets in these scenarios and using them for machine learning tasks such as activity recognition, object pose estimation, or image segmentation, we can gain an understanding of how a user performed a given task. We can then use this information to provide the user with real-time feedback through mixed reality devices, such as the Microsoft HoloLens, which guide them and help prevent mistakes.
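As a simplified illustration of one such machine learning task, the sketch below performs sliding-window activity recognition over a smartwatch accelerometer stream. The window length, features, and activity labels are illustrative assumptions rather than our actual pipeline.

```python
# Illustrative sketch of IMU-based activity recognition: sliding windows over an
# accelerometer stream are reduced to mean/std features and classified.
import numpy as np
from sklearn.svm import SVC

def window_features(acc, win=100, step=50):
    """acc: (n_samples, 3) accelerometer stream -> (n_windows, 6) mean/std features."""
    feats = []
    for start in range(0, len(acc) - win + 1, step):
        w = acc[start:start + win]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
    return np.array(feats)

# Training data would come from the recorded real-world datasets, with one label
# per window such as "screwing", "reaching", "idle" (hypothetical activity set).
# clf = SVC().fit(window_features(acc_train), labels_per_window)
# predictions = clf.predict(window_features(acc_live))
```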
- Master thesis
Please send your CV and master thesis grades to Sophokles Ktistakis (ktistaks@ethz.ch)