Visual-Inertial SLAM with Unsynchronized Sensor Inputs
The goal of this project is to develop a SLAM system that is able to perform robust and accurate state estimation using input feeds from a rolling shutter camera and unsynchronized IMU measurements.
Simultaneous Localization And Mapping (**SLAM**) is the task of moving through a previously unknown environment while mapping the robot's workspace and simultaneously estimating the robot's position within this map. By providing the robot with an understanding of its current environment, SLAM is a cornerstone of autonomous task execution. Using high-quality sensor feeds from global shutter cameras with time-synchronized IMU data, state-of-the-art SLAM systems have reached considerable maturity in robustness and accuracy. However, many devices, such as **consumer-level smartphones**, do not have such an elaborate and costly sensor setup available and are **usually equipped with rolling shutter cameras and IMUs of lesser quality**.
In contrast to global shutter cameras, rolling shutter sensors read the image line by line from the sensor chip, resulting in image distortions when the camera and/or the observed object is moving, as depicted in Fig. 1. The faster the motion, the stronger this effect becomes, eventually resulting in extreme distortions as shown in Fig. 2. This "rolling shutter effect", in combination with unsynchronized IMU data, can drastically impair the quality of the SLAM estimate. Therefore, the goal of this project is to **develop a SLAM system that is able to account for rolling shutter effects and missing time synchronization of the input data**, building on top of an existing proof of concept for rolling shutter compensation in vision-based state estimation. Towards the end of the project, the developed SLAM system should ideally be deployed and tested under realistic conditions on a consumer-level smartphone.
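To illustrate the core idea, the following is a minimal sketch (an assumption for illustration, not the project's existing proof of concept) of a rolling-shutter-aware projection: each image row is exposed at a slightly different time, so the camera pose is interpolated at the row's readout time, with an IMU-camera time offset folded in. All names and parameters (`line_delay`, `time_offset`, etc.) are illustrative.

```cpp
// Minimal sketch: rolling shutter compensation by interpolating the camera
// pose at each row's readout time, including an IMU-camera time offset.
#include <algorithm>
#include <Eigen/Core>
#include <Eigen/Geometry>

struct Pose {
  Eigen::Quaterniond q;  // rotation world -> camera
  Eigen::Vector3d t;     // translation world -> camera
};

// SLERP/linear interpolation between two poses bracketing the frame exposure.
Pose interpolate(const Pose& a, const Pose& b, double alpha) {
  return {a.q.slerp(alpha, b.q), (1.0 - alpha) * a.t + alpha * b.t};
}

// Project a world point into a rolling shutter image. The pose is evaluated at
// t_row = time_offset + row * line_delay, where line_delay is the per-row
// readout time and time_offset the IMU-camera offset; in a full SLAM system
// both would be estimated jointly with the trajectory and the landmarks.
Eigen::Vector2d projectRollingShutter(
    const Eigen::Vector3d& p_world, const Pose& pose_start,
    const Pose& pose_end, double frame_duration, double row,
    double line_delay, double time_offset,
    double fx, double fy, double cx, double cy) {
  const double t_row = time_offset + row * line_delay;
  const double alpha = std::clamp(t_row / frame_duration, 0.0, 1.0);
  const Pose pose = interpolate(pose_start, pose_end, alpha);
  const Eigen::Vector3d p_cam = pose.q * p_world + pose.t;  // world -> camera
  return {fx * p_cam.x() / p_cam.z() + cx,
          fy * p_cam.y() / p_cam.z() + cy};
}
```

A global shutter model would use a single pose for the whole frame; here the per-row time term is exactly what allows the estimator to absorb both the rolling shutter distortion and the missing time synchronization.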
- WP1: Research into existing algorithms and systems for SLAM, Visual Odometry and the handling of rolling shutter data and unsynchronized sensor feeds.
- WP2: Adaptation of an existing visual-inertial SLAM system to handle unsynchronized sensor data and rolling shutter effects.
- WP3: Deployment, testing and evaluation of the system, potentially using data from a consumer-level smartphone.
- C++ programming experience
- Background knowledge in computer vision and/or 3D geometry is beneficial