Visual-Inertial-Laser Odometry Fusion
We would like to investigate possibilities to fuse different types of odometry algorithms.
Keywords: Visual, inertial, odometry, lidar
In this project, we would like to investigate different possibilities for fusing visual-inertial and lidar-based odometry. Visual sensors work very well in textured environments, but they are not robust to illumination changes. Laser range sensors, on the other hand, are insensitive to illumination changes, but they need geometric features for the odometry to function properly (e.g. in open, flat environments it is hard to obtain a good odometry estimate). For each sensing modality we would like to obtain the odometry estimate together with an indicator of how good it is (i.e. how confident the algorithm is about its performance), and then fuse the estimates using nonlinear optimization. We would like to evaluate different odometry algorithms for both visual-inertial and laser sensors and understand which combination works best. As a test bench, we would use a handheld sensor module, and later, if successful, we would port the algorithm to one of our robots (Menzi Muck or WACO).
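To give a flavor of the confidence-weighted fusion idea, here is a minimal sketch (not part of the project description): each modality's pose estimate is weighted by its inverse covariance, which is the closed-form solution of the underlying least-squares problem in the linear case. The actual project would optimize over full 6-DoF poses with a nonlinear solver; the function name `fuse_poses` and the example numbers are purely illustrative.

```python
import numpy as np

def fuse_poses(poses, covariances):
    # Information-weighted fusion: each odometry estimate contributes
    # in proportion to its confidence (inverse covariance).
    infos = [np.linalg.inv(c) for c in covariances]
    H = sum(infos)                                 # combined information matrix
    b = sum(I @ p for I, p in zip(infos, poses))   # information-weighted sum
    fused = np.linalg.solve(H, b)                  # fused estimate
    return fused, np.linalg.inv(H)                 # fused covariance

# Hypothetical 2D example: the visual-inertial estimate is confident
# in x but not y, the lidar estimate is confident in y but not x.
p_vio, c_vio = np.array([1.0, 2.0]), np.diag([0.01, 0.25])
p_lidar, c_lidar = np.array([1.2, 1.9]), np.diag([0.25, 0.01])

fused, cov = fuse_poses([p_vio, p_lidar], [c_vio, c_lidar])
# fused[0] stays close to the confident visual x estimate,
# fused[1] close to the confident lidar y estimate.
```

The same principle carries over to the optimization-based setting: per-modality confidence estimates enter the cost function as residual weights, so an unreliable modality (e.g. vision under poor illumination) is automatically down-weighted.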
Literature:
- [1] Sandy, T., Stadelmann, L., Kerscher, S. and Buchli, J., 2019. Confusion: Sensor fusion for complex robotic systems using nonlinear optimization. IEEE Robotics and Automation Letters, 4(2), pp.1093-1100.
- [2] Scaramuzza, D. and Fraundorfer, F., 2011. Visual odometry [tutorial]. IEEE Robotics & Automation Magazine, 18(4), pp.80-92.
- [3] Zhang, J. and Singh, S., 2015, May. Visual-lidar odometry and mapping: Low-drift, robust, and fast. In 2015 IEEE International Conference on Robotics and Automation (ICRA) (pp. 2174-2181). IEEE.
- Literature investigation
- Integration of different visual odometry algorithms
- Integration of different lidar odometry algorithms
- Confidence estimates for poses
- Fusion using nonlinear optimization
- Programming skills in C++
- ROS knowledge is a plus
- Basic knowledge of optimization
- Experience with lidar or visual inertial sensors is a plus
Edo Jelavic, jelavice@ethz.ch
Please send your grade transcripts and CV.