High-Accuracy Visual-Inertial SLAM for Autonomous Navigation of Small UAVs
This project aims to develop a highly accurate monocular visual-inertial SLAM pipeline capable of running in real time on the onboard embedded processor of a small UAV.
The goal of this project is to develop a framework that achieves **accurate pose estimation**, enabling **autonomous navigation** of a small UAV equipped with only visual-inertial sensors and a resource-constrained embedded processor.
Although external sensing modalities such as GPS have proven to boost UAV autonomy, their range of applications is limited by the environment (e.g. between tall buildings, or indoors). In such situations, the ability to perform accurate and timely state estimation using only onboard sensors is crucial for navigating safely through the environment. It is well established that Simultaneous Localization And Mapping (SLAM) using Visual-Inertial (VI) sensing can achieve high accuracy while requiring only a limited payload. Current state-of-the-art VI-SLAM systems based on windowed optimization achieve remarkable accuracy, but at the price of computational loads that prohibit their use on embedded platforms. On the other hand, recent advances in incremental optimization methods promise to enable highly accurate VI-SLAM on computationally restricted platforms such as UAVs.
In this spirit, this project aims to develop a complete pipeline for highly accurate VI-SLAM that runs in real time onboard a small UAV. The focus lies on the careful design and implementation of an optimization back-end tailored for low computational cost. Verification and evaluation of the implementation on a realistic setup are also part of the project.
The student will have the opportunity to work on a challenging project at the cutting edge of vision-based perception. We offer the opportunity to work with a real setup and equipment provided by V4RL. A successful implementation has the potential to mark a milestone for the applicability of VI-SLAM on platforms with limited computational resources.
- WP1: Research into existing work on Visual and Visual-Inertial SLAM, as well as techniques for performing (incremental) optimization.
- WP2: Modification of an existing open-source Visual(-Inertial) tracking pipeline, with a focus on run-time efficiency as well as interfacing with the back-end optimization (WP3).
- WP3: Development of a new algorithm for the back-end optimization of the VI-tracker output of WP2, with special attention paid to computational efficiency.
- WP4: Evaluation of the developed pipeline on the real setup using a small UAV equipped with VI-sensing capabilities and an embedded processor.
- C++ programming experience
- Background in visual SLAM/sensor fusion desired
- Experience with mobile robotics, Linux, and ROS is beneficial
Interested students are invited to write to Marco Karrer (karrerm@student.ethz.ch) with cc to Patrik Schmuck (pschmuck@ethz.ch).