Master Thesis: Deep Learning for Drone Motion Estimation
This thesis will be conducted as a collaboration between the Autonomous Systems Lab and Voliro Airborne Robotics (www.voliro.com).
At Voliro we build drones that need to physically interact with their environment. This requires robust and highly accurate positioning, which can’t be achieved by relying solely on traditional sensors like optical flow and GPS. The goal of your project will be to improve the position estimation when close to structures by taking advantage of a front-facing RGB-D camera mounted on the drone.
To achieve this, you will need to adapt state-of-the-art deep learning-based techniques for scene flow or egomotion estimation to run in real-time on the onboard GPU and to take advantage of prior information from the drone’s other sensors. The solution should ideally be able to work in challenging environments with poor lighting conditions and little texture.
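As a point of reference for the egomotion-estimation task described above, the classical baseline that learned methods are typically compared against is rigid alignment of matched 3D points from consecutive RGB-D frames (the Kabsch/Umeyama solution). The sketch below is purely illustrative and is not part of the project description; in practice, a learned scene-flow or matching network would supply the correspondences.

```python
import numpy as np

def estimate_egomotion(src, dst):
    """Estimate the rigid transform (R, t) that maps src onto dst.

    src, dst: (N, 3) arrays of matched 3D points, e.g. back-projected
    from two consecutive RGB-D frames. Classical Kabsch/Umeyama
    least-squares solution; a learned method would replace the
    correspondence step and could fuse priors from other sensors.
    """
    # Center both point sets on their centroids.
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean

    # SVD of the cross-covariance matrix gives the optimal rotation.
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # Reflection correction keeps det(R) = +1.
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

With noise-free correspondences this recovers the camera motion exactly; the interesting part of the thesis is producing reliable correspondences (or direct motion estimates) in low-texture, poorly lit scenes where this baseline breaks down.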
Work packages:
- Literature review
- Propose or adapt learned methods for the use case
- Evaluate on recorded data
- Deploy on the onboard GPU and integrate with the flight controller
- Evaluate real-time performance in flight
Requirements:
- Highly motivated and independent student
- Strong foundation in Deep Learning/ML, State Estimation, and Computer Vision
- Strong interest in deploying systems on a robotic platform
- Good Python skills
- Prior (project) experience with ROS 2 and TensorRT is beneficial
Send your CV, transcript, and a short introduction explaining why you are interested in this topic to:
- Marius Fehr (marius.fehr@voliro.com)
- Daniel Mouritzen (daniel.mouritzen@voliro.com)