A multi-task Neural Network for drone delivery
Object Detection using Deep Learning to improve landing of a UAV in the scenario of drone delivery
Keywords: Object Detection, Place Recognition, Aerial Vehicles, Deep Learning, Computer Vision
Unmanned Aerial Vehicles (UAVs) have sparked great interest in the delivery industry in recent years. While the delivery of goods using UAVs in remote areas is already a reality, safety issues still prevent their use in urban environments, especially when operating in highly populated areas. To guarantee a safe delivery, a good understanding of the workspace is crucial. As such, several approaches to Simultaneous Localisation And Mapping (SLAM) have been proposed in the literature to address robotic ego-motion and scene estimation, while Place Recognition has become fundamental to increasing the accuracy of both. In addition, a better understanding of the environment during landing can be achieved by augmenting the drone delivery pipeline with semantic information.
The goal of this project is therefore to develop an object detection strategy capable of identifying dynamic objects that should be avoided during landing (e.g. cars, pedestrians, etc.). This information can be used not only to avoid collisions and choose the best path to the landing spot, but also to improve place recognition, since features belonging to dynamic objects are not always present in the scene and should be ignored when comparing images of a location. The object detection algorithm needs to recognise the objects of interest from different viewpoints, as the UAV descends from high to low altitude during landing, so the same objects are captured from very distinct viewpoints. As state-of-the-art object detection approaches usually rely on large and powerful neural networks, we plan to use knowledge distillation to train a smaller network capable of running onboard a small UAV with limited computational capabilities. In addition, the proposed network should output not only the classes of the dynamic objects but also image features that can be used directly in the algorithms of the delivery pipeline.
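As a rough illustration of the knowledge-distillation idea mentioned above, the sketch below implements the classic soft-target loss (a temperature-softened KL term against the teacher's outputs, combined with the hard-label cross-entropy) in plain Python. The function names, temperature, and weighting here are illustrative assumptions, not part of the project specification.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature yields a
    # softer (more uniform) probability distribution.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, true_idx,
                      temperature=4.0, alpha=0.5):
    """Weighted sum of (a) cross-entropy with the hard label and
    (b) KL divergence between the temperature-softened teacher and
    student distributions."""
    # (a) standard cross-entropy on the student's un-softened output
    p_student = softmax(student_logits)
    hard_loss = -math.log(p_student[true_idx])

    # (b) KL(teacher || student) at the distillation temperature
    pt = softmax(teacher_logits, temperature)
    ps = softmax(student_logits, temperature)
    soft_loss = sum(t * math.log(t / s) for t, s in zip(pt, ps))

    # The T^2 factor keeps the soft-term gradient magnitude comparable
    # across temperatures (as proposed by Hinton et al.).
    return alpha * hard_loss + (1 - alpha) * (temperature ** 2) * soft_loss
```

In practice the same idea would be expressed with a deep-learning framework's built-in softmax and KL-divergence losses, with the teacher being a large pretrained detector and the student the compact onboard network.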
WP1: Literature review of existing state-of-the-art Object Detection techniques, for both ground and aerial vehicles.
WP2: Comparison of available state-of-the-art algorithms in Object Detection for the drone delivery scenario and selection of the network that best fits the envisioned application.
WP3: Development of an Object Detection pipeline capable of detecting dynamic objects (e.g. cars) in the presence of large viewpoint changes experienced in the drone delivery scenario.
WP4: Evaluation of the proposed method against state-of-the-art Object Detection approaches, and report writing.
The student taking this project should be highly motivated and preferably have strong analytical skills; experience with C/C++, Python, and Deep Learning would be very beneficial.
Fabiola Maffra, fmaffra@mavt.ethz.ch
Lucas Teixeira, lteixeira@mavt.ethz.ch