Label transfer from satellite to aerial imagery
The project aims to implement a semantic label transfer from satellite to aerial imagery in order to enable the training of image-based machine learning algorithms for autonomous aerial vehicle tasks, such as path planning, collision avoidance, and localization.
Keywords: semantic scene information, dataset generation, air-satellite semantic label transfer, vision-based deep learning, computer vision, drone navigation
Nowadays, several sources of satellite imagery, such as elevation and semantic maps, are publicly available. To let robots use this information to navigate their environment, or to train vision algorithms with it, the semantic knowledge must be transferred from the original point clouds to the images seen from the robot's perspective. Transferring semantic labels from satellite to aerial images then enables several robotic tasks, such as localization and path planning. However, satellite and aerial imagery are acquired at different times, so geometric differences are usually present: dynamic objects (e.g. cars, pedestrians), construction works, and seasonal changes (e.g. vegetation) are common sources of error. The goal of this project is therefore to implement a strategy that identifies and corrects areas of the scene where satellite and aerial imagery do not match. Satellite images from Swisstopo and aerial images from drones will be used in the project.
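As a minimal sketch of the label-transfer idea described above (not the project's actual pipeline), one can project each semantically labeled 3D point of a georeferenced point cloud into a drone image through an assumed pinhole camera model. The function name, intrinsics, pose, and label values below are illustrative placeholders:

```python
import numpy as np

def transfer_labels(points_world, labels, K, R, t, image_shape):
    """Project labeled world points into an image; return a per-pixel label map.

    points_world: (N, 3) points in the world frame
    labels:       (N,)   integer semantic class per point
    K:            (3, 3) camera intrinsics
    R, t:         world-to-camera rotation (3, 3) and translation (3,)
    image_shape:  (height, width)
    """
    h, w = image_shape
    label_map = np.full((h, w), -1, dtype=int)      # -1 marks "no label"

    # World -> camera frame, then perspective projection.
    pts_cam = points_world @ R.T + t
    in_front = pts_cam[:, 2] > 0                     # keep points ahead of the camera
    pts_cam, lbls = pts_cam[in_front], labels[in_front]
    uv = pts_cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]                      # divide by depth

    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)  # discard off-image points
    label_map[v[valid], u[valid]] = lbls[valid]      # last point wins per pixel
    return label_map
```

A real pipeline would additionally need a z-buffer to resolve occlusions (nearest point wins, not last) and would have to handle exactly the mismatched regions this project targets, where the projected satellite labels disagree with the aerial image content.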
-WP1: Familiarisation with a current implementation that aligns both satellite and aerial imagery;
-WP2: Literature review of existing state-of-the-art point-cloud correction approaches;
-WP3: Development of a strategy to identify and correct problematic areas in the scene;
-WP4: Evaluation of the proposed method and comparison against ground-truth information.
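For WP4, a common way to compare transferred labels against ground truth is per-class intersection-over-union (IoU). The sketch below is one plausible formulation, assuming label maps use -1 for unlabeled pixels as in typical semantic-segmentation evaluation:

```python
import numpy as np

def per_class_iou(pred, gt, num_classes):
    """Per-class IoU between a predicted and a ground-truth label map.

    Pixels labeled -1 in the ground truth are ignored; classes absent
    from both maps are skipped rather than reported as 0.
    """
    ious = {}
    valid = gt >= 0
    for c in range(num_classes):
        p = (pred == c) & valid
        g = (gt == c) & valid
        union = np.logical_or(p, g).sum()
        if union == 0:
            continue                       # class absent from both maps
        ious[c] = np.logical_and(p, g).sum() / union
    return ious
```

The mean of these values (mIoU) gives a single summary number for comparing correction strategies against the ground-truth information mentioned in WP4.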