Long-term Place Recognition using Style Transfer Techniques
Place Recognition using deep learning style transfer techniques to tackle appearance changes.
Keywords: Place Recognition, Style Transfer, Deep Learning, Computer Vision, Robotics
With the emergence of powerful techniques for robotic egomotion estimation and map building that follow the SLAM (Simultaneous Localization And Mapping) paradigm, Place Recognition has become of fundamental importance for robotic autonomy, enabling globally accurate maps, relocalization, and even collaboration between different robots performing SLAM. The problem of recognizing whether or not a robot is revisiting a place where it has been before is commonly addressed by comparing an image of its current location against all places previously visited by the robot.
Place Recognition is, however, a challenging task, due to the large variability in a scene's appearance that can be observed in the real world, caused by changes in illumination, seasons, or the presence of occlusions and dynamic objects. Place Recognition using images captured by a UAV is especially challenging, given that the same place can be revisited from very different viewpoints.
While viewpoint changes are usually addressed using feature-based approaches, these methods tend to suffer from a lack of repeatability of local descriptors when large changes in appearance occur. On the other hand, deep learning approaches for Place Recognition have demonstrated exciting results under extreme appearance changes such as day/night and seasonal variations. However, deep learning approaches are quite sensitive to changes in viewpoint, and the impact of extreme viewpoint changes, such as those experienced by a UAV, remains largely unexplored.
As such, the goal of this project is to combine a feature-based approach that can handle viewpoint variations with deep learning techniques that tackle appearance changes. To do so, deep learning approaches for style normalization (e.g. transforming all night images to day images) can be used to convert all images to a common style, making it much easier for a feature-based Place Recognition system to evaluate image similarity.
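As a rough sketch, the proposed pipeline could be structured as below. Note that this is purely illustrative: `normalize_style` stands in for a learned style-normalization network and `extract_descriptors` for a real local feature extractor (e.g. ORB or SIFT); neither is part of any existing implementation, and the actual design is a subject of the project.

```python
import numpy as np

def normalize_style(image):
    """Placeholder for a learned style-normalization network
    (e.g. one that maps night images to a day-like style).
    Here it merely rescales intensities to [0, 1]."""
    lo, hi = image.min(), image.max()
    return (image - lo) / (hi - lo + 1e-8)

def extract_descriptors(image, n_keypoints=32, dim=16):
    """Placeholder for a local feature extractor such as ORB or SIFT:
    returns one unit-norm descriptor vector per keypoint."""
    seed = int(image.sum() * 1000) % 100000  # deterministic per image
    rng = np.random.default_rng(seed)
    d = rng.standard_normal((n_keypoints, dim))
    return d / np.linalg.norm(d, axis=1, keepdims=True)

def place_similarity(query, reference):
    """Score two images as the same place: normalize both to a common
    style, extract local descriptors, match each query descriptor to
    its nearest reference descriptor, and average the similarities."""
    dq = extract_descriptors(normalize_style(query))
    dr = extract_descriptors(normalize_style(reference))
    sims = dq @ dr.T                    # cosine similarity (unit vectors)
    return float(sims.max(axis=1).mean())
```

A loop-closure decision would then threshold `place_similarity` between the current image and each previously visited place; the key idea is that style normalization runs *before* feature extraction, so the descriptors are compared under a common appearance.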
- WP1: Familiarization with our existing Place Recognition system implementation.
- WP2: Research into existing state-of-the-art Style Transfer techniques.
- WP3: Development of a Style Transfer approach to be integrated in an existing Place Recognition system.
- WP4: Experimentation of the method against state-of-the-art approaches in Place Recognition and report writing.
The student taking this project needs to be highly motivated and should preferably have strong analytical skills; experience with C/C++, Python, and deep learning would be very beneficial.