Where am I? Registering Egocentric Views to 3D Satellite Data
In this project, a machine learning approach for visual appearance matching or semantic correspondence search should be developed to coarsely align the robot with the scene in 3 DoF. Given this rough initial guess, the LiDAR data should then be registered against the map, yielding the full 6 DoF pose in the world frame.
Keywords: global localization, registration, computer vision
Globally consistent localization is a fundamental task in long-term, large-scale robotic applications. Recent works [1] fuse different modalities, including GNSS signals, to achieve global localization. However, consistent geo-localization in urban areas remains an open problem, as GNSS signals are often unreliable or entirely absent (e.g., under bridges or next to high walls).
A recent increase in the amount and quality of publicly available global map information (top row of the image) allows operators to localize and navigate in a given scene.
In this work we would like to achieve something similar and develop a pipeline for global 3D localization using only egocentric visual and ranging data collected on the robot (bottom row of the image) together with a given 3D map. Using vision or LiDAR data for consistent geo-localization is still a fairly unexplored field [2, 3].
While most prior work relies on camera data and provides 2D localization against a top-down bird's-eye view (BEV), in this project we also want to exploit the readily available 3D cartography and the robot's 3D LiDAR sensor.
In this project, a machine learning approach for visual appearance matching or semantic correspondence search should be developed to coarsely align the robot with the scene in 3 DoF. Given this rough initial guess, the LiDAR data should then be registered against the map, yielding the full 6 DoF pose in the world frame. The developed pipeline should be robot-agnostic and will be tested on HEAP and/or ANYmal. As a first step, GNSS information can be used to limit the search space.
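The two-stage idea above can be sketched in plain NumPy/SciPy: a 3 DoF pose (x, y, yaw) from the visual front-end is lifted to an SE(3) initial guess, which a point-cloud registration step then refines to the full 6 DoF pose. This is a minimal sketch, not the project's prescribed method: a simple point-to-point ICP on synthetic data stands in for the map registration, and the coarse guess stands in for the learned visual alignment; a real system would likely use a robust point-to-plane variant on actual LiDAR scans.

```python
import numpy as np
from scipy.spatial import cKDTree

def yaw_to_se3(x, y, yaw):
    """Lift a 3-DoF BEV pose (x, y, yaw) to SE(3) with z = roll = pitch = 0."""
    T = np.eye(4)
    c, s = np.cos(yaw), np.sin(yaw)
    T[:2, :2] = [[c, -s], [s, c]]
    T[:2, 3] = [x, y]
    return T

def kabsch(P, Q):
    """Least-squares rigid transform mapping point set P onto Q (SVD/Kabsch)."""
    pc, qc = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - pc).T @ (Q - qc))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, qc - R @ pc
    return T

def icp(scan, map_pts, T_init, iters=30):
    """Point-to-point ICP: refine T_init so that T * scan aligns with map_pts."""
    tree = cKDTree(map_pts)
    T = T_init
    for _ in range(iters):
        moved = scan @ T[:3, :3].T + T[:3, 3]
        _, idx = tree.query(moved)          # nearest-neighbour correspondences
        T = kabsch(scan, map_pts[idx])      # re-estimate the full 6-DoF pose
    return T

# Synthetic demo: a 3D map, a scan taken from an unknown 6-DoF pose,
# and a coarse 3-DoF guess standing in for the learned visual alignment.
def rot_xyz(rx, ry, rz):
    Rx = np.array([[1, 0, 0], [0, np.cos(rx), -np.sin(rx)], [0, np.sin(rx), np.cos(rx)]])
    Ry = np.array([[np.cos(ry), 0, np.sin(ry)], [0, 1, 0], [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0], [np.sin(rz), np.cos(rz), 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

rng = np.random.default_rng(0)
map_pts = rng.uniform([-20, -20, 0], [20, 20, 5], size=(500, 3))
T_true = np.eye(4)
T_true[:3, :3] = rot_xyz(0.02, -0.01, 0.30)   # small roll/pitch, 0.30 rad yaw
T_true[:3, 3] = [2.0, -1.0, 0.4]
scan = (map_pts - T_true[:3, 3]) @ T_true[:3, :3]   # map as seen from the robot

T_est = icp(scan, map_pts, yaw_to_se3(2.3, -0.8, 0.29))
```

Starting from the yaw-only guess, the refinement recovers the roll, pitch, and height that the BEV alignment cannot observe, which is exactly the division of labour between the two stages of the proposed pipeline.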
- Research on global egocentric localization.
- Development of visual scene alignment.
- Point-cloud-based map registration.
- Testing and evaluation on the robot.
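The GNSS-based search-space restriction mentioned in the description could start from something as simple as the sketch below: a (noisy) GNSS fix is converted into the map's local metric frame and used to crop the 3D map to a radius around the robot before any matching runs. This assumes the map lives in a local east/north/up frame anchored at a known lat/lon origin; the equirectangular approximation is only valid for small areas, and the function name is illustrative.

```python
import numpy as np

R_EARTH = 6_378_137.0  # WGS-84 equatorial radius [m]

def crop_map_by_gnss(map_xyz, map_origin_latlon, fix_latlon, radius_m):
    """Keep only map points within radius_m (in the ground plane) of a GNSS fix.

    map_xyz is assumed to be (N, 3) in a local metric frame (x = east,
    y = north) whose origin sits at map_origin_latlon.  A local
    equirectangular approximation converts the fix into that frame.
    """
    lat0, lon0 = np.radians(map_origin_latlon)
    lat, lon = np.radians(fix_latlon)
    east = R_EARTH * np.cos(lat0) * (lon - lon0)
    north = R_EARTH * (lat - lat0)
    d2 = (map_xyz[:, 0] - east) ** 2 + (map_xyz[:, 1] - north) ** 2
    return map_xyz[d2 <= radius_m ** 2]
```

The radius would be chosen from the expected GNSS uncertainty in the target environment (tens to a few hundred metres in urban canyons), so that the learned visual alignment only ever searches a small map tile instead of the whole scene.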
- Experience in Python and machine learning.
- Plus: knowledge of point cloud processing and ROS.
- Highly motivated and research-oriented.