State Estimation using Ground Penetrating Radar Sensors
The goal of this project is to utilize measurements from a Ground Penetrating Radar (GPR) for state estimation in challenging conditions.
Keywords: Machine Learning, Sensor Fusion, State Estimation
The use of visual data enables accurate and robust localization and ego-motion estimation under various conditions. However, if the environment lacks strong texture or its appearance has changed drastically (e.g. due to different weather conditions), visual information may be insufficient for a stable estimation result. While a GPR's ability to perceive what lies beyond the visible surface is crucial in inspection tasks, its measurements can also be used to localize a device based on the obtained radar signature.
In the literature, a few approaches already demonstrate the benefit of using GPR sensor data in the state estimation process. In this project, we aim to go further and investigate how distinct features can be extracted from GPR scans so that they can be associated across multiple scans. Ultimately, this information shall be used for state estimation in cases where reliable visual information is scarce. We would also like to address the question of whether features can be extracted in such a way that we can not only track them over multiple scans but also localize a query scan against previous scans, analogous to visual localization.
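To make the idea of associating features across GPR scans concrete, here is a minimal sketch of extracting patch descriptors from a B-scan (a 2D array of radar traces over depth) and matching them between scans with a nearest-neighbour ratio test. All function names, the descriptor choice, and the parameters are illustrative assumptions, not part of the project's method; a real pipeline would likely use learned features as the text suggests.

```python
import numpy as np

def extract_patch_descriptors(bscan, patch=16, stride=8):
    """Slide a window over a GPR B-scan (traces x depth samples) and
    describe each patch by its zero-mean, unit-norm intensity vector.
    Descriptor choice is a placeholder for a learned feature."""
    descs, locs = [], []
    h, w = bscan.shape
    for r in range(0, h - patch + 1, stride):
        for c in range(0, w - patch + 1, stride):
            p = bscan[r:r + patch, c:c + patch].ravel().astype(float)
            p -= p.mean()
            n = np.linalg.norm(p)
            if n > 1e-6:  # skip near-constant patches (weak "texture")
                descs.append(p / n)
                locs.append((r, c))
    return np.asarray(descs), np.asarray(locs)

def match_descriptors(d1, d2, ratio=0.8):
    """Nearest-neighbour matching with a Lowe-style ratio test:
    accept a match only if it is clearly better than the runner-up."""
    matches = []
    for i, d in enumerate(d1):
        dist = np.linalg.norm(d2 - d, axis=1)
        j, k = np.argsort(dist)[:2]
        if dist[j] < ratio * dist[k]:
            matches.append((i, int(j)))
    return matches
```

The same matching step could serve both uses named above: tracking features between consecutive scans, and localizing a query scan against a database of previous scans.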
You will have the opportunity to contribute to active research in the field of localization, mobile mapping and scene perception with direct implication for current and future industry applications. This project is carried out with a shared supervision between the Vision for Robotics Lab and the Hexagon Technology Center. You’ll have the chance to work with a passionate team of computer vision and robotics engineers at Hexagon’s Innovation Hub in Zurich and gain insight into applied research in industry.
- Familiarize yourself with GPR measurement technology and state-of-the-art deep learning approaches for feature detection and matching.
- Explore how additional sensor modalities can be combined in the detection process.
- Develop a feature extraction method that makes use of GPR scans.
- Implement a matching scheme for your features.
- Test and evaluate the implemented approach.
- Design a state estimation pipeline that fuses GPR features with visual-inertial data.
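As a toy illustration of the final fusion step, the sketch below runs a 1-D Kalman filter along the travel direction: visual-inertial odometry supplies relative displacement increments (prediction), and an occasional GPR match against a previously mapped scan supplies an absolute position fix (update). The noise values and the scalar state are simplifying assumptions for illustration only; the actual pipeline would be multi-dimensional and is left open by the project.

```python
def fuse_gpr_vio(vio_deltas, gpr_fixes, r_vio=0.02, r_gpr=0.25):
    """Scalar Kalman filter: predict with odometry increments,
    correct with sparse GPR-based position fixes.
    vio_deltas: list of per-step displacements from odometry.
    gpr_fixes: dict step_index -> absolute position from a GPR match.
    r_vio, r_gpr: illustrative noise variances (assumed values)."""
    x, p = 0.0, 1.0
    track = []
    for k, dx in enumerate(vio_deltas):
        # prediction: odometry increment inflates the covariance
        x += dx
        p += r_vio
        # update: only when a GPR localization fix is available
        z = gpr_fixes.get(k)
        if z is not None:
            gain = p / (p + r_gpr)
            x += gain * (z - x)
            p *= (1.0 - gain)
        track.append(x)
    return track
```

The point of the sketch is the role division: odometry drifts, and even sparse GPR fixes pull the estimate back toward the true position, which is exactly the regime (scarce visual information) the project targets.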
- You’re highly motivated and curious about the topic.
- You have a strong interest in visual SLAM.
- You have previous experience with deep learning and computer vision.
- Solid programming skills (C++/Python) are mandatory.
- An excellent academic record is desirable but may be compensated by profound knowledge in the aforementioned fields.