GNSS/RTK-SLAM fusion for accurate positioning of geospatial data in Mixed Reality
The main objective of the project is to increase the accuracy and usability of the Mixed Reality solution developed by V-Labs. The V-Labs team expects that integrating a fusion algorithm based on Artificial Intelligence or an Unscented Kalman Filter (UKF) can achieve this goal.
Keywords: SLAM; Sensor Fusion
In order to improve the accuracy of our application, V-Labs is looking for ways to further improve the fusion algorithm, for example through Kalman filtering or AI/ML techniques.
Kalman filtering is a well-known real-time state-estimation technique, used in autonomous vehicles and robotics to accurately estimate an object's position. Integrating a Kalman filter might allow us to correct the drift on the fly while the user is walking, and help us solve the three issues mentioned above.
However, unlike an autonomous vehicle, a user's movement (especially of the head) is highly non-linear, which means the standard Kalman filter is not a viable solution. Even the Extended Kalman Filter (EKF), which only linearizes the model around the current estimate, seems too simple. The Unscented Kalman Filter (UKF) might be a solution, but we are not sure.
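The filter family discussed above can be made concrete with a small sketch. Below is a minimal, self-contained UKF in Python/NumPy — an illustrative sketch, not V-Labs' implementation. The state is 2-D position plus velocity, a constant-velocity process model stands in for the SLAM motion prediction, and GNSS/RTK supplies noisy position measurements; all model and noise parameters are assumptions for the demo.

```python
import numpy as np

def sigma_points(x, P, lam):
    """2n+1 sigma points for mean x and covariance P (classic UKF scaling)."""
    n = len(x)
    U = np.linalg.cholesky((n + lam) * P)       # matrix square root of (n+lam)P
    return np.vstack([x, x + U.T, x - U.T])     # rows: x, x +/- columns of U

def unscented_transform(pts, Wm, Wc, noise_cov):
    """Weighted mean and covariance of transformed sigma points."""
    mean = Wm @ pts
    d = pts - mean
    return mean, d.T @ (Wc[:, None] * d) + noise_cov

class UKF:
    """Minimal UKF: constant-velocity motion model, GNSS position updates."""
    def __init__(self, x0, P0, Q, R, dt):
        self.x, self.P, self.Q, self.R, self.dt = x0, P0, Q, R, dt
        n = len(x0)
        self.lam = 3.0 - n                      # kappa = 3 - n, alpha = 1
        self.Wm = np.full(2 * n + 1, 1.0 / (2 * (n + self.lam)))
        self.Wc = self.Wm.copy()
        self.Wm[0] = self.lam / (n + self.lam)
        self.Wc[0] = self.Wm[0] + 2.0           # beta = 2 (Gaussian prior)

    def fx(self, s):
        """Process model: constant velocity over one time step."""
        px, py, vx, vy = s
        return np.array([px + vx * self.dt, py + vy * self.dt, vx, vy])

    def hx(self, s):
        """Measurement model: GNSS/RTK observes position only."""
        return s[:2]

    def predict(self):
        pts = sigma_points(self.x, self.P, self.lam)
        self.sig_f = np.array([self.fx(p) for p in pts])
        self.x, self.P = unscented_transform(self.sig_f, self.Wm, self.Wc, self.Q)

    def update(self, z):
        sig_h = np.array([self.hx(p) for p in self.sig_f])
        zp, S = unscented_transform(sig_h, self.Wm, self.Wc, self.R)
        Pxz = (self.sig_f - self.x).T @ (self.Wc[:, None] * (sig_h - zp))
        K = Pxz @ np.linalg.inv(S)              # Kalman gain
        self.x = self.x + K @ (z - zp)
        self.P = self.P - K @ S @ K.T

# Demo: a walker moving at (1.0, 0.5) m/s, GNSS noise sigma = 0.5 m.
rng = np.random.default_rng(0)
truth = np.array([0.0, 0.0, 1.0, 0.5])
ukf = UKF(x0=np.zeros(4), P0=np.eye(4), Q=0.01 * np.eye(4),
          R=0.25 * np.eye(2), dt=1.0)
for _ in range(30):
    truth[:2] += truth[2:]                      # true motion
    z = truth[:2] + rng.normal(0.0, 0.5, 2)     # noisy GNSS fix
    ukf.predict()
    ukf.update(z)
pos_err = np.linalg.norm(ukf.x[:2] - truth[:2])
```

In the real application the constant-velocity model would be replaced by a non-linear head-motion model, which is exactly where the UKF's sigma-point propagation pays off over the EKF's linearization.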
With Artificial Intelligence technologies on the rise, we can also imagine that AI or ML (Machine Learning) techniques combined with GNSS/RTK can provide a solution to optimize the accuracy of the position and orientation of our application.
**Planning**: V-Labs aims to finish the implementation of the new fusion algorithm by the end of January 2025. The earliest start date is 2 September 2024.
Sep-Oct: Study UKF and what is needed to implement it in our application.
Nov: Program the algorithm and implement the code into the application.
Dec: Field testing and fine-tuning of the algorithm; release an updated software version.
Jan: Continued field testing and fine-tuning; release an updated software version.
**Benefits**: V-Labs doesn’t require work in the office, so you can work on this topic from home or at ETH. We can temporarily provide you with a HoloLens 2 equipped with a GNSS-RTK module for the duration of your research project.
1. Study whether the UKF or AI techniques can be used to fuse the GNSS/RTK data with data from the HoloLens. (Note: we don’t have direct access to the HoloLens IMU; however, we can indirectly read its velocity and orientation in Unity3D.)
2. Develop the theoretical framework for our solution.
3. Write a (UKF- or AI-based) algorithm that corrects the drift of our MR solution on the fly, while the user is moving, so that the user doesn’t need to stop to correct for the drift and can keep working uninterrupted.
Please send your CV and transcript to d.cohenstuart@v-labs.ch
Tel: +41791947686
Website: www.v-labs.ch