Decentralized Visual Place Recognition in the Real World
Make design decisions for a decentralized multi-robot SLAM experiment setup from the perspective of place recognition.
Keywords: SLAM, decentralized, experiment, robotics, multi-robot, distributed, place recognition, loop closure, computer vision
We have recently developed neural-network-based decentralized multi-robot visual place recognition and SLAM, and demonstrated them on well-known datasets ( http://rpg.ifi.uzh.ch/docs/arXiv17_Cieslewski.pdf ). We now want to take this work one step further and deploy it in the real world with a group of quadrotors. This is where you come in.
In this project, you will analyse the feasibility of real-world decentralized visual SLAM from one key aspect: visual place recognition. You will help plan the field experiment by engineering for performance and robustness. Based on data you collect, you will make design decisions for various aspects of the experiment. Which aspects you focus on is up to you (what are the main bottlenecks?), but examples include: camera placement on the robot, active camera control, neural network fine-tuning, choice of experiment location (the default option is our offices), adapting the environment with minimal effort, …
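At its core, descriptor-based place recognition compares a global descriptor of the current camera image against a database of descriptors from previously visited places. The sketch below illustrates this matching step with stand-in random descriptors; in the actual system the descriptors would come from a trained CNN, and the dimensionality and similarity threshold here are purely illustrative assumptions.

```python
import numpy as np

def match_place(query_desc, db_descs, threshold=0.8):
    """Return the index of the best-matching database descriptor, or None.

    Descriptors are assumed to be L2-normalized global image descriptors;
    a cosine similarity above `threshold` counts as a place-recognition match.
    """
    sims = db_descs @ query_desc          # cosine similarities (unit vectors)
    best = int(np.argmax(sims))
    return best if sims[best] >= threshold else None

# Toy example: 100 "places" with random 128-D stand-in descriptors.
rng = np.random.default_rng(0)
db = rng.normal(size=(100, 128))
db /= np.linalg.norm(db, axis=1, keepdims=True)

# Simulate revisiting place 42 with some descriptor noise.
query = db[42] + 0.05 * rng.normal(size=128)
query /= np.linalg.norm(query)

print(match_place(query, db))  # recovers index 42
```

In a decentralized setting, the key engineering question becomes how to distribute this database and the queries across robots under bandwidth constraints, which is one of the bottlenecks the project would examine.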
Contact: Titus Cieslewski ( titus at ifi.uzh.ch ). Please attach your CV and transcript. Appreciated skills: Linux, ROS, good programming skills, ideally in Matlab or Python.