Trajectory estimation and scene reconstruction from any YouTube video!
Title says it all
Keywords: computer vision, YouTube, SLAM, visual odometry, badass FPV
EARLIEST PROJECT START FALL 2019! We believe that, with the right processing, it should be possible to obtain trajectory estimates and scene reconstructions from many of the videos already out there on the internet! This would have nice applications: two that we would be quite passionate about are a) visualizing FPV races ( https://www.youtube.com/watch?v=EcLk_uZe33w ) and b), more practical for the community, creating new robotics datasets with little effort.
Go from YouTube videos to trajectory estimation and scene reconstruction. The more general, the better. Of course, you will start with the simplest possible setting: slow-motion and 360° videos (which in theory require no calibration), then increase the complexity. While this will likely involve plenty of engineering, hacks and qualitative evaluation, with the right approach it might also lead to interesting research (e.g. how to deal with motion blur in tracking, auto-calibration, the potential to apply machine learning, etc.).
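To give a feel for one core building block of such a pipeline, here is a minimal sketch of how per-frame relative camera poses could be chained into a global trajectory. It assumes an upstream step (not shown, e.g. feature matching plus essential-matrix decomposition with OpenCV's cv2.recoverPose) has already produced, for each consecutive frame pair, a rotation R and translation t expressing the pose of frame k+1 in the frame of camera k. Note that monocular visual odometry only recovers translation up to an unknown scale, so the resulting trajectory is in arbitrary units.

```python
import numpy as np

def chain_relative_poses(rel_poses):
    """Chain per-frame relative poses into global camera positions.

    rel_poses: list of (R, t) pairs, where (R, t) is the pose of
    frame k+1 expressed in the coordinate frame of camera k
    (a hypothetical output of an upstream two-view estimation step).
    Returns an (N+1, 3) array of camera positions, starting at the
    origin, in arbitrary (scale-ambiguous) units.
    """
    R_world = np.eye(3)        # accumulated world-from-camera rotation
    p_world = np.zeros(3)      # current camera position in world frame
    trajectory = [p_world.copy()]
    for R, t in rel_poses:
        # Express the relative step in the world frame, then advance.
        p_world = p_world + R_world @ t
        R_world = R_world @ R
        trajectory.append(p_world.copy())
    return np.array(trajectory)
```

For example, three identical forward steps (identity rotation, unit translation along z) yield a straight-line trajectory of four points ending at (0, 0, 3), up to the global scale ambiguity mentioned above.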
Titus Cieslewski ( titus at ifi.uzh.ch ). ATTACH CV AND TRANSCRIPT! Appreciated skills: Linux, ROS, good programming skills, ideally in Matlab or Python. Preference is given to students who have taken the Vision Algorithms for Mobile Robots class!