Events and Lidar for Autonomous Driving
We are collecting a large-scale dataset with a 128-beam Lidar and high-resolution event cameras. The target application is multi-sensor fusion of frames, events, and Lidar for 3D moving object detection and tracking.
Billions of dollars are spent each year to bring autonomous vehicles closer to reality. One of the remaining challenges is the design of reliable algorithms that work across a diverse set of environments and scenarios. At the core of this problem is the choice of sensor setup. Ideally, the setup provides a certain redundancy while each sensor also excels at a specific task. Sampling-based sensors (e.g. Lidar, standard cameras) are today's essential building blocks of autonomous vehicles. However, they typically oversample far-away structure (e.g. a building 200 meters away) and undersample close structure (e.g. a fast bike crossing in front of the car). Thus, they enforce a trade-off between sampling frequency and computational budget. Unlike sampling-based sensors, event cameras capture changes in their field of view with precise timing and do not record redundant information. As a result, they are well suited for highly dynamic scenarios such as driving on roads.
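To make the contrast concrete, here is a minimal toy sketch (not code from this project) of the two sampling models described above: a frame camera samples a pixel at a fixed rate regardless of scene dynamics, while an event camera emits an event only when the log-intensity at that pixel changes by more than a contrast threshold. The signal, threshold value, and function names are illustrative assumptions.

```python
import math

def frame_samples(signal, t_end, fps):
    """Frame camera model: sample the signal at a fixed rate, even when nothing changes."""
    n = int(t_end * fps)
    return [(i / fps, signal(i / fps)) for i in range(n + 1)]

def event_stream(signal, t_end, C=0.2, dt=1e-4):
    """Event camera model: emit (timestamp, polarity) whenever the log-intensity
    deviates from the last reference level by more than the contrast threshold C."""
    events = []
    ref = math.log(signal(0.0))
    t = dt
    while t <= t_end:
        delta = math.log(signal(t)) - ref
        if abs(delta) >= C:
            events.append((t, 1 if delta > 0 else -1))
            ref = math.log(signal(t))
        t += dt
    return events

# A pixel that is static for 1 s, then brightens quickly
# (e.g. a fast-moving object entering the field of view).
signal = lambda t: 1.0 if t < 1.0 else 1.0 + 5.0 * (t - 1.0)

frames = frame_samples(signal, 2.0, fps=10)  # evenly spaced samples, redundant while static
events = event_stream(signal, 2.0)           # no events while the pixel is static
```

In this toy model the frame camera spends half of its samples on a pixel that never changes, while every event lands after t = 1.0, exactly when the motion happens — the asymmetry that motivates fusing events with Lidar and frames.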
We are currently building a large-scale dataset with high-resolution event cameras and a 128-beam Lidar, targeting object detection and tracking. This project builds on existing hardware that we built and tested over the last year, and will extend it with state-of-the-art sensors.
In this project, we explore the utility of event cameras in an autonomous driving scenario. To this end, a high-quality driving dataset with state-of-the-art Lidar and event cameras will be created. Depending on progress, the prospective student will also work on building novel 3D object detection pipelines on the dataset.
We seek a highly motivated student with the following minimum qualifications:
- Experience with programming microcontrollers or motivation to acquire it quickly
- Excellent coding skills in Python and C++
- At least one course in computer vision (multiple view geometry)
- Strong work ethic
- Excellent communication and teamwork skills
Preferred qualifications:
- Background in robotics and experience with ROS
- Experience with machine learning
- Experience with event-based vision
Contact us for more details.
Mathias Gehrig (mgehrig at ifi.uzh.ch); Daniel Gehrig (dgehrig at ifi.uzh.ch)
Please attach your CV and transcripts (Bachelor and Master).