Learn Depth from Multiple Input Modalities
Learn depth from RGB frames and sparse depth information.
Depth sensing has become pervasive in applications as diverse as autonomous driving, augmented reality, and scene reconstruction. Mobile robots often feature sensors that provide depth information (LiDARs, RGB-D cameras, ToF cameras), but the data from these sensors is typically sparse. Furthermore, depending on the sensing principle, these sensors fail on shiny, bright, transparent, distant, or low-texture surfaces.
Learning-based approaches that infer depth from a single image have already shown promising results. This project aims to combine both input modalities (RGB frames + sparse depth information) in a learned model to predict a dense depth image.
Combine both RGB and (sparse) depth data from an RGB-D sensor to regress a high-quality dense depth image.
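One common baseline from the depth-completion literature is early fusion: stack the RGB image, the sparse depth map, and a validity mask channel-wise and regress dense depth with a small encoder-decoder network, supervising only the pixels where ground truth exists. The sketch below assumes PyTorch; the architecture and names such as EarlyFusionDepthNet are illustrative placeholders, not a prescribed design for this project.

```python
# Minimal early-fusion sketch (PyTorch assumed; all names illustrative).
import torch
import torch.nn as nn


class EarlyFusionDepthNet(nn.Module):
    """Tiny encoder-decoder regressing dense depth from RGB + sparse depth."""

    def __init__(self):
        super().__init__()
        # 3 RGB channels + 1 sparse-depth channel + 1 validity-mask channel.
        self.encoder = nn.Sequential(
            nn.Conv2d(5, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, rgb, sparse_depth):
        # Missing measurements are encoded as zeros; the mask tells the
        # network which sparse-depth pixels are actually valid.
        mask = (sparse_depth > 0).float()
        x = torch.cat([rgb, sparse_depth, mask], dim=1)
        return self.decoder(self.encoder(x))


model = EarlyFusionDepthNet()
rgb = torch.rand(2, 3, 64, 64)
# Simulate ~5% sparse coverage, as a projected LiDAR scan might provide.
sparse = torch.rand(2, 1, 64, 64) * (torch.rand(2, 1, 64, 64) > 0.95).float()
pred = model(rgb, sparse)  # -> (2, 1, 64, 64) dense depth

# Masked L1 loss: only supervise pixels where ground-truth depth exists.
gt = torch.rand(2, 1, 64, 64)
valid = (gt > 0).float()
loss = ((pred - gt).abs() * valid).sum() / valid.sum().clamp(min=1.0)
loss.backward()
```

Encoding missing measurements as zeros alongside an explicit validity mask is a simple way to let the network distinguish "depth = 0" from "no measurement"; alternatives such as sparsity-invariant convolutions or late fusion of separate RGB and depth branches are natural extensions to explore.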