Visually Guided Sparse Point Cloud Completion
This project aims to develop a depth completion framework that supplements sparse point cloud data with visual information while preserving the spatial relations of the scene geometry.
Keywords: Depth Completion, Semantics, LiDAR, RGB Camera
This project aims to develop a depth completion framework that supplements sparse point cloud data with visual information while preserving the spatial relations of the scene geometry. While multiple scholarly works propose LiDAR depth completion [1, 2, 3], there is still limited research on geometry-aware [2], spatially consistent depth completion. To bridge this gap, a spatially geometry-aware depth completion framework will be developed that uses RGB images to guide the point cloud depth measurements. In particular, the framework aims to exploit both the spatial relations between points and a high-level understanding of the visual information. The developed framework will supplement sparse point cloud maps for high-resolution dense map generation and improve the robustness of high-level point cloud operations such as segmentation and dynamic obstacle detection. Special emphasis is placed on LiDAR depth completion in the far field, where neighboring LiDAR scan lines are far apart from each other. The developed pipeline will be deployed on platforms such as the ANYmal robot and the HEAP excavator.
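To illustrate the task: once LiDAR points are projected into the camera frame, depth completion means turning a depth map that is valid only along sparse scan lines into a dense one. The sketch below is a minimal, purely geometric nearest-neighbour baseline in NumPy (the function name `densify_nearest` and the toy scan-line data are illustrative assumptions, not part of the project); the learned, RGB-guided model developed in this project would replace this heuristic.

```python
import numpy as np

def densify_nearest(sparse_depth, max_iters=100):
    """Naive completion baseline: iteratively propagate each valid depth
    value (zero marks a missing pixel) to neighbouring empty pixels until
    the map is dense. Ignores RGB guidance entirely."""
    depth = sparse_depth.copy()
    for _ in range(max_iters):
        missing = depth == 0
        if not missing.any():
            break
        # Shift the map in the 4 cardinal directions and fill holes from
        # any valid neighbour. Note: np.roll wraps around the image border,
        # which is acceptable for this toy example only.
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            shifted = np.roll(depth, (dy, dx), axis=(0, 1))
            fill = missing & (shifted > 0)
            depth[fill] = shifted[fill]
            missing = depth == 0
    return depth

# Toy example: two horizontal LiDAR "scan lines" in an 8x8 depth image.
# In the far field the gap between such lines grows, which is exactly
# where this naive propagation becomes unreliable.
sparse = np.zeros((8, 8))
sparse[2, :] = 5.0   # near scan line (5 m)
sparse[6, :] = 20.0  # far scan line (20 m)
dense = densify_nearest(sparse)
```

After completion every pixel carries a depth value, but pixels between the two scan lines are assigned whichever line happens to be nearest, with no regard for object boundaries; the RGB image is meant to supply exactly that missing boundary information.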
- Literature research and problem analysis.
- Benchmarking current depth completion approaches.
- Development of a learned, visually and geometrically guided depth completion framework.
- Real-time deployment on the ANYmal robot and the HEAP excavator.
- Demonstrating advantages in perception applications such as dynamic obstacle detection and SLAM.
- Knowledge of computer vision, C++, and Python.
- Basic knowledge of machine learning architectures.
- Highly motivated and research-oriented.
- Optional: Experience with LiDAR and/or RGB camera modalities.
- tutuna@ethz.ch
- patilv@ethz.ch
- nubertj@ethz.ch
Please include your CV and up-to-date transcript.