Fast Deformable Mesh Tracking from Occluded Point Clouds
Despite recent advances in point cloud fusion and mesh reconstruction, real-time shape reconstruction for complex and deformable objects remains an open problem in robotics. While state-of-the-art works in computer graphics achieve good reconstruction accuracy on simulated point clouds, they are not time-efficient enough for robotic applications and have not been validated on real-world point clouds obtained via RGB-D cameras, which suffer from noise, occlusions, and dropped points. This project aims to make prior work from the lab robust to such artifacts.
Keywords: 3D Vision, Robotic Perception, Mesh Reconstruction, Point Cloud, Data Augmentation
The student will study the discrepancy between simulated and real-world point cloud data, and develop a robotic learning pipeline for generalizable, fast mesh reconstruction from occluded/noisy point cloud observations (PyTorch/PyTorch3D), leveraging our existing Python code base that already enables mesh reconstruction from simulated noiseless point cloud data. A literature review will be part of the thesis, and a written report and an oral presentation will conclude it.
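One common way to narrow the sim-to-real gap described above is to augment simulated point clouds so they resemble RGB-D sensor output before training. The sketch below is purely illustrative (the function name, noise magnitudes, and dropout rate are assumptions, not part of the lab's code base): it adds Gaussian jitter to simulated points and randomly drops a fraction of them to mimic occlusions and dropped sensor returns.

```python
import torch

def augment_point_cloud(points: torch.Tensor,
                        jitter_std: float = 0.005,
                        drop_prob: float = 0.2) -> torch.Tensor:
    """Illustrative augmentation (hypothetical helper, not from the lab's code base).

    Mimics RGB-D sensor artifacts on a simulated point cloud of shape (N, 3):
    - Gaussian jitter per coordinate, simulating depth noise.
    - Random point dropout, simulating occlusions / dropped returns.
    """
    # Add per-coordinate Gaussian noise in the same units as the cloud (e.g. metres)
    noisy = points + jitter_std * torch.randn_like(points)
    # Keep each point independently with probability (1 - drop_prob)
    keep = torch.rand(points.shape[0]) >= drop_prob
    return noisy[keep]

# Toy usage: a random "simulated" cloud of 1024 points
cloud = torch.rand(1024, 3)
aug = augment_point_cloud(cloud)
print(aug.shape)  # roughly 80% of the points remain, still 3 coordinates each
```

In a training loop, such an augmentation would be applied on the fly to the simulated clouds, so the reconstruction network is exposed to noise statistics closer to real RGB-D data. The actual noise model for the thesis would be derived from the measured discrepancy between simulated and real clouds.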
The goal of this project is to address the discrepancy between simulated and real-world point cloud data for robotic perception, so that prior mesh reconstruction work can be extended to occluded and noisy point cloud observations.