Leveraging Event Cameras for 3D Gaussian Splatting Reconstruction under Fast Motion
This project seeks to leverage the sparse nature of events to accelerate the training of radiance fields.
Building on recent advances in 3D Gaussian Splatting (3DGS) methods for scene reconstruction and novel view synthesis, this project focuses on overcoming significant limitations that arise in scenarios involving fast camera motion or rapid object dynamics. Current 3DGS methods, while impressive under controlled conditions, struggle in two key areas: (1) inaccuracies in camera tracking within 3DGS-enabled SLAM systems and (2) degraded object reconstruction quality due to motion blur. These challenges are further amplified in scenes with low light or high dynamic range. Event cameras, with their high temporal resolution and robustness to motion blur, present an exciting opportunity to address these issues. By leveraging the asynchronous event streams provided by these cameras, we aim to enhance the performance of 3DGS methods in tracking and reconstruction tasks under challenging conditions. Applicants with expertise in programming and computer vision, and experience with machine learning frameworks (e.g., PyTorch), are invited to apply. Previous experience with 3DGS, NeRF, and event-based cameras is a plus.
The project aims to develop a novel framework that integrates the high temporal resolution and motion blur robustness of event cameras with state-of-the-art 3D Gaussian Splatting (3DGS) techniques to enable accurate scene reconstruction and synthesis in challenging dynamic environments. Key objectives include enhancing 3DGS-based SLAM systems for improved camera tracking and addressing motion blur-induced degradation in object reconstruction quality. The framework will be benchmarked on tasks involving fast camera motion, rapid object dynamics, low-light scenarios, and high dynamic range scenes, demonstrating its robustness and scalability. The ultimate goal is to establish new methods for leveraging event cameras to advance 3DGS technologies, enabling high-quality reconstructions in real-world dynamic environments.
Interested candidates should send their CV, transcripts (bachelor and master), and descriptions of relevant projects to Marco Cannici (cannici AT ifi DOT uzh DOT ch), Davide Scaramuzza [sdavide (at) ifi (dot) uzh (dot) ch].