Processing the sparse and asynchronous data produced by event cameras presents significant challenges. Transformer-based models have achieved remarkable results in sequence modeling tasks, including event-based vision, thanks to their powerful representation capabilities. Despite this success, the quadratic complexity of self-attention and the resulting memory demands make them impractical to deploy on the resource-constrained devices typical of real-world applications. Recent advances in efficient sequence modeling architectures, such as state space models, offer promising alternatives that deliver competitive performance at a fraction of the computational cost. Since Transformers already demonstrate strong performance on event-based vision tasks, we aim to retain their strengths while addressing these efficiency concerns.
Study knowledge distillation techniques for transferring knowledge from large Transformer models to simpler, more efficient architectures (a minimal sketch of the basic idea is given below). Evaluate the developed models on benchmark event-based vision tasks such as object recognition, optical flow estimation, and SLAM.
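To illustrate the kind of knowledge transfer the project would explore, below is a minimal sketch of soft-target knowledge distillation (Hinton et al., 2015) in PyTorch. The function name, temperature T, and weighting alpha are illustrative assumptions, not part of the project description; the concrete distillation objective would be a subject of the thesis itself.

```python
# Minimal sketch of soft-target knowledge distillation, assuming PyTorch.
# All names and hyperparameters here are illustrative placeholders.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
    """Blend a soft-target KL term (teacher -> student) with the usual
    hard-label cross-entropy loss.

    T     -- temperature that softens both output distributions
    alpha -- weight between the distillation term and the supervised term
    """
    # Soft targets from the (frozen) teacher, softened by temperature T.
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    # KL divergence; rescaled by T^2 so gradient magnitudes stay comparable
    # across temperatures.
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T * T)
    # Standard cross-entropy on the ground-truth labels.
    ce = F.cross_entropy(student_logits, targets)
    return alpha * kd + (1.0 - alpha) * ce
```

In a training loop, teacher_logits would come from a frozen, pretrained Transformer evaluated under torch.no_grad(), while student_logits come from the efficient model being trained.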
Interested candidates should send their CV and transcripts (Bachelor's and Master's) to Nikola Zubic (zubic@ifi.uzh.ch), Giovanni Cioffi (cioffi@ifi.uzh.ch), and Davide Scaramuzza (sdavide@ifi.uzh.ch).