Going Beyond Faces in Realistic Head Avatars
Digital humans are a popular and fast-growing area with manifold applications in AR/VR. However, the dynamics of existing head avatars are mostly limited to the facial region. In this project, we will focus on realistically rendered avatars that include accurate modeling of both faces and hair, with physically accurate dynamics and interactions.
Keywords: Computer Vision, Computer Graphics, 3D Reconstruction
There has been rapid progress in creating realistic and animatable 3D facial avatars from images, video, and text. What is still missing are accurate dynamic head avatars with realistic hair geometry. Existing methods typically represent hair with coarse mesh geometry, implicit surfaces, or neural radiance fields. While these representations improve visual quality, the reconstructed hair remains static despite head movements, which reduces realism and introduces aliasing artifacts during novel-view changes in existing systems.
This project aims to address these issues and introduce dynamic and realistically rendered hair to the head avatars.
The project will build on three previous hair modeling works carried out by the supervisors: implicit fields for hair modeling (Neural Haircut, ICCV 2023 oral), conditional diffusion models for hair (HAAR, CVPR 2024), and 3D Gaussians for highly accurate strand reconstruction and rendering (Strand-Aligned Gaussians, under review).
This project aims to develop a method that creates an animatable, dynamic head avatar with physically accurate hair geometry from monocular capture of a subject. The results of this work are expected to be published at a top computer vision conference (we aim for CVPR 2025).