Explicit 3D avatars from a video
Learning animatable 3D body avatars has diverse applications in gaming, video production, and AR/VR communication. While recent methods using neural implicit representations, such as Signed Distance Functions (SDFs), can capture high-quality geometry, they are often inefficient to train and challenging to animate. Additionally, these implicit avatars must be converted into meshes before they can be rendered in standard engines, a step that typically reduces rendering quality.
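To make the implicit-to-mesh conversion step concrete, here is a minimal sketch of the standard approach: sample the SDF on a regular grid and extract its zero level set with marching cubes. The unit-sphere SDF is a toy stand-in for a learned avatar, and the use of scikit-image is an assumption, not part of any method mentioned above.

```python
import numpy as np
from skimage import measure

# Sample the SDF of a unit sphere on a regular 3D grid (toy stand-in
# for a learned implicit avatar).
res = 64
grid = np.linspace(-1.5, 1.5, res)
x, y, z = np.meshgrid(grid, grid, grid, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 1.0  # signed distance to the sphere

# Marching cubes extracts the zero level set as an explicit triangle mesh.
verts, faces, normals, _ = measure.marching_cubes(sdf, level=0.0)
print(verts.shape, faces.shape)  # (num_vertices, 3), (num_triangles, 3)
```

The extracted mesh is a piecewise-linear approximation whose resolution is capped by the sampling grid, which is one source of the quality loss mentioned above.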
Recent work has made great progress in using explicit representations, e.g., point clouds and meshes, to learn 3D geometry. Nvdiffrec employs meshes to learn high-quality static geometry. PointAvatar leverages animatable point clouds to represent head avatars. Can we extend these approaches to full-body, clothed avatars?
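Animating an explicit representation such as a point cloud is commonly done with linear blend skinning (LBS), where each point is deformed by a weighted blend of bone transforms. The sketch below uses toy bone transforms and skinning weights (not from any real body model) purely to illustrate the mechanism.

```python
import numpy as np

def lbs(points, weights, transforms):
    """Deform points (N,3) with per-point skinning weights (N,B) over
    B bone transforms (B,4,4) by blending the transforms linearly."""
    homo = np.concatenate([points, np.ones((len(points), 1))], axis=1)  # (N,4)
    blended = np.einsum("nb,bij->nij", weights, transforms)             # (N,4,4)
    deformed = np.einsum("nij,nj->ni", blended, homo)                   # (N,4)
    return deformed[:, :3]

# Toy example: bone 0 translates by +1 in x, bone 1 is the identity.
T = np.stack([np.eye(4), np.eye(4)])
T[0, 0, 3] = 1.0
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
w = np.array([[1.0, 0.0], [0.5, 0.5]])  # first point follows bone 0 only
print(lbs(pts, w, T))  # first point moves by +1 in x, second by +0.5
```

In a full-body setting, the weights would come from a parametric body model and the transforms from a pose estimate per video frame; the blending itself stays exactly this simple.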
After reconstructing the avatar, there are also several exciting follow-up tasks. For example, can we modify the avatar given text guidance? Can we learn avatars from only a few images instead of a video?
Keywords: 3D animatable avatars, mesh, point cloud
Semester Project
Master Thesis
CLS Student Project [managed by Max Planck ETH Center for Learning Systems]