Institute for Intelligent Interactive Systems: Open Opportunities

| We will explore the design space of avatars in Virtual Reality to support learning and creativity. The project will leverage the concept of "embodied cognition", a set of theories that imply that our bodies and their interaction with the environment can impact how we learn. We will develop a Unity3D-based VR environment for embodied learning that can be deployed on everyday VR headsets. - Computer Graphics, Computer-Human Interaction
- Master Thesis, Semester Project
| Many industrial assembly tasks require high accuracy when manipulating parts, e.g., when inserting an object into a confined space, such as placing a box in a bin. The required tolerances can vary widely.
While industrial manipulators are generally accurate in controlled settings, as soon as the environment is less structured than we would like, we need to leverage feedback policies to correct for inaccuracies in estimation and for unexpected or unmodeled effects.
The setting of interest in this work is robotic bin packing, i.e., enabling a robot to pick objects (initially boxes/parcels) and, as the focus of this work, place them in a box where other previously placed objects may obstruct the placement. - Robotics and Mechatronics
- Master Thesis
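As a loose illustration of what a feedback policy for placement correction might look like, here is a minimal Python sketch. The function name, the simplified error model (the residual error shrinks by the applied fraction each step), and all parameters are assumptions for illustration, not the project's method:

```python
# Toy proportional feedback correction for a placement pose.
# Assumption: each control step removes a fraction `gain` of the
# currently measured pose error (a highly idealized error model).

def feedback_correction(estimated_pose, measured_error,
                        gain=0.5, tol=1e-3, max_steps=100):
    """Iteratively correct a pose estimate until the error is below `tol`."""
    pose = list(estimated_pose)
    error = list(measured_error)
    for _ in range(max_steps):
        if max(abs(e) for e in error) < tol:
            break
        # P-control step: move against a fraction of the measured error.
        pose = [p - gain * e for p, e in zip(pose, error)]
        # Idealized plant: the remaining error shrinks accordingly.
        error = [(1 - gain) * e for e in error]
    return pose, error
```

In a real insertion task, `measured_error` would come from force/torque or visual feedback rather than this idealized model, and the policy would typically be learned or tuned rather than a fixed proportional gain.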
| In this project, you are going to work with a state-of-the-art deep learning approach and generative models for building an efficient system to directly reconstruct a 3D animatable avatar from a single image.
Feel free to contact me for more details.
- Information, Computing and Communication Sciences
- ETH Zurich (ETHZ), Master Thesis, Semester Project
| The advancement in humanoid robotics has reached a stage where mimicking complex human motions with high accuracy is crucial for tasks ranging from entertainment to human-robot interaction in dynamic environments. Traditional approaches in motion learning, particularly for humanoid robots, rely heavily on motion capture (MoCap) data. However, acquiring large amounts of high-quality MoCap data is both expensive and logistically challenging. In contrast, video footage of human activities, such as sports events or dance performances, is widely available and offers an abundant source of motion data.
Building on recent advances in extracting and utilizing human motion from videos, such as WHAM and the approach of "Learning Physically Simulated Tennis Skills from Broadcast Videos", this project aims to develop a system that extracts human motion from videos and uses it to teach a humanoid robot to perform similar actions. The primary focus will be on extracting dynamic and expressive motions, such as soccer players' celebrations, and using these extracted motions as reference data for reinforcement learning (RL) and imitation learning on a humanoid robot. - Engineering and Technology
- Master Thesis
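The imitation-learning step typically scores how closely the robot tracks the reference motion; a common choice (e.g., DeepMimic-style rewards) is an exponentiated tracking error. A minimal sketch, where the joint vectors and the `scale` parameter are hypothetical:

```python
import math

def pose_tracking_reward(robot_joints, reference_joints, scale=2.0):
    """Reward in (0, 1]: 1.0 for perfect tracking, decaying with error.

    robot_joints / reference_joints: equal-length sequences of joint
    angles (a simplification; real systems also track velocities,
    end-effector positions, etc.).
    """
    err = sum((q - q_ref) ** 2
              for q, q_ref in zip(robot_joints, reference_joints))
    return math.exp(-scale * err)
```

This term would be one component of the full RL reward, combined with terms for balance, energy, and task success.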
| This project reconstructs liquids from multi-view imagery: fluid regions are segmented with methods such as Mask2Former, and the static scene is reconstructed with 3D Gaussian Splatting or Mast3r. The identified fluid clusters initialize a particle-based simulation, which is refined for temporal consistency and optionally enhanced with thermal data and vision-language models to estimate fluid properties. - Computer Vision
- Master Thesis, Semester Project
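One simple way to impose temporal consistency on the reconstructed particles is to smooth per-particle positions across frames. A toy exponential-moving-average sketch (particle correspondence across frames is assumed, which a real pipeline would have to establish, e.g., via tracking):

```python
def smooth_trajectories(frames, alpha=0.3):
    """EMA smoothing of per-particle positions across frames.

    frames: list of frames; each frame is a list of (x, y, z) positions,
    with particle identity assumed consistent across frames.
    Returns smoothed positions as nested lists.
    """
    if not frames:
        return []
    smoothed = [[list(p) for p in frames[0]]]  # copy first frame as-is
    for frame in frames[1:]:
        prev = smoothed[-1]
        smoothed.append([
            [alpha * c + (1 - alpha) * p for c, p in zip(cur, prv)]
            for cur, prv in zip(frame, prev)
        ])
    return smoothed
```

A production pipeline would more likely enforce consistency through the simulation's dynamics (e.g., penalizing deviation from simulated motion) rather than post-hoc smoothing, but the idea of trading per-frame fidelity for temporal coherence is the same.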