Transfer Learning with Inpainting: Learn hand manipulation skills with human hand demonstrations
Imitation learning has demonstrated remarkable success across embodiments ranging from parallel grippers to dexterous robotic hands. However, current state-of-the-art imitation learning pipelines depend heavily on demonstrations collected with the robot's own embodiment.
Inpainting augmentation techniques present an exciting opportunity to overcome this limitation, enabling robots to learn from demonstrations involving other embodiments. This is particularly promising for dexterous hand manipulation, where skills can potentially be learned directly from extensive human hand datasets.
This project focuses on adapting inpainting augmentation methods to robotic hand manipulation. The goal is to integrate these techniques into our cutting-edge imitation learning framework and hardware, enabling efficient transfer learning from human demonstrations.
Keywords: dexterous hand manipulation, transfer learning, cross-embodiment
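To make the augmentation idea concrete, below is a minimal Python sketch of the cross-painting step in the spirit of Mirage: the human hand is masked out of each demonstration frame, the hole is filled from the surrounding background, and a rendered robot hand is composited in its place. The segmentation mask and robot render are assumed to come from upstream components (e.g., a hand segmenter and the robot's kinematic model); all function and variable names here are illustrative, not part of our framework.

```python
import cv2
import numpy as np

def crosspaint_frame(frame: np.ndarray,
                     hand_mask: np.ndarray,
                     robot_render: np.ndarray,
                     robot_mask: np.ndarray) -> np.ndarray:
    """Replace the human hand in `frame` with a rendered robot hand.

    frame:        HxWx3 uint8 demonstration image
    hand_mask:    HxW uint8, nonzero where the human hand is visible
    robot_render: HxWx3 uint8 render of the robot hand at the retargeted pose
    robot_mask:   HxW uint8, nonzero where the render shows the robot
    """
    # 1. Remove the human hand: fill the masked region from the surrounding
    #    background. Classical Telea inpainting is used here for simplicity;
    #    a learned (e.g., diffusion-based) inpainter could be swapped in.
    background = cv2.inpaint(frame, hand_mask, 3, cv2.INPAINT_TELEA)

    # 2. Composite the rendered robot hand on top of the cleaned background.
    mask3 = (robot_mask > 0)[..., None]  # broadcast HxW mask over channels
    return np.where(mask3, robot_render, background).astype(np.uint8)
```

Applied frame by frame to a human demonstration video, this yields training images that look as if the robot itself had performed the task.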
**Work packages**
- Literature research
- Implement inpainting augmentation
- Imitation learning policy training (a minimal sketch follows this list)
- Hardware deployment
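As a reference point for the policy-training work package, here is a deliberately minimal behavior-cloning loop in PyTorch. It regresses hand joint targets from an observation embedding with an MSE loss; the dimensions, network, and data are placeholders, and the actual training setup (e.g., one built on pretrained latent representations, as in Liconti et al. below) will differ.

```python
import torch
import torch.nn as nn

# Placeholder dimensions: a visual observation embedding in, hand joint
# targets out. Real values depend on the encoder and the robotic hand.
OBS_DIM, ACT_DIM = 512, 16

policy = nn.Sequential(
    nn.Linear(OBS_DIM, 256), nn.ReLU(),
    nn.Linear(256, ACT_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

def bc_step(obs: torch.Tensor, expert_action: torch.Tensor) -> float:
    """One behavior-cloning update: regress the demonstrated action."""
    loss = nn.functional.mse_loss(policy(obs), expert_action)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test on random data; in practice the (obs, action) pairs come
# from the cross-painted human demonstrations.
print(bc_step(torch.randn(32, OBS_DIM), torch.randn(32, ACT_DIM)))
```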
**Requirements**
- Strong programming skills in Python
- Experience in reinforcement learning
**Publication**
This project will focus primarily on algorithm design and system integration. Promising results will be submitted to robotics and machine learning conferences that highlight outstanding robotic performance.
**Related literature**
- Lawrence Yunliang Chen, Kush Hari, Karthik Dharmarajan, Chenfeng Xu, Quan Vuong, Ken Goldberg: Mirage: Cross-Embodiment Zero-Shot Policy Transfer with Cross-Painting. CoRR abs/2402.19249 (2024)
- Zoey Qiuyu Chen, Shosuke C. Kiami, Abhishek Gupta, Vikash Kumar: GenAug: Retargeting behaviors to unseen situations via Generative Augmentation. Robotics: Science and Systems 2023
- Lawrence Yunliang Chen, Chenfeng Xu, et al.: RoVi-Aug: Robot and Viewpoint Augmentation for Cross-Embodiment Robot Learning (2024)
- Davide Liconti, Yasunori Toshimitsu, Robert K. Katzschmann: Leveraging Pretrained Latent Representations for Few-Shot Imitation Learning on a Dexterous Robotic Hand. CoRR abs/2404.16483 (2024)
Chenyu Yang: chenyu.yang@srl.ethz.ch
Davide Liconti: davide.liconti@srl.ethz.ch
To apply, please send both contacts a short motivation statement for this project, together with a copy of your CV, your transcripts, and two reference contacts if you have worked on past projects.