SiROP

Learning Object Manipulation from Demonstrations using Tactile Feedback and Imitation Learning

Recently, there has been significant progress in learning object manipulation from human videos. A key limitation of these methods is the absence of tactile feedback, which makes it hard to detect whether contact has been made. In this project, we therefore want to investigate how demonstrations that include tactile measurements can be used to learn object manipulation.

Keywords: Machine Learning, Tactile Sensing, Imitation Learning, Diffusion Models, Transformers


    In recent years, there has been significant progress in robotic manipulation, with applications ranging from household tasks to industrial automation. One of the key challenges in this area is enabling robots to manipulate objects with skill and precision in real-world environments. Videos of demonstrations have proven to be an effective and intuitive way to teach robots complex manipulation tasks [1,2].
    However, they often lack the fine-grained tactile feedback that humans use to adjust their grip and movement during object manipulation. In this project, we would like to investigate how we can use tactile measurements from a two-finger gripper equipped with optical tactile sensors [3] to learn various manipulation tasks from demonstrations. To this end, we will create a teleoperation setup to collect demonstration data using a robotic arm and investigate how state-of-the-art methods for imitation learning from images can be adapted to learn in the tactile domain.

    - [1] Zhao, T. Z., et al. "Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware."
    - [2] Chi, C., et al. "Diffusion Policy: Visuomotor Policy Learning via Action Diffusion."
    - [3] GelSight Mini: https://www.gelsight.com/gelsightmini/
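
    As a rough illustration of the adaptation step described above, tactile images can be treated as an additional visual modality feeding a behavior-cloning policy. The sketch below is a minimal PyTorch example under assumed shapes (a 3x64x64 tactile image and 7-D state/action vectors); it is not the project's actual architecture, and a diffusion- or transformer-based policy head in the spirit of [1,2] would replace the plain MLP.

    ```python
    import torch
    import torch.nn as nn

    class TactilePolicy(nn.Module):
        """Minimal behavior-cloning policy conditioned on a tactile image.

        Shapes are illustrative assumptions: a 3x64x64 tactile image (e.g.
        from an optical sensor such as GelSight Mini) plus a 7-D
        proprioceptive state, mapped to a 7-D end-effector action.
        """

        def __init__(self, state_dim: int = 7, action_dim: int = 7):
            super().__init__()
            # Small CNN encoder for the tactile image.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            # MLP head over concatenated tactile features and robot state.
            self.head = nn.Sequential(
                nn.Linear(32 + state_dim, 128), nn.ReLU(),
                nn.Linear(128, action_dim),
            )

        def forward(self, tactile: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
            feat = self.encoder(tactile)
            return self.head(torch.cat([feat, state], dim=-1))

    # One supervised step on random stand-in data; real data would come
    # from the teleoperated demonstrations described above.
    policy = TactilePolicy()
    opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
    tactile = torch.randn(8, 3, 64, 64)
    state = torch.randn(8, 7)
    action = torch.randn(8, 7)
    loss = nn.functional.mse_loss(policy(tactile, state), action)
    opt.zero_grad()
    loss.backward()
    opt.step()
    ```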


    - Literature review on imitation learning, including diffusion models and transformers.
    - Develop a teleoperation system to collect human demonstrations.
    - Collect a tactile dataset (e.g. for pick-up and peg-in-hole insertion).
    - Implement an imitation learning algorithm to learn from the collected data.
    - (Optional) Extend the system to a two-hand setup and learn more complex tasks, such as tying shoes.
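
    The data-collection tasks above can be sketched as follows: each teleoperated episode is a sequence of timestamped (tactile image, robot state, action) tuples that are later stacked into arrays for training. The data layout below is an illustrative assumption, not the project's actual logging format.

    ```python
    import time
    from dataclasses import dataclass, field

    import numpy as np

    @dataclass
    class Step:
        """One timestep of a teleoperated demonstration (fields assumed)."""
        t: float              # wall-clock timestamp
        tactile: np.ndarray   # H x W x 3 tactile image
        state: np.ndarray     # robot joint or end-effector state
        action: np.ndarray    # commanded action from the teleoperator

    @dataclass
    class Episode:
        steps: list = field(default_factory=list)

        def record(self, tactile, state, action):
            self.steps.append(Step(time.time(), tactile, state, action))

        def to_arrays(self):
            """Stack an episode into arrays suitable for training."""
            return (
                np.stack([s.tactile for s in self.steps]),
                np.stack([s.state for s in self.steps]),
                np.stack([s.action for s in self.steps]),
            )

    # Record a short synthetic episode in place of real sensor streams.
    ep = Episode()
    for _ in range(5):
        ep.record(np.zeros((64, 64, 3)), np.zeros(7), np.zeros(7))
    tactile, state, action = ep.to_arrays()
    ```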


    - Highly motivated and autonomous student.
    - Programming experience in Python and PyTorch, and a background in machine learning.
    - (Preferred) Experience with ROS.


    René Zurbrügg (zrene@ethz.ch)
    Arjun Bhardwaj (bhardwaj@ethz.ch)
    Jan Preisig (preisigj@ethz.ch)


Calendar

Earliest start: 2023-09-01
Latest end: No date

Location

Robotic Systems Lab (ETHZ)

Other involved organizations
ETH Competence Center - ETH AI Center (ETHZ)

Labels

Master Thesis

Topics

  • Information, Computing and Communication Sciences
  • Engineering and Technology

Documents

tactproj.pdf (3.8 MB)