
Lifelike Agility on ANYmal by Learning from Animals

The remarkable agility of animals, characterized by rapid, fluid movements and precise interaction with their environment, serves as an inspiration for advancements in legged robotics. Recent progress in the field has underscored the potential of learning-based methods for robot control. These methods streamline development by optimizing control policies directly from sensory inputs to actuator outputs, typically with deep reinforcement learning (RL) algorithms. Trained in simulation, such policies can acquire locomotion skills that are subsequently transferred to physical robots. Although this approach has produced robust locomotion, reproducing the wide range of agile behaviors observed in animals remains a major challenge. Manually crafted controllers have succeeded in replicating complex behaviors, but their development is labor-intensive and demands deep expertise in each specific skill. Reinforcement learning offers a promising alternative by reducing the manual effort involved in controller development; however, crafting learning objectives that elicit the desired behaviors still requires considerable, skill-specific expertise.

Keywords: learning from demonstrations, imitation learning, reinforcement learning
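To make the notion of a learning objective concrete, the sketch below shows a simplified, DeepMimic-style imitation reward that scores how closely a simulated robot tracks a reference frame taken from a retargeted animal motion clip (see the related literature below). It is a minimal illustration only: the function, variable names, and weights are placeholders, not part of any existing codebase used in this project.

```python
import numpy as np

def imitation_reward(joint_pos, ref_joint_pos, base_vel, ref_base_vel,
                     w_pose=0.7, w_vel=0.3, k_pose=5.0, k_vel=1.0):
    """Simplified DeepMimic-style imitation reward (cf. Peng et al., 2018).

    Compares the simulated robot's joint positions and base velocity against
    a reference frame from an animal motion clip. Weights and scales are
    illustrative placeholders.
    """
    pose_err = np.sum((joint_pos - ref_joint_pos) ** 2)   # joint-space tracking error
    vel_err = np.sum((base_vel - ref_base_vel) ** 2)      # base-velocity tracking error
    r_pose = np.exp(-k_pose * pose_err)                   # approaches 1 when on the reference
    r_vel = np.exp(-k_vel * vel_err)
    return w_pose * r_pose + w_vel * r_vel                # weighted imitation reward in (0, 1]
```

In practice such a term is typically combined with task rewards and regularization, and the reference frames would come from the animal dataset mentioned in the work packages below.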

**Work packages**

  • Literature research
  • Skill development from an animal dataset (available)
  • Hardware deployment

**Requirements**

  • Strong programming skills in Python
  • Experience with reinforcement learning and imitation learning frameworks

**Publication**

This project will mostly focus on algorithm design and system integration. Promising results will be submitted to robotics or machine learning conferences that highlight outstanding robotic performance.

**Related literature**

This project and the following literature will make you proficient in imitation/demonstration/expert learning. A brief illustrative sketch of the adversarial-motion-prior idea follows the list.

  • Peng, X.B., et al., 2018. DeepMimic: Example-guided deep reinforcement learning of physics-based character skills. ACM Transactions on Graphics (TOG), 37(4), pp.1-14.
  • Peng, X.B., Coumans, E., Zhang, T., Lee, T.W., Tan, J. and Levine, S., 2020. Learning agile robotic locomotion skills by imitating animals. arXiv preprint arXiv:2004.00784.
  • Peng, X.B., Ma, Z., Abbeel, P., Levine, S. and Kanazawa, A., 2021. AMP: Adversarial motion priors for stylized physics-based character control. ACM Transactions on Graphics (TOG), 40(4), pp.1-20.
  • Escontrela, A., Peng, X.B., Yu, W., Zhang, T., Iscen, A., Goldberg, K. and Abbeel, P., 2022. Adversarial motion priors make good substitutes for complex reward functions. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 25-32. IEEE.
  • Li, C., Vlastelica, M., Blaes, S., Frey, J., Grimminger, F. and Martius, G., 2023. Learning agile skills via adversarial imitation of rough partial demonstrations. In Conference on Robot Learning, pp. 342-352. PMLR.
  • Tessler, C., Kasten, Y., Guo, Y., Mannor, S., Chechik, G. and Peng, X.B., 2023. CALM: Conditional adversarial latent models for directable virtual characters. In ACM SIGGRAPH 2023 Conference Proceedings, pp. 1-9.
  • Starke, S., et al., 2022. DeepPhase: Periodic autoencoders for learning motion phase manifolds. ACM Transactions on Graphics (TOG), 41(4), pp.1-13.
  • Li, C., et al. FLD: Fourier Latent Dynamics for Structured Motion Representation and Learning.
  • Han, L., Zhu, Q., Sheng, J., Zhang, C., Li, T., Zhang, Y., Zhang, H., Liu, Y., Zhou, C., Zhao, R. and Li, J., 2023. Lifelike agility and play on quadrupedal robots using reinforcement learning and generative pre-trained models. arXiv preprint arXiv:2308.15143.
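Several of the works above (AMP, Escontrela et al. 2022, CALM) replace hand-tuned tracking terms like the one sketched earlier with an adversarial motion prior: a discriminator is trained to tell reference transitions from policy transitions, and its output is turned into a style reward. The sketch below, assuming PyTorch and a toy MLP discriminator whose names and sizes are placeholders, illustrates only the reward computation; the adversarial training loop is omitted.

```python
import torch
import torch.nn as nn

class MotionDiscriminator(nn.Module):
    """Small MLP that scores state transitions (s, s') as reference-like or not.

    It would be trained (not shown) to output ~1 on transitions sampled from
    the animal motion dataset and ~-1 on transitions produced by the policy,
    following the least-squares GAN objective used in AMP (Peng et al., 2021).
    """
    def __init__(self, obs_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s, s_next):
        # Score the concatenated transition (s, s').
        return self.net(torch.cat([s, s_next], dim=-1)).squeeze(-1)

def style_reward(disc, s, s_next):
    """AMP-style reward: r = max(0, 1 - 0.25 * (D(s, s') - 1)^2)."""
    with torch.no_grad():
        d = disc(s, s_next)
    return torch.clamp(1.0 - 0.25 * (d - 1.0) ** 2, min=0.0)
```

The clamped least-squares form of the reward follows the AMP paper; the discriminator itself would be updated alternately with the RL policy so that the policy is rewarded for transitions it cannot be distinguished from the animal data by.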

Please include your CV and transcript in the submission.

**Chenhao Li**
https://breadli428.github.io/
chenhli@ethz.ch

**Victor Klemm**
https://www.linkedin.com/in/vklemm/?originalSubdomain=ch
vklemm@ethz.ch

Calendar

Earliest start: No date
Latest end: No date

Location

ETH Competence Center - ETH AI Center (ETHZ)

Other involved organizations
Course 6: Electrical Engineering and Computer Science (MIT), Robotic Systems Lab (ETHZ)

Labels

Master Thesis

Topics

  • Information, Computing and Communication Sciences