Robotic Systems Lab: Open Opportunities

Robots have become increasingly capable in recent years, performing challenging tasks such as taking elevators and cooking shrimp. Large language models have further enabled them to accomplish long-horizon tasks from simple natural-language instructions. However, with this increased functionality comes the risk that intelligent robots might unintentionally or intentionally harm people based on an operator's instructions. Meanwhile, significant efforts have been made to restrain large language models from generating harmful content. Can these efforts be applied to robotics to ensure safe interactions between robots and humans, even as robots become more capable? This project aims to answer this question.
- Engineering and Technology, Information, Computing and Communication Sciences
- Master Thesis, Semester Project
| The goal of this project is to apply LLMs to teach the ANYmal robot, via Reinforcement Learning (RL), new low-level skills that the task planner identifies as missing. - Intelligent Robotics
- Master Thesis
| Robots may not be able to complete tasks fully autonomously in unstructured or unseen environments. However, direct teleoperation by human operators can also be challenging: it is difficult to provide the operator with full situational awareness, and degraded communication can lead to a loss of control authority. This motivates the use of shared autonomy to assist the operator and thereby enhance task performance.
In this project, we aim to develop a shared autonomy framework for teleoperation of manipulator arms that assists non-expert users or compensates for degraded communication. Imitation learning approaches such as diffusion policies have emerged as a popular and scalable way to learn manipulation tasks [1, 2]. Recent work has combined them with partial diffusion to enable shared autonomy [3]; however, the tasks were restricted to simple 2D domains. In this project, we wish to extend previous work in the lab on diffusion-based imitation learning to enable shared autonomy, so that non-expert users can complete unseen tasks, including in degraded communication environments.
- Intelligent Robotics, Robotics and Mechatronics
- ETH Zurich (ETHZ), Semester Project
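The partial-diffusion idea behind the shared-autonomy project above can be sketched as follows. This is a minimal illustration, not the lab's implementation: `denoise_fn` stands in for a trained diffusion policy's reverse step, and `betas` is an assumed noise schedule. The operator's proposed action is forward-noised to diffusion step `k` and then denoised by the policy, so `k` interpolates between full operator control (`k = 0`) and full autonomy.

```python
import numpy as np

def shared_autonomy_action(user_action, denoise_fn, betas, k, rng=None):
    """Partial-diffusion shared autonomy (sketch): forward-noise the
    operator's action to step k, then denoise it with the learned policy.
    `denoise_fn` and `betas` are placeholders for a trained model."""
    rng = np.random.default_rng(rng)
    # alpha_bar[0] = 1, so k = 0 returns the operator's action unchanged
    alpha_bar = np.concatenate([[1.0], np.cumprod(1.0 - betas)])
    x = (np.sqrt(alpha_bar[k]) * user_action
         + np.sqrt(1.0 - alpha_bar[k]) * rng.standard_normal(user_action.shape))
    for t in range(k, 0, -1):
        x = denoise_fn(x, t)  # one reverse step of the diffusion policy
    return x
```

The single knob `k` is what makes this attractive for teleoperation: it can be raised when communication degrades or the user is inexperienced, and lowered to give the operator direct authority.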
| The GELLO system proposed in [1] is a low-cost “puppet” robot arm that is used to teleoperate a larger, main robot arm. This project aims to adapt this open-source design to enable teleoperation of the DynaArm, a robot manipulator arm custom designed by the Robotic Systems Lab to be mounted on the ANYmal quadruped platform. Such a system simplifies the existing DynaArm teleoperation interface, which consists of a second identical DynaArm used purely as a human interface device [2] and may be an unnecessarily expensive and cumbersome solution. The system developed may have applications in remote teleoperation for industrial inspection or disaster response scenarios, as well as providing an interface for training imitation learning models, which may optionally be explored as time permits.
[1] Wu, Philipp et al. "GELLO: A General, Low-Cost, and Intuitive Teleoperation Framework for Robot Manipulators". arXiv preprint (2024)
[2] Fuchioka, Yuni et al. "AIRA Challenge: Teleoperated Mobile Manipulation for Industrial Inspection". YouTube video (2024) - Intelligent Robotics, Mechanical Engineering, Robotics and Mechatronics, Simulation and Modelling
- Bachelor Thesis, Master Thesis, Semester Project
| This project addresses the sample inefficiency of classical reinforcement learning by exploring smart weight initialization. Inspired by computer vision, we aim to use pre-trained representations to enhance learning across different hardware (cross-embodiment) and skills (cross-skill), reducing training times and potentially improving the overall effectiveness of reinforcement learning policies. - Engineering and Technology, Intelligent Robotics
- Master Thesis
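The warm-starting idea in the project above can be sketched in a few lines. This is an illustrative assumption about how such an initialization might work (the parameter-dict layout and names are hypothetical): tensors from a pre-trained policy are copied into a fresh one wherever name and shape match, while mismatched parts, such as an embodiment-specific action head, keep their random initialization.

```python
import numpy as np

def warm_start(policy_params, pretrained_params):
    """Sketch of smart weight initialization: copy every pre-trained
    tensor whose name and shape match the fresh policy; anything else
    (e.g. the embodiment-specific action head) keeps its random init."""
    loaded, skipped = [], []
    for name, w in policy_params.items():
        src = pretrained_params.get(name)
        if src is not None and src.shape == w.shape:
            policy_params[name] = src.copy()
            loaded.append(name)
        else:
            skipped.append(name)
    return loaded, skipped
```

Shape-checked partial loading is what allows the same pre-trained representation to be reused across robots whose observation and action spaces differ.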
| In recent years, advancements in reinforcement learning have achieved remarkable success in quadruped locomotion tasks. Despite their similar structural designs, quadruped robots often require uniquely tailored reward functions for effective motion pattern development, limiting the transferability of learned behaviors across different models. This project proposes to bridge this gap by developing a unified, continuous latent representation of quadruped motions applicable across various robotic platforms. By mapping these motions onto a shared latent space, the project aims to create a versatile foundation that can be adapted to downstream tasks for specific robot configurations.
- Engineering and Technology, Information, Computing and Communication Sciences
- Master Thesis
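The shared-latent-space idea in the project above can be sketched structurally as follows. The class and its linear maps are purely illustrative stand-ins for learned encoder/decoder networks; the point is the plumbing: each robot gets its own encoder and decoder around one common latent space, so a motion recorded on one platform can be retargeted to another.

```python
import numpy as np

class SharedMotionSpace:
    """Per-robot encoder/decoder pairs around one shared latent space.
    Random linear maps stand in for the learned networks (sketch only)."""
    def __init__(self, latent_dim, seed=0):
        self.latent_dim = latent_dim
        self.enc, self.dec = {}, {}
        self.rng = np.random.default_rng(seed)

    def register_robot(self, name, state_dim):
        # in practice these would be trained networks, not random matrices
        self.enc[name] = self.rng.standard_normal((self.latent_dim, state_dim))
        self.dec[name] = self.rng.standard_normal((state_dim, self.latent_dim))

    def retarget(self, motion, src, dst):
        """Map a (T, src_state_dim) motion trajectory from robot `src`
        onto robot `dst` through the shared latent space."""
        z = motion @ self.enc[src].T   # (T, latent_dim)
        return z @ self.dec[dst].T     # (T, dst_state_dim)
```

Because every robot only touches the latent space through its own encoder/decoder pair, downstream tasks trained on the latent representation transfer to new platforms by training just the platform-specific maps.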
| The remarkable agility of animals, characterized by their rapid, fluid movements and precise interaction with their environment, serves as an inspiration for advancements in legged robotics. Recent progress in the field has underscored the potential of learning-based methods for robot control. These methods streamline the development process by optimizing control mechanisms directly from sensory inputs to actuator outputs, often employing deep reinforcement learning (RL) algorithms. By training in simulated environments, these algorithms can develop locomotion skills that are subsequently transferred to physical robots. Although this approach has achieved robust locomotion, mimicking the wide range of agile capabilities observed in animals remains a significant challenge. Traditionally, manually crafted controllers have succeeded in replicating complex behaviors, but their development is labor-intensive and demands a high level of expertise in each specific skill. Reinforcement learning offers a promising alternative by potentially reducing the manual labor involved in controller development. However, crafting learning objectives that lead to the desired behaviors also requires considerable skill-specific expertise.
- Information, Computing and Communication Sciences
- Master Thesis
| The project aims to explore curriculum learning techniques to push the limits of quadruped running speed using reinforcement learning. By systematically designing and implementing curricula that guide the learning process, the project seeks to develop a quadruped controller capable of achieving the fastest possible forward locomotion. This involves not only optimizing the learning process but also ensuring the robustness and adaptability of the learned policies across various running conditions. - Engineering and Technology
- Master Thesis
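One common shape for the curriculum described above is an adaptive speed schedule: the commanded velocity is raised only once the policy tracks the current target reliably. The thresholds and step sizes below are assumptions for illustration, not values from the project.

```python
class SpeedCurriculum:
    """Adaptive curriculum sketch: raise the commanded forward speed once
    the policy tracks the current target reliably (all numbers assumed)."""
    def __init__(self, v_start=1.0, v_max=6.0, step=0.5, promote_at=0.8):
        self.v_target = v_start
        self.v_max = v_max
        self.step = step
        self.promote_at = promote_at

    def update(self, tracking_success_rate):
        """Call once per training epoch with the fraction of episodes
        that tracked the current speed command within tolerance."""
        if tracking_success_rate >= self.promote_at:
            self.v_target = min(self.v_target + self.step, self.v_max)
        return self.v_target
```

Gating progression on measured tracking performance, rather than a fixed schedule, keeps the task within reach of the current policy while still pushing toward the speed limit of the hardware.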
| The advancement in humanoid robotics has reached a stage where mimicking complex human motions with high accuracy is crucial for tasks ranging from entertainment to human-robot interaction in dynamic environments. Traditional approaches in motion learning, particularly for humanoid robots, rely heavily on motion capture (MoCap) data. However, acquiring large amounts of high-quality MoCap data is both expensive and logistically challenging. In contrast, video footage of human activities, such as sports events or dance performances, is widely available and offers an abundant source of motion data.
Building on recent advances in extracting human motion from videos, such as WHAM and the approach of "Learning Physically Simulated Tennis Skills from Broadcast Videos", this project aims to develop a system that extracts human motion from videos and applies it to teach a humanoid robot how to perform similar actions. The primary focus will be on extracting dynamic and expressive motions from videos, such as soccer players' celebrations, and using these extracted motions as reference data for reinforcement learning (RL) and imitation learning on a humanoid robot. - Engineering and Technology
- Master Thesis
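Once reference motions have been extracted from video, the imitation objective for the RL stage is typically a tracking reward of the kind popularized by DeepMimic. The following is a minimal sketch under that assumption; the weight `w` and the flat pose representation are illustrative choices, not the project's specification.

```python
import numpy as np

def imitation_reward(q_robot, q_ref, w=2.0):
    """DeepMimic-style tracking reward (sketch): exponentiated squared
    error between the robot's pose and the video-extracted reference
    pose at the same timestep. The weight w is an assumed parameter."""
    return float(np.exp(-w * np.sum((q_robot - q_ref) ** 2)))
```

The reward is 1 when the robot matches the reference exactly and decays smoothly with pose error, which gives the RL agent a dense signal even from noisy video-derived references.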
| Humanoid robots, designed to mimic the structure and behavior of humans, have seen significant advancements in kinematics, dynamics, and control systems. Teleoperation of humanoid robots involves complex control strategies to manage bipedal locomotion, balance, and interaction with environments. Research in this area has focused on developing robots that can perform tasks in environments designed for humans, from simple object manipulation to navigating complex terrains.

Reinforcement learning has emerged as a powerful method for enabling robots to learn from interactions with their environment, improving their performance over time without explicit programming for every possible scenario. In the context of humanoid robotics and teleoperation, RL can be used to optimize control policies, adapt to new tasks, and improve the efficiency and safety of human-robot interactions. Key challenges include the high dimensionality of the action space, the need for safe exploration, and the transfer of learned skills across different tasks and environments.

Integrating human motion tracking with reinforcement learning on humanoid robots represents a cutting-edge area of research. This approach uses human motion data as input to train RL models, enabling the robot to learn more natural and human-like movements. The goal is to develop systems that can not only replicate human actions in real time but also adapt and improve their responses over time through learning. Challenges in this area include ensuring real-time performance, dealing with the variability of human motion, and maintaining the stability and safety of the humanoid robot.
- Information, Computing and Communication Sciences
- Master Thesis