Learning Rapid Transport of Objects without Spillage
Demand for rapid object transportation is increasing in automated warehouses. However, high-speed motions are susceptible to unsafe behaviors. This project investigates learning a control policy for high-speed robotic manipulation.
Demand for rapid object transportation is increasing in automated warehouses. However, high-speed motions risk damaging the grasped product, spilling its contents, or causing slippage between the object and the gripper. Existing robotic controllers are therefore often conservative, generating slow but safe motions.
Recently, Ichnowski et al. [1] proposed an SQP formulation for the transportation problem that adds velocity and jerk constraints on the end-effector based on the grasped object. While their work shows successful deployment on hardware, it assumes the object's inertial properties are known, which may not be the case in advance. To overcome this limitation, in this project we would like to use model-free reinforcement learning (RL) instead [3].
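The constrained trajectory-optimization idea in [1] can be sketched, in a simplified form, as minimizing motion time subject to bounds on the end-effector's velocity and jerk (the bounds $v_{\max}$ and $j_{\max}$ here are illustrative placeholders; the actual constraints in the paper are derived from the grasp and the object):

$$
\begin{aligned}
\min_{q(\cdot),\,T}\quad & T \\
\text{s.t.}\quad & q(0) = q_{\text{start}},\quad q(T) = q_{\text{goal}}, \\
& \|\dot{x}_{\text{ee}}(t)\| \le v_{\max},\qquad \|\dddot{x}_{\text{ee}}(t)\| \le j_{\max} \quad \forall\, t \in [0, T],
\end{aligned}
$$

where $q$ are the joint positions and $x_{\text{ee}}$ is the end-effector pose. Picking safe values for $v_{\max}$ and $j_{\max}$ is exactly where the object's inertial properties enter, which motivates the model-free alternative pursued here.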
The project involves creating a simulation environment in Isaac Gym [2] and learning a suitable policy for the fast transportation task. If time permits, we would like to compare it to the model-based controller in [1] and also deploy the method on a fixed-base robotic arm.
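To make the task setup concrete, a toy environment with a Gym-style interface is sketched below. This is not the Isaac Gym API; it is a minimal 1-D point-mass stand-in where a commanded acceleration above a safe bound acts as a proxy for spillage or slip. All names, bounds, and penalty weights here are illustrative assumptions.

```python
import numpy as np

class TransportEnv:
    """Toy sketch (NOT the Isaac Gym API): a 1-D point-mass 'end effector'
    must reach a goal quickly while keeping its commanded acceleration
    below a safe bound, a stand-in for spill/slip constraints."""

    def __init__(self, goal=1.0, a_max=2.0, dt=0.02, horizon=200):
        self.goal, self.a_max, self.dt, self.horizon = goal, a_max, dt, horizon
        self.reset()

    def reset(self):
        self.pos, self.vel, self.t = 0.0, 0.0, 0
        return self._obs()

    def _obs(self):
        # Observation: position, velocity, and signed distance to goal.
        return np.array([self.pos, self.vel, self.goal - self.pos])

    def step(self, action):
        a = float(np.clip(action, -5.0, 5.0))
        # Integrate simple point-mass dynamics.
        self.pos += self.vel * self.dt + 0.5 * a * self.dt ** 2
        self.vel += a * self.dt
        self.t += 1
        # Reward: negative distance to goal, minus a "spill" penalty
        # whenever the commanded acceleration exceeds the safe bound.
        spill = max(0.0, abs(a) - self.a_max)
        reward = -abs(self.goal - self.pos) - 10.0 * spill
        done = self.t >= self.horizon or abs(self.goal - self.pos) < 1e-2
        return self._obs(), reward, done, {}
```

An RL agent trained in such an environment must trade off speed against the spill penalty, which mirrors the fast-but-safe trade-off described above; in the actual project the dynamics and spill criterion would come from the Isaac Gym physics simulation instead.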
References:
1. Ichnowski, Jeffrey, et al. "GOMP-FIT: Grasp-Optimized Motion Planning for Fast Inertial Transport." IEEE ICRA 2022.
2. Makoviychuk, Viktor, et al. "Isaac Gym: High-Performance GPU-Based Physics Simulation for Robot Learning." NeurIPS 2021.
3. Kumar, Visak, et al. "Joint Space Control via Deep Reinforcement Learning." IEEE/RSJ IROS 2021.
- Literature research
- Creating environment using NVIDIA Isaac Gym
- Training a policy for the task
- Evaluation on hardware
- Comparison to model-based solution
- Knowledge of reinforcement learning and optimization
- Good programming skills and experience with Python and PyTorch
- Experience in working with hardware is a plus
Mayank Mittal (mittalma@ethz.ch), Yuntao Ma (mayun@ethz.ch)