Uncertainty-aware localization of a moving object in a realistic simulation for a robotic task using a fixed camera
Robots need to be aware of their surroundings to operate effectively. The accurate 3D position and orientation of an object, often referred to as its 6 Degrees of Freedom (6 DoF) pose, is an important measurement for precise robot feedback control. To further improve the performance of such a controller for a robot arm, we also need to know the uncertainty of our 6 DoF pose measurement; this knowledge is crucial for the robust design of a data-driven feedback robot controller.
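As a hedged illustration of what such a measurement could look like in code, the sketch below bundles a 6 DoF pose with a covariance matrix; the class name, the Gaussian assumption, and the [x, y, z, roll, pitch, yaw] ordering are illustrative choices, not part of the project specification.

from dataclasses import dataclass
import numpy as np

@dataclass
class PoseEstimate:
    position: np.ndarray      # (3,) translation [x, y, z] in metres
    orientation: np.ndarray   # (4,) unit quaternion [x, y, z, w]
    covariance: np.ndarray    # (6, 6) covariance over [x, y, z, roll, pitch, yaw]

    def position_std(self) -> np.ndarray:
        """1-sigma uncertainty of the translation components."""
        return np.sqrt(np.diag(self.covariance)[:3])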
Deep neural networks have shown promising capabilities for uncertainty-aware object detection. However, a large amount of manually labeled data is required to train a neural network for 3D detection. One approach is to use synthetic data, from which a sufficient amount of pre-labeled training data can be generated with little effort. Nevertheless, we then need to overcome the reality gap between synthetic data and the variations found in real scenarios.
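One common way to address the reality gap is domain randomization. The sketch below shows a minimal, assumed example in PyBullet that perturbs the object's color and the rendering light direction between training frames; the function name and the parameter ranges are hypothetical placeholders.

import numpy as np
import pybullet as p

def randomize_scene(object_id: int, rng: np.random.Generator) -> dict:
    # Randomize the object's color; real setups often randomize textures too.
    p.changeVisualShape(object_id, -1,
                        rgbaColor=list(rng.uniform(0.0, 1.0, 3)) + [1.0])
    # Randomize the light direction used when rendering the camera image.
    light_direction = rng.uniform(-1.0, 1.0, 3).tolist()
    return {"lightDirection": light_direction}

Passing the returned keyword arguments to p.getCameraImage(...) draws each rendered training frame under different lighting.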
In this project, we aim to create an uncertainty-aware real-time perception pipeline that precisely localizes a moving object in simulation. The main task is to develop a perception module in Python that takes RGB data as input and estimates the target object's position in the workspace, with an uncertainty assigned to that estimate. The student will render a realistic simulation in a physics engine, using a camera fixed above a workspace where an object moves on a conveyor belt.
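As one possible realization (not the project's prescribed method), the sketch below uses Monte Carlo dropout in PyTorch to attach an uncertainty to the position estimate: dropout is kept active at inference, and the spread over repeated forward passes serves as the uncertainty. The tiny architecture and the sample count are placeholder assumptions.

import torch
import torch.nn as nn

class PositionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Dropout(p=0.2),   # kept active at test time for MC dropout
            nn.Linear(32, 3),    # (x, y, z) position in the workspace
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        return self.backbone(rgb)

@torch.no_grad()
def estimate_with_uncertainty(net: PositionNet, rgb: torch.Tensor,
                              n_samples: int = 20):
    net.train()  # keep dropout stochastic; freeze batch-norm layers in practice
    samples = torch.stack([net(rgb) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)  # estimate and 1-sigma spread

For a batch of frames of shape (N, 3, H, W), estimate_with_uncertainty returns a mean position and a per-axis 1-sigma spread for each frame.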
In the Automatic Control Laboratory, we are developing a feedback controller for a robot arm performing an assembly task on a conveyor belt. The student will simulate a similar environment, in which the task is to measure, in real time, the precise location of a hole in a moving cubic object using a camera fixed above the workspace. This measurement is then used by the feedback controller of a robot arm performing an assembly task. The project will go on to analyze how quickly and precisely we can perceive objects on the fly, including the tip of a robot arm with 6 DoF, and will study the effect of variations in the speed of the moving object on the developed solution. Depending on progress, depth or tactile data can be fused with the visual data to evaluate the performance of the algorithm.
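A minimal sketch of the fixed-camera rendering in PyBullet is given below; the camera pose, field of view, and image size are placeholder values for a camera mounted above the conveyor belt, not the project's final configuration.

import pybullet as p

def render_fixed_camera(width: int = 640, height: int = 480):
    view = p.computeViewMatrix(
        cameraEyePosition=[0.5, 0.0, 1.0],     # camera mounted above the belt
        cameraTargetPosition=[0.5, 0.0, 0.0],  # looking down at the workspace
        cameraUpVector=[0.0, 1.0, 0.0],
    )
    proj = p.computeProjectionMatrixFOV(fov=60, aspect=width / height,
                                        nearVal=0.01, farVal=2.0)
    _, _, rgb, depth, _ = p.getCameraImage(width, height, view, proj)
    return rgb, depth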
You will explore state-of-the-art visual perception libraries, covering deep networks and statistical inference. PyBullet, PyTorch, scikit-learn, scikit-image, OpenCV, and the Robot Operating System (ROS) are among the toolkits that may be used in this project.
The main tasks of this project are summarized below (a minimal ROS sketch for task 1 follows the list):
1- Simulate a camera sensor communicating with a hypothetical ROS node
2- Implement an algorithm to localize a hole on a moving cube, with uncertainty
3- Improve and analyze the precision of the algorithm so that it is robust to the object's speed
4- Improve and analyze how fast the algorithm can perceive the cube as its speed varies
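For task 1, a minimal sketch of a simulated camera publishing frames to a ROS node is shown below, assuming ROS 1 with rospy and cv_bridge; the node name, topic, and frame rate are hypothetical.

import rospy
import numpy as np
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

def publish_camera_frames():
    rospy.init_node("sim_camera")
    pub = rospy.Publisher("/sim_camera/image_raw", Image, queue_size=1)
    bridge = CvBridge()
    rate = rospy.Rate(30)  # assumed 30 Hz camera
    while not rospy.is_shutdown():
        # In the project, this frame would come from the simulated camera.
        frame = np.zeros((480, 640, 3), dtype=np.uint8)
        pub.publish(bridge.cv2_to_imgmsg(frame, encoding="rgb8"))
        rate.sleep()

if __name__ == "__main__":
    publish_camera_frames()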
Depending on progress, additional aspects may be studied, including:
1- Adapt the framework to precisely localize the tip of the robot arm (the robot end-effector)
2- Integrate the localizer with our feedback control system
Mahdi Nobar, Automatic Control Laboratory (IFA), ETH Zurich
mnobar@ethz.ch