Learned end-effector pose estimation using Lidar and cameras for autonomous construction machines
Gravis Robotics is an ETH spin-off from the Robotic Systems Lab (RSL) working on the automation of heavy machinery (https://gravisrobotics.com/). In this project you will work with the Gravis team to develop (1) a segmentation algorithm that detects the shovel pose in lidar point clouds, (2) a calibration procedure that registers the lidar to the machine using the detected shovel pose and the existing arm-pose state estimation, and (3) a learning algorithm that outputs the shovel pose from images. You will carry out your project at Gravis under joint supervision with RSL.

Autonomous construction depends heavily on knowing the exact position of the bucket in a global frame. The global frame is usually defined by GNSS antennas mounted on the cabin, while the kinematic chain from the cabin to the end-effector is measured by inertial sensors, so the resulting shovel pose in the global frame is inaccurate. To overcome this, our approach is to combine knowledge of the machine with advanced solid-state lidar sensors, such as the Livox Mid-70 or Livox Avia, and high-resolution cameras. The Livox Avia is known for generating high-density point clouds through its time-varying scanning pattern, which helps with point cloud segmentation.

This project is a mix of working with datasets and validating the approach on real hardware. You will have the opportunity to work with a real CAT 323 or Menzi Muck M545 excavator. The desired outcome of this project is a successful implementation of the calibration algorithm on one of our machines. We are open to tailoring the project to your needs (see requirements below) and look forward to receiving your application.
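To make the segmentation idea concrete, the sketch below shows one way the machine's own state estimate could bootstrap the task: crop the lidar scan around the kinematically predicted shovel pose, then keep the dominant Euclidean cluster. This is a minimal sketch using Open3D under our own assumptions (function name, ROI size, clustering parameters are placeholders), not the existing Gravis pipeline.

```python
# Minimal sketch (our assumption, not the existing Gravis pipeline): crop the
# lidar scan around the kinematically predicted shovel pose, then keep the
# dominant Euclidean cluster as the shovel.
import numpy as np
import open3d as o3d

def segment_shovel(pcd: o3d.geometry.PointCloud,
                   T_lidar_shovel_prior: np.ndarray,
                   roi_extent=(2.0, 2.0, 1.5),
                   eps=0.10, min_points=30) -> o3d.geometry.PointCloud:
    """T_lidar_shovel_prior: rough 4x4 shovel pose in the lidar frame, from the
    arm-state estimation and an approximate lidar mounting guess."""
    # Oriented region of interest centred on the predicted shovel pose.
    roi = o3d.geometry.OrientedBoundingBox(T_lidar_shovel_prior[:3, 3],
                                           T_lidar_shovel_prior[:3, :3],
                                           np.asarray(roi_extent))
    cropped = pcd.crop(roi)

    # Euclidean clustering; the shovel should dominate the cropped region.
    labels = np.array(cropped.cluster_dbscan(eps=eps, min_points=min_points))
    if labels.size == 0 or labels.max() < 0:
        return cropped  # nothing clustered, fall back to the raw crop
    largest = np.argmax(np.bincount(labels[labels >= 0]))
    return cropped.select_by_index(np.where(labels == largest)[0].tolist())

# Usage (placeholder path):
# shovel = segment_shovel(o3d.io.read_point_cloud("scan.pcd"), T_lidar_shovel_prior)
```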
- Investigate the literature and choose appropriate algorithms
- Segmentation of the shovel in the point cloud
- Lidar calibration to the machine’s base frame using the existing arm-state estimation (a registration sketch follows this list)
- Set up a learning pipeline (a regression-model sketch also follows this list)
- Hardware experiments, accuracy evaluation
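For the calibration work package, one standard formulation (a reasonable approach we assume here, not necessarily the method used at Gravis) is rigid point-set registration: shovel reference points detected in the lidar frame are paired with the same points predicted in the machine's base frame by the arm-state estimation, collected over several arm configurations, and the lidar-to-base transform follows in closed form from the Kabsch/Umeyama solution.

```python
# Minimal sketch of the calibration step, assuming shovel reference points
# (e.g. detected shovel centroids or tip points) are already paired between the
# lidar frame and the base frame across several arm configurations.
# Closed-form Kabsch/Umeyama alignment without scale.
import numpy as np

def estimate_T_base_lidar(p_lidar: np.ndarray, p_base: np.ndarray) -> np.ndarray:
    """p_lidar, p_base: (N, 3) corresponding points. Returns a 4x4 transform T
    such that p_base ≈ R @ p_lidar + t with R = T[:3, :3], t = T[:3, 3]."""
    mu_l, mu_b = p_lidar.mean(axis=0), p_base.mean(axis=0)
    H = (p_lidar - mu_l).T @ (p_base - mu_b)                     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = mu_b - R @ mu_l
    return T

# Per-configuration residuals give a first accuracy metric for the evaluation:
# T = estimate_T_base_lidar(p_lidar, p_base)
# err = np.linalg.norm(p_base - (p_lidar @ T[:3, :3].T + T[:3, 3]), axis=1)
```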
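For the learning work package, a simple baseline (again an assumption on our side, not Gravis' actual pipeline) is direct pose regression from images, supervised with shovel poses produced by the calibrated lidar setup:

```python
# Minimal sketch: regress the shovel pose from a single image with a standard
# torchvision ResNet-18 backbone and a small head that outputs translation (3)
# plus a unit quaternion (4). Target layout and loss design are placeholders.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ShovelPoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)   # torchvision >= 0.13; or pretrained weights
        backbone.fc = nn.Identity()         # expose the 512-d global feature
        self.backbone = backbone
        self.head = nn.Sequential(nn.Linear(512, 256), nn.ReLU(),
                                  nn.Linear(256, 7))  # [tx, ty, tz, qw, qx, qy, qz]

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        out = self.head(self.backbone(img))
        t, q = out[:, :3], out[:, 3:]
        q = q / q.norm(dim=1, keepdim=True).clamp_min(1e-8)  # normalise quaternion
        return torch.cat([t, q], dim=1)

# Smoke test: pose = ShovelPoseNet()(torch.randn(1, 3, 224, 224))  # shape (1, 7)
```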
- High motivation and interest in the topic
- Structured, independent, and goal-oriented working style
- Excellent programming skills in C++ or Python
- Bonus: Experience with machine learning (PyTorch)
- Bonus: Experience with geometry processing libraries (e.g., Open3D)
- Bonus: Experience with ROS or ROS2