SiROP

Autonomous Systems Lab

Acronym: ASL
Homepage: http://www.asl.ethz.ch/
Country: Switzerland
ZIP, City:
Address:
Phone:
Type: Academy
Top-level organization: ETH Zurich
Parent organization: Institute of Robotics and Intelligent Systems D-MAVT
Current organization: Autonomous Systems Lab
Memberships
  • Max Planck ETH Center for Learning Systems
Partners
  • Robotic Systems Lab


Open Opportunities

Multi-Sensor Semantic Odometry

  • ETH Zurich
  • Autonomous Systems Lab
  • Other organizations: Vision for Robotics Lab

Semantic segmentation augments visual information from cameras or geometric information from LiDARs by classifying what objects are present in a scene. Fusing this semantic information with visual or geometric sensor data can improve the odometry estimate of a robot moving through the scene. Uni-modal semantic odometry approaches using camera images or LiDAR point clouds have been shown to outperform traditional single-sensor approaches. However, multi-sensor odometry approaches typically provide more robust estimation in degenerate environments.
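One simple way to fuse semantic information into an odometry front end is to down-weight point correspondences whose predicted classes disagree before the alignment step. The sketch below is a minimal illustration of that idea, not the project's intended method; the function name and the weight values are assumptions.

```python
import numpy as np

def semantic_weighted_residuals(src_pts, src_labels, tgt_pts, tgt_labels,
                                agree_w=1.0, disagree_w=0.1):
    """Per-correspondence residuals for an ICP-style alignment step,
    down-weighting pairs whose semantic classes disagree.

    src_pts, tgt_pts: (N, 3) matched point pairs from consecutive scans.
    src_labels, tgt_labels: (N,) integer class IDs from a segmentation
    network (hypothetical input; any per-point classifier would do).
    """
    residuals = np.linalg.norm(src_pts - tgt_pts, axis=1)
    # Correspondences with inconsistent semantics are likely mismatches
    # (or dynamic objects), so they contribute less to the estimate.
    weights = np.where(src_labels == tgt_labels, agree_w, disagree_w)
    return weights * residuals

# Toy example: the third correspondence has a semantic mismatch.
src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
tgt = src + 0.1
r = semantic_weighted_residuals(src, np.array([1, 2, 3]),
                                tgt, np.array([1, 2, 9]))
```

In a full pipeline these weighted residuals would feed a robust least-squares solver; the weighting scheme itself (binary agree/disagree) is the simplest possible choice.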

  • Computer Vision, Image Processing, Intelligent Robotics, Signal Processing
  • Master Thesis, Semester Project

LiDAR-Visual-Inertial Odometry with a Unified Representation

  • ETH Zurich
  • Autonomous Systems Lab
  • Other organizations: Vision for Robotics Lab

LiDAR-visual-inertial odometry approaches [1-3] aim to overcome the limitations of the individual sensing modalities by estimating a pose from heterogeneous measurements. LiDAR-inertial odometry often diverges in environments with degenerate geometric structure, and visual-inertial odometry can diverge in environments with uniform texture. Many existing LiDAR-visual-inertial approaches run independent LiDAR-inertial and visual-inertial pipelines [2-3] and combine their odometry estimates in a joint optimisation to obtain a single pose estimate. These approaches obtain a robust pose estimate in degenerate environments but often underperform LiDAR-inertial or visual-inertial methods in non-degenerate scenarios due to the complexity of maintaining and combining odometry estimates from multiple representations.
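The simplest stand-in for combining two pipelines' estimates in a joint optimisation is information-weighted fusion: each estimate contributes in proportion to its inverse covariance, so a modality that is degenerate along one axis is automatically discounted along that axis. This sketch fuses only the translation component under a Gaussian assumption; real systems optimise full poses over factor graphs.

```python
import numpy as np

def fuse_estimates(t_li, cov_li, t_vi, cov_vi):
    """Information-weighted fusion of two translation estimates
    (LiDAR-inertial and visual-inertial) with 3x3 covariances.
    Returns the fused translation and its covariance.
    """
    info_li = np.linalg.inv(cov_li)
    info_vi = np.linalg.inv(cov_vi)
    cov = np.linalg.inv(info_li + info_vi)
    t = cov @ (info_li @ t_li + info_vi @ t_vi)
    return t, cov

# LiDAR-inertial is degenerate along x (e.g., a long featureless
# corridor): huge x-variance, so its drifted x-estimate is ignored.
t_li = np.array([1.0, 0.0, 0.0])
cov_li = np.diag([100.0, 0.01, 0.01])
t_vi = np.array([0.0, 0.0, 0.0])
cov_vi = np.diag([0.01, 0.01, 0.01])
fused, fused_cov = fuse_estimates(t_li, cov_li, t_vi, cov_vi)
```

The fused x-component stays near the visual-inertial estimate because the LiDAR-inertial x-information is tiny; along y and z both modalities contribute equally.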

  • Computer Vision, Intelligent Robotics, Signal Processing
  • Master Thesis, Semester Project

Odometry and Mapping in Dynamic Environments

  • ETH Zurich
  • Autonomous Systems Lab
  • Other organizations: Vision for Robotics Lab

Existing LiDAR-inertial odometry approaches (e.g., FAST-LIO2 [1]) provide sufficiently accurate pose estimation in structured environments to capture high-quality 3D maps of static structures in real time. However, dynamic objects in the environment can reduce the accuracy of the odometry estimate and leave noisy artifacts in the captured 3D map. Existing approaches to handling dynamic objects [2-4] focus on detecting and filtering them from the captured map but typically operate independently of the odometry pipeline, so the dynamic filtering does not improve pose estimation accuracy.
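A crude version of map-level dynamic filtering is an occupancy-persistence check: a point whose voxel is occupied in only a few of the registered scans is likely a moving object, while points seen repeatedly belong to static structure. The sketch below illustrates this idea only; the cited methods [2-4] use more principled visibility- and ray-casting-based tests, and the voxel size and hit threshold here are assumptions.

```python
import numpy as np
from collections import Counter

def filter_dynamic_points(scans, voxel=0.5, min_hits=2):
    """Split merged scan points into (static, dynamic) sets.

    scans: list of (N_i, 3) arrays already registered into a common frame.
    A point is kept as static if its voxel is occupied in at least
    `min_hits` distinct scans.
    """
    hits = Counter()
    for scan in scans:
        # Count each voxel at most once per scan.
        keys = {tuple(k) for k in np.floor(scan / voxel).astype(int)}
        for k in keys:
            hits[k] += 1
    merged = np.vstack(scans)
    keys = [tuple(k) for k in np.floor(merged / voxel).astype(int)]
    static_mask = np.array([hits[k] >= min_hits for k in keys])
    return merged[static_mask], merged[~static_mask]

# Two scans share a static point near the origin; each also sees a
# transient point (e.g., a passing pedestrian) at a different location.
scan1 = np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 0.0]])
scan2 = np.array([[0.1, 0.0, 0.0], [8.0, 8.0, 0.0]])
static, dynamic = filter_dynamic_points([scan1, scan2])
```

As the text notes, a post-hoc filter like this cleans the map but does nothing for the pose estimate; feeding the static/dynamic decision back into the odometry residuals is exactly the gap the project targets.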

  • Computer Vision, Engineering and Technology, Intelligent Robotics, Signal Processing
  • Master Thesis, Semester Project