Real-time point-cloud compression, streaming and displaying on virtual/augmented reality glasses for teleoperation
To visualize both static and dynamic environments in a natural and immersive manner, point clouds can be used as an additional modality in teleoperation applications. Nowadays, high-resolution point clouds can be acquired with professional and consumer sensors such as the Microsoft Kinect, the Intel RealSense, and LiDAR scanners. Eventually, conventional cameras are also expected to produce point-cloud content, either by employing additional depth sensors or by analyzing multi-view image content. Despite ongoing scientific efforts and standardization activities, such as those within MPEG, efficient compression and streaming algorithms for point clouds do not yet exist.

In this framework, we would like to build a complete end-to-end point-cloud streaming system for teleoperation applications. The candidate is expected to set up the testbed for the framework and to implement the software packages for the sensors, the compression methods, and the AR display glasses. The candidate will integrate the available point-cloud compression schemes into the system and evaluate their performance in terms of coding efficiency and subjective visual quality. Furthermore, based on this quality and efficiency analysis, the candidate is expected to propose new methods that improve the performance of the available approaches.
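To give a flavor of the kind of lossy geometry coding the project would evaluate, the sketch below shows a minimal voxel-grid quantization step in Python with NumPy. This is only an illustrative baseline, not one of the compression schemes mentioned above; the function names and the `voxel_size` parameter are assumptions chosen for the example. Real candidates such as the MPEG point-cloud codecs add octree structuring and entropy coding on top of steps like this.

```python
import numpy as np

def compress_point_cloud(points, voxel_size=0.05):
    """Quantize points to a voxel grid and merge duplicates (lossy).

    Returns integer voxel indices (suitable for downstream entropy
    coding) plus the grid origin needed for reconstruction.
    """
    origin = points.min(axis=0)
    voxels = np.floor((points - origin) / voxel_size).astype(np.int32)
    unique_voxels = np.unique(voxels, axis=0)  # merge co-located points
    return unique_voxels, origin

def decompress_point_cloud(voxels, origin, voxel_size=0.05):
    """Reconstruct approximate points at the voxel centres."""
    return origin + (voxels.astype(np.float64) + 0.5) * voxel_size

# Synthetic test cloud: 10,000 random points inside a 1 m cube.
rng = np.random.default_rng(0)
cloud = rng.random((10_000, 3))

voxels, origin = compress_point_cloud(cloud, voxel_size=0.05)
restored = decompress_point_cloud(voxels, origin, voxel_size=0.05)

# Fewer voxels than input points: duplicates within a voxel were merged.
assert len(voxels) < len(cloud)
```

The per-axis geometric error of this scheme is bounded by half the voxel size, which illustrates the rate-distortion trade-off the candidate would measure: coarser voxels mean fewer symbols to transmit but larger reconstruction error.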