Multi-Person Pose Estimation and Activity Monitoring during Intubation Procedures

This project aims to develop a multi-person pose estimation and activity monitoring system for pre-surgical intubation procedures. The system will utilize multiple depth cameras to track the body poses and activities of multiple persons involved in the procedure, including the patient, anesthesiologist, and other medical staff.

Keywords: Deep Learning, Body Pose Estimation, Data Science, Activity Recognition, Object Detection, Sensor Fusion, Camera, Depth Camera, Medical, Hospital

  • This project aims to develop a system for multi-person body pose estimation using multiple depth cameras in a medical setting. Specifically, the system will be used during pre-surgical intubation procedures to estimate the 3D body poses of the involved persons, including the patient, the anesthesiologist, and other medical staff. By tracking the body poses in relation to the environment, the system will be able to identify and track touches and activities in order to improve hygiene and trace bacterial transmission during the procedure. The development will consist of adapting an existing, optimization-based 3D body pose estimation pipeline to the intubation scenario and building touch and activity monitoring on top of it (a minimal sketch of the touch check follows below). Another part of the project involves recording real intubation procedures together with our clinical partner.
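    The touch check at the core of the monitoring idea is conceptually simple: once 3D joint positions are estimated, a touch event is a hand joint coming within a small distance of the environment geometry. Below is a minimal sketch using the stack named in the tasks (Python, Open3D); the joint indices, distance threshold, and function name are illustrative assumptions, not project code.

    import numpy as np
    import open3d as o3d

    # Hypothetical indices of the wrist/hand joints in the skeleton used by
    # the pose estimator; the real indices depend on the chosen body model.
    HAND_JOINTS = [20, 21]
    TOUCH_THRESHOLD_M = 0.03  # assumed contact distance of 3 cm

    def detect_touches(joints_3d: np.ndarray,
                       environment: o3d.geometry.PointCloud):
        """Return hand joints that come within a threshold of the scene.

        joints_3d: (J, 3) array of estimated joint positions for one person.
        environment: point cloud of the static scene (e.g. fused depth frames).
        """
        kdtree = o3d.geometry.KDTreeFlann(environment)
        touches = []
        for j in HAND_JOINTS:
            # Nearest environment point; Open3D returns squared distances.
            _, idx, dist2 = kdtree.search_knn_vector_3d(joints_3d[j], 1)
            if dist2[0] <= TOUCH_THRESHOLD_M ** 2:
                touches.append((j, idx[0]))
        return touches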

  • - Developing and adapting the 3D body pose estimation system (Python, PyTorch, Open3D; see the camera-fusion sketch below)
    - Developing activity and touch monitoring
    - Recording of real intubation procedures
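    Because several depth cameras observe the scene, the per-camera point clouds must be brought into a common world frame before pose estimation. A minimal sensor-fusion sketch with Open3D, assuming calibrated camera-to-world extrinsics are available; the function name and voxel size are illustrative.

    import open3d as o3d

    def fuse_depth_cameras(point_clouds, extrinsics):
        """Merge per-camera point clouds into one world-frame cloud.

        point_clouds: list of o3d.geometry.PointCloud, one per depth camera.
        extrinsics: list of 4x4 camera-to-world transforms from calibration.
        """
        fused = o3d.geometry.PointCloud()
        for pcd, T in zip(point_clouds, extrinsics):
            pcd.transform(T)   # in-place transform into the world frame
            fused += pcd       # Open3D point clouds support concatenation
        # Downsample on a 1 cm grid to thin out overlapping regions.
        return fused.voxel_down_sample(voxel_size=0.01)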

  • - Good programming skills in Python (or Java, C#, C, C++)
    - Experience with machine (and deep) learning
    - Experience with (3D) computer vision
    - Methodical way of working
    - Ability to take ownership in shaping the direction of the project

  • As part of our research at the AR Lab within the Human Behavior Group, we work on automatically analyzing a user’s interaction with their environment in scenarios such as surgery or industrial machine operation. By collecting real-world datasets in these scenarios and using them for machine learning tasks such as activity recognition, object pose estimation, and image segmentation, we gain an understanding of how a user performed a given task. We then use this information to provide real-time feedback through mixed reality devices such as the Microsoft HoloLens, guiding users and helping them avoid mistakes.
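    To make the activity-recognition step concrete: given sequences of estimated poses, activity recognition can be framed as sequence classification. A hypothetical PyTorch sketch; the joint count, hidden size, and number of classes are placeholders, not the lab’s actual model.

    import torch
    import torch.nn as nn

    class ActivityClassifier(nn.Module):
        """GRU over per-frame 3D joint coordinates -> activity logits."""

        def __init__(self, num_joints=24, hidden=128, num_activities=10):
            super().__init__()
            self.gru = nn.GRU(input_size=num_joints * 3,
                              hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, num_activities)

        def forward(self, poses):
            # poses: (batch, time, joints, 3) -> flatten joints per frame
            b, t = poses.shape[:2]
            _, h_n = self.gru(poses.reshape(b, t, -1))
            return self.head(h_n[-1])  # logits over activity classes

    # Example: four two-second clips at 30 fps with 24 joints
    logits = ActivityClassifier()(torch.randn(4, 60, 24, 3))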

  • Master Thesis

  • Please send your CV and master course grades to Sophokles Ktistakis (ktistaks@ethz.ch)

Calendar

Earliest start: 2023-05-01
Latest end: No date

Location

pd|z Product Development Group Zurich (ETHZ)

Labels

Master Thesis

Topics

  • Information, Computing and Communication Sciences
  • Engineering and Technology