Sequential Learning for Surgical Performance Assessment
The aim of this project is to assess surgeons' skills from surgical tool movement data using artificial intelligence methods for sequential data analysis.
Keywords: Machine Learning, Sequential Learning, Data Science, Deep Learning, Activity Recognition, Time Series Analysis, Orthopedic Surgery, Biomedical Engineering, Computer Vision
Orthopedic surgery requires a high level of experience and skill from surgeons, typically gained through extensive training. To enhance training efficiency and improve educational outcomes, surgical training simulators have emerged as promising tools: they aim to replicate surgical procedures accurately and evaluate the surgeon's performance. Performance assessment is a crucial component of these simulators, as it provides the feedback surgeons need to improve their skills effectively. This project seeks to develop solutions for assessing surgical performance from tool tracking data, leveraging artificial intelligence for predictive analysis.
You will be provided with a dataset containing approximately 20 hours of recorded tool tracking data from training surgeries, each labeled with performance scores. Your tasks include:
- Statistically exploring the data and creating visualizations to gain a deeper understanding of the dataset.
- Researching various techniques for predicting sequential data, such as LSTM, GRU, and Transformer models.
- Implementing these techniques and evaluating their performance in the context of surgical skill assessment.
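To illustrate the sequential-modelling part of these tasks, the sketch below shows one possible way to map a tool-tracking sequence to a single performance score with an LSTM in PyTorch. It is only a minimal example: the feature dimension, sequence length, score range, and dataset layout are assumptions for illustration, not the actual format of the provided data.

```python
# Minimal sketch (illustrative only): an LSTM regressor that maps a tool-tracking
# sequence to one performance score. Dimensions and data layout are assumptions.
import torch
import torch.nn as nn

class SkillLSTM(nn.Module):
    def __init__(self, n_features: int = 7, hidden: int = 64, layers: int = 2):
        super().__init__()
        # n_features: e.g. 3D tool position + quaternion orientation per time step (assumed)
        self.lstm = nn.LSTM(n_features, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # regress one score per sequence

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_features)
        _, (h_n, _) = self.lstm(x)
        # use the final hidden state of the last LSTM layer as a sequence summary
        return self.head(h_n[-1]).squeeze(-1)

# Tiny training loop on random stand-in data, just to show the intended interface.
model = SkillLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
x = torch.randn(8, 500, 7)  # 8 dummy sequences of 500 samples each
y = torch.rand(8)           # dummy performance scores in [0, 1]
for _ in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```

GRU or Transformer encoders could be swapped in behind the same interface for the comparison described above.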
- Strong programming skills (Python, MATLAB, C#, C, C++)
- An interest in data science
- Experience with machine learning or statistics
- A methodical approach to work
- The ability to take initiative and shape the direction of the project
- Enthusiasm for tackling practical challenges
As part of our research at the AR Lab within the Human Behavior Group, we work on automatically analyzing a user's interaction with their environment in scenarios such as surgery or industrial machine operation. By collecting real-world datasets in these scenarios and using them for machine learning tasks such as activity recognition, object pose estimation, or image segmentation, we can gain an understanding of how a user performed during a given task. We can then use this information to provide the user with real-time feedback through mixed reality devices, such as the Microsoft HoloLens, that guide them and help prevent mistakes.
- Master thesis
Please send your CV and transcript to Tobias Stauffer (tobiasta@ethz.ch).