Master Thesis / Project - SENSEI: Sensor Teaching in Multi-Activity Classification from Video and Wearables for Wheelchair Users
In this project, we focus on continuous and quantitative monitoring of activities of daily living (ADL) in individuals with spinal cord injury (SCI), with the goal of identifying cardiovascular events and pressure injury (PI)-related risk behaviours.
ADLs specific to SCI patients and their lifestyles will be discussed and narrowed down within the scope of this work; to classify them, we propose an autonomous camera-based system.
The current work builds on a previous project in which a SlowFast network [1] was trained to identify SCI-specific activity classes; we aim to further improve classification accuracy and temporal resolution before transferring the approach to wearables' time-series data.
Keywords: computer vision, activity classification, video processing, deep learning, ADL, soft labelling, probabilistic networks
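To illustrate the starting point, below is a minimal sketch of dense sliding-window inference with a SlowFast model, the kind of re-evaluation at higher temporal resolution this project targets. The torch.hub entry point and two-pathway input layout follow PyTorchVideo's public slowfast_r50; the Kinetics-400 weights, window length, and stride are illustrative assumptions, since the project's own fine-tuned checkpoint and class list are not public.

```python
# Sketch: sliding-window inference with a pretrained SlowFast network so that
# each short window gets its own activity label (higher temporal resolution
# than a single label per long clip). Weights/window sizes are assumptions.
import torch

# PyTorchVideo's public slowfast_r50 hub entry point (Kinetics-400 weights);
# the project would load its SCI-specific fine-tuned checkpoint instead.
model = torch.hub.load("facebookresearch/pytorchvideo", "slowfast_r50", pretrained=True)
model.eval()

def slowfast_inputs(clip: torch.Tensor, alpha: int = 4):
    """Split a clip [B, C, T, H, W] into the [slow, fast] pathway list."""
    fast = clip
    slow = clip[:, :, ::alpha, :, :]  # temporal subsampling for the slow pathway
    return [slow, fast]

# Placeholder recording of 128 frames at 224x224; stride a 32-frame window
# densely over it to obtain per-window predictions.
video = torch.randn(1, 3, 128, 224, 224)
window_len, stride = 32, 8
with torch.no_grad():
    for start in range(0, video.shape[2] - window_len + 1, stride):
        window = video[:, :, start:start + window_len]
        probs = model(slowfast_inputs(window)).softmax(dim=-1)
        print(start, int(probs.argmax(dim=-1)))  # per-window class index
```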
In this project, we will develop an ADL monitoring system for smart wheelchairs using wearable sensors (i.e., a camera and inertial measurement units). The core idea is to classify ADLs by leveraging different sensing and feature-computation modalities to extract relevant and complementary contextual cues. For example, the presence of, and interaction with, distinctive objects within the camera scene provides a strong cue about the ongoing user activity [1, 2, 3]. From another perspective, inertial sensors record distinct orientation and acceleration signatures of ADL-related temporal patterns [4]. Moreover, previous literature provides strong evidence that combining video cameras and inertial sensors improves recognition performance compared to either modality alone [5, 6]. Building on the general multi-modal activity recognition paradigm, we are seeking a novel solution tailored to the smart wheelchair. The outcome of the study will be key to analysing cardiovascular function at different moments of the day in free living.
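As a concrete illustration of this fusion paradigm, here is a minimal late-fusion classifier sketch in PyTorch. It assumes precomputed per-window video and IMU embeddings; the feature dimensions, projection sizes, and class count are illustrative assumptions, not the project's configuration.

```python
# Minimal late-fusion sketch: project each modality's embedding, concatenate,
# and classify. Dimensions below are placeholders, not the actual setup.
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    def __init__(self, video_dim=2304, imu_dim=64, num_classes=10):
        super().__init__()
        self.video_head = nn.Linear(video_dim, 256)  # video embedding projection
        self.imu_head = nn.Linear(imu_dim, 256)      # IMU feature projection
        self.classifier = nn.Sequential(
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(512, num_classes),
        )

    def forward(self, video_feat, imu_feat):
        # Concatenate the two modality embeddings before classification
        fused = torch.cat([self.video_head(video_feat), self.imu_head(imu_feat)], dim=-1)
        return self.classifier(fused)

model = LateFusionClassifier()
logits = model(torch.randn(4, 2304), torch.randn(4, 64))  # batch of 4 windows
```

Late fusion keeps each modality's feature extractor independent, so the video backbone (e.g., SlowFast embeddings) or the IMU feature set can be swapped without retraining the whole pipeline; intermediate or attention-based fusion are common alternatives when the modalities need to interact earlier.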
1. S. Bensland, A. Paul, L. Grossmann, I. Eriks-Hogland, R. Riener and D. Paez-Granados, "Healthcare Monitoring for SCI Individuals: Learning Activities of Daily Living through a SlowFast Network," IEEE International Conference on System Integration, 2023.
2. M. Wang, C. Luo, B. Ni, J. Yuan, J. Wang and S. Yan, "First-Person Daily Activity Recognition With Manipulated Object Proposals and Non-Linear Feature Fusion," IEEE Transactions on Circuits and Systems for Video Technology, vol. 28, no. 10, pp. 2946-2955, Oct. 2018.
3. G. Schiboni, F. Wasner and O. Amft, "A Privacy-Preserving Wearable Camera Setup for Dietary Event Spotting in Free-Living," 2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), 2018, pp. 872-877.
4. I. H. Lopez-Nava and A. Muñoz-Meléndez, "Human action recognition based on low- and high-level data from wearable inertial sensors," International Journal of Distributed Sensor Networks, 2019.
5. C. Chen, R. Jafari and N. Kehtarnavaz, "UTD-MHAD: A multimodal dataset for human action recognition utilizing a depth camera and a wearable inertial sensor," 2015 IEEE International Conference on Image Processing (ICIP), 2015, pp. 168-172.
6. S. K. Yadav, K. Tiwari, H. M. Pandey and S. A. Akbar, "A review of multimodal human activity recognition with special emphasis on classification, applications, challenges and future directions," Knowledge-Based Systems, vol. 223, 2021.
- Review the outcomes of the previously trained activity classification network and re-evaluate them at a higher temporal resolution.
- Describe the state-of-the-art of sensor-based activity monitoring systems with a focus on multi-modal activity recognition techniques, i.e., combining object detection, pose estimation, and wearable inertial sensor features.
- Define a multi-sensor configuration according to a stated (sub)optimality criterion, with the goal of achieving a robust and generalisable recognition system.
- Define a taxonomy of activity classes and patterns to model wheelchair user behaviour in free living.
- Design and implement a multi-stage machine learning framework.
- Where required, implement hyperparameter optimisation methods.
- Implement a validation framework (e.g., a nested k-fold cross-validation strategy) to provide an unbiased evaluation of the system’s predictive performance; a sketch combining these two steps follows this list.
- Write a comprehensive report on the project outcomes.
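To make the hyperparameter optimisation and validation tasks concrete, here is a hedged sketch of nested k-fold cross-validation with an inner hyperparameter search, using scikit-learn on placeholder data; the estimator, parameter grid, and fold counts are illustrative assumptions.

```python
# Sketch: nested k-fold CV. The inner loop tunes hyperparameters; the outer
# loop yields an unbiased estimate of predictive performance. Data/estimator
# below are placeholders, not the project's actual features or model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))        # placeholder per-window feature vectors
y = rng.integers(0, 5, size=200)      # placeholder activity labels (5 classes)

param_grid = {"n_estimators": [100, 300], "max_depth": [None, 10]}
inner_cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
outer_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Inner loop: hyperparameter search; outer loop: unbiased performance estimate
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=inner_cv)
scores = cross_val_score(search, X, y, cv=outer_cv)
print(f"Nested CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

With free-living recordings from multiple participants, the folds should additionally be grouped by subject (e.g., scikit-learn's GroupKFold) so that windows from the same person never appear in both training and test splits.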
- Gain unique access and first-hand experience at one of the leading institutions in long-term health management, the Swiss Paraplegic Center in Nottwil
- Get introduced to state-of-the-art machine learning techniques and contribute to their application in health management
- Learn about intelligent health systems and the modelling of human conditions, and apply regression and classification models to available data
- Enrolled student at ETH Zurich or EPFL (or another European university):
- ETHZ: D-MAVT, D-INFK / EPFL: IMT, CS (or equivalent)
- Structured and reliable working style
- Strong programming skills in Python/MATLAB
- Deep learning experience on video data
- Ability to work independently on a challenging topic
- Strong knowledge of Bash, Python, and data structures
- Knowledge of virtual environments and containers (conda/Docker)
Host: Dr. Diego Paez (SCAI Lab, ETHZ, SPF)
Please send your CV and the latest transcript of records of your studies to Dr. Diego Paez (diego.paez@hest.ethz.ch)