In recent years, computer vision techniques for human pose estimation have become increasingly accurate and robust. However, these techniques are often trained and evaluated on data where humans display “ordinary” poses, such as pedestrians walking in the street. Several approaches have also succeeded in reliably detecting human poses in more “active” situations such as dancing, workout, or yoga exercises. However, in the context of martial arts, most existing models still struggle to robustly detect the fighters’ poses. While extracting data from fights could significantly transform the experience for athletes, coaches, and fans alike, several technical challenges remain to be addressed. The particular movements displayed in martial arts (such as punches, kicks, clinch, and ground-fighting situations) require alternative models and techniques to obtain meaningful results.
Depending on the interest and background of the student, projects with different focuses are possible:
- Adapting existing models for human pose estimation to martial arts situations through intelligent fine-tuning and post-processing techniques
- Developing novel computer vision models that are specifically trained on martial arts data (real and synthetic)
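As a purely illustrative sketch of what the post-processing direction above might involve (the function name, data layout, and smoothing factor are assumptions, not part of the project specification), one common step when applying pose estimators to fast-paced footage is temporal smoothing of the detected keypoints to suppress frame-to-frame jitter:

```python
# Hypothetical example: exponential moving average over per-frame 2D keypoints.
# Each frame is a list of (x, y) tuples, one per joint (e.g., 17 joints in the
# COCO convention). Lower alpha values smooth more but lag fast motion, which
# matters for punches and kicks.

def smooth_keypoints(frames, alpha=0.5):
    """Return a new sequence of frames with exponentially smoothed coordinates."""
    if not frames:
        return []
    smoothed = [list(frames[0])]  # first frame is kept as-is
    for frame in frames[1:]:
        prev = smoothed[-1]
        smoothed.append([
            (alpha * x + (1 - alpha) * px, alpha * y + (1 - alpha) * py)
            for (x, y), (px, py) in zip(frame, prev)
        ])
    return smoothed
```

In practice, the smoothing factor trades off jitter suppression against responsiveness; a real project would tune it (or use a more principled filter) against ground-truth martial arts motion.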
This student project is a collaboration with an industry partner. Combat IQ is a fast-growing startup specializing in data analytics for martial arts. Students will be co-supervised by Sena Kiciroglu (CVLab EPFL) and Dr. Christian Giang (Combat IQ). Students completing a full-time master thesis or master internship will also receive a monthly scholarship.
Proficiency in Python and machine learning / computer vision frameworks (e.g., PyTorch, TensorFlow, OpenCV, OpenPose, MoveNet, MediaPipe, CUDA)
Interest in sports and sports analytics
Fluent in English
Scholarship for master theses and master internships
Work on real-world problems
Experienced mentors with a world-class academic network
Opportunity to work with professional martial arts athletes and coaches
Wang, J., Tan, S., Zhen, X., Xu, S., Zheng, F., He, Z., & Shao, L. (2021). Deep 3D human pose estimation: A review. Computer Vision and Image Understanding, 210, 103225.
Fabbri, M., Brasó, G., Maugeri, G., Cetintas, O., Gasparini, R., Ošep, A., ... & Cucchiara, R. (2021). MOTSynth: How can synthetic data help pedestrian detection and tracking? In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 10849-10859).
Katircioglu, I., Georgantas, C., Salzmann, M., & Fua, P. (2021). Dyadic human motion prediction. arXiv preprint arXiv:2112.00396.
Vinzant et al. (2021). 3D pose based motion correction for physical exercises. Master's thesis, EPFL.
Verma, M., Kumawat, S., Nakashima, Y., & Raman, S. (2020). Yoga-82: A new dataset for fine-grained classification of human poses. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (pp. 1038-1039).
- Computer Vision, Knowledge Representation and Machine Learning
- Internship, Master Thesis, Semester Project