pd|z Product Development Group Zurich
Open Opportunities

This thesis focuses on developing a real-time capable system to detect hand contacts of medical staff during surgical procedures. The proposed system will be used to detect potential breaches of hand-hygiene protocols and to warn medical staff before contact with the patient. - Engineering and Technology; Information, Computing and Communication Sciences
- Master Thesis, Semester Project
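The core check could look roughly like the following minimal sketch: flag a potential hygiene breach when a tracked hand comes within a threshold distance of the patient region before a disinfection event was registered. All geometry, thresholds, and function names here are illustrative assumptions, not the project's actual design.

```python
import numpy as np

def check_contact(hand_positions, patient_points, threshold=0.05):
    """True if any tracked hand keypoint is within `threshold` meters of the patient."""
    d = np.linalg.norm(
        hand_positions[:, None, :] - patient_points[None, :, :], axis=2)
    return bool((d < threshold).any())

def should_warn(hand_positions, patient_points, hands_disinfected):
    """Warn before contact only if hand hygiene was not yet performed."""
    return (not hands_disinfected) and check_contact(hand_positions, patient_points)

# Toy data: one hand keypoint, two patient surface points (meters)
hands = np.array([[0.50, 0.20, 1.00]])
patient = np.array([[0.52, 0.20, 1.00], [1.50, 0.00, 1.00]])
```

In practice the patient region would come from a perception pipeline and the disinfection state from event tracking; the sketch only shows the decision logic.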

This thesis aims to create a context-aware human-robot collaboration system for manual assembly. Given low-level information about human actions, such as skeleton, IMU and motion data, the goal is to create an LLM task planner that dynamically adapts to its human collaborator and triggers appropriate assistive robot actions. - Artificial Intelligence and Signal and Image Processing, Computer Software, Information Systems
- Master Thesis, Semester Project
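One way such a planner could be wired up is sketched below: summarize recognized human actions into a constrained prompt for the LLM, then map its free-form reply back onto a fixed set of robot actions. The action names and prompt structure are purely illustrative assumptions.

```python
def build_planner_prompt(recent_actions, assembly_state, robot_actions):
    """Summarize low-level perception into a constrained planning prompt."""
    lines = [
        "You assist a human in a manual assembly task.",
        f"Assembly state: {assembly_state}",
        "Recent human actions (oldest first):",
    ]
    lines += [f"- {a}" for a in recent_actions]
    lines.append("Choose exactly one robot action from: " + ", ".join(robot_actions))
    return "\n".join(lines)

def parse_action(llm_reply, robot_actions):
    """Map a free-form LLM reply onto the allowed action set."""
    for action in robot_actions:
        if action in llm_reply:
            return action
    return "wait"  # safe fallback when the reply is unusable

ROBOT_ACTIONS = ["hand over screwdriver", "hold bracket", "wait"]
prompt = build_planner_prompt(
    ["pick screw", "align bracket"], "bracket unfastened", ROBOT_ACTIONS)
```

Constraining the output to a whitelist keeps the robot side safe even when the language model hallucinates.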

To train reinforcement learning robot policies for human-robot collaboration, this thesis leverages Isaac Gym to simulate collaborative scenarios and train robot policies for seamless robot assistance. - Artificial Intelligence and Signal and Image Processing, Computation Theory and Mathematics, Computer Software, Engineering and Technology, Information Systems
- Master Thesis, Semester Project
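The appeal of Isaac Gym is massively parallel, vectorized simulation. The sketch below illustrates that rollout pattern with a numpy stand-in environment (many 1-D "reach the hand" tasks stepped in batch); it deliberately does not use the real Isaac Gym API, and the environment and policy are toy assumptions.

```python
import numpy as np

class ToyHandoverEnv:
    """Many parallel 1-D 'move the gripper to the human hand' tasks."""
    def __init__(self, num_envs, rng):
        self.num_envs = num_envs
        self.rng = rng
        self.reset()

    def reset(self):
        self.robot = np.zeros(self.num_envs)
        self.hand = self.rng.uniform(0.5, 1.0, self.num_envs)
        return self.hand - self.robot  # observation: signed distance

    def step(self, action):
        self.robot += np.clip(action, -0.1, 0.1)   # bounded gripper motion
        dist = np.abs(self.hand - self.robot)
        reward = -dist                              # dense 'get close' reward
        done = dist < 0.05
        return self.hand - self.robot, reward, done

rng = np.random.default_rng(0)
env = ToyHandoverEnv(num_envs=64, rng=rng)
obs = env.reset()
total_reward = np.zeros(env.num_envs)
for _ in range(30):
    action = 0.1 * np.sign(obs)  # trivial proportional 'policy' placeholder
    obs, reward, done = env.step(action)
    total_reward += reward
```

In the thesis, the stand-in environment would be replaced by GPU-parallel Isaac Gym scenes and the placeholder policy by a learned one (e.g. trained with PPO).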

This project aims to develop different action-recognition models for assembly tasks and to compare the effects of depth information and/or hand and object poses. A further investigation could use synthetic data to train the models, which can significantly improve their adaptation ability. The project could be extended to a publication if progress is good. - Computer Vision, Engineering and Technology
- Master Thesis, Semester Project
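A modality comparison of this kind is essentially an ablation study: train and evaluate the same classifier on different subsets of input features. The harness below sketches that structure on synthetic per-modality features with a nearest-centroid classifier; the modalities, dimensions, and data are illustrative stand-ins for learned video features.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_split(n_per_class, n_classes, dims):
    """Synthetic per-modality features with class-dependent means."""
    X, y = {m: [] for m in dims}, []
    for c in range(n_classes):
        for m, d in dims.items():
            X[m].append(rng.normal(loc=c, scale=1.0, size=(n_per_class, d)))
        y += [c] * n_per_class
    return {m: np.vstack(v) for m, v in X.items()}, np.array(y)

def nearest_centroid_accuracy(X_train, y_train, X_test, y_test):
    classes = np.unique(y_train)
    centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in classes])
    pred = classes[np.argmin(
        np.linalg.norm(X_test[:, None, :] - centroids[None], axis=2), axis=1)]
    return (pred == y_test).mean()

dims = {"rgb": 16, "depth": 8, "hand_pose": 4}
Xtr, ytr = make_split(50, 3, dims)
Xte, yte = make_split(20, 3, dims)

results = {}
for subset in [("rgb",), ("rgb", "depth"), ("rgb", "depth", "hand_pose")]:
    tr = np.hstack([Xtr[m] for m in subset])
    te = np.hstack([Xte[m] for m in subset])
    results[subset] = nearest_centroid_accuracy(tr, ytr, te, yte)
```

The same loop structure carries over directly when the synthetic features are swapped for backbone embeddings and the classifier for a real action-recognition model.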

The goal of this thesis is to implement a method that allows for efficient handovers between collaborative robots and humans. To this end, you should leverage state-of-the-art computer vision and AI methods to generate a model of the human's hand. Then an algorithm is needed that determines the optimal approach to the hand. Finally, the geometry of the interaction object must be accounted for. This covers the pipeline in the static case; however, when the user is moving, the handover location is not known in advance. Thus, this thesis could take one of two main directions. - Engineering and Technology
- Master Thesis
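For the static case, the approach step can be reduced to a small geometric computation: given an estimated hand position and palm normal, place a pre-grasp pose at a fixed standoff along the normal and approach against it. Frames, the standoff value, and the example coordinates below are assumptions for illustration only.

```python
import numpy as np

def approach_pose(hand_pos, palm_normal, standoff=0.15):
    """Pre-grasp position and approach direction for a static handover."""
    n = palm_normal / np.linalg.norm(palm_normal)
    pre_grasp = hand_pos + standoff * n  # hover above the palm
    approach_dir = -n                    # then move toward the palm
    return pre_grasp, approach_dir

hand_pos = np.array([0.4, 0.0, 0.9])     # meters, robot base frame (assumed)
palm_normal = np.array([0.0, 0.0, 1.0])  # palm facing up
pre_grasp, approach_dir = approach_pose(hand_pos, palm_normal)
```

Accounting for the interaction object would then shift the pre-grasp pose by the object's grasp offset rather than approaching the bare palm.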

The goal of this thesis is to achieve proactive human-robot collaboration (HRC) using real-time sensor measurements from the user. We currently have an HRC system that enables different collaborative tasks (object handovers, hand following, etc.). At the moment, however, this system is purely reactive and requires gestures or speech commands to interact with the robot. We would therefore like to implement human intent prediction based on sensor measurements such as skeleton tracking, gaze tracking, IMU measurements, or even heart rate and blood pressure (through a smartwatch). This could be based on AI or other probabilistic models, where the system proactively prepares the next action or even pre-empts it entirely. This requires a series of case studies to gather data about human behavior during interaction with our system, with subsequent data analysis and training or fitting of the models. The whole pipeline should be demonstrated on a selected use case.
- Engineering and Technology
- Master Thesis
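A minimal probabilistic baseline for such intent prediction is a recursive Bayesian update over a discrete intent set, driven by per-sensor likelihoods (e.g. gaze landing on the robot, the hand reaching out). The intent set and likelihood numbers below are made-up assumptions; in the thesis they would be fitted from the case-study data.

```python
import numpy as np

INTENTS = ["request_handover", "continue_assembly", "idle"]

# P(cue observed | intent) for two binarized sensor cues (assumed values)
LIKELIHOOD = {
    "gaze_on_robot": np.array([0.8, 0.2, 0.3]),
    "hand_reaching": np.array([0.7, 0.4, 0.1]),
}

def update_belief(belief, cue, observed):
    """One Bayes step; `observed` is True/False for the binary cue."""
    p = LIKELIHOOD[cue] if observed else 1.0 - LIKELIHOOD[cue]
    belief = belief * p
    return belief / belief.sum()

belief = np.ones(len(INTENTS)) / len(INTENTS)  # uniform prior
for cue, observed in [("gaze_on_robot", True), ("hand_reaching", True)]:
    belief = update_belief(belief, cue, observed)

predicted = INTENTS[int(np.argmax(belief))]
```

The system could act proactively once the belief in an intent crosses a confidence threshold; richer learned models (e.g. sequence models over skeleton and IMU streams) would slot into the same decision structure.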

Current 3D perception pipelines lack accuracy and performance. Inherent noise in point cloud measurements, as well as occlusions, means one starts off with sub-par data. Additionally, very little annotated data is available for direct point cloud segmentation. Workarounds have therefore been tested, such as depth projection of 2D segmentation masks [us, SAMPro3D…]. However, these tend to be slow, because multiple views from additional cameras are needed to reconstruct the scene, and they require a prior semantic segmentation. Direct point cloud segmentation has the potential to be much faster, since multiple view angles can easily be concatenated; however, datasets of the size and quality needed to build foundation models are lacking. Your task would thus be to fine-tune or create a neural network for point cloud segmentation, as well as a dataset for supervised learning. For this, you can use our pre-existing vision pipelines or data available online. To create annotations, we propose to automatically generate ground-truth labels with SAMPro3D to keep manual labelling minimal. - Engineering and Technology
- Master Thesis
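The pseudo-labelling idea boils down to projecting each 3D point into a calibrated camera and reading its class from a 2D segmentation mask. The sketch below shows that projection step with a pinhole model; the intrinsics, mask, and points are toy values, and a real pipeline (e.g. SAMPro3D-style) adds prompt selection and cross-view consistency on top.

```python
import numpy as np

def project_labels(points, K, mask):
    """Label each 3D point with the 2D mask class it projects onto."""
    uvw = (K @ points.T).T             # pinhole projection, camera frame
    uv = uvw[:, :2] / uvw[:, 2:3]      # perspective divide
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    h, w = mask.shape
    labels = np.full(len(points), -1)  # -1 = outside the image
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (points[:, 2] > 0)
    labels[inside] = mask[v[inside], u[inside]]
    return labels

K = np.array([[500.0, 0.0, 32.0],      # toy intrinsics (fx, fy, cx, cy)
              [0.0, 500.0, 24.0],
              [0.0, 0.0, 1.0]])
mask = np.zeros((48, 64), dtype=int)
mask[:, 32:] = 1                       # right half of the image = class 1
points = np.array([[0.01, 0.0, 1.0],   # projects right of center
                   [-0.01, 0.0, 1.0]]) # projects left of center
labels = project_labels(points, K, mask)
```

Labels generated this way from the existing vision pipelines could then supervise a direct point cloud segmentation network, with manual effort limited to spot-checking.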