Stereo Camera Setup for Object Tracking in AR Surgery and Robotics
The goal of this project is to increase the accuracy and robustness of our object tracking setups by creating a stereo camera setup using two high-end cameras and implementing object tracking algorithms.
Keywords: Machine Learning, Computer Vision, Deep Learning, Camera Setup, Stereo Camera, Depth Camera, Depth Estimation, Hardware Setup, Design
A crucial task in our augmented reality research in areas such as robotics, surgery, and assembly is keeping track of objects and their positions in the scene.
By coupling a (depth) camera with deep-learning-based pose estimation, model-based ICP algorithms, or fiducial markers, we can track the positions of surgical tools and other objects,
which enables AR applications such as surgical navigation for spinal fusion surgery and robotic pick-and-place teleoperation.
The goal of this project is to increase the accuracy and robustness of our setup by creating a stereo camera setup from two high-end cameras and implementing object tracking algorithms on this setup.
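As a rough illustration of why the stereo pair should improve 3D accuracy: once the projection matrices of both cameras are known (from the calibration described below), a 2D correspondence between the two views can be triangulated into a metric 3D point. The following minimal sketch uses OpenCV; the intrinsics, baseline, and pixel coordinates are made-up placeholder values, not measurements from our setup.

```python
# Minimal sketch: triangulate one matched image point from a calibrated
# stereo pair into a 3D point with OpenCV. All numbers are illustrative.
import cv2
import numpy as np

# Shared intrinsics K (placeholder values) and 3x4 projection matrices
# P = K [R | t]; the right camera sits on an assumed 10 cm baseline.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.10], [0.0], [0.0]])])

# One matched point per view, in pixel coordinates, shape (2, N).
pts_left = np.array([[310.0], [245.0]])
pts_right = np.array([[270.0], [245.0]])

# Triangulate to homogeneous coordinates, then dehomogenize.
point_4d = cv2.triangulatePoints(P1, P2, pts_left, pts_right)
point_3d = (point_4d[:3] / point_4d[3]).ravel()
print("3D point in the left-camera frame [m]:", point_3d)
```

With these placeholder numbers the disparity is 40 px, which places the point at a depth of f*b/d = 800 * 0.10 / 40 = 2.0 m; the tracking algorithms in the second part build on exactly this kind of two-view geometry.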
The first part of the project consists of designing and assembling the physical setup (a calibration sketch follows the list):
- an adjustable mounting system to position the two cameras relative to each other
- triggers to synchronize the cameras (for example, with an Arduino)
- integration of adjustable lighting
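Once the rig is assembled, the two cameras have to be calibrated jointly so that both views share one coordinate frame. Below is a minimal sketch of checkerboard-based stereo calibration with OpenCV; the image folders, board dimensions, and square size are assumptions for illustration, not fixed choices of the project.

```python
# Minimal sketch: stereo calibration from synchronized checkerboard pairs.
# Assumes matching left/right images exist under the placeholder folders.
import glob
import cv2
import numpy as np

BOARD = (9, 6)     # inner checkerboard corners (assumed board)
SQUARE = 0.025     # square edge length in meters (assumed)

# 3D positions of the board corners in the board's own frame.
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_pts, left_pts, right_pts = [], [], []
for lf, rf in zip(sorted(glob.glob("left/*.png")), sorted(glob.glob("right/*.png"))):
    gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    ok_l, cl = cv2.findChessboardCorners(gl, BOARD)
    ok_r, cr = cv2.findChessboardCorners(gr, BOARD)
    if ok_l and ok_r:  # keep only pairs where both views see the board
        obj_pts.append(objp)
        left_pts.append(cl)
        right_pts.append(cr)

# Calibrate each camera individually, then estimate the relative pose
# (R, T) of the right camera with respect to the left one.
size = gl.shape[::-1]  # (width, height) of the last loaded image
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
rms, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, d1, K2, d2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
print(f"stereo reprojection RMS: {rms:.3f} px, baseline: {np.linalg.norm(T):.3f} m")
```

The recovered R and T feed directly into the projection matrices used for triangulation above, and the reprojection RMS gives a first quantitative handle on the accuracy of the mounted rig.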
The second part will consist of implementing object tracking algorithms on the setup and testing their performance.
Depending on their interests, the candidate can choose to implement methods such as:
- deep-learning-based methods (for example, YOLO, PVNet3D, EfficientPose)
- the ICP algorithm
- fiducial (ArUco / infrared) markers (a detection sketch follows this list)
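For the fiducial-marker option, a compact starting point is ArUco detection plus a PnP pose fit in a single view; with the calibrated stereo rig, the same detection in both views can then be fused or triangulated. The sketch below assumes the OpenCV 4.7+ ArUco API; the intrinsics, marker size, and image path are placeholders.

```python
# Minimal sketch: detect ArUco markers and recover their 6-DoF pose
# relative to one camera. Intrinsics and marker size are assumed values.
import cv2
import numpy as np

MARKER_SIDE = 0.05   # marker edge length in meters (assumed)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)   # assume negligible lens distortion for this sketch

detector = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50),
    cv2.aruco.DetectorParameters())

# Marker corners in the marker's own frame (top-left, top-right,
# bottom-right, bottom-left), matching OpenCV's detection order.
half = MARKER_SIDE / 2.0
obj = np.array([[-half, half, 0], [half, half, 0],
                [half, -half, 0], [-half, -half, 0]], np.float32)

frame = cv2.imread("frame.png")  # placeholder path to a captured image
corners, ids, _ = detector.detectMarkers(frame)
if ids is not None:
    for c, marker_id in zip(corners, ids.ravel()):
        # Solve PnP: marker pose (rvec, tvec) in the camera frame.
        ok, rvec, tvec = cv2.solvePnP(obj, c.reshape(4, 2), K, dist)
        if ok:
            print(f"marker {marker_id}: t = {tvec.ravel()} m")
```

The deep-learning-based and ICP options would replace the detection step while reusing the same calibrated two-view geometry.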
Your profile:
- You like to realize both hardware and software components of computer vision applications.
- You have a hands-on and pragmatic way of thinking.
- Interest and skills in programming are desired (Python is preferred).
- Experience with deep learning and training of neural networks such as YOLO, MobileNet, etc. will help.
- You are a team player, curious, and come up with innovative ideas.
As part of our research at the AR Lab within the Human Behavior Group, we are working on automatically analyzing a user's interaction with their environment in scenarios such as surgery or industrial machine operation. By collecting real-world datasets during those scenarios and using them for machine learning tasks such as activity recognition, object pose estimation, or image segmentation, we can gain an understanding of how a user performed a given task. We can then use this information to provide the user with real-time feedback through mixed reality devices, such as the Microsoft HoloLens, that can guide them and prevent mistakes.