Robotics and Perception: Open Opportunities

In this project, we are going to develop a vision-based reinforcement learning policy for drone navigation in dynamic environments. The policy should balance two potentially conflicting navigation objectives: maximizing the visibility of a visual object as a perceptual constraint, and avoiding obstacles to ensure safe flight. - Engineering and Technology
- Master Thesis
| Recent research has demonstrated significant success in integrating foundation models with robotic systems. In this project, we aim to investigate how these foundation models can enhance the vision-based navigation of UAVs. The drone will utilize semantic relationships learned from extensive world-scale data to actively explore and navigate through unfamiliar environments. While previous research has primarily focused on ground-based robots, our project seeks to explore the potential of integrating foundation models with aerial robots to enhance agility and flexibility. - Engineering and Technology
- Master Thesis, Semester Project
| Vision-based reinforcement learning (RL) is more sample-inefficient and more complex to train than state-based RL because the policy is learned directly from raw image pixels rather than from the robot state. Unlike state-based policies, vision-based policies need to learn some form of visual perception or image understanding from scratch, which makes them considerably harder to train and to generalise. Foundation models trained on vast datasets have shown promising potential in producing feature representations that are useful for a large variety of downstream tasks. In this project, we investigate the capabilities of such models to provide robust feature representations for learning control policies. We plan to study how different feature representations affect the exploration behavior of RL policies, the resulting sample complexity, and the generalisation and robustness to out-of-distribution samples. - Intelligent Robotics
- Master Thesis, Semester Project
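The core idea above can be sketched as follows: a frozen, pretrained encoder (standing in for a foundation-model backbone such as DINO or CLIP) maps raw pixels to a compact feature vector, and only a small policy head is trained on top. All class and parameter names here are hypothetical, and the random-projection "encoder" is a placeholder for a real pretrained network.

```python
import numpy as np

class FrozenEncoder:
    """Stand-in for a pretrained foundation-model backbone.
    Its weights are fixed; only the policy head is trained."""
    def __init__(self, img_dim, feat_dim, seed=0):
        rng = np.random.default_rng(seed)
        # Frozen random projection as a placeholder for learned weights
        self.W = rng.standard_normal((feat_dim, img_dim)) / np.sqrt(img_dim)

    def __call__(self, img):
        # Flatten the image and produce a fixed feature representation
        return np.tanh(self.W @ img.ravel())

class LinearPolicy:
    """Small trainable head mapping frozen features to actions."""
    def __init__(self, feat_dim, act_dim, seed=1):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((act_dim, feat_dim)) * 0.01

    def act(self, feat):
        return self.W @ feat

encoder = FrozenEncoder(img_dim=64 * 64, feat_dim=128)
policy = LinearPolicy(feat_dim=128, act_dim=4)  # e.g. collective thrust + body rates

obs = np.zeros((64, 64))             # dummy image observation
action = policy.act(encoder(obs))
print(action.shape)
```

Because gradients never flow into the encoder, the RL problem is reduced from pixel space to a low-dimensional feature space, which is precisely where the sample-complexity and generalisation questions of this project arise.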
| Vision-based reinforcement learning (RL) is often sample-inefficient and computationally very expensive. One way to bootstrap the learning process is to leverage offline interaction data. However, this approach faces significant challenges, including out-of-distribution (OOD) generalization and neural network plasticity. The goal of this project is to explore methods for transferring offline policies to the online regime in a way that alleviates the OOD problem. By initially training the robot's policy offline, the project seeks to leverage existing robot interaction data to bootstrap the learning of new policies. The focus is on overcoming domain-shift problems and exploring innovative ways to fine-tune the model and policy using online interactions, effectively bridging the gap between offline and online learning. This advancement would enable us to efficiently leverage offline data (e.g. from human or expert-agent demonstrations or previous experiments) for training vision-based robotic policies. - Intelligent Robotics, Robotics and Mechatronics
- Master Thesis
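One common offline-to-online strategy that could serve as a baseline here is to seed the replay buffer with the offline dataset and draw each training batch partly from offline data and partly from fresh online transitions. This is a minimal illustrative sketch, not the project's prescribed method; all names are hypothetical and the policy update is omitted.

```python
import random

# Pre-collected (offline) transitions and an initially empty online buffer
offline_buffer = [("offline", i) for i in range(100)]
online_buffer = []

def sample_batch(batch_size=8):
    """Mix offline and online transitions in one training batch.
    Early in training the batch is mostly offline; as online data
    accumulates, up to half of each batch comes from new experience."""
    half = batch_size // 2
    online = random.sample(online_buffer, min(half, len(online_buffer)))
    offline = random.sample(offline_buffer, batch_size - len(online))
    return offline + online

for step in range(5):                        # online interaction loop
    online_buffer.append(("online", step))   # new transition from the environment
    batch = sample_batch()
    # policy_update(batch)  # gradient step on the mixed batch (not shown)

print(len(batch))
```

Keeping offline samples in every batch is one simple way to mitigate the distribution shift that occurs when a purely offline policy suddenly starts acting on its own data.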
| Model-based reinforcement learning (MBRL) methods have greatly improved sample efficiency compared to model-free approaches. Nonetheless, the amount of samples and compute required to train these methods remains too large for real-world training of robot control policies. Ideally, we should be able to leverage expert data (collected by human or artificial agents) to bootstrap MBRL. The exact way to leverage such data is yet unclear, and many options are available. For instance, it is possible to use such data only for training high-accuracy dynamics models (world models) that are useful for multiple tasks. Alternatively, expert data can (also) be used for training the policy. Additionally, pretraining MBRL components can itself be very challenging, as offline expert data is typically sampled from a very narrow distribution of behaviors, which makes finetuning non-trivial in out-of-distribution regions of the robot's state-action space. In this thesis, you will look at different ways of incorporating expert data in MBRL and ideally propose new approaches to best do that. You will test these methods both in simulation (simulated drone, wheeled, and legged robots) and in the real world on our quadrotor platform. You will gain insights into MBRL, sim-to-real transfer, and robot control. - Intelligent Robotics, Knowledge Representation and Machine Learning, Robotics and Mechatronics
- Master Thesis, Semester Project
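The first option mentioned above (using expert data only to fit a world model) can be illustrated in its simplest form: fit a linear dynamics model s' = A s + B a to expert transitions by least squares, then roll the learned model out to generate imagined trajectories for policy training. Real world models are learned neural networks; the linear system and all names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[1.0, 0.1], [0.0, 1.0]])   # unknown "real" dynamics
B_true = np.array([[0.0], [0.1]])

# Expert dataset of (state, action, next_state) transitions
S = rng.standard_normal((200, 2))
U = rng.standard_normal((200, 1))
S_next = S @ A_true.T + U @ B_true.T

# World-model fit from data alone: solve [S U] Theta = S_next
X = np.hstack([S, U])                          # (200, 3)
Theta, *_ = np.linalg.lstsq(X, S_next, rcond=None)
A_hat, B_hat = Theta[:2].T, Theta[2:].T

# Imagined rollout under the learned model (placeholder policy)
s = np.array([1.0, 0.0])
for _ in range(3):
    a = np.array([-0.5 * s[1]])
    s = A_hat @ s + B_hat @ a

print(np.allclose(A_hat, A_true, atol=1e-6))
```

The key point for this thesis is that the model is fit only where expert data lies, so imagined rollouts that leave that narrow behavior distribution are exactly where model errors and finetuning difficulties appear.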
| Recent advances in model-free Reinforcement Learning have shown superior performance in complex tasks such as the game of chess, quadrupedal locomotion, and even drone racing. Given a reward function, Reinforcement Learning is able to find the optimal policy through trial and error in simulation, and the resulting policy can be deployed directly in the real world. In our lab, we have been able to outrace professional human pilots using model-free Reinforcement Learning trained solely in simulation.
- Flight Control Systems, Intelligent Robotics, Knowledge Representation and Machine Learning
- Master Thesis, Semester Project
| Autonomous quadrotors are increasingly used in inspection tasks, where flight time is often limited by battery capacity. This project aims to explore and evaluate state-of-the-art path planning approaches that incorporate energy efficiency into trajectory optimization. - Engineering and Technology
- Master Thesis, Semester Project
| Perform knowledge distillation from Transformers to more energy-efficient neural network architectures for Event-based Vision. - Engineering and Technology, Information, Computing and Communication Sciences
- Master Thesis
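The distillation objective underlying this project can be stated compactly: the student is trained to match the teacher's temperature-softened output distribution via a KL divergence, scaled by T^2 as in the standard Hinton et al. formulation. The sketch below uses NumPy with hypothetical logits; the actual teacher (a Transformer) and student (an energy-efficient architecture) are not modeled here.

```python
import numpy as np

def softmax(z, T=1.0):
    """Numerically stable softmax with temperature T."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) on temperature-softened outputs, scaled by T^2."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * np.sum(p * (np.log(p) - np.log(q)))

teacher = [4.0, 1.0, 0.2]   # e.g. Transformer outputs (hypothetical values)
student = [3.5, 1.2, 0.1]   # e.g. lightweight student outputs
print(distillation_loss(teacher, student))
```

A higher temperature spreads probability mass over the "dark knowledge" in the teacher's non-target classes, which is typically what makes distillation more informative than training on hard labels alone.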
| Explore novel ideas for low-cost but stable training of Neural Networks. - Engineering and Technology, Information, Computing and Communication Sciences
- Master Thesis, Semester Project
| Study the application of Long Sequence Modeling techniques within Reinforcement Learning (RL) to improve autonomous drone racing capabilities. - Engineering and Technology, Information, Computing and Communication Sciences
- Master Thesis
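As context for this topic: the simplest way to give an RL policy temporal context is to stack the last k observations into one input; long-sequence models (e.g. Transformers or structured state-space models) generalize this idea to much longer horizons. This is a minimal frame-stacking sketch with hypothetical names, not the sequence-modeling method the project would develop.

```python
from collections import deque
import numpy as np

class HistoryWrapper:
    """Keeps the last k observations and exposes them as one flat vector,
    giving a memoryless policy a short temporal context window."""
    def __init__(self, k, obs_dim):
        self.buf = deque([np.zeros(obs_dim)] * k, maxlen=k)

    def push(self, obs):
        self.buf.append(np.asarray(obs, dtype=float))
        return np.concatenate(self.buf)   # policy input: flattened history

wrap = HistoryWrapper(k=4, obs_dim=3)
for t in range(6):
    stacked = wrap.push([t, t, t])        # oldest frames fall out of the deque
print(stacked.shape)
```

A sequence-model policy replaces this fixed window with a learned representation of the full trajectory, which is what makes long horizons tractable in the drone-racing setting.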