Ensuring the Safety of Natural Language Commands via Model Predictive Control
Conventional engineered control systems are typically designed for isolated, well-specified environments, emphasizing conservatism against uncertainty, especially in safety-critical domains with limited human interaction. In the pursuit of versatile autonomous systems capable of adapting to diverse environments, there is a growing need for effective communication through natural language. Despite the successes of natural language processing algorithms and Large Language Models (LLMs) such as GPT-4, leveraging these models to instruct autonomous safety-critical control systems remains an open question. This project addresses this gap by building upon a preliminary control architecture that translates natural language commands into Model Predictive Control (MPC) formulations. By exploiting the safety and stability guarantees of MPC, we aim to automate the generation of the corresponding optimization problems while providing non-conservative feasibility guarantees. Additionally, the project explores the use of predictive safety filters to mitigate the uncertainty introduced by language models, enhancing the reliability of autonomous systems.
Keywords: Model Predictive Control, Control Theory, Large Language Models
Most engineered control systems are traditionally designed to act in isolated, clearly specified environments, or to be conservative against uncertainty. This is especially true for safety-critical systems, where interaction with humans is very limited and where changes in the specification come at great engineering effort in order to ensure safety. In a world where we aim for autonomous and intelligent systems, it is paramount that they be versatile across a variety of environments. Moreover, to keep humans in the loop and make these technologies accessible to the broader population, we must be able to communicate with these systems via natural language. Despite the impressive success of natural language processing algorithms and Large Language Models (LLMs) like GPT-4, these learned models provide no guarantees on the accuracy or correctness of their outputs, nor on a proper “understanding” of the input. How to leverage natural language to relay instructions to autonomous safety-critical control systems is therefore an open question that has received little attention so far.
In this project, we aim to address this gap. A preliminary control architecture is available that converts natural language commands into control inputs amenable to a specific control system. To do this, an LLM module translates a natural language instruction into a specific Model Predictive Control (MPC) formulation. Since MPC is known for its ability to enforce safety constraints and comes with well-established theoretical guarantees, the MPC formulation generated by the LLM can be leveraged to provide non-conservative feasibility and stability guarantees for the closed-loop trajectories.
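As a rough illustration of the idea (not the project's actual architecture), one can imagine the LLM emitting a structured specification — all field names, numbers, and the toy double-integrator model below are hypothetical — which is then instantiated as a finite-horizon MPC problem. For brevity, this sketch solves the unconstrained tracking problem in batch least-squares form; a real formulation would additionally encode state and input constraints and solve a QP at every step.

```python
import numpy as np

# Hypothetical structured output of the LLM for a command such as
# "move to position 5 and stop"; the schema is illustrative only.
spec = {"target": np.array([5.0, 0.0]), "horizon": 20,
        "Q": np.diag([1.0, 0.1]), "R": 0.01}

# Toy plant: discrete-time double integrator x+ = A x + B u, dt = 0.1
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])

def mpc_action(x0, spec):
    """Solve min sum_k ||x_k - target||_Q^2 + R u_k^2 over the horizon
    in batch form and return the first input (receding-horizon policy).
    A real MPC formulation would add constraints via a QP solver."""
    N = spec["horizon"]
    n, m = A.shape[0], B.shape[1]
    # Prediction matrices: stacked states X = Sx x0 + Su U
    Sx = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    Su = np.zeros((N * n, N * m))
    for k in range(N):
        for j in range(k + 1):
            Su[k*n:(k+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, k - j) @ B
    Qbar = np.kron(np.eye(N), spec["Q"])
    Rbar = spec["R"] * np.eye(N * m)
    xref = np.tile(spec["target"], N)
    # Normal equations of the resulting least-squares problem
    H = Su.T @ Qbar @ Su + Rbar
    g = Su.T @ Qbar @ (Sx @ x0 - xref)
    U = np.linalg.solve(H, -g)
    return U[0]

u0 = mpc_action(np.array([0.0, 0.0]), spec)
```

Applying `mpc_action` in closed loop drives the toy system to the commanded position; the open question the project targets is how to guarantee such behavior (feasibility, stability) for *any* formulation the LLM emits, not just a hand-checked one.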
The goal of this project is to find a way to obtain these guarantees and to automate the generation of the corresponding optimization problems. Moreover, the capabilities of predictive safety filters can be further explored to shield the system from the uncertainty introduced by the language model and its lack of predictability.
The project includes:
• Developing guarantees for the MPC controller formulation generated via the LLM, and finding a way to introduce them automatically into every MPC problem the LLM generates.
• Developing a safety filter that guarantees safe operation of the system despite the uncertain output of the language model.
• Implementing and testing the resulting architecture in simulation.
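To make the safety-filter task concrete, here is a minimal, self-contained sketch (all names and numbers hypothetical) for a 1-D double integrator with a position limit: the filter passes the proposed input through only if the resulting state still admits a safe backup (full-brake) maneuver, and otherwise applies the brake itself. Actual predictive safety filters certify this by solving an MPC feasibility problem online rather than with a closed-form stopping rule, but the accept-or-override structure is the same.

```python
# Predictive-safety-filter sketch for a 1-D double integrator
# with position limit p <= p_max.  Values are illustrative.
dt, u_max, p_max = 0.1, 1.0, 10.0

def step(p, v, u):
    """One discrete-time step with saturated input."""
    u = max(-u_max, min(u_max, u))
    return p + dt * v + 0.5 * dt**2 * u, v + dt * u

def admits_backup(p, v):
    """Can a full brake still keep p <= p_max from this state?
    Uses the stopping distance v^2 / (2 u_max) as the certificate."""
    if v <= 0.0:
        return p <= p_max
    return p + v**2 / (2.0 * u_max) <= p_max

def safety_filter(p, v, u_proposed):
    """Accept the proposed input only if its successor state still
    admits the braking backup; otherwise apply the backup input."""
    if admits_backup(*step(p, v, u_proposed)):
        return u_proposed
    return -u_max

# Adversarial "language-model" policy: always accelerate at the limit.
p, v = 0.0, 0.0
for _ in range(500):
    p, v = step(p, v, safety_filter(p, v, u_max))
```

Even under this worst-case proposed input, the filtered closed loop approaches the limit without crossing it, because the braking certificate is preserved whenever the backup is applied. The project's filter would play the same role for the uncertain outputs of the LLM-generated MPC.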
Dr. Carmen Amo Alonso (camoalonso@ethz.ch)
Dr. Andrea Carron (carrona@ethz.ch).