Controllable Music Generation and Editing with AI for Health, Creativity, and Wellbeing
This project explores controllable music generation and editing using cutting-edge AI techniques. Instead of generating entire songs with a single text prompt, we aim to create fine-grained, temporally controlled music where specific aspects (e.g., melody, chords, drum patterns, musical style) can be independently specified, edited, and regenerated.
Such controllable music systems open exciting applications not only in creative industries but also in healthcare and wellbeing, supporting adaptive music therapy, emotional regulation, and accessible creative tools for individuals with disabilities.
Keywords: AI, Music Generation, Health Technology, Symbolic-Audio Control
Recent advances in AI-based music generation have enabled the creation of long, coherent musical pieces from text prompts. However, current systems often lack fine-grained control, limiting their practical use for creative or health-supporting contexts.
This project investigates new methods for locally and globally controlled music generation, combining symbolic inputs (e.g., chords, melodies) and audio-based conditions (e.g., separated stems, rhythmic patterns).
Inspired by recent progress in generative models, you will explore:
- Text-to-music generation with symbolic and audio controls
- Evaluation of generation quality and control adherence
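To make the idea of local (per-step) versus global (whole-prompt) control concrete, the sketch below expands bar-level chord labels and sparse melody events into a time-aligned conditioning grid. All names, the dictionary layout, and the 16-steps-per-bar resolution are illustrative assumptions, not the API of any particular model.

```python
# Minimal sketch of a time-aligned symbolic conditioning grid.
# Resolution and field names are illustrative assumptions.

STEPS_PER_BAR = 16

def make_control_grid(chords, melody, n_bars):
    """Expand bar-level chords and sparse {step: MIDI pitch} melody
    events into one control record per time step, so a generator can
    be conditioned locally rather than only by a single global prompt."""
    grid = []
    for step in range(n_bars * STEPS_PER_BAR):
        bar = step // STEPS_PER_BAR
        grid.append({
            "chord": chords[bar],       # harmonic control, held for the bar
            "pitch": melody.get(step),  # melodic control, sparse events
        })
    return grid

# Example: two bars (C then G) with a short melodic fragment.
chords = ["C", "G"]
melody = {0: 60, 4: 64, 16: 67}  # MIDI pitches at given steps
grid = make_control_grid(chords, melody, n_bars=2)
```

A representation like this lets individual bars or tracks be edited and regenerated independently, which is the kind of fine-grained control the project targets.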
Our goal is to develop personalized, adaptive, and editable music generation systems with potential applications in healthcare (e.g., music therapy, assistive devices) and wellbeing.
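Control adherence, one of the evaluation axes listed above, can be sketched with a toy metric: the fraction of generated notes whose pitch class belongs to the requested chord. The chord templates and note lists here are illustrative assumptions, not a standard benchmark.

```python
# Toy control-adherence metric: how many generated notes fit the
# requested chord? Chord-tone templates are pitch-class sets (0 = C).

CHORD_TONES = {"C": {0, 4, 7}, "G": {7, 11, 2}}  # illustrative templates

def chord_adherence(generated_notes, chord):
    """Fraction of MIDI notes whose pitch class is a chord tone."""
    tones = CHORD_TONES[chord]
    hits = sum(1 for pitch in generated_notes if pitch % 12 in tones)
    return hits / len(generated_notes) if generated_notes else 0.0

# C-E-G plus one out-of-chord F# against a requested C chord.
score = chord_adherence([60, 64, 67, 66], "C")  # 3 of 4 notes fit -> 0.75
```

Real evaluations would pair symbolic checks like this with audio-domain measures (e.g., comparing chroma features of generated audio against the target harmony), but the principle is the same: score how faithfully the output follows each control signal.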