Augmented Performer: Musician in the Loop

Artificial intelligence technologies have enabled new applications in the performing arts, particularly in music systems that generate content or provide accompaniment in real time. In this domain, the Augmented Performer project focuses on real-time human–AI co-creation in live musical performance. We are developing a generative audio AI system that listens and responds to a human pianist, co-composing with them in real time. Our approach keeps the human musician at the center of the creative process while creating conditions for mutual influence between human and AI. This project is supported by Aalto Creatives (https://www.aalto.fi/en/aalto-creatives), XTREME (https://xtremeitu.dk/), MAGICS (https://magics.fi/) and AmplifAI (https://amplifyingthemind.com/).

2026 / Real-Time MIDI Transformer Inference in Pure Data

Bringing Transformer Models into Real-Time Performance

We have developed a real-time MIDI transformer inference system integrated into Pure Data as a custom external. Using ONNX Runtime for cross-platform compatibility, the system processes live MIDI input through a transformer-based generative model, bringing the transformer architecture into live performance contexts. The technical challenge lies not only in achieving real-time inference but in generating responses that remain musically correlated with the pianist's input, creating a dialogue in which the AI's contributions relate coherently to the performance while still enabling emergent creative exchange.
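Feeding a transformer from live MIDI requires turning incoming events into a fixed-length token context at every inference step. As a rough illustration only (the actual external is a compiled Pd object whose tokenizer is not shown here; the REMI-style event scheme, 10 ms time quantization, velocity bin size, and context length below are all assumptions for the sketch), a rolling context buffer might look like:

```python
from collections import deque

CONTEXT_LEN = 256  # assumed fixed context window fed to the transformer


class MidiTokenBuffer:
    """Rolling token context for real-time transformer inference (sketch)."""

    def __init__(self, context_len=CONTEXT_LEN):
        # deque(maxlen=...) silently drops the oldest tokens, keeping
        # the context bounded for constant-latency inference.
        self.tokens = deque(maxlen=context_len)
        self.last_time_ms = None

    def push_note_on(self, pitch, velocity, time_ms):
        # Quantize the gap since the previous event into 10 ms steps,
        # capped so the time-shift vocabulary stays small.
        if self.last_time_ms is not None:
            shift = min((time_ms - self.last_time_ms) // 10, 99)
            self.tokens.append(f"TIME_SHIFT_{shift}")
        self.last_time_ms = time_ms
        self.tokens.append(f"NOTE_ON_{pitch}")
        self.tokens.append(f"VEL_{velocity // 16}")  # 8 velocity bins

    def context(self):
        # Snapshot that would be mapped to token IDs and passed to the
        # model (e.g. an ONNX Runtime session) at each inference step.
        return list(self.tokens)


buf = MidiTokenBuffer()
buf.push_note_on(60, 100, 0)    # middle C
buf.push_note_on(64, 90, 120)   # E, 120 ms later
print(buf.context())
```

In a real-time setting the bounded deque matters more than the exact vocabulary: it keeps per-step inference cost constant regardless of how long the performance runs.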

The project brings together expertise in cognitive neuroscience, music technology, composition and performance, human–AI interaction, and AI system design. The team combines technical implementation, research methods, and artistic practice to develop real-time human–AI co-creation in live musical performance.

Koray Tahiroğlu (University Lecturer, Aalto University School of ARTS) leads the project's development, bringing together artistic and technical perspectives. As founder of the SOPI research group, he works across digital musical instruments and generative audio AI systems. His performance practice connects research and artistic experimentation, with work presented at international venues including Ars Electronica and Sónar.

Mikko Sams (Professor of Cognitive Neuroscience, science director of the MAGICS infrastructure) contributes expertise in cognitive neuroscience, with research currently focused on social interaction and emotions. His background in the experimental study of experience shapes the project's scientific methodology.

Robin Welsch (Assistant Professor of Engineering Psychology, Aalto University Department of Computer Science) contributes expertise in human–AI interaction, perception, and adaptive systems. He leads the project's evaluation design and user studies to understand human–AI interaction from both performer and audience perspectives.

Jukka Nykänen (pianist, composer, arranger, conductor) composed the three pieces for this project and performs them in collaboration with the AI system. A graduate of the Sibelius Academy and a recipient of the Finland Prize, he works across opera, musical theatre, chamber music, and multimedia performance. He brings both compositional expertise and performance experience to the project's exploration of human–AI co-creation.

Mikael Hokkanen (Research Assistant, SOPI Research Group) is a master's student in Machine Learning, Data Science and Artificial Intelligence at Aalto University. He develops the project's model architecture and trains the transformer with alternative datasets to explore different generative behaviors.

Agnes Kloft (Doctoral Student, Aalto University Department of Computer Science) conducts the project's evaluation studies, collecting performance data from rehearsals and concerts and investigating audience responses to the collaborative performances.

Project repository: https://version.aalto.fi/gitlab/sopi/pd-midi-transformer-external/

Jukka Nykänen testing the updated version of the Augmented Performer AI system in SOPI studio.


A Premiere Concert of Human–AI Co-Creation

The premiere concert presents three original compositions by Jukka Nykänen, each written specifically for performance with the Augmented Performer AI system. The event demonstrates the real-time human–AI co-creation process in action and includes research presentations and an open panel discussion exploring the technical and artistic dimensions of the work. The concert takes place on Friday, 22 May 2026 at Marsio Cinema, Otaniemi (doors 13:30, concert 14:00).