The Sound and Physical Interaction (SOPI) research group's main interests centre on the broad areas of Sound and Music Computing (SMC), New Interfaces for Musical Expression (NIME) and AI and Music Creativity (AIMC). SOPI focuses on the emerging role of audio and music technologies in digital musical interactions, including the building, implementation and performance of digital musical instruments, interactive art and audio-visual production. The group is led by Koray Tahiroğlu and has received funding from Business Finland, the Academy of Finland, the Finnish Ministry of Education and Culture’s (MEC) Global Program Pilots for India & USA, Nokia Research Center, TEKES, the Aalto Tenure Committee and the A!OLE Aalto Online Learning network.

CrossModal VAE; Musician in the Loop

coming soon...

projects

Sonic Move

Aalto, VTT and UEF join efforts to develop and apply movement analysis and sonification technologies to facilitate the creative exploration of novel human movement experiences. The project brings together motion tracking systems with expressive and effective procedural audio techniques to create new interactive experiences where audio and tactile signals are generated based on user actions. full project
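
As a simplified illustration of the mapping idea (a minimal Python sketch under assumed parameters, not the project's actual pipeline), a stream of motion-tracking frames can drive the loudness and pitch of a synthesised tone:

import numpy as np

SR = 44100            # audio sample rate (assumed)
FRAME_RATE = 100      # motion-capture frames per second (assumed)

def sonify(accel_frames):
    """Map per-frame movement intensity to the loudness and pitch of a sine tone."""
    mag = np.linalg.norm(accel_frames, axis=1)          # movement intensity
    mag = mag / (mag.max() + 1e-9)                      # normalise to 0..1
    samples_per_frame = SR // FRAME_RATE
    amp = np.repeat(mag, samples_per_frame)             # intensity -> loudness
    freq = np.repeat(220.0 + 440.0 * mag, samples_per_frame)  # intensity -> pitch
    phase = np.cumsum(2 * np.pi * freq / SR)            # integrate frequency
    return (amp * np.sin(phase)).astype(np.float32)

# Two seconds of synthetic "movement" stand in for live tracking data.
t = np.linspace(0, 2, 2 * FRAME_RATE)
frames = np.stack([np.sin(3 * t), np.cos(5 * t), 0.2 * t], axis=1)
audio = sonify(frames)                                  # audio buffer at SR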

AI-terity

AI-terity is a deformable, non-rigid musical instrument that comprises the computational features of a particular AI model, generating relevant audio samples for real-time audio synthesis. It has been our research interest to explore generative AI models applied in the audio domain and integrate them into the development of new musical instruments. This in turn has the potential to bring an alternative synthesis of knowledge about musical instruments, as well as enhancing our ability to focus on the sounding features of new instruments. full project
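
The core loop can be pictured as follows (a hypothetical Python sketch with a toy stand-in generator, not AI-terity's actual network): sensor readings from the deformable surface nudge a latent vector, which the model decodes into the next audio buffer.

import numpy as np

SR, BUF = 44100, 2048                                   # sample rate, buffer size (assumed)
rng = np.random.default_rng(0)

def decode(z, n=BUF):
    """Toy 'generator': latent components weight a bank of harmonic partials."""
    t = np.arange(n) / SR
    freqs = 110.0 * (1 + np.arange(len(z)))             # harmonic series
    return (np.abs(z)[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(0) / len(z)

z = rng.normal(size=8)                                  # current latent position
for step in range(4):                                   # stand-in for an audio callback
    pressure = rng.uniform(-1, 1, size=8)               # stand-in for deformation sensors
    z = 0.95 * z + 0.05 * pressure                      # drift toward the player's gesture
    buffer = decode(z)                                  # next block of samples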

CCREAIM

The main objective of CCREAIM is to enable the next generation of AI models that stimulate creative processes and enhance music practitioners’ creative strength. full project

DMI – I – P

Digital Musical Interactions is an Academy of Finland Research Fellow project (316549) exploring a framework to better understand the principles of emerging technologies and the cultural constraints behind digital musical interactions. full project

InSpace with the Otherness

Step into the installation, grab the magic wand, and experience how your movement changes the music around you. InSpace with the Otherness is a co-located collaboration with a deep learning algorithm. full project

KET – audiovisual art project

KET is an audiovisual performance with Thomas Bjelkeborn and Koray Tahiroğlu as musicians and video artists on stage for a live sound and video experience. full project

events

Symposium: Technoscientific Practices of Music: New Technologies, DMIs and Agents
11.11.2022 | Oodi, Helsinki
more info
Symposium: Socio-Cultural Role of Technology in Digital Musical Interactions
14.11.2019 | Oodi, Helsinki
more info

teaching

AXM-E6003 Composing with New Musical Instruments

This course gives students the opportunity to learn and experiment with advanced sound processing and synthesis techniques to create novel interfaces for new musical practices. Composing with New Musical Instruments invites students to learn how to use sensor technologies, microcontrollers and physical interfaces, and thus to confront the fundamental concepts and technical issues faced in the process of making, building, composing and performing with new musical instruments. see in mycourses

AXM-E6004 Deep Learning of Audio

In the Deep Learning of Audio course, we introduce students to the state of the art in deep learning models and AI methods for sound and music generation. The course provides an overview of recent AI implementations such as Google Magenta's AI Duet, GANSynth and DDSP, and GANSpaceSynth (optionally SampleRNN and RAVE). We provide code templates that integrate functionality from open-source deep learning audio projects into the Pure Data programming environment. see in mycourses
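
One common bridge pattern for such templates (a sketch under assumed names, not the course's actual code) is to run the model in a Python process and stream its parameters to Pure Data over OSC, where an OSC-parsing object maps them onto DSP parameters. The port number and the /latent address here are illustrative:

# Requires the python-osc package: pip install python-osc
import random
import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)    # Pd patch assumed to listen on port 9000

for _ in range(200):
    # Stand-in for a latent vector sampled from a trained model.
    z = [random.uniform(-1.0, 1.0) for _ in range(4)]
    client.send_message("/latent", z)          # hypothetical OSC address
    time.sleep(0.05)                           # ~20 parameter updates per second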

AXM-E6009 Procedural Audio

Procedural Audio is a design philosophy that places process at the heart of understanding everyday sonic interactions. Unlike traditional sample-playback models, it views sound as something generated dynamically in real time. see in mycourses
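
A minimal sketch in that spirit (illustrative Python, not course material): a wind-like texture computed on the fly from white noise and a slowly moving filter, rather than played back from a recording.

import numpy as np

SR = 44100
n = int(SR * 3.0)                                # three seconds of audio
noise = np.random.default_rng(1).uniform(-1, 1, n)

# A slowly varying "gustiness" curve modulates the one-pole low-pass filter.
gust = 0.5 + 0.5 * np.sin(2 * np.pi * 0.3 * np.arange(n) / SR)
alpha = 0.01 + 0.2 * gust                        # higher alpha -> brighter wind

out = np.zeros(n)
y = 0.0
for i in range(n):                               # y[i] = y[i-1] + alpha*(x[i] - y[i-1])
    y += alpha[i] * (noise[i] - y)
    out[i] = y * gust[i]                         # gusts also shape the loudness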

Previous Offering - ELEC-E5531 Speech and Language Processing

Advances in voice recognition and digital technologies offer novel forms of speech and auditory interaction for engaging user services and immersive experiences in the home, school, workplace, museums and everyday public spaces. see in mycourses

Previous Offering - DOM-E5043 Physical Interaction Design

The Physical Interaction Design course aims to explore and investigate the tools, concepts and practices for planning and building new interactions with digital environments. see in mycourses

Previous Offering - DOM-E5067 Sound and Music Interaction

This course gives students the opportunity to learn how to process and organise sounds in digital environments through different sonic experimentation strategies.

publications

“Musical intra-actions with digital musical instruments”, 2024

Tahiroğlu, K. Journal of New Music Research 53 (1-2), 126-138

[open access] To cite this article: Tahiroğlu, K. (2024). Musical intra-actions with digital musical instruments. Journal of New Music Research, 53(1-2), 126-138. DOI: 10.1080/09298215.2024.2442350
“Latent Spaces as Platforms for Sonic Creativity”, 2024

Tahiroğlu, K., Wyse, L. International Conference on Computational Creativity, ICCC’24

[open access] To cite this paper: Tahiroğlu, K., & Wyse, L. (2024). Latent Spaces as Platforms for Sonic Creativity. In Proceedings of the 16th International Conference on Computational Creativity, ICCC’24.
“Introduction to the special issue of technoscientific practices of music: critical implications of new technologies”, 2024

Tahiroğlu, K. & Wyse, L. Journal of New Music Research 53 (1-2), 1-4

[open access] To cite this article: Tahiroğlu, K., & Wyse, L. (2024). Introduction to the special issue of technoscientific practices of music: critical implications of new technologies. Journal of New Music Research, 53(1-2), 1-4. DOI: 10.1080/09298215.2025.2452072
“Dance Movement and Sound Cross-Correlation; Synthesis Parameters on the Micro and Meso Musical Time Scales”, 2024

Koruga, A. & Tahiroğlu, K. Proceedings of the 19th International Audio Mostly Conference: Explorations in Sonic Cultures

[open access] To cite this paper: Koruga, A., & Tahiroğlu, K. (2024, September). Dance Movement and Sound Cross-Correlation; Synthesis Parameters on the Micro and Meso Musical Time Scales. In Proceedings of the 19th International Audio Mostly Conference: Explorations in Sonic Cultures (pp. 445-456).
“Deep learning with audio: An explorative syllabus for music composition and production”, 2023

Tahiroğlu, K., Wang, S., Tampu, E., & Lin, J. The International Conference on AI and Musical Creativity 2023.

[open access] To cite this paper: Tahiroğlu, K., Wang, S., Tampu, E., & Lin, J. (2023). Deep learning with audio: An explorative syllabus for music composition and production. In Proceedings of the International Conference on AI and Musical Creativity.
“Augmented Granular Synthesis Method for GAN Latent Space with Redundancy Parameter”, 2022

Tahiroğlu, K., & Kastemaa, M. International Conference on AI Music Creativity 122, 196-210.

[open access] To cite this paper: Tahiroğlu, K., & Kastemaa, M. (2022, September). Augmented Granular Synthesis Method for GAN Latent Space with Redundancy Parameter. In Proceedings of the International Conference on AI Music Creativity.

people

Koray Tahiroğlu
Research Group Leader
firstname.lastname@aalto.fi
Mikael Hokkonen
Associated Researcher
firstname.lastname@aalto.fi
Maël Archenault
Associated Researcher
firstname.lastname@aalto.fi
Eduard Tampu
Associated Researcher
firstname.lastname@aalto.fi
Ariana Marta
Associated Researcher
firstname.lastname@aalto.fi