I’m currently evolving the sound idea born at ZKM last October (2018). There I sonified a real-time EEG dataset taken from members of the audience, as part of an interaction project by the MuArts team.
The instrument was conceived and beta-tested to create music from simple raw data coming from an EEG interface.
The project’s aim was to help us explore the deepest connections with ourselves: a link between what is outside and what, hidden within us, hardly emerges.
Music in which everything becomes a shape in its various expressions, beyond space and time, even beyond our sight.
The audio patch was conceived as a breathing creature that could assume different behaviours and react to various situations, receiving external parameters as input from the real-time EEG dataset.
Thanks to the programming of a large number of separate and independent instruments, the tonal possibilities and the reactions of the sound environment to external inputs occurred naturally. The individual sounds summed with different dynamics, recreating complex tones in an auto-mixing process.
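As a rough illustration of that auto-mixing idea, here is a minimal sketch in Python: independent sine voices are summed and the mix is normalized by their total amplitude. The function names and voice values are mine, not the project's, and the real patch is of course far richer than this.

```python
import math

def voice(freq, t, amp):
    """One independent instrument: a sine voice with its own amplitude."""
    return amp * math.sin(2 * math.pi * freq * t)

def auto_mix(voices, t):
    """Sum independent voices, then normalize by their total amplitude
    so the mix stays bounded whatever each voice is doing."""
    total_amp = sum(amp for _, amp in voices) or 1.0
    return sum(voice(f, t, a) for f, a in voices) / total_amp

# three voices with different dynamics
voices = [(220.0, 0.8), (330.0, 0.5), (440.0, 0.2)]
sample = auto_mix(voices, t=0.001)
```

The normalization step is what keeps the sum of many freely-behaving voices from clipping, which is the essence of an auto-mixing process.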
The result was a complex, evolving digital orchestra of timbres, sometimes full of emptiness, sometimes dense with thick illusions.
In this sense, the exo-system (the whole technical software/hardware/programming apparatus that allows the system to work properly) disappears behind what could be called the eso-system, or better, the natural and intimate experience of evolving sound shapes.
I keep working on this concept, developing the synthesis component and tidying up some internal structures to make them more efficient.
Not having an EEG interface at hand, I’m using some random and non-random generators to feed the sound ecosystem, letting it produce sounds and create atmospheres.
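A generator of that kind could blend a deterministic (non-random) part with a random one. Here is a minimal sketch, assuming a slow sine LFO crossfaded with a bounded random walk as a stand-in for the missing EEG input; all names and constants are hypothetical.

```python
import math
import random

def control_stream(steps, seed=1, drift=0.02):
    """Blend a non-random slow LFO with a bounded random walk,
    producing control values in the range 0..1."""
    rng = random.Random(seed)
    value = 0.5
    out = []
    for i in range(steps):
        lfo = 0.5 + 0.5 * math.sin(2 * math.pi * i / 64)  # deterministic part
        value += rng.uniform(-drift, drift)               # random part
        value = min(1.0, max(0.0, value))                 # keep it bounded
        out.append(0.5 * lfo + 0.5 * value)               # blended control value
    return out

stream = control_stream(128)
```

Seeding the random part keeps a session reproducible while still feeling alive; dropping the seed makes every run of the ecosystem unique.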
The audio component of the experience/installation was designed to be as modular as possible, and I’ve kept this concept as one of the first rules, to provide different timbre identities within the same sound environment.
There are different polyphonic instruments (up to 20 voices per instrument) based on various synthesis methods: from simple sine waves to waveshaping, FM, subtractive synthesis, tuned banks of resonators and tropical synthesis.
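To give a flavour of one of those methods, here is a minimal two-operator FM voice sketched in Python; the function names and parameter values (carrier, ratio, modulation index) are illustrative assumptions, not the project's actual code.

```python
import math

def fm_sample(t, carrier_hz, ratio, index):
    """Two-operator FM: the modulator runs at carrier_hz * ratio, and
    the modulation index scales how far it bends the carrier's phase."""
    mod = math.sin(2 * math.pi * carrier_hz * ratio * t)
    return math.sin(2 * math.pi * carrier_hz * t + index * mod)

def render(carrier_hz=110.0, ratio=2.0, index=3.0, sr=44100, n=256):
    """Render n samples of the FM voice at sample rate sr."""
    return [fm_sample(i / sr, carrier_hz, ratio, index) for i in range(n)]

buf = render()
```

Even this tiny pair of operators produces rich, inharmonic-to-harmonic spectra as the ratio and index move, which is why FM sits comfortably next to the other methods in a modular environment.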
The responsiveness of the audio depends directly on the link between the sound ecosystem and a handful of parameters that the system can generate by itself or receive from the outside (MIDI/OSC).
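That link amounts to mapping a normalized control value, wherever it comes from, onto a synthesis parameter range. A minimal sketch, assuming a hypothetical `map_control` helper and a MIDI CC scaled from its 0-127 range:

```python
def map_control(value, lo, hi, curve=1.0):
    """Map a normalized 0..1 control (from an internal generator or an
    incoming MIDI/OSC message) onto a parameter range; curve > 1 gives
    an exponential-like response for perceptual scaling."""
    v = min(1.0, max(0.0, value)) ** curve
    return lo + v * (hi - lo)

# e.g. a MIDI CC (0-127) normalized to 0..1, then mapped to a filter cutoff in Hz
cc = 64
cutoff = map_control(cc / 127.0, 80.0, 8000.0, curve=2.0)
```

Keeping every external and internal source normalized to 0..1 at the boundary means the ecosystem never needs to know whether a value came from an EEG channel, a generator, or a MIDI/OSC message.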
The system, still completely a work in progress on both the technical and creative sides, aims toward an increasing independence from human choices, patterns and direct control, taking on the real meaning of a living (sound) creature.
Below is a random example of its tonal possibilities, focusing on long envelopes and multi-layered pad sounds.
Tests still going_