Over the weekend I took part in a two-day event at Goldsmiths’ College, London. Hosted by Tom Mudd, the event was:
“23 artists exploring the acoustics of the Great Hall via an unusual 8-speaker system.
This event is continuous. The audience are free to come in and out, sit in the space, walk around the space, eat, drink and listen.”
This was my first performance for a while, and I wanted to try something different from the electronics + percussion sets I had been performing previously (and not just because I didn’t want to lug my kit across London… though that was a large factor).
Keeping in mind my interest in reduction and simplicity, I decided to build a patch in Max (my first time using version 7!) with 8 oscillators, one routed to each speaker. Each oscillator allowed me to:
- Generate a sine wave
- Generate noise
- Change the frequency
- Change the amplitude
- Change from a continuous wave into a pulse
- Change the rate of each pulse
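For anyone curious what that per-voice behaviour amounts to outside of Max, here is a minimal Python sketch of one such voice. This is my own approximation, not the actual patch – the `Voice` class and its parameter names are mine – but it covers the same controls: sine or noise source, frequency, amplitude, and an optional pulse gate with its own rate.

```python
import math
import random

SR = 44100  # sample rate in Hz

class Voice:
    """One of eight per-speaker voices: a sine or noise source,
    with frequency, amplitude, and an optional pulse gate."""

    def __init__(self, freq=220.0, amp=0.5, use_noise=False,
                 pulsed=False, pulse_rate=2.0):
        self.freq = freq              # oscillator frequency (Hz)
        self.amp = amp                # output amplitude (0..1)
        self.use_noise = use_noise    # white noise instead of a sine
        self.pulsed = pulsed          # pulse train instead of a continuous wave
        self.pulse_rate = pulse_rate  # pulses per second

    def sample(self, n):
        """Return sample number n of this voice's output."""
        t = n / SR
        src = (random.uniform(-1.0, 1.0) if self.use_noise
               else math.sin(2 * math.pi * self.freq * t))
        if self.pulsed:
            # square gate: on for the first half of each pulse period
            gate = 1.0 if (t * self.pulse_rate) % 1.0 < 0.5 else 0.0
            src *= gate
        return self.amp * src

# Eight voices, one per speaker channel (frequencies here are arbitrary)
voices = [Voice(freq=110.0 * (i + 1)) for i in range(8)]
frame = [v.sample(0) for v in voices]  # one multichannel sample frame
```

In the performance each of these parameters was on a fader or button, changed live rather than scripted.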
Part of the reason I decided to use only sine waves and noise is that we have been learning to use these same functions to control the movement of on-screen objects when creating visuals in C++ and Processing. This is an avenue I’d like to follow, so potentially this Max patch could become part of a bigger audiovisual project where visuals and sound are generated or controlled simultaneously using sines and noise. But more on that later…
I had been wanting to experiment with the interaction between these simple, core elements, and thought that being able to do this in a great-sounding hall with 8 speakers was an opportunity I probably shouldn’t miss. The results were unpredictable (to me at least), which made it a lot of fun to perform: I find it exciting to be kept on my toes by the software I’m using, only half-knowing what will happen with the next change of a fader or press of a button. This means I constantly have to be ready to react and adapt to the situation; though conversely, it also means that things are more likely to go wrong (and they do).
One of the main techniques I used throughout the performance was setting the frequencies and pulse rates of multiple oscillators to very similar – but not equal – values. Shifting the values within even a small range produced a lot of complex interaction. And, in theory, because of the wavelengths involved and each listener’s different distances to the speakers, every person in the room should have experienced a slightly different sound.
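A rough illustration of why near-equal frequencies interact like this: the sum of two equal-amplitude sines is, by a standard trig identity, a tone at their average frequency whose loudness swells and fades at the difference frequency – the familiar “beating”. The sketch below checks that identity numerically; the specific frequencies are my own examples, not values from the performance.

```python
import math

f1, f2 = 220.0, 220.5   # Hz; illustrative values only

def two_sines(t):
    """Direct sum of the two oscillators at time t (seconds)."""
    return math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)

def modulated(t):
    """Equivalent product form: a carrier at the average frequency,
    amplitude-modulated by a slow cosine at half the difference."""
    return (2 * math.cos(math.pi * (f1 - f2) * t)
              * math.sin(math.pi * (f1 + f2) * t))

beat_freq = abs(f1 - f2)   # 0.5 Hz: one loudness swell every two seconds
```

With the oscillators spread across eight speakers rather than summed in one, each listening position combines the waves with different phase offsets, which is why the result should differ slightly from seat to seat.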
Here’s an example of some low-frequency oscillation affecting other frequencies (big thank you to Sabina Ahn for taking the video!). Good headphones are recommended to hear the highest and lowest frequencies:
I also tried to couple contrasting elements: namely very high frequencies and very low frequencies, leaving a large space of the frequency spectrum unoccupied. Changing the balance of high/low (or even removing one side completely) and hearing how it would affect our perception of the remaining sound was one of my goals here.
I think I’d like to keep evolving this very mobile performance setup and, as I mentioned before, see about adapting it for audiovisual use too.
Thanks again to Tom Mudd and everyone involved for setting this up, and to the other performers. It was very interesting to hear how everybody approached the 8-channel setup in their own individual way!