
Performance with New Technology: Developing Final Performance


For my final performance, my main aims are:

  1. To continue refining a system where I can step away from the computer and focus on playing an instrument.
  2. To continue reducing my setup and exploring what sounds can be drawn from one/few sound sources.

Aim 1: Playing an instrument

To address aim 1, I have focused my research largely on parameter mapping. I would like to develop a system that lets me produce sound and manipulate it concurrently, rather than having to break these actions down into separate stages.

A criticism of some past performances was that these two stages were too separate – I would start playing with the sound source, only to have to stop and become absorbed in manipulating the sounds with knobs/faders – so the performances felt disjointed.

I have experimented with using sequences to switch between effects/parameters (e.g. when x transients have been detected [where x is chosen at random from a predetermined range], generate a number 1, 2, or 3, and toggle effect 1, 2, or 3 on/off). However, I felt this gave too much control over to the computer: I was in a more 'reactive' state, having to adapt to changes made by the machine. The method does have some use, since it forces the performer to listen and improvise within a changing performance environment, but I felt the need for more direct control.
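The transient-counting logic above can be sketched as follows. This is a minimal illustration, not my actual patch: the class name, the count range, and the idea that transient-detection events arrive from elsewhere (e.g. an audio analysis patch) are all assumptions for the example.

```python
import random

class EffectSwitcher:
    """Toggle one of several effects after a random number of transients.

    Illustrative sketch: transient detection itself is assumed to happen
    elsewhere, with each detection calling on_transient().
    """

    def __init__(self, count_range=(4, 12), n_effects=3):
        self.count_range = count_range
        self.n_effects = n_effects
        self.effect_on = [False] * n_effects
        self._arm()

    def _arm(self):
        # x: number of transients to wait for, chosen at random from the range
        self.remaining = random.randint(*self.count_range)

    def on_transient(self):
        """Call once per detected transient; returns the toggled effect index, or None."""
        self.remaining -= 1
        if self.remaining > 0:
            return None
        idx = random.randrange(self.n_effects)  # pick effect 1, 2, or 3 (0-indexed)
        self.effect_on[idx] = not self.effect_on[idx]
        self._arm()
        return idx
```

Because both the wait count and the chosen effect are random, the performer cannot predict which effect will change next – exactly the 'reactive' quality described above.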

Using FFT analysis to derive control parameters [1] is something I would like to explore further, as I believe it could allow a more direct degree of control, tied to the playing techniques used during performance.

However, my main focus at the moment is using interpolation spaces [2] to travel smoothly between different sets of parameter values. This way I can transform low-dimensional input (currently two 100mm SoftPot membrane potentiometers adhered to drum sticks, controlling the x/y coordinates of a position in the interpolation space) into high-dimensional output (simultaneous control of multiple parameters). I am currently researching ways of implementing this method, as well as building the physical control system using an Arduino.
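The low-to-high-dimensional mapping can be sketched with inverse-distance weighting, one common scheme for such spaces (Momeni and Henry [2] discuss several alternatives; the preset positions and parameter names below are invented for the example).

```python
def interpolate_presets(x, y, presets, power=2.0):
    """Blend parameter sets by inverse-distance weighting in a 2-D space.

    `presets` maps (px, py) positions to dicts of parameter values (all
    presets are assumed to share the same parameter names). The (x, y)
    input – here imagined as the two SoftPot readings – becomes a
    weighted mix of every preset, so one gesture moves many parameters.
    """
    weights = {}
    for pos, params in presets.items():
        d2 = (x - pos[0]) ** 2 + (y - pos[1]) ** 2
        if d2 == 0:
            return dict(params)  # exactly on a preset: return it unchanged
        weights[pos] = 1.0 / d2 ** (power / 2)  # closer presets weigh more
    total = sum(weights.values())
    names = next(iter(presets.values())).keys()
    return {name: sum(w * presets[pos][name] for pos, w in weights.items()) / total
            for name in names}
```

Moving the stick position halfway between two presets yields the average of their parameter values, with every parameter updated in one gesture.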

Aim 2: One/few sound sources

I have reduced my previous setup of snare drum + cymbal + assorted percussion to just a single snare drum. I am currently experimenting to see whether I can create an engaging performance with just this one sound source, which would be ideal.

Recently I have been using buffers and granulators to store and manipulate sounds (as seen in the Vine at the top of this post), the parameters of which will eventually be controlled using the sensors/interpolation spaces system described above.
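A pared-down version of the buffer + granulator idea looks like this (the grain length, count, and envelope are placeholder values – in the eventual system these would come from the sensor mapping, and the audio would live in a real-time buffer rather than a Python list):

```python
import random

def granulate(buffer, grain_len=256, n_grains=50, out_len=2048, seed=None):
    """Tiny granular-playback sketch over a recorded buffer.

    Grains are copied from random positions in `buffer`, shaped with a
    triangular envelope, and overlap-added at random output positions.
    """
    rng = random.Random(seed)
    out = [0.0] * out_len
    for _ in range(n_grains):
        src = rng.randrange(0, len(buffer) - grain_len)  # where to read a grain
        dst = rng.randrange(0, out_len - grain_len)      # where to place it
        for i in range(grain_len):
            # triangular envelope fades each grain in and out, avoiding clicks
            env = 1.0 - abs(2.0 * i / (grain_len - 1) - 1.0)
            out[dst + i] += buffer[src + i] * env
    return out
```

Mapping the interpolation-space output onto parameters like grain length and density is exactly the kind of simultaneous multi-parameter control described under aim 1.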

Initially I envisaged the Arduino being mounted inside the drum itself, with wires to the sensors coming out of the small hole in the side of the drum. It would run off a battery (or a mains supply, if the cable could also fit), sending its data to the computer wirelessly. However, the cost of such a system has proven prohibitive, meaning the Arduino will have to be attached to the computer via USB. I decided that removing the snares and resonant head of the drum would have to suffice.

Yet while experimenting today (with the drum intact) I found a lot of use in playing the snares with my fingers, drawing out some nice higher frequencies, so I'd like to keep them on the drum. I could either remove just the resonant head and see whether the snares still work (they may in fact work better, since I won't accidentally hit the resonant head), or make the Arduino system wearable and a separate entity from the drum entirely. The latter would mean the drum sticks/sensors could be used on other instruments/materials too. Both solutions have their benefits – maybe I should combine the two.

[1] Young, M. and Lexer, S. 'FFT Analysis as a Creative Tool in Live Performance'. London, 2003.

[2] Momeni, A. and Henry, C. 'Dynamic Independent Mapping Layers for Concurrent Control of Audio and Video Synthesis', Computer Music Journal, Vol. 30, No. 1, 2006, pp. 49–66.

