Saturday, November 23, 2013
Art, Science, Neither or Both?
If you are one of the very few who follow my blogging or music, you probably know that one of my favorite tools is Noatikl, which is essentially a super-sequencer or music composition tool.
Recently I received some nice feedback on a few pieces about my "playing" - the performance aspects of generative pieces - and that got me thinking a bit about how to take that. All of the pieces I put together and release are usually 100% MIDI, or close to it, which means I am using computer-generated sound in one form or another.
If my pieces are synthesizer based, I am using software synths as plugins on the iPad or on my desktop. If my pieces use "traditional" instruments, I am resorting to samples played on either a keyboard or other creative MIDI input devices, using Logic or Kontakt on the desktop or any number of samplers/romplers on the iPad.
Some of my pieces are performed track by track with MIDI keyboards and then edited (for mistakes), tweaked for sound and/or timing issues and then combined. This is a bit closer to traditional performing in a studio environment. I NEVER perform anything live.
Other pieces, such as those from Noatikl, are more akin to composing. I program each voice with rhythmic patterns, keys to use, rests, probabilities, instructions for harmony, etc., and more or less "turn them loose". This is certainly not performing in the traditional sense, but it is setting music into motion and then tweaking the program until it sounds "done" to me.
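To make that "set it in motion" idea concrete, here is a toy sketch of a generative voice. This is only loosely in the spirit of the process - the scale, rest chance, and duration choices are my own illustrative assumptions, not Noatikl's actual model:

```python
import random

# MIDI pitches for C natural minor - an arbitrary, illustrative scale.
C_MINOR = [60, 62, 63, 65, 67, 68, 70]

def generate_voice(scale, bars=2, beats_per_bar=4, rest_chance=0.2, seed=None):
    """Emit (pitch, beats) events until the bar count is filled.

    A pitch of None represents a rest. Durations are drawn from a small
    weighted pool (quarter notes favored), clipped to fit the final bar.
    """
    rng = random.Random(seed)
    events, remaining = [], bars * beats_per_bar
    while remaining > 0:
        dur = min(rng.choice([0.5, 1, 1, 2]), remaining)
        pitch = None if rng.random() < rest_chance else rng.choice(scale)
        events.append((pitch, dur))
        remaining -= dur
    return events
```

Run it twice with different seeds and you get two different "performances" of the same instructions - which is more or less the point of working this way.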
One aspect very close to performing is when I use the EWI (Electronic Wind Instrument) in my pieces. This device requires fingering the keys and blowing into the mouthpiece, and it translates all of this into MIDI events that are passed into a (usually) sample-based instrument. This can be a flute, trumpet, sax or even a synthesizer - all modulated and controlled with my breath. Despite the "disconnect" from an analog wind instrument, it acts and reacts almost exactly the same.
So with all that, is this just a shortcut to music? Is this just winding up a music box or turning on a player piano? I think, with all the setup and parameters, it's more than that - but it's not quite live either, even if done in one take.
I think music has evolved further and further away from direct creation over the years and that computers are just one more step along the way.
From the initial singing or chanting, we have evolved more complicated ways of making sounds throughout history: first with pipes and blowing, or percussion with sticks, then strings or harps. These evolved into mechanical versions - harpsichords, pianos. Is the musician still performing when "just" pushing keys that turn levers that make hammers strike strings? What happens when it is electronic, as in an organ? Or sample-based, as it started with rotating drums or tapes?
Overall I think studio work is a mix of composing and performance and the tools used don't really define that. So when my beautiful violin part is merely a mutating rhythm based on:
<100 R 30 -30 60 60>
it is still in some way a musical composition. I get to play "George Martin" to the performers in these cases and hopefully come up with something palatable!
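For illustration only, here is one way a weighted rule like that could drive a mutating rhythm. Fair warning: reading the first number as an overall weight and the rest as durations (with negatives meaning rests) is my own guess for this sketch, not Noatikl's documented rule format:

```python
import random

def mutate_rhythm(durations, steps, seed=None):
    """Build a rhythm by repeatedly picking from a pool of durations.

    Negative durations are treated as rests (an assumed convention for
    this sketch). Returns a list of ("note" | "rest", length) pairs.
    """
    rng = random.Random(seed)
    pattern = []
    for _ in range(steps):
        d = rng.choice(durations)
        kind = "rest" if d < 0 else "note"
        pattern.append((kind, abs(d)))
    return pattern

# The duration values from the example rule, units left unspecified.
pattern = mutate_rhythm([30, -30, 60, 60], steps=8, seed=1)
```

Each run with a new seed yields a different but related rhythm - the "mutating" part - while the pool of values keeps it recognizably the same figure.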
Here are a few pieces "generated" more or less in that fashion - one from the iPad and the other from the desktop: