Saturday, November 23, 2013

Art, Science, Neither or Both?

If you are one of the very few who follow my blogging or music, you probably know that one of my favorite tools is Noatikl, which is essentially a super-sequencer, or music composition tool.

Recently I received some nice feedback on a few pieces about my "playing," or the performance aspects of generative pieces, and that got me thinking a bit about how to take that. All of the pieces I put together and release are usually 100% MIDI, or close to it, which means I am using computer-generated sound in one form or another.

If my pieces are synthesizer based, I am using software synths as plugins on the iPad or on my desktop. If my pieces use "traditional" instruments, I am resorting to samples played from either a keyboard or other creative MIDI input devices, using Logic or Kontakt on the desktop or any number of samplers/ROMplers on the iPad.

Some of my pieces are performed track by track with MIDI keyboards, then edited (for mistakes), tweaked for sound and/or timing issues, and combined. This is a bit closer to traditional performing in a studio environment. I NEVER perform anything live.

Other pieces, such as those from Noatikl, are more akin to composing. I program each voice with rhythmic patterns, keys to use, rests, probabilities, instructions for harmony, etc., and more or less "turn them loose". This is certainly not performing in the traditional sense, but it is setting music into motion and then tweaking the program until it sounds "done" to me.

One aspect very close to performing is when I use the EWI (Electronic Wind Instrument) in my pieces. The device requires fingering the keys and blowing into the mouthpiece, and it translates all of this into MIDI events that are passed into a (usually) sample-based instrument. This can be a flute, trumpet, sax or even a synthesizer - all modulated and controlled with my breath. Despite the "disconnect" from an analog wind instrument, it reacts and responds almost exactly the same.
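To give a feel for what that translation looks like under the hood, here is a minimal sketch of mapping breath pressure onto MIDI breath-controller (CC #2) messages, which is the standard controller a wind synth's breath sensor drives. The normalized pressure range and linear curve here are my own assumptions for illustration, not AKAI's actual firmware behavior:

```python
# Illustrative sketch: turning breath pressure into MIDI Control Change
# messages for the breath controller (CC #2), roughly what an EWI does.
# The 0.0-1.0 pressure range and linear scaling are assumptions.

def breath_to_cc(pressure, max_pressure=1.0):
    """Scale a normalized breath pressure (0.0-1.0) to a MIDI value (0-127)."""
    pressure = max(0.0, min(pressure, max_pressure))
    return round(pressure / max_pressure * 127)

def breath_message(pressure, channel=0):
    """Build a 3-byte MIDI Control Change message for breath control (CC 2)."""
    status = 0xB0 | (channel & 0x0F)   # 0xB0 = Control Change status byte
    return bytes([status, 2, breath_to_cc(pressure)])

# A swelling breath ramps the controller up; the receiving sampler or synth
# typically maps CC 2 to volume and/or timbre.
for p in (0.0, 0.5, 1.0):
    print(breath_message(p).hex())
```

The receiving instrument decides what CC 2 means, which is why the same breath can modulate a flute sample one day and a synth filter the next.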

So with all that, is this just a shortcut to music? Is this just winding up a music box or turning on a player piano? I think with all the setup and parameters, it's more than that, but it's not quite live either - even if done in one take.

I think music has evolved further and further away from direct creation over the years and that computers are just one more step along the way.

From the initial singing or chanting, we have evolved more complicated ways of making sounds throughout history: first pipes and blowing, or percussion with sticks, then strings or harps. These evolved into mechanical versions - harpsichords, pianos. Is the musician still performing when "just" pushing keys that turn levers that make hammers strike strings? What happens when it is electronic, as in an organ? Or sample based, as it began with rotating drums or tapes?

Overall I think studio work is a mix of composing and performance, and the tools used don't really define that. So when my beautiful violin part is merely a mutating rhythm based on:

<100 R 30 -30 60 60>

it is still in some way a musical composition. I get to play "George Martin" to the performers in these cases and hopefully come up with something palatable!

Here are a few pieces "generated" more or less in that fashion - one from the iPad and the other from the desktop:

Monday, November 18, 2013

Throwing hardware at it!

I tend to keep to a policy of skipping each generation of hardware, and since I dutifully skipped the 4th-generation iPad, I broke down and bought an iPad Air. There was a $200 buyback that I took advantage of for my ancient and unused first-generation iPad as well, so it wasn't quite as much of a wallet shock. I went with the 64GB model because I do store tons of music samples on my iPad and the size does matter! I opted for the full-size model instead of the Mini because, again, with synth apps, the size matters.

Early reports have been very good regarding music making on the Air, and I can confirm that most of my performance concerns about CPU have been addressed. If you follow my tutorials or postings on Noatikl, you might remember that I used mainly Sampletank due to its ability to provide 4 MIDI channels without too much CPU overhead, provided you use a light DAW along with Audiobus.

So, with the Air, I decided to try a piece with a heavy DAW and with 3 concurrent synths recording. The heavy DAW is my favorite, Auria, and the synths are the Dxi synth, Thor and Alchemy Mobile. I drove all 3 synths with Noatikl, which would usually bring my 3rd-gen iPad to its knees. Without a hitch, I was able to record all three concurrently into Auria, which is a very heavy DAW by itself.

Another recent addition to music making on the iPad is inter-app audio, as I blogged about before, and I added some additional tracks using the Arturia Oberheim SEM synth and my old standby Sampletank for the violin.

Throughout the piece, Auria was responsive and all the tracks recorded without issue. I think I can now add many more concurrent tracks with Noatikl, and Auria is now a go-to DAW for me. I'll be posting more as I experiment further. Thanks for reading!

Sunday, November 3, 2013

More DAW-like than ever - iOS7, Auria and Garageband

Arturia Mini with IAA transport shown

With the iPad's iOS7, one of the more significant additions is inter-app audio. We've had Audiobus for quite a while now, but inter-app audio makes audio sharing more integral to the operating system (Apple is allowed to make use of all the internal routines - unlike 3rd parties - which gives them a bit of an unfair advantage, but I digress).

At this time only a handful of apps support IAA, but more are adding it every day, and it will become increasingly common in the coming months. IAA makes synths on the iPad act much more like VST or AU plugins do on the desktop. You can easily plug compatible synths into any DAW that supports it and record right from the synth into the DAW without Audiobus routing or the need for other apps running (this means a lot on the limited memory/CPU of the iPad!).

Two DAWs that already support IAA are Auria and Apple's Garageband. In the case of Auria, you simply plug in the synth like you would any of its insert effects. For Garageband, you select an IAA track type directly. Both work reasonably well at this point, and not too surprisingly, there are occasional bugs in both. It will probably take a few patches/releases to get things completely stable, but IAA is already very usable.

Auria is becoming my favorite DAW lately due to its fantastic automation and effects buses, but it is a bit pricey. One nice development is that if you have any of the Sugarbyte effects (Turnado, Wow) for the iPad, you don't have to pay twice to use them in Auria. It recognizes them and lets you use them as native plugins - and Auria native plugins do work better than IAA, provided you are using Auria.

Garageband has many great additions in the iOS7 release - the track limit is up to 16 on older iPads and 32 on the brand new ones - I think there is a purchase in my not-too-distant future. Another more hidden but significant feature in Garageband for iPad is that, for the first time, you can easily transfer lossless mixdowns out of Garageband. Garageband used to be the "roach motel" of DAWs - you could get sounds in but couldn't get them out. Now you can mix down and select "Open with" to open the mixed-down AIFF in another app. Not all apps support "Open with", but one significant one that does is AudioShare, which in turn lets you copy/paste anywhere you wish - so for the first time, Garageband plays well with others.

The piece below was created in Auria with only inter-app audio synths. I used the Arturia Oberheim SEM synth for the arpeggios, Waldorf's Nave for some of the leads and the Arturia Mini for the bass. There were a few crashes, but the entire thing was driven from within Auria, then mixed, mastered and posted. I think this will be a huge development for iPad music moving forward!

Just as an aside, the Arturia synths are available on desktop for roughly $100 each - on the iPad they are $9.99 and sound extremely close to the desktop versions!