After all the hype, demos and finally a release, Audiobus is clearly one of the most influential iPad music-making apps of 2012. There are plenty of in-depth reviews, previews and demos out there already, so I finally got around to making my own piece with it and will let you know my impressions. First and foremost though, here it is:
This particular piece was done almost entirely in Audiobus, with live recording into Harmonic Dog's MultiTrack DAW. The source synths included Animoog, Sunrizer, ReBirth and the Korg iMS-20.
The only non-Audiobus parts were the vocal effects, which used audio copy/paste: they were captured from freesound.org via the iPad GoodReader app and processed in Hokusai before being pasted into MultiTrack DAW.
Audiobus finally allows iPad users to put together a sort of best-of-breed DAW. Rather than relying on sandboxed all-in-one solutions limited to one vendor, live audio routing across apps lets us use any compliant synth, any compliant effect and any compliant target. Note the word "compliant"! For apps to work with Audiobus, they need to be updated with code to support the standard. The great news is that many apps have already done that. The not-so-great news is that not every app is there yet.
Already, tons of synth apps support recording their audio into target apps. On the target side, however, the choices are still a bit limited. I suspect it is harder to be a "target" than a source, so we will see this grow over time.
As described elsewhere in greater detail, Audiobus supports "sources", "effects processors" and "targets" (apologies if I have the terms wrong). Many apps already fill the "source" and "effects processor" roles, but for serious recording you need DAWs to support the target side.
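To make the three roles concrete, here is a toy sketch of how such a chain conceptually fits together. This is purely illustrative: the real Audiobus SDK is an Objective-C library operating on live Core Audio buffers, and every function name below is my own invention.

```python
# A toy model of an Audiobus-style chain: audio flows from a source
# app, through zero or more effect apps, into a target app.

def make_gain_effect(gain):
    """Return an 'effect' that scales every sample by `gain`."""
    return lambda samples: [s * gain for s in samples]

def run_chain(source, effects, target):
    """Pull one buffer from `source`, pass it through each effect in
    order, then hand the result to `target` (e.g. a recording DAW)."""
    buffer = source()
    for effect in effects:
        buffer = effect(buffer)
    return target(buffer)

source = lambda: [2, 4, 6]                 # stand-in for a synth app
halve = make_gain_effect(0.5)              # stand-in for an effect app
recorded = []                              # the "DAW" track
target = lambda buf: recorded.extend(buf) or recorded
print(run_chain(source, [halve], target))  # [1.0, 2.0, 3.0]
```

The point of the design is that any app implementing one of the three roles can be swapped in without the others knowing or caring, which is exactly what makes a best-of-breed setup possible.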
I am very eager to see Auria support, but it will take a few releases before Auria and Audiobus work well together. Since I'm impatient, I invested $10 in MultiTrack DAW, which already supports Audiobus. I also bought Loopy, a live looping app, but of the two, MultiTrack fits my workflow much better. Loopy is better suited to live performance; it is by the authors of Audiobus, so its support is not surprising.
One of the biggest advantages of Audiobus is that it allows you to live record audio from any source application - particularly those that lack convenient recording facilities of their own. In the piece above, I recorded Sunrizer and Animoog directly into MultiTrack DAW with Audiobus rather than rely on the flaky recorders in each app. This was much easier than audio copy/paste.
In the sandboxed environment of the iPad (which is gradually becoming the norm on desktops as well), this may be the future of inter-application recording. We may eventually see similar schemes replace the VST and AU plugin architectures in desktop DAWs, since sandboxing solves so many security concerns (though at a cost in convenience).
I will be doing much more with Audiobus in the future and will let you know how it goes. As things stand right now, I think Audiobus works well in conjunction with audio copy/paste. Things will only get better as time goes on.
Friday, December 28, 2012
Thursday, December 20, 2012
Korg gets iOS
It's been a few years now with music apps for iOS. Many new companies jumped right in, creating analog synth emulations, DAWs and tons of other music apps that no one thought would be possible on such limited hardware.
I think CPU-wise, the iPad is maybe equivalent to my old Apple G4 table lamp iMac, on which I first started making music with Garageband around 2004 or so. So it's not going to outpace laptops or desktops, but it more than makes up for that with the touch interface. There is something more intimate and direct about dragging musical ideas around. An analogy might be when Nintendo brought out the Wii: very underpowered compared to its competition, but it took off due to a more direct interface.
So with all the music apps and software coming out, several established synthesizer companies are starting to take notice. Moog has two excellent apps: Filtatron, basically an effects processor, and Animoog, a wavetable synth with great samples from Moogs and other sources. Just today they put out an update with Audiobus support, a 4-track recorder and, what got me (sucker that I am!), Grateful Dead samples to use as timbres - Dark Star no less!
I think that the company best positioned to deliver on iOS though, might be Korg. Korg is a large company with a rich history of analog and digital synthesizers and some years ago they created incredible emulations or ports of their classic synths as plugins for computer DAWs. With iOS, Korg has ported a few of these to the touch screen with great success (and the help of DeTune - a contracted company).
The first app was the iElectribe, an almost perfect emulation of their hardware drum machine. Every knob moves, every button works, and the app goes well beyond what the actual hardware can do by adding Audiobus support, patch saving, etc.
My favorite app is probably the iMS-20, an incredible emulation of their MS-20 analog synth. They do a great job of recreating the entire hardware, right down to virtual patch cables. In fact, when I first got it, I watched videos by Marc Doty on the actual hardware synth, and everything he showed worked on the iMS-20 exactly as on the hardware. They went further, though, and included a virtual sequencer and the Kaossilator based on their hardware add-ons. Factor in six tracks of drum or synth backgrounds and it's a sort of retro all-in-one DAW.
They followed this up with the iKaossilator, which faithfully emulates their hand-held hardware and in many ways improves on it.
Just a few weeks ago they came out with the iPolysix, which provides two instances of the Polysix along with up to six drum synths (or limited Polysix parts). This is even better than the iMS-20 in many respects due to the polyphony. All of these offerings also support Audiobus, so you can record them into a DAW.
The original Korg Legacy Collection for Windows and Mac includes many more soft synths, and I expect that one by one these will find their way onto iOS: they are not all that CPU demanding, yet they are emulations of classic hardware. I anxiously await the M1 (which I already have on OSX).
It's always interesting to see how hardware companies treat software emulations. They have to tread a fine line, offering something valuable without making their hardware irrelevant in the process. I have seen a few examples where the software versions in most ways surpass the hardware - the iKaossilator and Yamaha's TNR-i among them.
It will be exciting to see what other companies come up with next. PPG has the Wavetable - Moog, the Animoog. Roland? We're waiting!
Monday, December 3, 2012
Giving and getting feedback (or Carnegie 101)
One interesting question for amateur musicians is: why do it at all? In the amateur case, music is not a livelihood - it isn't putting food on the table and, in life-changing terms, it isn't essential. So why does it matter so much to me? Tons of psychological reasons, no doubt, but the fact is that I publish what I create and look forward diligently to feedback of almost any kind. At some level, for me, it matters!
I can at some level accept that music makes my life better and is a healthy escape when life experiences are, um... less than optimal. So I produce this "stuff" and publish it with no expectation of remuneration.
One very nice thing about publishing musical works - finished or unfinished, good or bad, is that you will inevitably get feedback from other musicians. I think this is what I enjoy most about my "hobby".
If you do publish something to Soundcloud, or MacJams or iCompositions (or someplace else), you will hopefully see some form of feedback based on what you submitted.
Many of the listens and feedback that you receive will work best on a quid-pro-quo basis (i.e. I listen to you, please listen to mine) and if you contribute your own opinions to other works, you will have a much more positive experience.
The types of listeners will vary and I can't stress enough that even if all someone does is click onto your profile, you are winning in a big way. Out of thousands or millions of possibilities, someone has stumbled onto you!
So every touch is valuable to me, but I do tend to value some listens more than others. Here is my take on what happens with my own amateurish compositions:
The "pings" on your submissions - These are folks who clicked on your song in MacJams or Soundcloud and decided whether or not to listen to it. Even if they don't actually listen, you win: out of all the possibilities, your song title, artwork or something else caught their attention.
Speed listeners - This one is a bit more subtle: you will get feedback (often positive) on your submissions, with the user commenting on your songs (quite possibly 5-10 of them) all within a two-minute window (Soundcloud, for example, will actually timestamp when someone comments or likes something) - remarkable when your average song length is four minutes! Be grateful that you got the attention at all, but this is definitely an attempt at quid pro quo, and don't feel obliged to listen to their 20-minute opuses based on their 10 comments all within a minute. Seriously, if you don't have the time to listen to my stuff, it's ok! Really! But let's not pretend :)
Speed and scatter listeners - Other folks will comment on many songs within minutes of each other. To be quite honest: if someone is truly listening to your work and assessing it, it will probably take at least the play time of your piece for them to come to a critical conclusion. If you see on Soundcloud or anywhere else that a user commented on 20 pieces in two minutes - um... it's great to get the feedback, but it isn't really valuable (see the previous comments on "pretending").
Pure quid pro quo - This isn't a hard rule, but I would say it is the norm for amateur musicians, and it is terrific. Groups of amateurs wanting to perfect their craft comment on each other's pieces in the hope that the recipient will reciprocate. This is honest, worthwhile and will certainly drive more traffic to your own submissions if you listen and comment in turn. The egoist in me wants music lovers to flock to me for some bizarre reason, but the norm is that other musicians are more likely to take the time when I take the time to comment on their work.
Serendipity - Some folks just seem to listen to amateur music even though they don't produce their own, and they like to comment. This is RARE! If you are this lucky, it's really good feedback to get: some music lover stumbled upon you and liked something - or hated something! I once had someone pour out negative feedback on something I published, only to later delete their comment. I was able to privately ask what happened, and they responded that they had remorse about their comments (which were very direct and critical). I responded that honest feedback is the best kind and learned a bit more about what they didn't like about my piece, which ultimately caused me to rethink some of what I do. It's important not to freak out about critical feedback but to balance it against any other feedback you get.
If you accept the many forms of feedback you get in the categories above, or other categories, you will undoubtedly get better at what you do. The truly honest feedback from non-musicians, when you get it, might prove the most valuable. Other musicians do tend to be overly positive in their comments which is fine, but doesn't challenge you to improve.
I can't stress enough that if you really want to get better at what you do, you should not snap back at critical comments or get into long arguments. On occasion you might run into someone irrational who hates your "stuff". That's fine - they will tire of trolling, and if they keep at it, there is some undocumented fascination that you seem to provide them, which is art in itself!
So anyone who takes the time in any fashion to click on my own submissions or works - Thank you!
Thursday, November 29, 2012
Keeping it Real(ish)
My musical background is mostly limited to playing with recorders (the wooden kind) as a child and then years of playing trumpet in jazz bands, pop orchestras and symphonies all the way through high school.
I think that these experiences have shaped my tastes and tendencies to where I often prefer hearing "traditional" or organic sounds when composing or playing music. I did acquire a taste for electronica and synth sounds over time, but the traditional sounds of brass, winds and/or violins has never really left my mind.
When I first started making music with computers (and stopped blowing air through tubes), I was enthused and at the same time disappointed by what sound cards and early computers were capable of producing. In fact, having never owned synth gear, I thought that FM synthesis was limited to the Pac-Man-like sounds coming out of cheap sound cards (they did use FM, but with very poor chips).
As I blogged in a very early post, one thing that changed everything for me was the Creative Labs AWE32 sound card with onboard memory and an early implementation of "wavetable" synthesis. Using General MIDI, this somewhat expensive ($245) card let me hear actual instrument sounds with MIDI music. The expressiveness wasn't there and the envelopes were pretty boring, but the sounds were worlds better than cheap sound cards.
This established what remained, for me, a holy grail: "real sounding" synthesized sounds. The actual sounds of orchestras, brass sections and sax solos seemed to be getting closer every day... except the day never seemed to arrive. My first forays into sampling and MIDI composing petered out in the 90s, and I returned to other hobbies (lots of shareware programming, brewing beer, etc).
Then something happened. I decided that OSX finally made Apple hardware interesting (I never was a fan of pre-OSX). Having bought a G4 table lamp iMac, I quickly discovered Garageband in the iLife suite. It was a simple DAW, but compared to the Cakewalk "light" I had used on Windows, it was amazing.
I was able to pop loops together with real recorded sounds and tweak, twist and combine sounds to make whole compositions. For any of you with far too much time on your hands, my page here goes all the way back to those dark days of 2004 where loops were king!
My Garageband skills eventually grew to include more and more keyboard parts and software instruments instead of loops, and eventually I replaced Garageband with Logic Express 9, which brought yet another learning curve but tons more sampled and software instruments.
New instruments were sampled from live musicians and, except for the expressiveness, the sounds got closer and closer to "real". Yet the more I played around with brass samples or read up in books and magazines, the more the common wisdom said "forget it"! If you want brass, use musicians or record them. Same for violins, winds, etc. For a very long time I think this was accurate.
With each year and generation of sampling technology and hardware, "real" sounds get closer and closer to attainable. I've recently purchased the Akai EWI-USB which is a wind instrument that lets me blow into it to control volume envelopes, cutoffs and vibrato with whatever software instruments I point it at. It really is a phenomenal instrument that emits MIDI events based on live performance.
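For readers curious what "emits MIDI events" means in practice: wind controllers like the EWI-USB typically report breath pressure as MIDI Control Change #2 (the "breath controller"), which a soft synth can then map to volume or filter cutoff. The sketch below decodes such a message and maps it to a cutoff value; the mapping range is my own illustrative choice, not Akai's specification.

```python
# Sketch: interpreting the raw MIDI bytes a wind controller sends.

BREATH_CC = 2  # standard MIDI controller number for breath

def parse_control_change(msg):
    """Decode a 3-byte MIDI Control Change message.
    Returns (channel, controller, value), or None for other messages."""
    status, controller, value = msg
    if status & 0xF0 != 0xB0:      # 0xBn = Control Change on channel n
        return None
    return (status & 0x0F, controller, value)

def breath_to_cutoff(value, lo_hz=200.0, hi_hz=8000.0):
    """Map a 0-127 breath value linearly onto a filter cutoff range."""
    return lo_hz + (hi_hz - lo_hz) * value / 127

msg = (0xB0, BREATH_CC, 127)       # full breath on channel 1
ch, cc, val = parse_control_change(msg)
print(breath_to_cutoff(val))       # 8000.0 at maximum breath
```

A synth receiving a stream of these messages can re-evaluate the cutoff (or volume envelope, or vibrato depth) on every one, which is what gives breath control its continuous, expressive feel compared to a plain key press.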
Coupling this with either software modeled instruments or samples brings things closer and closer to "real". In fact, in the right mix, I don't think it is easy to tell the difference. I've been using the Wivi Band samples in many tracks and while my talent is lacking in terms of playing, the sounds work very well and sound pretty close to actual instruments. I don't think that soloing is quite there yet with the tools I have bought, but every day I hear new examples and see software that brings that closer to reality.
Recently, I also purchased the Miroslav Philharmonik orchestra samples from IK Multimedia, which bring an entire orchestra into my mixes.
I often wonder why I enjoy "faking" real instruments so much - must be some character flaw!
Saturday, November 17, 2012
Misanthropic Composing
I've recently been reading "Thinking in Jazz: The Infinite Art of Improvisation" by Paul Berliner. This is an excellent book that goes into detail about how jazz artists learn and improve their improvisational skills. As an amateur, I don't take what I make musically too seriously, but still....
I have to admit that I am trying to find ways to improve what I create and do. I think hobby or otherwise, that is sort of human nature and all this has me thinking a bit about how things have changed and the processes that online musicians follow now.
In the past, collaboration and jam sessions were the normal means of picking up tips, learning chord progressions, mimicking styles and adding them to your own repertoire. Musicians would jam after hours or in apartments just to pick up styles and play off of each other. Learning in this way must have been a unique experience. The back and forth between players often formed great synergies that carried over into the studios.
I recently got to hear a local jazz combo play with Randy Brecker, which was a bit of a humbling experience. I've been putting a number of jazz pieces together recently, and this was a chance to hear live musicians interact (in this case after only a few days' rehearsal!). Of course I realize there is a big gap in talent between my own hacked-out pieces and lifelong performers, but there was more to it than that. In live improvisational performance, you see players build on each solo and off of each other's playing styles. This is something you don't get as an online amateur musician like myself.
I can put many tracks together - drums, saxes, trumpets, etc. - but every one is me playing or tweaking the sounds. You get my perspective on everything, and aside from the skills gap, I think there is a creativity difference there. I can put some things together that I think are very listenable and occasionally pretty good, but they are homogeneous in a sense. It is presumptuous of me to attempt jazz charts in the shadow of Miles Davis, Wayne Shorter and many others, but since it's a hobby, hubris is OK and failing has no consequences beyond slightly wounded pride.
Collaboration in electronic music is often virtual and disjointed. It is easy to collaborate anywhere in the world, which is tremendous, but you lose the immediacy and the back and forth that goes on between performers. What impresses me most is the live improvisation between jazz masters. My own process for getting a half-decent track involves take after take, practice and finally recording. In live jazz, you see musicians following rapid sixteenth-note patterns immediately, with no rehearsal - pretty amazing!
So I am well aware that the process I use in music is different, probably inherently flawed and probably won't match the incredible efforts of skilled jazz combos. But I'm vain and stupid enough to keep trying!
Sunday, October 21, 2012
It don't mean a thing....
It's no secret to anyone who listens to my pieces that I've been on a bit of a jazz kick since getting my EWI wind controller. The EWI lets me use my breath to control any number of instruments, which I find much easier than keyboarding since I used to play trumpet. Coupled with good software instruments, it sounds very authentic. My favorite software instrument for jazz is Wallander's Wivi Band, which provides about 14 software-modeled instruments that sound extremely authentic. The one knock is that they have a sort of orchestral bias, so for jazz some of the sounds are a bit "clean". I find that with some equalization or subtle distortion in Logic 9, I can get what I want.
Another great set of sounds that I am investigating is Samplemodeling. They provide more sample-based instruments (rather than software-modeled ones), but they are much more expensive. So far the budget has been holding, and I haven't sprung for them!
This interest in jazz has me researching as well as playing. Lately I've been reading The Studio Recordings of the Miles Davis Quintet 1965-1968 by Keith Waters, which goes into great depth about the recording techniques, improvisational methods and modal jazz used in those sessions. The more I read, the more I realize that my music theory is rusty! I've gone back and have been reviewing jazz theory and chord progressions. Miles Davis is sort of like jumping into the deep end - the speed and techniques are beyond my playing skills, particularly with the EWI.
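As part of that theory review, one handy fact about modal jazz is that each church mode is just a rotation of the major scale: Phrygian, for instance, starts on the third degree. A tiny illustrative sketch (plain Python, just to make the rotation idea concrete):

```python
# The church modes are rotations of the major scale's step pattern.
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]  # whole/half steps of the major scale
NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F',
              'F#', 'G', 'G#', 'A', 'A#', 'B']

def mode_intervals(degree):
    """Semitone offsets of the mode starting on the given scale degree
    (1 = Ionian/major, 3 = Phrygian, etc.)."""
    steps = MAJOR_STEPS[degree - 1:] + MAJOR_STEPS[:degree - 1]
    intervals, total = [0], 0
    for step in steps[:-1]:       # last step just returns to the octave
        total += step
        intervals.append(total)
    return intervals

def scale(root, degree):
    """Spell out the mode starting from `root` (sharps only, for brevity)."""
    start = NOTE_NAMES.index(root)
    return [NOTE_NAMES[(start + i) % 12] for i in mode_intervals(degree)]

print(scale('E', 3))  # E Phrygian: ['E', 'F', 'G', 'A', 'B', 'C', 'D']
```

The characteristic "Spanish" sound of Phrygian comes from that flat second degree (the half step right above the root), which the rotation exposes immediately.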
I have one piece, composed with Noatikl, that I attempted in Phrygian mode - a combo piece with the Miles quintet as the inspiration, though it doesn't reach that level:
It's a fun and fast-paced piece, but it probably doesn't have that "live" or real feel to it.
Since Miles is still a bit of a stretch, I started reviewing more basic jazz progressions, which brought me to big band jazz. In the late 1970s and early 80s, my high school years, I played trumpet in a community symphony, a pops orchestra and a high school jazz band. The jazz band played tons of big band numbers since our director was a big band freak.
I put some basic chord progressions together, changed to 6/8 time since 4/4 gets boring, and started hearing more Benny Goodman phrases. The piece started with the piano chord progressions, and I added the wind ensemble using ewiVoicing. As I've blogged before, this is a great library that will auto-harmonize your notes. With the EWI, this lets me set up four instruments and play them all in diatonic chords at the same time - basically the wind ensemble. I used a trombone, a tenor sax and two trumpet parts for the wind section.
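The basic idea behind that kind of diatonic auto-harmonization can be sketched in a few lines. To be clear, ewiVoicing's actual voicing logic isn't published, so this is only the textbook version of the concept: keep the played note on top and stack diatonic thirds underneath it for the other voices.

```python
# Diatonic harmonization sketch: lead note on top, thirds stacked below.
C_MAJOR = ['C', 'D', 'E', 'F', 'G', 'A', 'B']

def harmonize(melody_note, voices=4, scale=C_MAJOR):
    """Return `voices` notes: the melody on top, plus harmony notes a
    diatonic third apart stacked below it (wrapping within the scale)."""
    i = scale.index(melody_note)
    # step down the scale in thirds (2 scale degrees per voice)
    return [scale[(i - 2 * v) % len(scale)] for v in range(voices)]

print(harmonize('G'))  # ['G', 'E', 'C', 'A'] -- lead + three harmony voices
```

Because every harmony note is pulled from the scale rather than transposed by a fixed interval, the chord quality shifts naturally as the melody moves (major, minor, diminished), which is what makes the result sound like a horn section rather than parallel organ stops.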
The next step was playing some of the solos. Here I just noodled out a basic clarinet solo, tenor solo and muted trumpet solo. The great thing with the EWI instrument is that I can use the EVI fingering method (based on trumpet fingerings) for all of the instruments.
The last part was to pop in a piano solo - my usual cheap trick there: record the chords on one track and the lead on another (I'm a lousy keyboardist!). The results are, I think, pretty good:
I'll probably keep experimenting as long as I am on the jazz kick. In between, I still pop out quite a bit of synth noise on the iPad and desktop. Another cool trick is to use the EWI with synthesizer lines - the breath control adds a new aspect to electronica as well. Sort of a "super" mod-wheel.
Saturday, October 13, 2012
Figuring it out
Recently, Propellerhead put out a new version of their Figure app, and the update finally adds Audiocopy to the program. This makes it one of my favorite beat-making apps and a great starting point for many of my songs.
Figure uses the underlying sound engine from Propellerhead's famous Reason DAW but presents the interface in a touch manner fully suited to the iPhone/iPad. The app itself is an iPhone app but scales beautifully on the larger iPad screen.
Originally the Figure app would only play two bars of music and had no save or copy abilities. Now you have the option of recording 2-, 4- or 8-bar sequences, and with Audiocopy you can easily paste the resulting sounds into a DAW to create larger compositions.
The "wheels" in the app let you set up syncopated rhythms that cycle in the pattern, and by lifting and lowering your fingers rhythmically, you can also syncopate the syncopations! This lets me put together some very funky rhythms very quickly.
My favorite technique is to record a few 8 bar licks, then copy the individual parts into Garageband or some other DAW to assemble into larger pieces. I add in additional synths and parts from there.
If you are making use of the "Pump" option, however, you should Audiocopy the parts together rather than in pieces, since "Pump" applies a side-chaining effect to the melody and bassline keyed off the bass drum.
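For the curious, this "pump" is a form of side-chain ducking: the kick momentarily pulls down the level of the other parts, which then recover. Here is a rough Python illustration of the effect (my own toy sketch, not Propellerhead's algorithm; the depth and release values are made up):

```python
import numpy as np

def sidechain_duck(signal, kick_times, sr=44100, depth=0.8, release=0.25):
    """Duck `signal` at each kick onset, recovering over `release` seconds.

    A crude illustration of the 'pump': the level dips whenever the
    kick hits, then ramps back up to full volume.
    """
    env = np.ones(len(signal))
    release_samples = int(release * sr)
    ramp = np.linspace(1.0 - depth, 1.0, release_samples)
    for t in kick_times:
        start = int(t * sr)
        end = min(start + release_samples, len(signal))
        env[start:end] = np.minimum(env[start:end], ramp[: end - start])
    return signal * env

# One second of a steady tone, ducked by kicks at 0.0s and 0.5s
sr = 8000
tone = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
ducked = sidechain_duck(tone, kick_times=[0.0, 0.5], sr=sr)
```

This is why copying the parts separately loses something: the melody and bass captured alone no longer carry the ducking envelope the kick imposed on them.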
With the ability to put together complex rhythms and to tweak and bend the sounds with just your fingers, Propellerhead has taken full advantage of the touch interface of the iPad without just throwing a "virtual" keyboard in front of your face.
I think products like Figure, Animoog and iElectribe are where music making on the iPad should be going and I'll be making good use of them moving forward!
Here are a few recent pieces I did using Figure as a starting point - one was composed in Garageband and the other in Tabletop:
Saturday, September 29, 2012
iPad Kitchen sink track
I was an early upgrader to iOS 6 and one of the best features is the update to Garageband for the iPad that lets it run in the background. I didn't realize quite how amazingly useful this feature was to me until making my last piece.
I have tons of synths on my iPad that I always underutilize because it is too difficult to play them in isolation, copy the audio and then reassemble it with other tracks. There are a few other apps that allow background playing such as BM2, but I don't have that one yet.
So, with Garageband in the background acting as an "aggregator", I can slowly assemble my piece track by track and have it playing in the background while recording overlay tracks in Animoog or Sunrizer.
Here is the piece I put together with Garageband, Figure, Animoog, Sunrizer and iVoxel:
It has a Middle Eastern vibe to it - the warbling vocals are actually my voice, pitched up several octaves with an LFO vibrato added.
The main drawback to Garageband as a collecting DAW is that you only get 8 tracks to work with. This piece takes exactly 8. If you need to go beyond that, Garageband lets you bounce together tracks and combine them - so it is a pretty nice workaround if you need it.
I will probably make use of this quite a bit until Audiobus finally hits!
Monday, September 24, 2012
Careful how you bite
So I recently put together a nice simple rhythmic piece on Garageband for the iPad:
It was nice to go back to the iPad and keep it pretty simple. One contributing factor was a compounded mishap with my Akai EWI and Logic's Environment. I had some MIDI filtering in place that filtered out note bends, which, interestingly enough, the EWI sends when you bite lightly (lightly being the operative word).
With the light biting not working, of course I bit harder, and nearer the top of the mouthpiece. I then realized the filter was on and removed it. The next time I played the EWI I was getting very strange sounds and it felt wrong - I had actually put a tear in the rubber mouthpiece, which caused it to "leak". Worse still, the leak let moisture into some 'not ready for water' parts on top of the EWI.
That meant ordering a new mouthpiece and taking a week off the wind instrument - so, back to the iPad and some electronica. Now that the new mouthpiece is here and working, I'll get back to EWIing. Be careful with your Environments, and try not to bite holes in your instruments.
Thursday, September 13, 2012
Dusty iPad!
Just felt like posting, since most of my recent music making has been back on my iMac. For quite a while I've been doing tons of iPad music creation, but lately not as much, even with some very cool iPad releases such as PPG WaveGenerator, Magellan, Impaktor and DrumJam.
So what gives? Have I given up on portable music? I don't think so, and truth be told, I have about ten works in progress that might someday see the light of day. I think what is happening is that I am obsessing over two products: Noatikl and my EWI USB. Noatikl lets me "program" musical scores and control as much or as little of the resulting work as I want, which I find incredibly inspiring.
On the other end of the spectrum is my old desire to perform or play music. My trumpet chops are far out of practice, and the EWI MIDI controller lets me use my breath and technique to control virtual instruments, software-modeled instruments or even synths, all with a sort of sax/recorder-style instrument. It really makes MIDI feel more like playing than ever before for me.
Another factor is that I really want to use all of the major new synthesizers on the iPad, but the workflow, frankly, sucks! I have to audio-copy/paste everything from app to app, try to get parts playing in the background, etc. etc.
If you want to hear some amazing sounds from someone mastering "only on iPad" music, check out Michael Wadlow on soundcloud. He manages to produce and master some great electronica completely on the iPad - no post-processing or DAWs at all! Even more minimalist (meaning iPhone!) is some of the work by Galaxyexplorer, who does all of his pieces on iPhone. Both of these musicians have far more patience than I!
There is an emerging standard that may help, called Audiobus, which should let iPad apps share audio in realtime and record across apps over the shared "bus". I can't tell you how excited I am to see this develop. As things stand today, unless you are using an "all-in-one" type DAW on the iPad, putting together various sounds into a cohesive piece of music is an exercise in patience!
Maybe once this is in place and all the apps upgrade to use it, I will be back to more mobile music. Hell, I will probably bang out a few in the meantime.
Monday, September 3, 2012
Trade secrets!
I recently posted a quiet, meditative piece somewhere between jazz and classical, and a few folks asked for some details on how it was composed and produced. I am sometimes hesitant to describe the entire process, since I like to see a piece critiqued on its own merits or lack thereof, but for the very few who click through to my blog, here is the tell-all for "Echoing Thoughts".
Echoing Thoughts features an imaginary combo (the J422 combo - lame, I know!) that I start the process with. Usually this is a small number of jazz performers. In this case I started with drum kit, upright bass, tenor sax and piano, and later added a flute. As the piece progressed, the tenor sax was a bit too harsh, so I fitted in a cello section, which made the piece somewhat classical sounding. Originally I was thinking of Bohren & der Club of Gore, but the piece morphed into something more New Age.
The piece was captured in Logic Express 9 with software instruments - everything is MIDI based. The piano is the Yamaha grand provided with Logic 9, the cellos come from the Apple Symphonic Jam Pack, and the upright bass is from Logic's sampler. The flute comes via WIVI Band - a software-modeling suite that creates very realistic instrument sounds.
The piece is composed and directed via Noatikl 2 (intermorphic.com), a generative music engine. With Noatikl, you define the voices: whether they play melodically in a scale or play fixed rhythms or notes, what notes and chords they play, the probability of each note, the ratio of rests to notes, whether or not to follow another voice, and so on.
Basically, you feed in all the parameters and tendencies that you want the musicians to follow, and Noatikl, in turn, feeds the resulting MIDI events to each track - which you can either just listen to or record. The process in Noatikl is much more akin to composing than performing. It's a composer's dream! You tell every musician what to do and how rigidly or loosely to play, and - here's the great thing - they do it! Frank Zappa would be drooling if he were alive to see it.
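To give a flavor of what a probability-driven ruleset looks like, here is a tiny Python sketch of the idea: a hypothetical voice that draws notes from C minor with per-note weights and a rest ratio. This is my own toy illustration, not Noatikl's actual engine - the weights and rest chance are made up:

```python
import random

# C natural minor as MIDI note numbers around middle C, with made-up
# probability weights favouring the tonic and fifth - a toy stand-in
# for Noatikl's per-note probability rules.
C_MINOR = [60, 62, 63, 65, 67, 68, 70]          # C D Eb F G Ab Bb
WEIGHTS = [4, 1, 2, 1, 3, 1, 2]

def generate_phrase(length=8, rest_chance=0.25, seed=None):
    """Return a list of MIDI notes (None = rest) drawn from the ruleset."""
    rng = random.Random(seed)
    phrase = []
    for _ in range(length):
        if rng.random() < rest_chance:
            phrase.append(None)                  # honour the rest/note ratio
        else:
            phrase.append(rng.choices(C_MINOR, weights=WEIGHTS)[0])
    return phrase

phrase = generate_phrase(length=16, seed=42)
```

Noatikl layers many more rules on top (following voices, evolving patterns, chord structures), but the core idea is the same: weighted chance, constrained by a scale.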
The piece started with the drum voices - simple kick, ride cymbal and snare, very minimalist. I created three differing patterns for the cymbal and snare and assigned probabilities to them: one main pattern played "most" of the time, with alternating triplets and variations mixing in occasionally. Each of these voices was assigned to an acoustically sampled drum kit on MIDI channel 10 in Logic.
The bass plays a slowly evolving pattern of either whole or half notes with a restricted range of notes. The cello plays a slow pattern with the flute eventually playing a following pattern one beat after the cello.
For the piano, there are two voices - one per "hand". The left voice plays 3- or 4-note chords in C minor on the lower part of the keyboard, with minor variations in timing to make some chords feel slightly arpeggiated. The "right hand" is another voice, higher up the keyboard, playing only one note at a time. Both are fed into MIDI channel 6 for the piano sounds.
Once it was composed and I liked the sound, I recorded the parts into several tracks in Logic 9, each set up on an individual MIDI channel. I did some very minor post-capture MIDI editing to find a decent stopping point.
That is basically the whole process. Below is a screen capture of Noatikl with the MIDI parts "wired" to the MIDI channels. There are pages and pages of rules that also come into play, such as the scale (C minor in this case), the probability of each note, and whether or not to evolve patterns.
Sunday, August 19, 2012
More EWI tricks - adding 4 part harmonies with EWIVoicing
I got some time to play more with the Akai EWI-USB MIDI wind instrument and put together a funk piece. I'm still getting the techniques down, but I used some very interesting software in this one:
I put the piece together in Logic 9 and used WIVI Band for the sounds (trumpets, trombones, tenor sax and clarinet). WIVI Band is a scaled-down version of the Wallander software-modeled instruments, giving you several of the best instruments in the set. They sound pretty realistic, particularly when you can use breath control for expression.
I also came across a very cool application (Mac only) that creates harmonies for any scale you specify and lets you spread them across multiple MIDI channels - and consequently multiple instruments. This lets you play a brass section all at once, in harmony! It's a very impressive program, and it's free. It's made in Japan, with a semi-broken English page available, but I got most of it figured out. The program is EWIVoicing, and it runs as a stand-alone application on OS X.
This makes using it in Logic a little different from using a plugin: you have to use a virtual MIDI port, which you set up in the MIDI control panel:
The four MIDI channels for the four parts are perfect for WIVI Band, which provides four MIDI channels for each instance of the plugin. All you have to do is select one instrument in each tab in WIVI Band, fire up EWIVoicing, and select the desired scale, octave and algorithm. Now when you play on the EWI, all four parts come right in!
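For anyone wondering what a diatonic harmonizer is doing under the hood, here is a toy Python sketch of the idea: one played note becomes a four-note diatonic chord, one voice per MIDI channel. The scale, voicing and channel layout here are my own illustrative assumptions, not EWIVoicing's actual algorithm:

```python
# A toy sketch of what a diatonic harmonizer like EWIVoicing does:
# map one played note to a four-note chord, one voice per MIDI channel.
C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # pitch classes of C major

def harmonize(note, scale=C_MAJOR, voices=4):
    """Stack diatonic thirds below `note`; returns (channel, note) pairs."""
    octave, pc = divmod(note, 12)
    degree = scale.index(pc)              # assumes the note is in the scale
    chord = []
    for v in range(voices):
        d = degree - 2 * v                # drop a diatonic third per voice
        oct_shift, d = divmod(d, len(scale))
        chord.append((v, (octave + oct_shift) * 12 + scale[d]))
    return chord

# Playing E4 (MIDI 64) yields E4 on ch 0, C4 on ch 1, A3 on ch 2, F3 on ch 3
print(harmonize(64))  # [(0, 64), (1, 60), (2, 57), (3, 53)]
```

Because every voice stays inside the scale, whatever line you play on the EWI comes out as a sensible diatonic chord - which is exactly why it works so well for block-voiced wind sections.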
I used it for the ensemble part of the piece above, and I also added solo tracks for tenor sax and trumpet. I'm still learning the tool, but I was pretty happy with the sound. It won't fool any jazz musicians, but I think the parts sound reasonably realistic.
Tuesday, August 7, 2012
House cleaning
Just a quick note if you notice that many of my old posts have broken soundcloud links. I've had to do a little housecleaning since I'm a cheap bastard and won't upgrade beyond the first tier paid membership on soundcloud (I'm out of minutes!).
That's what happens when you're more prolific than talented. In any event, for any fans of archeology, I also have MOST of my stuff dating waaay back to 2004 available on:
Making room on soundcloud for more new noise!
Sunday, August 5, 2012
The new Cat
One thing I can be counted on to do is ignore most advice about safely upgrading my system. After a very quick survey of Google, I went ahead and upgraded my circa-2009 iMac to Mountain Lion. Common wisdom is to wait until all plugins and DAWs have been certified to work and have had the bugs shaken out. Hey, who wants their wisdom to be "common"?
So, after allocating an hour (and taking three), the upgrade is complete. I ran into a few updates that had to go along with it: my virtual machine software (Parallels) had to be replaced with a newer version, as did the drivers for my Canon printer. Easy, but time-consuming - particularly Parallels, since I then had to update the virtual Windows XP installation I run under it. Just a reminder of why I don't enjoy Windows quite so much!
So after all that, what broke? Sadly, there was one casualty. I use a now-defunct utility called "VSTAU" that lets me run Mac VSTs under Logic. The middleware converts them into an AU wrapper so they can be used in Logic, which sadly does not support VSTs. One of my favorite freeware synths (Synth1) no longer works. There are rumors that they will put out an AU version eventually, so I'm OK waiting.
The good news is that all of my "mainstream" AU plugins seem to work without issue. I am using the Korg M1, the Arturia Mini and Modular, Aalto and many others without any problems. I'm sure I'll see many updates and patches following on, but so far so good.
So, since I don't make a living with my iMac or with my music software, there was no reason not to upgrade. If your circumstances differ... well, maybe wait awhile to see what shakes out.
Saturday, July 28, 2012
Hands on/Hands off
One nice thing about being a hobbyist with music creation is that your productivity levels aren't that constrained. Common wisdom states that you should take the time to master a few instruments and focus your time and effort there. When trying to make a living or meet deadlines, good advice!
Since I am doing neither musically, safely ignored! I can instead keep my musical A.D.D. and flit from tool to tool or technique to technique. My most recent set of conflicting interests involve generative music and performance.
On the generative side, I've been writing about Mixtikl and Noatikl, which are basically AI engines for music creation. I've been playing with these tools to create algorithmically produced music that sounds "human-like". I intend to do much more here, creating classical and/or jazz pieces that follow an evolving ruleset.
On the other hand, I have also been playing with my new EWI-USB MIDI wind instrument, a MIDI device that uses breath and fingerings to produce music. When I first got interested in electronic music, it was the wavetable synthesis of the Creative Labs AWE-32 (see earlier blog post) that managed to make some instruments sound almost real. At least on the computers of the mid-90s, they were recognizable, if not expressive. Real-sounding instruments with MIDI or electronic keyboards have remained somewhat elusive.
This is getting closer and closer to attainable, though, as technology continues to evolve. With the EWI-USB, there are two types of sound generators I've been using. The more traditional approach uses high-quality samples: with a multi-sampling instrument such as Logic's EXS24, you can get "real sounding" results if you meticulously associate breath control (CC #2 in MIDI speak) with the appropriate amplifiers, envelopes and filters. More interesting is the approach taken by Wallander in their WIVI Band offering (a light version of their WIVI orchestra). These sounds are software-modeled, which is a fancy term for additive synthesis.
Many sine waves are combined algorithmically to simulate the sonic frequencies emitted by actual acoustic instruments. I find the results surprisingly good. This sort of instrument also reacts more to the performer: if you are using high-quality samples, you tend to sound like the performer who recorded them, while the modeled approach responds to your own nuances of play and breath tone, which makes the performance more personal.
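The core idea of additive synthesis is easy to sketch: sum a set of harmonically related sine waves with chosen amplitudes. Here is a minimal Python illustration - my own toy example, with a made-up spectrum; Wallander's actual modeling is far more sophisticated than this:

```python
import numpy as np

def additive_tone(freq, partials, duration=1.0, sr=44100):
    """Sum harmonically related sine waves.

    `partials` maps harmonic number -> relative amplitude, which is
    what shapes the timbre of the resulting tone.
    """
    t = np.arange(int(duration * sr)) / sr
    tone = np.zeros_like(t)
    for harmonic, amp in partials.items():
        tone += amp * np.sin(2 * np.pi * freq * harmonic * t)
    return tone / np.max(np.abs(tone))  # normalize to -1..1

# A made-up, vaguely brassy spectrum: strong low harmonics rolling off
brass = additive_tone(220.0, {1: 1.0, 2: 0.5, 3: 0.7, 4: 0.3, 5: 0.4})
```

A real modeled instrument varies those partial amplitudes continuously in response to breath, which is exactly why it feels more alive than a static sample.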
I also have used the EWI-USB in non-traditional ways to control analog synthesizers with breath control. In the piece below, I have the WIVI trumpet along with the Arturia MiniMoog. Both are played on the EWI. I also include an electric piano and bass performed on the more traditional keyboard.
It's an early effort for me, but I think it has promise. I find that breath control brings much more expressiveness even to electronic instruments, and I will be using it quite a bit more.
In my arsenal for breath-controlled instruments now, I have:
- Garritan's Aria that came with the EWI - mediocre samples but they can be layered up into groups
- Wivi Band - The best quality and realistic sounding instruments I have for the EWI
- Korg M1 patches from Patchmanmusic.com - patches for the Korg legacy synth optimized for EWI
- Moog Modular/MiniMoog - It's easy to create my own patches for these using breath control
- Synth1 - Likewise, I have a set of patches to react to CC #2 breath control
- Madrona Aalto synth - Another soft synth that lets you patch breath control to any filter or effect you wish.
For many of my single voice sounds in the future, I will probably make heavy use of the EWI-USB.
Saturday, July 21, 2012
After several months with my new Akai EWI-USB MIDI wind instrument, I finally got a piece out. I am nowhere near mastering this beast yet, but here is my first effort:
The EWI-USB is a great addition to my gear in that it merges my ancient wind instrument memories (trumpet) with my electronic MIDI studio setup. I am pushing 50 and don't really have the chops anymore to push out trumpet pieces. The EWI lets me use breath control and wind/fingerings to perform any and all software instruments that I have in my collection - literally hundreds.
The only catch is that you have to configure the synths or software to respond to breath control (MIDI CC #2) and/or to aftertouch. The device itself, as I have already blogged, is around $240 street price, but as always, that is sort of the tip of the iceberg.
The EWI comes with a set of Garritan samples that are mediocre at best and allow you to play samples that, to paraphrase Douglas Adams, sound almost, but not entirely, unlike the real instruments they portray.
After mucking about for a while with the EWI, I decided that I needed a better trumpet sound and went with Wallander's WIVI Band (as I also blogged earlier). This product is a set of software-modeled instruments, which is a fancy way of saying it sounds REAL and doesn't use samples! A very difficult feat!
The piece above, however, uses 1980s sample technology in the Korg M1 legacy synth. In this piece I am using Korg patches created specifically for wind instruments by Patchman Music. These are modifications that use aftertouch (since the Korg does not directly respond to breath control) to affect the volume and timbre of the sounds. The sounds aren't entirely realistic, but they blend together well and respond extremely well to the breath.
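The trick of driving an aftertouch-only synth from a breath controller amounts to rewriting one MIDI message type as another somewhere in the chain. A minimal sketch of that translation, using simple tuples as a stand-in for a real MIDI library's message objects (the tuple layout here is my own invention):

```python
# Sketch: translating breath control (CC #2) into channel aftertouch for
# synths that respond to aftertouch but not breath control. Messages are
# modeled as tuples: ('control_change', channel, controller, value),
# ('aftertouch', channel, value), ('note_on', channel, note, velocity).

def breath_to_aftertouch(msg):
    """Pass MIDI messages through, rewriting CC #2 as channel aftertouch."""
    if msg[0] == 'control_change' and msg[2] == 2:
        _, channel, _, value = msg
        return ('aftertouch', channel, value)
    return msg                           # everything else is untouched

stream = [
    ('note_on', 0, 60, 100),
    ('control_change', 0, 2, 90),        # breath pressure rises...
    ('control_change', 0, 7, 100),       # ...while volume CCs pass through
]
translated = [breath_to_aftertouch(m) for m in stream]
```

In practice the EWI's own configuration software or a MIDI routing utility does this remapping for you; the sketch just shows how small the transformation really is.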
This is where the EWI leads to more purchases. The patches for the M1 were another investment but well worthwhile. I now have several very interesting ways to use the wind controller:
- Wivi Band - this gives me very realistic sounding trumpets, trombones, tenor saxes, oboes, tubas etc.
- My old EXS24 sampler in Logic - if you set up CC #2 (breath control) in the modulation matrix to control relative volume, sample selection and cutoff, you can make sampled instruments in Logic 9 sound very realistic.
- Aalto synth - this is a Buchla-inspired synth that you can "wire up" to use breath control, which can be routed to anything. Sounds "synthy", but it's great to use with the EWI.
- Synth1 - The ubiquitous freeware synth has some nice patches set up for breath control as well.
The EWI so far is fantastic. I can play trumpet parts once again, as well as sax or any other wind-based instruments, and it also adds expressive possibilities to my software synths that aren't possible with just a keyboard.
Saturday, July 14, 2012
Is that art???
Every once in a while, I find myself in a classical mood, and one thing I notice is that I start thinking of the composer I want to listen to. This is very different from jazz or popular music. With minor exceptions, classical music is always associated with its composer rather than the performer. Rarely am I thinking "I want to hear the Philadelphia Symphony or the London Philharmonic" - I'm thinking Mozart or Bach.
There are a few exceptions, such as a particular singer or virtuoso like Yo-Yo Ma, but usually it's much more about the composition than the artist's interpretation. When it comes to other forms of music, it is mostly about the performer rather than the songwriter. Interesting stuff. Few, however, would question the artistry of a great composer or of a great performer.
So, with electronic music... there are many ways to create compositions. There are keyboard-based synths that are "played". There are trackers or sequencers that can be edited manually, and there are other ways to create musical compositions. Usually, the process involves playing MIDI-type controllers into a DAW of some sort that records the events in individual tracks for further editing.
In the case of trackers (SunVox, iSequence, Renoise, etc.), the events can also be entered manually via QWERTY keyboards or pecked out a note at a time with a MIDI keyboard. So, is this still art if it is more an engineering process than performing? I think most would argue yes, since it is a bit more in the vein of composing - not to compare my electronic "noise" directly with Beethoven, but you get the point.
Taken one step further, there is generative music. This style of composition abstracts the process one step further and has the composer giving various voices and parts a set of musical "rules" to follow in a piece. Rather than banging in exact notes and meters, the composer creates a set of rules and guidelines to put into an algorithm that in turn generates the sounds. This is analogous to a jazz musician sketching out a piece, telling the musicians which keys to noodle around in and then letting them improvise.
I've written before a bit about Intermorphic's Mixtikl program and recently I've purchased the "big brother" version called Noatikl. I haven't found any generative music software quite so deep or approachable, though to be sure, there is a steep learning curve!
Mixtikl was the first product I used - initially on the iPad, but I also bought the desktop version. Mixtikl emphasizes the "mixing" process and allows you to combine standard waves or loops along with generative elements that evolve their sounds according to the rules you provide. There are many tiklpaks available with pre-built generative (and non-generative) sounds, but you can also create your own.
The internal sounds generated by Mixtikl use a built-in modular synth called Partikl, which is very powerful though somewhat thin sounding. I have made many of my pieces on SoundCloud partly or even entirely in Mixtikl. The limiting factor, in my opinion, is that the internal synth isn't all that rich sounding. The features are great, but the sounds themselves, and the filters, are sort of meh - not surprising considering that they have to run on less powerful CPUs.
Enter Noatikl - This product can also create sounds and run them through the built-in Partikl synth. More impressive though, you can just pump out MIDI on channels and put the MIDI stream directly into your DAW and use any synth or software instrument you have! When paired with Logic 9, this allows me to generate music in any instrument I have on my desktop (I have hundreds if not more!).
The creative process here starts with a Logic 9 template with 16 tracks - one for each MIDI channel. Using Noatikl, you can wire any of its voices to MIDI channels - even having multiple voices per channel if you want polyphony. Within the Noatikl environment, I set up the "rules" for the piece including:
- What key and scale type should the piece play in?
- For each voice - should it create single notes or chords?
- For chords - should it use keyed harmonies or offsets to pitches?
- For rhythmic voices - fast, moderate or slow? Dotted notes or only regular notes? What percentage/probability for each note duration?
- For patterns - what rhythms should be used? What note velocities or pitches? What is the probability of each one?
The list goes on from there. You can set rules for MIDI CC events and almost anything you can think of. Noatikl and Mixtikl are basically artificial intelligence agents for creating sounds.
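The core generative idea - fixed rules, probabilistic notes - is easy to illustrate. This toy sketch is my own illustration of the concept, not Noatikl's actual engine; the scale, duration weights and seed are all assumptions.

```python
# Sketch of generative composition: instead of entering fixed notes, give
# a voice rules (key/scale, note-duration probabilities) and let an
# algorithm pick the actual events. Purely illustrative.

import random

C_MINOR = [0, 2, 3, 5, 7, 8, 10]         # scale degrees as semitone offsets

def generate_phrase(root=60, scale=C_MINOR, length=8, seed=42):
    """Generate a list of (midi_note, duration_in_beats) pairs from rules."""
    rng = random.Random(seed)            # seeded so a phrase is repeatable
    durations = [0.25, 0.5, 1.0]         # sixteenth, eighth, quarter notes
    weights = [0.5, 0.3, 0.2]            # faster notes are more probable
    phrase = []
    for _ in range(length):
        degree = rng.choice(scale)       # any scale tone, uniform here
        duration = rng.choices(durations, weights=weights)[0]
        phrase.append((root + degree, duration))
    return phrase

phrase = generate_phrase()               # eight rule-driven notes in C minor
```

A real generative engine layers many more rule types on top of this (meter, phrase shape, harmonization between voices, CC automation), but every one of them is ultimately a constrained random choice like the two above.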
Once the rules are in place, I hit play in my Logic DAW and listen to the tracks. When I find a combination that works, I hit record, and all of those MIDI events are recorded in Logic 9. At this point, I disconnect the piece from Noatikl and pick up work in the DAW. I might want to remix, add effects or mute/unmute certain sections.
Once completed, you end up with a generated piece that you can then mix/master or mangle as you see fit. This too is only the surface of these powerful programs. They also offer a scripting language, can act as "hyperinstruments" (they listen to what you play into them and then harmonize with what you are doing) etc. I will write more as I learn more but I am using Noatikl and Mixtikl extensively in my modest works.
Wednesday, June 6, 2012
This MIDI controller blows!
I've added many new tools to both my iPad and my desktop setup recently. The most interesting addition is the Akai EWI-USB Wind MIDI instrument. This is the coolest thing I've added to my gear setup in a long time.
In my more "analog" days growing up, I played trumpet, so this really brings more into my music making. I can now use my breath to bend notes or adjust volume in anything I'm playing. There are two versions of the Akai EWI - the 4000S and the USB version. I opted for the cheaper (naturally!) at around $230 shipped. The "big brother" version contains a full synthesizer in the device itself, whereas the USB version has to be plugged into a computer both for power and to use a soft synth. The cool part is that ANY soft synth or sample library can be used.
I can now play sax, trumpet, strings or any synth using breath controls which come in as MIDI CC#2 events. Any synth that can react to this can be set up to do almost anything with the breath. The Akai also provides a nice vibrato if you bite down on the mouthpiece lightly.
There is a learning curve somewhat larger than some expect, so you can find many of these on eBay! The bad news is that they seem to go for very close to the "new" price, so I just ordered a new one. The samples that come with the Akai are sort of "meh", but they are sufficient to get you started, at least for learning the tool, and you can layer the Garritan samples that come with it to get decent (if not realistic) sounds.
After playing with these for a while, I did indulge myself and bought the WIVI Band modeled instruments (another $112, unfortunately). The instruments are modeled using additive synthesis and sound fantastic! Trumpets sound suspiciously like trumpets for a change, with any combination of mutes available.
I've also found out how to map the CC#2 to the Logic synths including their sampler so even with the Logic sampled instruments, I can make them sound credible.
Interestingly enough, for sax sounds I still tend to prefer my Korg M1 legacy edition even though it is 80s tech. They sound amazing even after all these years.
The Akai EWI-USB offers several fingering methods and, thankfully, one is based on trumpet or brass fingerings, so I don't have to learn sax fingerings or deal with all those keys! The EVI method uses brass fingerings with the right hand, and the left hand helps with half-octave transitions (lifting the index finger between G and A-flat does it). There are rollers that the left thumb rolls over to change octaves.
All in all this takes some getting used to, but it is familiar. One odd thing is that for all octaves, you have to use a trumpet "lower octave" fingering which is a little unnatural. C# is 123 and D is 13 always regardless of octave.
I am still in "practice mode". I've made a few simple walking-bassline and drum loops that I practice soloing over. I hope to have something created with this in a few weeks. I like this much more than noodling on my keyboard!
Saturday, May 26, 2012
Trackers again
I definitely take an engineering approach to making music in my compositions. I am usually working through patterns, tweaking sounds, oscillators and mucking with software so I find the numerical approach to music making appealing. Enter the old school trackers.
Tracker music software uses the 'other' keyboard when creating music. Using a vertically scrolling window of notes and hex numbers instead of the more traditional piano roll makes the process much more scientific. Trackers first appeared on early Commodore computers and took advantage of samples and the limited CPUs and user interfaces of that time. Since trackers make very good use of limited CPUs, the iPad is a good candidate for these types of DAWs despite the limited keyboard.
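The "notes and hex numbers" view is really just a grid of compact text cells. A small sketch of decoding one such cell into MIDI terms shows why trackers feel so numeric. The cell syntax here ("C-4 40": note name, octave, hex volume) is a generic tracker-style notation of my own, not any one program's file format.

```python
# Sketch: decoding a tracker pattern cell like 'C-4 40' into a MIDI note
# number and a volume. Empty cells ('...') mean nothing is triggered on
# that row. Cell format is illustrative, not a real tracker's format.

NOTE_OFFSETS = {'C': 0, 'C#': 1, 'D': 2, 'D#': 3, 'E': 4, 'F': 5,
                'F#': 6, 'G': 7, 'G#': 8, 'A': 9, 'A#': 10, 'B': 11}

def parse_cell(cell):
    """Turn a tracker cell like 'C-4 40' into (midi_note, volume), or None."""
    if cell.strip() in ('', '...'):      # empty cell: nothing triggered
        return None
    note_part, vol_hex = cell.split()
    name = note_part.replace('-', '')    # 'C-4' -> 'C4'; 'C#4' is unchanged
    pitch, octave = name[:-1], int(name[-1])
    midi_note = 12 * (octave + 1) + NOTE_OFFSETS[pitch]
    return midi_note, int(vol_hex, 16)   # the volume column is hexadecimal

note, vol = parse_cell('C-4 40')         # middle C at volume 0x40
```

Reading a whole pattern is just applying this per cell down each track column, one row per tick - which is exactly what the scrolling tracker display is showing you.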
My first exposure to trackers was the excellent Sunvox app on the iPad which also has many other versions for almost any computer or cell phone out there. I downloaded the free version for my iMac as well and use that for final mix down and mastering.
I think this gave me a bit of a skewed view on trackers since SunVox is in many ways unique among trackers. Rather than relying primarily (or solely) on samples, SunVox includes a very flexible and powerful modular synthesizer. It is optimized for light CPU usage so some of the sounds reflect that but you have an extremely rich set of building blocks to craft your own sounds with it.
I was simultaneously enamored of the concept and confused by the interface which is odd - typical of cross-platform tools. While I tried to figure SunVox out, I took a look at another option on the iPad called iSequence by BeepStreet.
BeepStreet took the traditional tracker concepts and built an interface ideal for touch screens and the iPad in particular. The iSequence "tracking" occurs, atypically, left to right, which better suits the landscape screen. Also, more like older trackers, iSequence depends on samples and offers hundreds of free ones, with still more available as in-app purchases - most at $2 per pack, pretty reasonable overall. The program also lets you bring in your own loops and samples, so it is very flexible.
At the same time, iSequence has some limitations that let it run well on iOS CPUs. There are only 8 tracks available. Not only that, but each voice takes one track - for example, a chord takes 3 tracks. There are some nice ways around this, such as bouncing groups of tracks to loops or using chorded samples or drum loops, but it does require you to keep things fairly simple. Added to the old-school tracker interface is a fairly modern effects processing chain that lets you route any tracks through whatever effects bus you want. Automation is done by recording live button movements rather than banging in and interpolating hex codes. All in all, iSequence is a hybrid - part tracker and part modern - and I think it is probably the best tracker option in terms of user interface on the iPad.
I eventually gave SunVox more of a try and created a few nice tracks with it. In terms of sheer power, it has it and seems to be updating new features every month. I think this is the most fun tracker I own. While it is not a perfect fit on the iPad interface, it is very nice to be able to move my work back and forth between the iPad and iMac. This is the only tool I own that makes that so simple to do. Garageband is a one way trip "up" and NanoStudio is a bit flaky with their desktop versions.
I have gotten so used to the trackers on the iPad that I am evaluating another sort of "hybrid" tracker that is extremely powerful on the desktop called Renoise. Renoise is much closer to the traditional trackers in its look and feel but also allows for effects processing and supports VSTs and AU plugins - so while it also uses samples quite a bit, there is a great way to incorporate soft synths in it. Renoise is not free but is reasonable for a desktop DAW at $78. I am still on the test drive version but may spring for it at some point. Might be a great addition when I don't want to use Logic 9.
AHX Tracker on the Amiga
Wednesday, April 25, 2012
It Figures
It's been a while since I posted. I've been working with a few tools, including iSequence HD with this tune:
I am enjoying both the SunVox and iSequence tracker interfaces. It sort of fits my lack of keyboarding talent! I can plug notes into time and focus on the construction of the piece rather than on the performance. The two apps share the tracker-style interface but are, under the hood, very different.
SunVox is actually a modular synthesizer that you then control via a tracker interface. iSequence is more iPad-like with a better interface, but it works solely off samples - either from its rich library or samples that you create yourself. Both are very cool programs and I will be using them extensively in the future.
But the coolest and maybe most controversial tool out recently is Propellerhead's Figure. This tool was very highly hyped and was touted as being "Reason" for iOS. The delivered application is far from that, but at all of one dollar, it's a no-brainer to add to your collection.
Some Propellerhead history with iOS
In the early dark days of iOS, Propellerhead saw an opportunity to resurrect their old ReBirth as an iOS application. Since iOS was new to them, they subcontracted a company (Retronyms) to do the programming, and they delivered ReBirth for the iPhone/iPod. This was a great proof of concept but, on such a small screen, rather difficult to use. The follow-up for the iPad, however, was in my opinion a home run. On the larger screen, being able to twiddle all the 303, 808 and 909 knobs with a touch interface was better than their circa-2000 PC/Mac app ever was.
As good as ReBirth was/is, everyone was clamoring for "Reason" for iOS and expecting Figure to be just that. The good news: Figure does use some of the Reason sounds and has an innovative interface. The bad news: it only records 2 bars, and there is no way internally to save them (workaround below).
If you really want "iOS Reason", there is one alternative. Remember Retronyms? Well, they went on to create their own program, Tabletop, which is similar in many ways to Reason, though I think the sound quality is way behind. I have finally gotten around to using Tabletop on the new iPad, and with sufficient CPU power (beyond the iPad 1), it really is pretty cool. Every knob tweak or movement is recordable, and you can make some wild sounds with all of its instruments and effects. They do charge for every add-on as an in-app purchase, so the price can get a bit high, but overall I like it.
Back to Figure
Since Figure does not allow for any kind of native recording, it is often written off as a toy. I think it will evolve and that future releases will make recording much easier.
In the meantime, there are some cumbersome ways to record:
- Use the headphone jack out and into your computer. This usually adds some noise so if you record into your DAW, you should use low levels and maybe a de-esser or noise gate to remove the artifacts. Nonetheless this is easy and it works with no latency.
- Use a dock such as the Alesis. I don't have one, but if you have the hardware, you can dock an iPad and go through better cabling again with little or no latency.
- Software solution - the one I use!
- I happen to have Airfoil from Rogue Amoeba that allows my Mac to act as an "Airplay" device much like the Apple TV. With this software I can play back my Figure or any other sound and route the digital, lossless audio to the computer.
- I also have Audio Hijack Pro from Rogue Amoeba which will "capture" audio from any device on the computer and save it as AIFF, WAV, MP3 or whatever. Using this to "audio capture" the Airfoil speakers allows me to record anything digitally that I play on my iPad or iPod.
The downside of the software approach is cost (both are commercial products) and latency. The cost for me is not an issue since I already had the tools - they do much, much more than simply record iOS music, so they are worth investigating. The latency on the wireless playback is anywhere from 1 to 1.5 seconds. That's not an issue for playing back recorded items, but if you want to record a live performance on the iPad, this doesn't work.
At any rate, Figure is innovative enough that I am going to keep using it and will probably include snippets of its sounds in future compositions.
Sunday, April 1, 2012
New iPad impressions (musical)
I usually skip a generation with hardware, and the iPad was no exception. I limped my original iPad 1 along all during the iPad 2 years and finally bought a replacement, the iPad (sort of 3). My biggest use for the iPad is music making, so the incredible Retina screen is second on my list of features. The biggest bump for me musically was the processor speed. I believe this is near the same speed as the iPad 2, but coming from the iPad 1, the change is very welcome.
Garageband is now a pleasure to use since it isn't constantly mixing down the tracks to conserve memory. I've also finally rediscovered Retronyms' Tabletop, which really didn't run at all well on the original iPad. Tabletop is pretty much a Reason knock-off for the iPad and is a very good "virtual studio" approach, although it does nickel-and-dime you to death with in-app purchases.
Something about it must be appealing because I keep trying it out. I'll probably get a song out of it shortly and I do love that every knob and button can be recorded.
Another recent accomplishment: I finally put out a small piece with SunVox. This is an old-school tracker with a very odd user interface, but I am impressed by how powerful it is. If you have the patience, you can pull off almost anything with this tool. It includes a very capable modular synthesizer and runs on every platform imaginable. It's easy to move your work from device to device. I must say that the iPad interface is a bit cumbersome - it was sort of made for either stylus or mouse use - but it does work.
Here's my recent effort with it:
My other favorites also work more or less unchanged with the new iPad. I've been working quite a bit with Garageband now that it supports note editing and is almost a complete DAW!
Friday, March 9, 2012
Back to DAWs - Garageband update
While I wait for my new iPad to arrive, I already am enjoying the updates to Garageband for the iPad. There has been a lot of discussion around whether or not it is a complete DAW now that it has note editing and of course the comparisons to other iPad offerings.
So, here is my take on it all. The product itself at $5 is a no-brainer. If you make music on your iPad, buy it! Even if you use only a fraction of its features, it is phenomenal. DAW-wise however, there are areas in which it excels and still some weaknesses.
The note editing is a welcome addition and makes it a bit more complete as a music tool. I think NanoStudio's editing is superior, but in most respects the Garageband solution is "good enough".
The addition of Smart Strings is phenomenal and a very nice expansion of its instruments. The performance instruments are one place where Garageband shines over most others. So one thing I look at now is: what will be most impacted by the new Garageband? Which of my tools might I use less? If you follow the blog, you know that, like most, I have a huge collection of music tools on my iPad.
I think there are roughly two categories of DAWs on the iPad at this point. One type encourages patch creation and custom sound sculpting. NanoStudio is one such DAW, where you dial in your oscillators and filters and carefully craft your sounds. SunVox is similar in the tracker subcategory, in that you create your own sounds from the ground up and then use them. The other category is DAWs that rely on samples and pre-built instruments. This group includes Xewton Music Studio (and FL Studio for iPad by the same company), iSequence and, in most respects, Garageband. You use the sounds provided and can make minor tweaks or adjustments via a limited range of dials or modulators.
So, in my environment, Garageband is most likely to supplant Xewton Music Studio. It is much cheaper, provides superior samples and is a joy to use. Music Studio is pretty nice in many respects but falls into more or less the same category.
One common knock on Garageband is that it doesn't interact that well with other tools, which is true to some extent. If you are in a Mac environment at home, it is very easy to begin a piece in Garageband for iPad and then complete it on your iMac with Garageband for Mac or Logic 9. This works beautifully, though it is a one-way trip.
Garageband doesn't do audio copy exactly, but it DOES let you audio-paste sounds in. So if you want to incorporate your Korg or Sunrizer tracks, it's pretty easy to do. You can also bounce down tracks to get around the 8-track limit if you need to.
One other knock is that Garageband doesn't play in the background. If I want to play a Sunrizer track in time with the Garageband tracks, I have to use a little workaround. I have written before that emailing a track to yourself from Garageband is not a good idea because it degrades the sound to 128 kbps AAC. BUT, if all I want is a guide track for recording another one, I can email the track to myself, do an "open with" and play it via ThumbJam or some other background-capable app. I then record my Sunrizer track in time with the music and paste it back into the original uncompressed Garageband song as audio.
In my environment, Garageband is now my go-to DAW for any sample based music I want to put together. For synth creation I either paste in audio to Garageband or go back to my standby - NanoStudio!
Saturday, March 3, 2012
My underutilized iPad music tools
As I wait for the next iPad announcement, I am dusting off some of my portable music tools that I have yet to use much in my music. I often return to NanoStudio, Mixtikl, Garageband, Music Studio and the usual suspects when making music on the iPad, but I've picked up a number of other tools that are taunting me to use them!
All of the Korg tools are great on the iPad, and recently I've acquired iKaossilator, which has been sitting on my devices as a pattern toy. With a little experimentation, I find that I can string together variations on its patterns in other DAWs and add some very nice sounds to my pieces. Such as here:
This piece uses the iKaossilator patterns along with some NanoStudio tracks over top of them.
One thing Korg does very well is integrate its tools with SoundCloud, but to upload to their groups, the ENTIRE piece has to use only the Korg tool in question. This can be somewhat limiting, but it lets other Korg users download your patches and setups as well as the music, and those groups get TONS of hits, listens, etc. So on my to-do list is to make a piece with only iKaossilator. The only way to really compose a full piece is to use the live recording function and manually switch between patterns and parts, or play them live while recording. This works well, but if you want to tweak or edit the final piece, you are limited to audio editing.
Another tool that has been on my devices, and even my desktop, for ages is the tracker sequencer SunVox. I tried on several occasions to fire it up, start programming patterns and... ran away screaming! It's a bit, um... fugly in its interface, but there's a reason for that!
SunVox runs on almost every device known to man: Windows, OS X, Palm devices, old tablets and iOS. Its odd interface is therefore consistent in every environment - it sort of reminds me of Mixtikl in that respect. Equally bad everywhere!
Nonetheless, it is a very powerful DAW if you can get your head around the tracker concepts. I guess typing in hex numbers for volume values brings out the geek in me.
I have yet to get a full piece put together with this but it is a very interesting process. Like Mixtikl, Sunvox lets me move my pieces from device to device and work on them anywhere.
I will eventually figure this thing out!
Still another synth that baffles me is TC-11, which everyone raves about. It is a completely new idea in interfaces, made from the ground up for touch screens. It has incredible depth and sounds, but I have yet to get anything remotely musical out of it. I think if you want a melodic line, you really have to use its internal sequencer, since there is nothing remotely similar to a keyboard in the synth. It was a bit expensive as well, but it's another piece I want to add to my sounds soon.