Saturday, December 21, 2013
I've continued posting there for 10 years now, and even though it is waning in popularity, some of the best people and feedback on my music making have come from there.
Sadly, the site has been up and down for the past few weeks due to technical problems, but it is still one of my favorites even though I've been posting more to SoundCloud and iCompositions of late.
Recently there was a challenge called LSP (Lost Songs Project) where you create a song that could pass as a "missing" track from a commercial album. It was terrific fun, with entries emulating everything from classical to the Doors or the Beatles.
I went obscure with my entry and went back to one of the early Jazz albums I fell in love with. In the early 70s there was a post-bop fusion movement bringing electronic instruments to Jazz. Bitches Brew by Miles Davis and the birth of Weather Report were highlights of this era. Herbie Hancock also put out a few albums in this style, and my favorite was Sextant - worth popping over to Spotify for a listen!
For my LSP entry, I tried to emulate the afro-electronic jazz sound of this album, though I used more modern synths. The original used a few old ARPs and a Moog III. I used Absynth, a synth I'm trying to learn much more about, the Korg M1 legacy edition with my wind controller (EWI) and the Aalto synth (modeled after the Buchla).
While I don't even have a fraction of Herbie's talent, here is my humble attempt:
Saturday, November 23, 2013
If you are one of the very few that follow my blogging or music, you probably know that one of my favorite tools is Noatikl, which is essentially a super-sequencer or music composition tool.
Recently on a few pieces I received some nice feedback on my "playing", or the performance aspects of generative pieces, and that got me thinking about how to take that. All of the pieces I put together and release are usually 100% MIDI, or close to it, which means I am using computer-generated sound in one form or another.
If my pieces are synthesizer based, I am using software synths as plugins on the iPad or on my desktop. If my pieces use "traditional" instruments, I am resorting to samples played on either a keyboard or other creative MIDI input devices using Logic or Kontakt on the desktop or any number of samplers/Romplers on the iPad.
Some of my pieces are performed track by track with MIDI keyboards and then edited (for mistakes), tweaked for sound and/or timing issues and then combined. This is a bit closer to traditional performing in a studio environment. I NEVER perform anything live.
Other pieces, such as those from Noatikl, are more akin to composing. I program each voice with rhythmic patterns, keys to use, rests, probabilities, instructions for harmony, etc., and more or less "turn them loose". This is certainly not performing in the traditional sense, but it is setting music into motion and then tweaking the program until it sounds "done" to me.
One aspect very close to performing is when I use the EWI (Electronic Wind Instrument) in my pieces. Playing it requires fingering the keys and blowing into the mouthpiece, and the device translates all of this into MIDI events that are passed into a (usually) sample-based instrument. This can be a flute, trumpet, sax or even a synthesizer - all modulated and controlled with my breath. Despite the "disconnect" from an analog wind instrument, it reacts almost exactly the same.
So with all that, is this just a shortcut to music? Is this just winding up a music box or turning on a player piano? I think with all the setup and parameters it's more than that, but it's not quite live either - even if done in one take.
I think music has evolved further and further away from direct creation over the years and that computers are just one more step along the way.
From the initial singing or chanting, we have evolved more complicated ways of making sounds throughout history. First with pipes and blowing or percussion with sticks, then strings or harps. These evolved into mechanical versions - harpsichords, pianos. Is the musician still performing when "just" pushing keys that turn levers that make hammers strike strings? What happens when it is electronic as in an organ? Sample based as started with rotating drums or tapes?
Overall I think studio work is a mix of composing and performance and the tools used don't really define that. So when my beautiful violin part is merely a mutating rhythm based on:
<100 R 30 -30 60 60>
it is still in some way a musical composition. I get to play "George Martin" to the performers in these cases and hopefully come up with something palatable!
Here are a few pieces "generated" more or less in that fashion - one from the iPad and the other from the desktop:
Monday, November 18, 2013
I tend to keep with a policy of skipping each generation of hardware, and since I dutifully skipped the 4th generation iPad, I broke down and bought an iPad Air. There was a $200 buyback that I took advantage of for my ancient and unused first-generation iPad as well, so it wasn't quite as much of a wallet shock. I went with the 64GB model because I store tons of music samples on my iPad and the size does matter! I opted for the full-size model instead of the Mini because again, with synth apps, the size matters.
Early reports have been very good related to music making on the Air and I can confirm that most of my performance concerns on CPU have been addressed. If you follow my tutorials or postings on Noatikl, you might remember that I used mainly Sampletank due to its ability to give 4 MIDI channels without too much CPU overhead provided you use a light DAW along with Audiobus.
So, with the Air, I decided to try a piece with a heavy DAW and with 3 concurrent synths recording. The heavy DAW is my favorite, Auria and the synths are the Dxi synth, Thor and Alchemy Mobile. I drove all 3 synths with Noatikl which would usually bring my 3rd gen iPad to its knees. Without a hitch I was able to record all three concurrently into Auria, which is a very heavy DAW by itself.
Another recent addition to music making on the iPad is inter app audio as I blogged before and I added some additional tracks using the Arturia Oberheim SEM synth and my old standby Sampletank for the violin.
Throughout the piece, Auria was responsive and the tracks all recorded without issue. I think I can now add many more concurrent tracks with Noatikl and Auria is now a go-to DAW for me. I'll be posting more as I experiment further. Thanks for reading!
Sunday, November 3, 2013
[Image: Arturia Mini with IAA transport shown]
With iOS 7 on the iPad, one of the more significant additions is inter-app audio. We've had Audiobus for quite awhile now, but inter-app audio makes audio sharing more integral to the operating system (Apple is allowed to make use of all the internal routines, unlike 3rd parties, which gives them a bit of an unfair advantage, but I digress).
At this time, only a handful of apps support IAA, but more are adding it every day and it will become increasingly common in the coming months. IAA makes synths on the iPad act much more like VST or AU plugins do on the desktop. You can easily plug compatible synths into any DAW that supports it and record right from the synth into the DAW without Audiobus routing or the need for other apps running (this means a lot on the limited memory/CPU of the iPad!).
Two DAWs that already support IAA are Auria and Apple's GarageBand. In the case of Auria, you simply plug in the synth like you would any of its insert effects. For GarageBand, you select an IAA track type directly. Both work reasonably well at this point and, not too surprisingly, there are occasional bugs in both. It will probably take a few patches/releases to get things completely stable, but IAA is already very usable.
Auria is becoming my favorite DAW lately due to its fantastic automation and effects bus, but it is a bit pricey. One nice development is that if you have any of the Sugar Bytes effects (Turnado, WOW) for the iPad, you don't have to pay twice to use them in Auria. It recognizes them and lets you use them as native plugins - and native plugins do work better than IAA, provided you are using Auria.
GarageBand has many great additions in the iOS 7 release - the track limit is up to 16 on older iPads and 32 on the brand new ones - I think there is a purchase in my not too distant future. Another more hidden but significant feature in GarageBand for iPad is that, for the first time, you can easily transfer lossless mixdowns out of it. GarageBand used to be the "roach motel" of DAWs - you could get sounds in but couldn't get them out. Now you can mix down and select "Open With" to open the mixed-down AIFF in another app. Not all apps support "Open In", but one significant one that does is AudioShare, which in turn lets you copy/paste anywhere you wish - so for the first time, GarageBand plays well with others.
The piece below was created in Auria with only Inter-app audio synths. I used the Arturia Oberheim SEM synth for the arpeggios, Waldorf's Nave for some of the leads and the Arturia Mini for the bass. There were a few crashes but the entire thing was driven from within Auria and mixed/mastered and posted. I think this will be a huge development for iPad music moving forward!
Just an aside, the Arturia synths are available on desktop for roughly $100 each - on the iPad, they are $9.99 and sound extremely close to the desktop versions!
Wednesday, October 16, 2013
Wednesday, September 11, 2013
Well, I'm a sucker for a sale. Cubasis for iPad, a high-end DAW, recently went on sale and I grabbed a copy. As is often the case, I put it to work with my other favorite - Noatikl for iPad. Using Noatikl with Cubasis is much closer to the Noatikl desktop experience in that you can actually record multiple MIDI tracks concurrently and edit them later.
Anyone who checked out my earlier Noatikl for iPad tutorials saw that I recorded the sounds as audio files into MultiTrack DAW. Here I recorded the sounds as MIDI into Cubasis. The bad news is that you are limited to the software instruments in Cubasis - the good news is that they are pretty good!
One thing I noticed is that when doing virtual MIDI with Noatikl on the iPad, right when the song starts playing, Noatikl sends a BLAST of MIDI messages which often causes problems for the receiving application. For example, iGrand Piano will "reset" to the default piano every time you start recording. In the case of Cubasis, you have to reselect the app to restart it immediately after hitting "play" in Noatikl.
To work around this problem, I would start Noatikl playing with all parts except the drum muted. I would then click on Cubasis to restart it and let the drum start playing. Then I switch back to Noatikl and can mute/unmute tracks while recording the MIDI.
Here is the end result - in this case a syncopated frenetic piano piece in 6/8 time with a jazz drum kit and piano - all recorded as MIDI.
I hope I find a way to prevent the "app reset" issue when starting the song but I have managed to work around it for now. It is great to be able to edit the generated MIDI after the fact and to have every track on separate channels.
Saturday, August 31, 2013
While I've been off on my Noatikl Generative Music for the iPad tutorials, something (finally) happened on the desktop. Apple released the long expected update to Logic - Logic Pro X.
There are by now tons of video walkthroughs and reviews - summing up, it's a great upgrade with a new interface and features, though unfortunately there is no 32-bit plugin support.
For me it was an easy upgrade and I am loving the program overall. My favorite musical genre is Jazz, and most of my songs, if you follow them, are done with MIDI controllers - indeed, I don't even have a decent microphone and rarely work with audio files, with the exception of mixdowns or drum loops.
Here is a piece I'm pretty pleased with that I put together on Logic Pro X:
This is a jazz piece featuring brush kit drums, Suitcase Mk IV electric piano, electric guitar, fretless bass, tenor sax, trumpet and Korg M1 - a semi-fusion sort of thing. I wanted to get a bouncy feel in the rhythm section first, and made use of Logic X's Track Stacks and MIDI effects.
First, the drums - Logic Pro X has an incredible Drummer track feature that will build beats for you in many styles, alter them based on the section of the music, and let you have them automatically follow other tracks if you like - sounds perfect... except that of the many genres supported, Jazz isn't one of them!
Jazz drums usually feature rides, hi-hats and snares much more than kicks. I hope they add that to the Drummer track facility someday. For this piece, I resorted to some good sampled brush kits that I assembled into sections and fills. I don't use many loops anymore, but for drums I still will at times. I used The Loop Loft's Art of the Brushes for this one.
For the rest of the rhythm section, I wanted to have the bass, piano and guitar synced up, so I used the Track Stack feature. Track Stacks in Logic X let you select multiple tracks, group them, and have them all respond to the same MIDI events at once. I used this facility along with some of the MIDI effects to put together the bass, piano and guitar parts.
For the guitar, I used the MIDI Chord effect to automatically generate chords from a single note played on the keyboard. With just the bass and guitar selected, I could play one key on the track stack and have the bass play the note "as is" while the guitar played the corresponding chord.
Next I wanted the piano part to bounce around a bit. Here I also put the MIDI Chord effect on the piano track, but after it I added the MIDI Arpeggiator. This took the chord as played and arpeggiated through it in 16th notes. I used some of the arpeggiator options to randomize the note lengths, octaves, and note order.
Now by selecting the track stack and playing single notes, I get bass, guitar chords and a nice bouncy piano lead. I did some tweaking and used automation to mute the piano in places and unmute it in others to have some variety.
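Conceptually, that chord-plus-arpeggiator chain is a tiny pipeline: one note in, a chord built on top of it, then the chord cycled in 16ths with some randomization. Here's a minimal Python sketch of the idea - the major-triad voicing and the 25% octave-jump chance are my own assumptions for illustration, not Logic's actual algorithm:

```python
import random

def chord_from_root(root):
    # Build a simple major triad from one MIDI note, mimicking the idea
    # (not the exact voicings) of Logic's MIDI Chord effect.
    return [root, root + 4, root + 7]

def arpeggiate(chord, steps=8, octave_jump=0.25, seed=7):
    # Cycle through the chord in 16th-note steps, occasionally bumping
    # a note up an octave, like the Arpeggiator's randomize options.
    rng = random.Random(seed)
    notes = []
    for i in range(steps):
        note = chord[i % len(chord)]
        if rng.random() < octave_jump:
            note += 12
        notes.append(note)
    return notes

triad = chord_from_root(60)      # C major: 60, 64, 67
print(triad)
print(arpeggiate(triad))         # one bar's worth of bouncy 16ths
```

The nice thing about the real plugin chain, of course, is that all of this happens live while you hold a single key.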
Having that done, I went to my "usual suspects" in other MIDI tracks and created a track for Samplemodeling's SWAM Tenor Sax and SampleModeling's Kontakt based Trumpet. These I performed with the Akai EWI Wind controller and recorded the resulting MIDI.
For variety I also included the Korg Legacy M1 synth which I also played with the EWI using the Patchmanmusic M1 patches. This I included in another track stack along with the sax.
In some parts, I selected the sax only and played solo and in others (near the end) I selected the track stack to have the M1 and Sax play in unison. I LOVE the track stack feature for unison parts with trumpet/sax or other combos of instruments.
I don't think I'll ever go back to Logic 9, even with the 32-bit plugins I own. It's far too much fun working with Logic X.
Sunday, August 18, 2013
The ability to have a generative music engine in such a portable form is quite useful. I'd been using Mixtikl on the iPad for awhile and it's great to have its 'big brother' there now as well.
As I've experimented, I've found the iPad version much more limited in the number of tracks, which is primarily due to the CPU on the iPad rather than anything else. When I try 4 different virtual MIDI programs, my 3rd gen iPad gasps for air.
As this tutorial showed, Noatikl with Virtual MIDI works best if you can use apps that can handle multiple MIDI channels with one instance such as SampleTank.
So, where and how might you use Noatikl going forward? If you can limit your tracks to one or two host Virtual MIDI sessions, you can easily create complete pieces in Noatikl and use Audiobus to record the results. If you want to go well beyond that, I think you should consider the desktop version (roughly $50 at the time of this writing).
The desktop has the added advantage that it integrates with your DAW and will record the actual MIDI notes into the DAW and not just Audio. You can then edit the MIDI data further if you wish or replace instruments.
On the iPad, I've used Noatikl in several ways - as a complete composition tool, as a "jump start" for a hybrid song and as just a chord progression starter.
Here is a complete composition from the iPad version:
Here is a composition from the iPad with an added track played "manually" after the Noatikl (the flute is played in Thumbjam):
And just for grins, in this "alliterative" piece I used Noatikl with Addictive Synth just to create some chord progressions; the rest are tracks I recorded afterwards:
I hope this gave you a good overview of Noatikl and how to use it. There are many great tutorials and videos at the Noatikl site at: http://intermorphic.com.
Next post I need to start getting into Logic X which I recently purchased :)
Monday, August 5, 2013
In the past six parts of this tutorial, we have built up a nice piece using drum voices and several melodic voices all in Sampletank. Played together, the parts harmonize and sound very good together. To add interest to the piece though, we probably want to position the parts in the stereo field, bring them in and out of the piece over time and record the final results.
You'll notice that Noatikl really doesn't have a record function. Sampletank does, sort of, but it is limited to track-by-track recording - my advice: don't use it! The best solution is to use Audiobus and record your piece into a DAW such as MultiTrack DAW or GarageBand. My recommendation is MultiTrack DAW since it uses very little CPU, but you can use any supported Audiobus target app.
Before recording, let's position the voices and set volume. Noatikl supports volume and panning for each MIDI channel. Remember, though, that we are just pumping out MIDI messages with Noatikl. It is up to the receiving app to react to the messages. Different apps have different levels of support for MIDI. The good news for us is that Sampletank supports both volume and panning MIDI messages.
In Noatikl for iPad, the Blend screen lets you set up both volume and panning by dragging each voice up or down for volume, and left or right for panning. You can see below that I've moved the voices around slightly.
One thing to note is that volume and panning are per MIDI channel. Remember that all of our drum voices go to channel 10. This means that only the first voice on that channel (Kick) will have any effect on volume or panning. If you move the kick voice around, you will hear the changes; moving any other channel 10 voice, such as the snare, will have no effect. Since all the other voices are on their own channels, they will work as expected.
So now the voices are panned left and right and at the proper volume. Next we would like to have the song open with a beat and gradually bring in the other sounds. For that, press the speaker-with-an-'x' button to bring up the voice mute screen. While playing the piece in Noatikl, we can tap individual voices to mute and unmute them.
Here you should take some time practicing. Start the song playing with only the drums active and gradually unmute the parts. At the end, gradually mute the parts to exit back to drums. In between, mute and unmute for variety. Once you have a plan, it's time to start recording!
In Audiobus, we will set the target to be MultiTrack DAW and the source to be Sampletank. We won't be using the effects bus in this piece. You may need to set the buffer size to 512 in Audiobus using the button at the top.
For recording, we will first mute all voices except the drums. We will then switch to the MultiTrack DAW app and press the record buttons. Make sure you know how to task switch in iOS! We need to start recording, double-click the home button, quickly switch to the Noatikl app, and press play. This will put some blank space at the start of the song, but we can easily edit that out later.
As the song is playing, gradually unmute and mute parts to add variety. Remember that if you have any "following" voices in Noatikl, they will not play if the voice they follow is muted. This may take you a few tries to get right, but this is basically it! Once you have a take you like, you can eq or compress in the DAW, add a Fade in or fade out and edit as needed.
I will wrap up in the next post on where you might go next with Noatikl. I hope this tutorial helps whet your appetite!
Wednesday, July 31, 2013
In this part of the tutorial, we will finish out the voices for our song with a flute and a vocal choir sound. For these voices we will use another voice type - rhythmic (which is the default for Noatikl).
To add the voices, press the + button on the design window and select Rhythmic to merge in a new voice.
Do this for each voice and name the first, "flute" and the second "vocals". Make sure that flute is connected to MIDI channel 2 and vocals to MIDI channel 3.
For the flute, we want just one part playing, but we need to set the rhythm rule and move the pitch to a higher range. The Pitch is set to 55 (higher numbers mean higher notes) and the Pitch Range is set to 32. Together these determine the highest and lowest notes.
The rhythm rule is set to "All But Dotted" for the flute. Clicking on the rule will show you the note types, and you can tweak the probability for each duration if you wish.
You'll notice that each grey bar is set to 100% so each of the grey notes is equally likely to play.
For the vocals voice, the rhythm rule is set to slow.
We want the vocals to sing in harmony, so we will go to the chord screen and select a chord range. For the vocals, we use a Depth of 1 and a Range of 3, which means each note will be sung by anywhere from 1 to 4 voices in harmony.
If we listen now, we should hear a fully harmonized song! In the next part, we'll cover how to perform and record the song.
Saturday, July 27, 2013
In the last tutorial, we finished off our drum parts which were all hooked up to MIDI channel 10. This tutorial is going to add a nice acoustic guitar to our song. As the picture above shows, we have Sampletank playing its guitar part on MIDI channel 1 using samples for a nylon string acoustic.
In Noatikl, we are going to add a new voice by pressing the + button in the design window, selecting Fixed Pattern voice, and pressing the Add button on the upper right of the screen.
Press OK to merge it into the current song file and rename it to Guitar. By default, it should be connected to MIDI channel 1. If it isn't, just drag from the voice to the MIDI channel.
For the guitar, we will use several patterns but we also want the guitar to play its own thing 25% of the time. As before, we will create some patterns and set the percent use to 75%. So far, we have used only rhythmic patterns such as <100 R -60 30 30 -60 60>. For the guitar part, we will use another pattern type in Noatikl where we specify both the rhythm and the note interval. The note interval is a relative number within the scale we specified and means that the 2nd, 3rd, 4th etc. note of the scale will be played.
Instead of the "R" in the pattern, we will use "B" and we will specify both note length and note interval. Our first pattern is:
<100 B 60 5 60 8 60 12 60 2>
100 is the relative weight or probability, "B" means both rhythm and notes, and the next numbers are pairs that specify duration and pitch. This pattern plays a quarter note 5th, a quarter note 8th (meaning first note of scale but one octave up), a quarter note 12th (octave up 5th), and then a quarter note 2nd.
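As a sanity check on that reading, here's a tiny Python sketch that decodes a "B" pattern into (duration, scale-interval) pairs. The parsing is my own, based only on the semantics described above (60 duration units = one beat):

```python
def decode_b_pattern(pattern):
    # Split '<100 B 60 5 60 8 60 12 60 2>' into a weight and a list of
    # (duration, interval) pairs; 60 duration units = one beat.
    tokens = pattern.strip("<>").split()
    weight = int(tokens[0])
    if tokens[1] != "B":
        raise ValueError("not a 'B' (rhythm and notes) pattern")
    nums = [int(t) for t in tokens[2:]]
    return weight, list(zip(nums[0::2], nums[1::2]))

weight, pairs = decode_b_pattern("<100 B 60 5 60 8 60 12 60 2>")
print(weight)  # 100
print(pairs)   # [(60, 5), (60, 8), (60, 12), (60, 2)] - four quarter notes
```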
We are going to adjust the root pitch to be 33 (I think this is A#) with the pitch slider on the Basics screen for the voice.
Notice that Use Patch is turned off for this voice. We are not using General MIDI codes here; we are using a sampled instrument. The patch name shows guitar, but it is not relevant (I selected the patch just for documentation purposes).
While we are here, I'm going to set the note rest % to 10 to give the patterns some variation. 10% of the notes will be replaced with silence giving us a little syncopation.
If we play at this point, you should hear a boring one-note guitar sound along with your drums. Let's add two more patterns with the same 100 weight (making each of them equally likely) so our pattern list looks like:
Let's make some chords now. We want the chords to be strummed and to be either 3, 4 or 5 strings. Go to the Chords screen and specify 3 for the chord depth (meaning 3 strings will sound) and 2 for the depth range (this will vary the number up to 3+2 strings).
The default strategy says Chordal Harmony, but we will change it to Interval within Scale Rule to use notes closer together, as a real guitar would sound. In the Shift/Interval field, put 2 so we get 3rds or 5ths. I also set the Shift Interval range to 2 to provide some variety. Experiment with these! The Delay on the chord is set to zero with a range of 4. This means each note of the chord may have a few ms of delay, which makes them sound 'strummed'. Again, experiment. If you play now, you'll hear some nice rhythms and chord progressions.
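The strum effect is really just a small, randomized per-note delay. A quick Python sketch of the idea - note that the delay unit here is an assumption for illustration; Noatikl's actual internal units may differ:

```python
import random

def strum_delays(num_notes, base=0, delay_range=4, seed=3):
    # Give each note of the chord a small random delay in [base, base+range]
    # so the notes fire slightly staggered, like a strummed guitar.
    rng = random.Random(seed)
    return [base + rng.randint(0, delay_range) for _ in range(num_notes)]

print(strum_delays(4))  # four small offsets, one per string
```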
Lastly, remember that we put 75% on the use pattern percentage, so we need a rhythm rule to use for the other 25%. Set it to Slow:
Voila! We now have drum and guitar playing together. Next parts will cover a flute and some vocals.
Monday, July 22, 2013
In the last tutorial, we got our Noatikl engine to start playing a beat with some built-in variations. In this part, we will finish out our drums.
Currently, our design window has a kick drum, snare and high hat and looks like:
Just a reminder that since we are using virtual MIDI, the only parts of this screen that are relevant are the voices and the MIDI channel. The other buttons, Synth, effect, etc. are only used when using the built-in Partikl synth.
To finish out the drums, we are going to add a 'shadow kick' drum and some percussive sounds. Let's start with the kick. We want kick drum 2 to play one beat behind the main kick drum, but only when the main kick actually plays (not when it is skipped in a measure). To do this, we will again use a Following voice with a delay of one beat. Tap the current kick voice and press Copy. Tap it again and press Paste. Tap the new voice, change its type to Follows, and change the patch to D035-Kick Drum 2.
As before, we need to press the Follows button on the left of the screen, set the follow voice to 'kick', change the delay type to 'Beats (60th of a)', and carefully move the delay slider to 60.
Now if you press play, hopefully you hear a nice shadow kick.
Our last percussion voice will be an electronic-sounding percussion hit. We want it to normally play a fixed pattern, but with occasional variation. We will use the Fixed Pattern type for this one, so copy the kick voice and press Paste again to 'prime' the new voice.
Change the voice name to 'Perc'. The design should look like this:
We are going to give this voice a fixed pattern, but we are also going to let the voice construct its own pattern 25% of the time. To do this, we will go to the pattern window and use <100 R -60 30 30 -60 60> for our pattern (quarter rest - two eighth notes - quarter rest - quarter note). We will also change the use percent to 75.
What this means is that roughly 75% of the time, this pattern will be played but 25% of the time, the voice will make up its own pattern using the rhythm rules you provide. So on the Patterns screen you should see:
Almost done! Now press the Rules button on the left of the screen. On this screen, tap the rhythm rules option and select the "Semiquavers Only" option. (Semiquavers are 16th notes - told you a musical background helps!).
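Putting those two settings together: 75% of the time the voice plays the fixed pattern, and 25% of the time it improvises a bar under the "Semiquavers Only" rule. Here's a rough Python simulation of that behavior - the 16th-note representation (15 units each, 16 per 4/4 bar of 240 units) is my own assumption for illustration:

```python
import random

FIXED = [-60, 30, 30, -60, 60]         # the <100 R -60 30 30 -60 60> body

def next_bar(rng, use_percent=75):
    # Play the fixed pattern most of the time; otherwise make up
    # a bar of 16ths, like the "Semiquavers Only" rhythm rule.
    if rng.random() < use_percent / 100:
        return FIXED
    return [15] * 16                    # an improvised bar of 16th notes

rng = random.Random(42)
bars = [next_bar(rng) for _ in range(1000)]
print(sum(b == FIXED for b in bars) / 1000)   # close to 0.75

# Either way, every bar still adds up to one 4/4 measure (240 units):
assert all(sum(abs(n) for n in b) == 240 for b in bars)
```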
Finally, we are done with the drums... mostly! All the parts play and they sound OK together, but if you listen extremely closely, every beat is right on the money and machine-like. Real drummers aren't perfect and vary their timing ever so slightly. I'll show you on one voice, but you might want to vary the timing on each drum voice.
Taking the kick as an example, press Edit and then select the "Micro Delays" button. We will set the Delay Range to 2 and the Delay Change to 1. This means that each note may be 0, 1 or 2 microseconds off its target. You can play with the range to make it more or less random. This tends to "humanize" the beats.
A lot of work for beats! What is great about Noatikl is that the beats will vary themselves. No need to create lots of patterns and go measure by measure.
Tip: save the project with just the drum kit to use in future projects as a template.
Next time we'll go into the melodic voices.
Wednesday, July 17, 2013
In the last tutorial, we got as far as attaching Noatikl to Sampletank via virtual MIDI and actually getting our kick drum working. To do that, we created a fixed pattern voice and provided a pattern for the kick drum which has two lines. In this post, we'll go into explaining the pattern a bit more in depth. See links on right for other tutorial parts.
If you listen carefully to the kick drum pattern, you may notice a few things:
- Obviously there is some variation in the rhythm since two patterns are used
- The volume of each kick varies somewhat
Let's start with the first pattern: <50 R 60 -180>
All patterns in Noatikl appear between <> brackets (which are a pain to type on an iPad!). There are several types of patterns supported by Noatikl - this one is a rhythmic pattern, as indicated by the "R". For a drum part, the note pitch isn't usually important since you just strike the instrument. Each beat in Noatikl is represented by 60. For a rhythmic pattern, positive numbers are notes and negative numbers are rests. The initial number (50) is a weighting factor determining the probability that the pattern gets played, assuming you have more than one pattern. So <50 R 60 -180> plays a kick on beat 1 and then rests for the remaining three beats.
For the kick drum, our other pattern is <50 R 60 -90 30 -60>. Following the logic above, you can see that this pattern plays a kick on beat 1 and on the 2nd half of beat 3. The numbers total 240, or 4*60, which is one bar of 4/4 time. Intermorphic has more documentation on patterns here.
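That arithmetic is easy to check mechanically. Here's a short Python sketch that parses a rhythmic pattern and totals it up - the parsing is my own, following only the rules just described:

```python
def parse_r_pattern(pattern):
    # '<50 R 60 -90 30 -60>' -> (50, [(60, True), (90, False), ...])
    # where True means a note and False means a rest.
    tokens = pattern.strip("<>").split()
    weight = int(tokens[0])
    if tokens[1] != "R":
        raise ValueError("not a rhythmic pattern")
    events = [(abs(int(t)), int(t) > 0) for t in tokens[2:]]
    return weight, events

weight, events = parse_r_pattern("<50 R 60 -90 30 -60>")
print(weight)                          # 50
print(sum(d for d, _ in events) / 60)  # 4.0 -> one full bar of 4/4
```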
Using what we already know, we are now going to add a sort of hi-hat part to the drums. We could add a new voice from the design page by clicking + and selecting another voice type, but to save time, we will copy and paste our kick sound. Tap on the kick and press the Copy button to put it in the clipboard. Tap kick again and press Paste. This gives us another voice connected to MIDI channel 10.
Tap the new voice, press edit and rename it to "hh".
We need to select another patch to get our desired sound. Bring up the voice edit screen for the hh and experiment with different patches (use the patches starting with "D" for drum sounds). I found that for the drum kit we selected in Sampletank, patch D047 sounds good even though it isn't labeled as a hi-hat.
When you have a pattern defined, I like to hit play and then audition different patches to pick one I like. For our pattern, press the pattern button on the left of the voice edit window and enter the following 2 patterns:
The first pattern plays a series of 8th and 16th notes and the second pattern is silence! The 75/25 weights mean that 75% of the time the notes will play, and 25% of the time the hh will not play. This provides some variety, but we want to hear a bit more syncopation. We could create a number of alternate patterns, but there is another way to provide variation.
In the Voice edit screen, select the Basics button on the left and move the slider for "Notes rest %" to 10. This will randomly replace notes with rests 10% of the time which makes the hh tapping more interesting.
We're starting to get a beat going. Next we want to add a snare sound between the kicks. We could create another pattern on alternate beats, but this time we are going to use a different voice type. The voices we have so far are "Fixed Pattern" voices. For the snare, we are going to use a Following voice. Basically, we want the snare to follow the kick drum, one beat behind.
To create the snare, let's copy the hh and paste it like before and edit the name to "snare". While on the Basics edit screen, we will change the voice type to "Follows" and we will select patch D045 - I happen to prefer its sound to the snare patches.
Next, we need to select the Follows button on the left and tell the voice which voice it should follow. We will also change the Units selection to Beats (60th of a) and set the delay slider to 60 to give a one-beat delay. We will also change the Percent to 80% so the snare only plays 80% of the time, for more variety. Note: setting the delay to exactly 60 on the slider is difficult! Get a number that is close and then press the + or - button to reach the exact value.
This is getting closer and closer to a full kit! Next post, we will finally finish off the drums. Often, the drums in Noatikl take me the longest to set up.
Saturday, July 13, 2013