First off, let me tell you that I’m not a professional musician or audio engineer. Music production has always been my main hobby, and I feel like I have a decent understanding of most of its workings. These music production tips are not the be-all and end-all for everyone, and they won’t fit every workflow and/or DAW, so don’t look at this as a complete guide to anything, but as something to keep in mind, or a pointer towards something to research for yourself.
Keep Your Project Files Organized
This is important for many reasons. If you do this, you can easily pick up an older project and recognize the different elements without having to figure out what you thought back then, and you will navigate your projects more efficiently.
1. Use color codes for different elements in your mixer and playlist
E.g. dark blue for drums, light blue for cymbals, green for melodic elements, orange for FX, purple for bass, etc.
2. Do the same with MIDI patterns and sounds (and even automation, if your DAW allows it) in the playlist/arrangement view.
3. Have different elements placed apart from each other in the playlist
4. Name things accordingly
I use a 2/3-letter prefix for all elements, followed by something that indicates what sound it is (E.g. DR Kick, LD Synth Lead, VOX Lady singing, etc. DR = Drums, LD = Lead, VOX = Vocals).
5. Create time markers or cue points in the arrangement view and name them if possible
6. Not all DAWs are created equal
Some people will argue that you can achieve the same results with any DAW out there, and I guess in general, or for “basic” music (1 bass, 1 guitar, drums, 1 voice, all of which are recordings), you can. However, DAWs have different things that make them unique, and are equipped for different tasks. Off the top of my head: FL Studio’s piano roll is exceptionally easy to use and more efficient than most others. Ableton Live has a very intuitive and refreshing workflow. Reason is easier to use for old-school recorders who are used to outboard equipment layout and routing, and it has almost everything you need as native plugins. My point here is that there’s no right or wrong DAW to use, as long as it does what you need it to. I myself produce mostly in Ableton these days, but I also have Logic and FL Studio, and use them if I feel like what I’m about to do is going to be easier in that DAW. If you are open to learning several DAWs and getting the best out of each of them, you have a larger toolshed and will be better equipped for every job.
7. Use sub-mix channels for different elements
I usually have a drum, synth, acoustic, FX, and bass sub-mix. A sub-mix is a mixer track/channel that has a group of instruments routed to it, which then routes out to the master channel. This allows you to process each sound individually, but also as a group. It’s very handy for fades and special effects (like a filter automation), but it can also be used for less pronounced effects. A little bit of compression on a drum sub-mix can make the individual sounds have a similar feel to them, making them sound more like a drum kit and less like individual samples. Sub-mixes can be CPU intensive however, so if you try this trick out and your DAW is getting sluggish, you might have to change your approach, or try to find another way to achieve this.
8. Set up AUX channels for certain effects
Having things like reverb or delay on its own mixer channel, and sending sound (via a bus or send function) from the channels you want the effects on to them, makes the processing less heavy for your computer than having multiple instances of reverb or delay processors. It also helps with consistency (if the same reverb is used on multiple elements, they sound like they originated in the same room or setting – this can sometimes glue a custom drum kit together too). If you need a special kind of reverb or delay for an element, you can always chuck the effect on that channel, and not send it to the AUX.
9. Beware of clicks and pops
The membranes in your speakers create sound waves by vibrating. When no audio is being played, these membranes are at what we call the resting position, and in order to produce sound, they need to go from that position to a state of vibration. For this transition to be smooth, the sound being played must be eased in, so that the membranes start to vibrate slowly and ramp up from there. To make sure the transition is smooth, we have to create fades on every audio clip that has sound information right at its start or end. The fade doesn’t have to be long; even a millisecond will do the job. A typical sample from a sample pack will already be faded in and out accordingly, but if we cut an audio clip somewhere, e.g. in the middle of it, we will probably have to create a tiny fade to make sure the sound is eased in. Another way to avoid this is to cut the audio at a zero crossing point, which I’ll let you google yourself.
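To illustrate what such a micro-fade does, here is a minimal Python sketch (plain lists and the standard library only; the clip is a made-up 440 Hz tone cut mid-waveform, and the function name is my own):

```python
import math

def apply_fades(samples, sample_rate, fade_ms=1.0):
    """Apply a short linear fade-in and fade-out so the speaker
    membrane is eased into and out of vibration (no click or pop)."""
    n = max(1, int(sample_rate * fade_ms / 1000))  # samples per fade
    out = list(samples)
    for i in range(n):
        gain = i / n
        out[i] *= gain        # fade in at the start
        out[-1 - i] *= gain   # fade out at the end
    return out

# A clip cut mid-waveform: it starts at full amplitude, which would click.
sr = 44100
clip = [math.sin(2 * math.pi * 440 * t / sr + 1.0) for t in range(sr // 10)]
faded = apply_fades(clip, sr)  # first and last samples are now silent
```

A one-millisecond fade at 44.1 kHz is only about 44 samples, so the body of the clip is untouched; only the very edges are eased.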
10. Explore the concept of parallel processing
If you have a sound on one mixer channel, experiment with sending it (with a bus or send function) to another channel, processing the two copies with different effects or settings, and blending them together. A classic example is parallel compression on drums: one copy stays clean and dynamic, while the other is heavily compressed, and the two are mixed to taste.
11. Minimize processing load on your computer by bouncing (recording, resampling) individual elements, and disabling the plugins used
By disabling, I’m not necessarily saying you should remove them from the project; many DAWs have a freeze or smart-disable function for plug-ins that limits the CPU usage of the plug-ins you enable it on. If you are satisfied with the melody and sound design for e.g. a synth lead, hit record on the mixer channel it is routed to, select the parts you want to record in the arrangement, and record it. Some DAWs have made this a feature, so that you don’t have to play through the selection manually while recording. Look up “bouncing”, “disk recording”, or “freeze” functions for your DAW. This can also help you clear up mixer channel effect slots, by printing all the effects onto the sound.
Logic Pro X Tutorial: How to Freeze Tracks & Optimize Computer Performance
12. Record a sound and mess with it in an audio editor
There is only so much you can do with mixer channel effects. If you really want to mess around with a sound, try recording it and editing it in an audio editor, like Audacity or your DAW’s native audio editor. Here you can chop, reverse, flip, and a lot more, which can lead to some interesting results. Your imagination is the limit; if you want to create something unique or experimental, this is a great way to play with a sound.
How to Use Ableton: Resampling
13. Save presets of most plugins when you are happy with a result, and name the preset something that helps you remember
That sweet synth sound, lush reverb, or crazy flanger might be something you want to use outside the project file you made it in. If you do this, you build your own bank of effects and sounds that speak to you as an artist and color your music the way you like it. I almost always save a synth sound I’m happy with before I try the new tweaks I have in mind, and if I’m happy with a sound I’ll save the preset before removing the synth from my project (after resampling it into an audio file). That way, if I have doubts about the melody, or get an idea later on, I can always load the synth in again, make the changes I want, and bounce that element into a new audio file. This can also work as a workaround for DAWs that don’t have a smart-disable/freeze function.
A little, carefully tuned compression on most elements can really tighten up your mix
14. Too much compression on a single element can make it sound bad, but a little compression on most elements helps clear up the mix
Over-compressing takes away the dynamic range of a sound, which will most likely make it sound weird or just plain wrong. But a little, carefully tuned compression on most elements can really tighten up your mix, make each element stand out more, and make space for other elements. There’s no blueprint for which elements to compress, but I personally try to avoid it on bright sounds, like cymbals, and on multi-layered melodic elements, like a string ensemble, unless I need to. This is sort of my own rule of thumb, because bright elements tend to cut through the mix whether they are compressed or not, and they typically don’t take up a lot of space in the frequency spectrum. A string ensemble tends to sound too machine-like if the dynamic range is lost, but again, that’s more a matter of taste than a matter of do or don’t. Look into compression techniques and how to use this tool to best effect.
How to Avoid Over-compressing
15. Use EQs in multiple places in the effect chain, if you need to
With very few exceptions, I put an EQ as the first effect on every mixer channel. Cleaning up muddy low ends in bass sounds, or removing harsh high frequencies in other elements, is simply a necessity for clean mixes. Some sounds are great as they are, but a little tweaking can really bring out the best and remove the worst in a sound. If you have a sound with certain resonant frequencies that sound hollow or distorted, try using an EQ band with high Q and gain, and sweep it across the frequency spectrum until that harsh sound is amplified, then turn the gain down until it is gone. Experiment with the Q after that. On some EQs and sounds, you can do this multiple times across the spectrum to remove all unwanted resonances with only one EQ instance; this is sometimes called a comb EQ.
I tend to use an EQ to reduce unwanted frequencies before the compression, so that the compressor doesn’t boost them back up.
Since this is before the compression in the effect chain, I don’t compress and boost the unwanted frequencies in a sound. After the compression, I have another EQ, which I use ever so slightly, or sometimes quite radically, to boost the frequencies that need boosting. If an EQ (both boosting and cutting) is applied before the compression, the compressor might reduce the boosted frequencies, or bring the cut frequencies back up. This depends on your compressor too, as some compressors only compress downwards, while others compress both upwards and downwards. If the EQ is applied after, you might already have compressed away the frequencies you want to boost, or boosted the frequencies you want to reduce. So with this in mind, I tend to cut unwanted frequencies with an EQ before the compressor, so that the compressor doesn’t boost them back up, and boost the wanted frequencies with an EQ after the compressor, so that the compressor can’t push them back down.
16. Research how to optimize your DAW’s processing and how to optimize your computer for audio processing
There might be a few simple tweaks you can make, either to your DAW’s preferences or to your computer, and these are worth looking into. Be careful with this though; don’t change anything you don’t understand or can’t revert, especially not in your computer’s BIOS (which I’ve seen some guides suggest). If you take the time to research this sort of stuff, your DAW and computer will operate at their maximum potential, instead of at standardized settings.
By routing MIDI channels to the different ports in Kontakt, and routing Kontakt’s outputs to different mixer channels, you still get full control of each sound and MIDI melody, at a fraction of the processing load.
17. If a certain plugin or sound engine has the ability to process several sounds at once, learn how to use it correctly with your DAW
I’ll use Native Instruments’ Kontakt as an example. It is a very powerful sampler with lots of libraries and possibilities. If you set it up right, you can have many instruments (I don’t know the maximum number) loaded in a single instance of Kontakt, and trigger each one by routing a MIDI channel to it. This lets you have a single Kontakt with, say, a string quartet, a few custom samples, a bass, and a drum kit in it, instead of a new instance of Kontakt for every instrument. By routing MIDI channels to the different ports in Kontakt, and routing Kontakt’s outputs to different mixer channels, you still get full control of each sound and MIDI melody, at a fraction of the processing load. Kontakt is not alone in having this feature, so if you are in doubt about a plugin you use, or notice that multiple instances of it make your DAW laggy, just google its capabilities, and then how to route it correctly in your DAW if possible.
18. When you have produced for a while and know how your project files tend to end up, try making a template project file
I have a template project file that has 10 empty mixer channels, followed by a drum master, then 5 empty, bass master, 10 empty, melody master, 10 empty, FX master, and then two AUX channels with reverb and delay already set up. Likewise, in the instrument rack I have my two favorite synths, and 10 midi output channels (ready to be routed to Kontakt if I decide to bring it in – but Kontakt is huge and heavy so I load it manually if I need it). This saves me a lot of time when I already know the gist of how I set up and route my project, and tailors the creative process to be effective for me.
19. Clean up the source
This mostly applies to synth work and home studio recordings, but it’s a good thing to know nonetheless. Let’s say you have made a dubstep wobble in a synth, and it has some ugly frequencies that you don’t want in your mix. Trying to EQ them out on the mixer channel can work to some extent, but it is better to tweak the synth so that it doesn’t produce those ugly frequencies in the first place. It can be hard, but getting the source right will sound much better than trying to clean up the signal afterwards. The same goes for samples you have made, or if you recorded your guitar part and you can hear the neighbor slam a door somewhere in there: record it again rather than trying to remove the noise afterwards. It might be a hassle, but it’s the best way to get a clean sound.
Most or all of these tips are very basic, but they are meant for everyone, even newbies.
20. Look into scales
If you are new to music, scales are a good place to start. A scale is a set of musical tones that go well together. They work as a blueprint for what notes you are “allowed” to use in a melody, without causing disharmony. That being said, if you build your melodies correctly, you can switch scales throughout the melody, or song, by using a few tones that are “allowed” in several scales as a bridge between them.
Music is all about patterns and math, some would say, and our brain loves patterns and repetition, it speaks to our logical sense.
21. Look into relative keys
Relative keys, sometimes called relative scales, are pairs of keys, one major and one minor, that share the same set of notes, like C major and A minor. It’s better for us both if you look into this yourself, as it is an easy concept with many diagrams portraying it.
Music Theory for Electronic Producers – The Beginners Guide
22. “Sow something, reap it later”
What I mean by this is that you can re-use patterns from your melody later in your song. Let’s say you start a melody with three notes that go upwards in the scale, and build your melody from there. Halfway through, or maybe towards the end of your melody, you can build downwards in the scale, repeating the pattern with a different set of tones that have the same relative distance from each other (if the scale doesn’t allow it, try transposing the notes that don’t fit one semitone up or down until it sounds pleasing). This is called a motif, and it can give your melodies consistency without sounding repetitive. Music is all about patterns and math, some would say, and our brain loves patterns and repetition; it speaks to our logical sense. If you are happy with a melody you made, try to analyze the patterns and see how they reflect or accompany each other, and make a mental note of their relationship.
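Repeating a motif from a new note, or mirroring its direction, is easy to see with MIDI note numbers. A minimal sketch (the function names are mine):

```python
def transpose(motif, semitones):
    """Repeat a motif from a different starting note,
    keeping the relative distances between notes."""
    return [note + semitones for note in motif]

def mirror(motif):
    """Flip the motif's contour around its first note: up becomes down."""
    root = motif[0]
    return [root - (note - root) for note in motif]

rising = [60, 62, 64]          # three notes walking up (C4 D4 E4)
falling = mirror(rising)       # same shape, walking down
reprise = transpose(rising, 7) # the same motif a fifth higher
```

If a transposed or mirrored note lands outside your scale, nudge it a semitone up or down until it fits, just as the tip describes.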
23. Try transposing a single tone of a chord an octave up or down
This is called chord inversion. If you move (again, no right answers, but especially) the bottom or top note of your chord an octave up or down as you see fit, the chord sounds different while preserving its scale and relative key. This is a neat trick to play around with if you struggle to find the next chord for your melody – simply re-use a chord you used earlier, but alter it to go with that part of the melody. You don’t have to be stuck on a chord for me to recommend playing around with chord inversion though; it’s a very useful trick to apply to chords no matter what. Sometimes the original chord will sound better, and sometimes the inverted chord will.
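In MIDI terms, inverting a chord is just moving one note by 12 semitones (one octave). A small sketch, assuming a chord is a list of MIDI note numbers (the helper name is my own):

```python
def invert_up(chord):
    """First inversion: move the lowest note up an octave (+12 semitones).
    The pitch classes, and thus the scale and key, are unchanged."""
    lowest = min(chord)
    rest = [n for n in chord if n != lowest]
    return sorted(rest + [lowest + 12])

c_major = [60, 64, 67]      # C4 E4 G4, root position
first = invert_up(c_major)  # E4 G4 C5, first inversion
second = invert_up(first)   # G4 C5 E5, second inversion
```

Note that every inversion keeps exactly the same pitch classes (C, E, G), which is why the chord still fits the same scale.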
Swing can make a song feel more alive and human-made, rather than stiff and computer-y.
24. Learn the difference between regular and irregular rhythm
Have you ever noticed how some songs seem to “bounce” more in their melody and beat? That might be an irregular rhythm called triplets at play. If a song is written in 4/4 time, then a rhythm built on multiples of four (4, 8, 16, 32, etc.) is a regular rhythm. If the time signature is 4/4 but the rhythm is built on multiples of three (3, 6, 9, 12, etc.), it is called an irregular rhythm. This particular example is called triplets, and it is the most common irregular rhythm. It is what we get when we fit three notes of equal length into the space where we would normally have two (or four) notes of equal length.
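The timing difference is plain arithmetic: in one 4/4 bar, straight eighth notes divide the bar into 8 equal slots, while eighth-note triplets divide it into 12. A quick sketch (the tempo and function name are just for illustration):

```python
def onsets(divisions, beats_per_bar=4, tempo_bpm=120):
    """Start times (in seconds) of evenly spaced notes in one bar."""
    bar_seconds = beats_per_bar * 60.0 / tempo_bpm  # 2.0 s at 120 BPM
    return [i * bar_seconds / divisions for i in range(divisions)]

straight = onsets(8)    # eighth notes: one every 0.25 s
triplets = onsets(12)   # eighth-note triplets: one every 1/6 s
```

Both grids still line up on every quarter-note beat; the triplet grid simply fits three notes in the span where the straight grid fits two, which is where the “bounce” comes from.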
25. Learn what swing is and how to use it properly
Swing can make a song feel more alive and human-made, rather than stiff and computer-y. The idea is that you change the start time of some notes and beat hits, to move them closer in time to other notes and beat hits. To make a song swing, transform the groove from a straight eighth-note groove to a shuffled one, which means that each pair of eighth notes is played like a triplet consisting of a quarter note and an eighth note. I got that formula off the web, because I’ve always made my tracks swing by feel, not by following the grid or guides. It’s not that hard to do by feel, because it’s one of those things that sound awful if done wrong but great if done right. A quick Google search and a few YouTube videos will take you a long way, at least in understanding what swing sounds like, what feeling it gives to a song, and how to recognize it.
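That triplet formula can be written down directly: every off-beat eighth note is delayed from halfway through its beat to two-thirds through it. A sketch, assuming note onsets on a grid where one beat equals 1.0 (the function name and ratio parameter are mine):

```python
def swing_eighths(note_onsets, beat_len=1.0, ratio=2 / 3):
    """Delay every off-beat eighth note within its beat.
    ratio=0.5 leaves the groove straight; 2/3 gives triplet swing."""
    swung = []
    for t in note_onsets:
        pos = t % beat_len
        if abs(pos - beat_len / 2) < 1e-9:    # it's an off-beat eighth
            t = (t - pos) + beat_len * ratio  # push it later in the beat
        swung.append(t)
    return swung

straight = [0.0, 0.5, 1.0, 1.5]   # two beats of straight eighths
swung = swing_eighths(straight)   # off-beats now land 2/3 into the beat
```

On-beat notes stay put; only the off-beats move, which is exactly the long-short feel of a shuffle.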
26. Other musical terms you should get to know are:
Arpeggio, Portamento, Legato, Tremolo, Staccato, and Pizzicato, to name a few. I’m not going to explain all of them, but when you start getting a decent understanding of music, and want to go further, these are a good place to start.
Music Production Tricks
27. “Sow something, reap it later”
Exactly the same concept as I mentioned before, just on a larger scale. Having a certain effect, or marking of a transition, repeat itself is a simple and powerful trick. It’s even better if you can keep the effect or transition similar, but also evolving at every instance. Let’s say you stop the drums 1/4 of a bar or measure before the melody begins, or the song reaches its peak. The next time you build to something in the same song, explore your options for stopping the drums in the same manner, but maybe keeping the hi hats this time. And the third time it comes around, do something new but similar. Like I wrote before, this helps with consistency, but it can also be a nice way to move your song forward, and tied in with other production tricks it can boost the impact of your transitions.
28. When creating melodies in midi, play around with different velocities for individual notes
If done right, this will add a little human touch to the sound, convey more emotions, and empower your melody. The same goes for drums, especially the hi hats and snare drum rolls.
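A crude way to approximate that human touch programmatically is to add small random variation to the velocities and accent certain beats. A sketch (all the amounts here are arbitrary choices of mine, not a rule):

```python
import random

def humanize(velocities, amount=8, accent_every=4, accent=15, seed=None):
    """Add small random variation to MIDI velocities (clamped to 1-127),
    plus a stronger accent on every Nth note (e.g. the downbeats)."""
    rng = random.Random(seed)
    out = []
    for i, v in enumerate(velocities):
        v += rng.randint(-amount, amount)   # slight random variation
        if i % accent_every == 0:
            v += accent                     # accent the downbeat
        out.append(max(1, min(127, v)))
    return out

hats = [90] * 8                  # a static, robotic hi-hat pattern
lively = humanize(hats, seed=7)  # varied, with accents on notes 0 and 4
```

In practice you would of course draw these velocities by hand or by ear in the piano roll; the point is just that small, uneven differences are what the ear reads as “played by a human”.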
29. Research music-oriented dramaturgy
If you have a certain type of song in mind, and you know a little bit about this, it can help you envision your song at an early stage. It’s all about building up, breaking down, tension and release, and how to mix them together. Where a pop song might start with a short intro, then have sections organized like this: refrain – verse – refrain – verse – break – refrain – refrain – outro, a progressive song might build up slowly over different parts, relying on a soft and steady increase in tension and energy, then have a break with vast emptiness and huge soundscapes, before it actually gets to the main part.
My point is that different songs call for different structure, and if you know a few examples of this you can build your song more efficiently without wondering what should come next. A good place to start getting into this is by analyzing and breaking down a song you like, and recreating the structure of it in a song of your own.
30. Experiment (A LOT) with automation
Automation brings a lot to the table. Subtle automation, like a slow volume or filter movement, can keep a static sound alive and evolving, while drastic automation can carry your build-ups, transitions, and drops. Almost any knob in your DAW or plugins can usually be automated, so experiment with it, a lot.
Automation in a DAW: Intro to Music Production
31. Learn to build your own drums with layering
If you like the punch of one kick, the body of another, and the click of a third, try layering them on top of each other. Use EQ on each layer so they each cover their own part of the frequency spectrum, make sure they are in tune and in phase with each other, and balance them both on their own and as one combined sound. The same goes for snares, claps, and other drum sounds.
32. Layering applies to melodic elements too
The same concept applies to melodic and tonal elements. Beef up a laser-like saw lead with some airy pads, or give your piano that extra thin little twinkle-sound with a plucked synth an octave above. Mix them accordingly, both individually and as a unified element, and you have effectively created a new sound for your song.
33. Tune your drums!
Even if a kick drum is so deep that you don’t recognize it as a tone, it usually has one. Find the tone, either by using math and a spectrum analyzer (every tone corresponds to a specific frequency in Hz; there are charts available online), or by using a tuner or tone analyzer plugin, and then tune it to a tone within the scale of your music. You will notice that it sits much better in the mix and accompanies the rest of the song in a better way. This took me a looong time to realize by myself, but when I did, it was a game changer. The same goes for all percussive sounds and drums. As long as you don’t tune them too far from their original frequency, and use a decent pitch-shifting algorithm, they should sound relatively close to the original sound, but more in tune with the rest of your song.
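The math behind those frequency charts is fixed: with A4 at 440 Hz, each semitone is a factor of 2^(1/12). Here is a small frequency-to-note converter sketch (standard MIDI numbering; the function name is mine):

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]

def hz_to_note(freq, a4=440.0):
    """Map a frequency to the nearest note name, plus its offset in cents."""
    midi = 69 + 12 * math.log2(freq / a4)  # 69 = MIDI number of A4
    nearest = round(midi)
    cents = (midi - nearest) * 100         # how far off the note it is
    name = NOTE_NAMES[nearest % 12] + str(nearest // 12 - 1)
    return name, cents

name, cents = hz_to_note(55.0)  # 55 Hz is exactly A1
```

So if your analyzer shows the kick’s fundamental at, say, 55 Hz, you know it sits on A; from there you can pitch it a semitone or two to land on the root of your scale.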
34. Realize that music is not an exact science, but audio is
This also took me a long time to realize. What I mean by it is that musically, you can do whatever you like, as long as it conveys what you want it to convey. Even disharmony can be used as a tool. However, audio and audio production are an exact science. You can argue that music is also an exact science, or that audio isn’t, and that would probably be correct to some extent – music is mathematical patterns, and dubstep is all about destroying frequencies the right way, so why not. But what I’m getting at is that audio is more than just music.
Audio is the medium through which music is conveyed. So when you mix your song, or tune a drum, or stack some sounds on top of each other, and it sounds wrong, phase-distorted, hollow, or weird in any other way (or if you’re just curious) – bring up a spectrum analyzer plugin (see above) and look for abnormalities. Play each mixer channel solo and see if there’s an overlap or distortion somewhere. See if the bass is clashing with the kick drum. If you incorporate this into your workflow, you will soon learn intuitively how audio works in many ways. You may not be able to pinpoint the mechanics of it, but you get a sense of insight. If all sounds sound good alone, but the mix has weird moments, try listening to two or three channels that have elements in roughly the same frequency range, and see if some of them cause each other to phase out. This is why I say it’s an exact science.
35. Try to do something new or unique in every project, and avoid habits
Force yourself to learn new things and pick up new techniques. Avoid specific habits (like always setting the reverb or EQ at a certain setting every time). If you follow this tip, you will continue to grow and learn, rather than locking onto a straight track where you don’t learn anything new. It’s fine to have some habits and go-to presets or plugins, but for the love of music and all things great; listen carefully as you tweak and alter the sound, instead of boosting this and reducing that based on the notion that those numbers/parameter settings worked on a similar sound an earlier time.
36. Make your own sounds!
I’m not just talking about recording yourself playing guitar, but anything at all. A bird’s call can make a good sound effect. Rattle your keys and make a hi-hat out of it. Put a balloon on a toilet paper roll and make a percussion instrument. This can also force some creativity, and you will get new, unique sounds for your projects. The possibilities are endless here; I once turned my computer’s fan noise into a grandiose pad. If you get yourself a simple handheld recorder, or have a microphone setup, you can explore the world of sounds around you and use them in your songs.
Mathias is a 26-year-old composer and producer with a little over 10 years of experience.
Having had a keen interest in music since childhood, he slowly grew more passionate about his favourite medium, and at the age of 15 he finally decided that plucking some Iron Maiden or Tetris on his low-budget guitar wasn’t enough – he wanted to compose full tracks on his own.
Being a geek and nerd at heart, he turned to DAWs and started gathering the knowledge needed for such a task.
However, being a young adolescent with his head in the clouds, the task of learning both music theory and music production by trial and error was a big one, and it took time. It wasn’t until a full 4 years later that he had the most fundamental aspects down and started producing more and more.
Shortly after that, he concluded that this was what he wanted to spend his life on.
Mathias has worked for extended periods in different DAWs, mostly FL Studio, Ableton Live, and Pro Tools, but has also dabbled with others.
He is a licensed Sound Engineer from the creative vocational school “Idefagskolen”, where he graduated amongst the top of his class.
His music is inspired by his personal favourites, such as Shpongle, Ott, MantisMash, Pryda, Boris Brejcha, and Astrix, to name a few. Even if the music he produces is more often than not based on some form of EDM, he is open-minded and tends to involve a lot of musical and technical elements from outside this culture.
Mathias is currently preparing his first ever proper release, so stay tuned by following his SoundCloud, if this sounds interesting to you!