In recent years the line between the mixing and production processes has begun to blur with the rise of electronic music, where mixing effects can be – and often are – utilized in the production process. I believe there is a lot to be gained by taking a step back and approaching production and mixing with separate, tailored mindsets.
In this article I write about my approach to these two processes, and how I treat them as relatively independent. Once I started doing this I noticed an immediate improvement in the quality of my mixes and have done it in every song I’ve finished since.
Keep in mind that this is just how I work, and that there are no rules. A new process, plugin, or piece of gear will not drastically improve your music, only time spent honing technique will. This article simply details my process and the tools I use, feel free to take liberties where you see fit.
I. Problems With Mixing As You Go
Many artists mix as they produce, even successful ones. However, there are some problems to be aware of.
Firstly, a producer is unlikely to have a fully complete and detailed vision of the finished track while they are still working on it. Some elements that end up in the final version of the track will be developed through experimentation, inspiration, vision, logic, or any other number of influences. This means that as you are mixing your earlier elements you are not fully aware of what other elements will end up in the track, and therefore are not able to mix with a detailed picture.
Second, mixing as you go introduces the possibility of becoming used to issues in the mix. An EQ applied quickly to serve some purpose on one signal might not be well thought through, and leaving it in place throughout the remainder of the composition and production process can result in you getting used to these sub-par effects.
Separating the mixing process not only reduces the likelihood of both these issues, but also introduces some positives of its own.
II. Advantages Of Separating The Process
There are several advantages to separating the mixing process. I will go over three: the ability to see context in your mix, the ability to add effects one by one across the board, and the ability to fully let go of the creative process in favour of a more technical one.
“Going through each track one by one and EQing every element at once allows you to fit everything together in the mix properly. Same goes for reverb, distortion, and any other effect you can think of.”
Seeing context in your mix is very important. After all, the purpose of mixing is to give each element its own space – something that’s very difficult to do when you’re only halfway done with the song. Mixing your song all at once at the end means you can view each element alongside everything else that will be playing with it, and mix it with a full and complete picture of your arrangement, since you are no longer adding new elements.
Adding effects one by one is another advantage. I have found that applying each effect across all tracks in series is very effective. For example, going through each track one by one and EQing every element at once allows you to fit everything together in the mix properly. Same goes for reverb, distortion, and any other effect you can think of.
III. Mixing “On The Go” Or After: A Question Of Preferences
Personally, I still enjoy mixing as I go, because I like to know approximately how everything will sound when it’s mixed. I find having a project that sounds good does wonders for motivation and inspiration. Additionally, knowing that I’ll return to the mixing process means I can waste less time perfecting the mix and keep my brain in the creative process during production and composition. I do, however, still strip most of my effects prior to the actual mixing phase.
IV. My Tips On Music Production Processes
I use various monitoring VSTs that I keep on my right monitor in my FL Studio default template. I find that these tools help my ability to get a clear picture of any sounds coming out of my DAW. I see a lot of people online say that you should mix with your ears, not your eyes, and while I do agree that your eyes should not take priority over your ears, I definitely believe that they can be a great complementary asset that allow you to interpret your audio using an additional sense.
My monitoring VST chain consists of:
- Voxengo SPAN, a free spectral analysis plugin
- Ozone Imager, a stereo imager from iZotope that I don’t use as an effect but rather to see the stereo image of the signal
- FL’s WaveCandy, set to RMS mode, which helps me to compare loudness levels between my tracks and reference tracks.
- FL’s WaveCandy, set to spectrogram mode, which helps me to see the spectral content of the signal similar to SPAN. However, unlike SPAN, time is represented on the X axis, and there is no smoothing. This means things like quick cuts to silence are accurately depicted, unlike SPAN, where a quick cut to silence would be smoothed over and lost before I get the chance to interpret it.
I keep these plugins on the “Current” mixer insert in FL Studio. This means that they will be applied to the end of any mixer insert I have selected at any time. I used to have them on the master, but I find that keeping them here allows me to monitor individual signals without having to solo that signal, which is nice.
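For readers curious what an RMS meter is actually computing, here’s a rough numpy sketch of windowed RMS in dBFS. This is just the general idea, not anything WaveCandy does internally, and numpy is not part of my actual workflow:

```python
import numpy as np

def rms_db(signal, window=2048):
    """Windowed RMS level in dBFS, roughly what an RMS meter displays over time."""
    n = int(np.ceil(len(signal) / window)) * window
    padded = np.zeros(n)
    padded[:len(signal)] = signal                 # zero-pad the final partial window
    frames = padded.reshape(-1, window)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    return 20 * np.log10(np.maximum(rms, 1e-9))   # floor avoids log(0) on silence

# A full-scale sine has an RMS of 1/sqrt(2), so full frames read close to -3 dBFS
t = np.linspace(0, 1, 44100, endpoint=False)
sine = np.sin(2 * np.pi * 440 * t)
levels = rms_db(sine)
```

Comparing these frame levels between your track and a reference track is essentially what I use the RMS view for.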
I will now go over the series of steps that I take when I approach the mixing process of any track.
The first thing I do in any mix is export individual instrument stems. I export each instrument as its own WAV file the length of the song, with nearly no effects or automation. The only automation I keep is automation of parameters within synths, which obviously cannot be automated later in the mix project.
FL Studio has a useful functionality in the export settings titled “Split Mixer Tracks”. This will export multiple audio files, one for each mixer track. If you keep your mixer tracks organized, and route instruments each to their own mixer insert before bussing, this functionality should be fine. Other DAWs may have a similar function, but unfortunately I’m not aware of them.
Normalize Audio (Optional)
Audio normalization is a useful tool at this stage in the process for me, though it is entirely optional. The main reason I do it is to destroy the relative gains I had developed with the mixer faders in the original production project.
“Normalizing each of your stems results in loud audio files that not only give you more freedom when it comes to volume, as it is easier to turn audio down than up, but also allows you to completely reconsider all of your relative gain relationships between elements.”
For those of you not familiar with the idea of audio normalization, it is essentially a digital process applied to an entire audio file that raises the gain of the whole file by just enough for its highest peak to hit some dB level, typically around 0 or -1 dBFS.
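As a concrete illustration, here’s a minimal peak-normalization sketch in Python with numpy. The target level and the toy input are placeholders, and this is not FL’s or Audacity’s actual implementation:

```python
import numpy as np

def normalize_peak(audio, target_db=-1.0):
    """Scale a whole file so its highest peak hits target_db dBFS."""
    peak = np.max(np.abs(audio))
    if peak == 0:
        return audio                        # silent file, nothing to scale
    target_linear = 10 ** (target_db / 20)
    return audio * (target_linear / peak)

quiet = np.array([0.05, -0.2, 0.1])
loud = normalize_peak(quiet, target_db=-1.0)   # peak now sits at -1 dBFS (~0.891)
```

Note that the whole file is scaled by one constant, so the internal dynamics of the stem are untouched; only its absolute level changes.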
Initially I used Audacity’s batch processing functionality to apply normalization to a whole folder of files at once. Then I realized that FL’s sampler has a “Normalize” function, located on the main panel as a tickbox. If you don’t use FL and can’t find a normalize function in your DAW, Audacity is free audio processing software that supports batch processing of files.
Normalizing each of your stems results in loud audio files that not only give you more freedom when it comes to volume, as it is easier to turn audio down than up, but also allows you to completely reconsider all of your relative gain relationships between elements.
When you import stems straight from your production project, each element plays at the relative volume you established before. I find that destroying these gain relationships and reconsidering the gain of each element results in a more effective and better levelled mix. Once each element is imported into the new mix project and normalized, I route the stems into mixer tracks and bring the volume down to -inf dB before continuing to the pink noise step.
Pink Noise Levelling
Pink noise mixing is a quick way to rapidly establish gain relationships between elements. It works by playing pink noise, and bringing up each element until you can just barely hear it. This will get every element to a volume level that doesn’t fight too hard with any other element. Keep in mind that this is best used as a jumping off point, and I encourage you to continue tweaking the mixer faders after this step to better refine your gain relationships.
Melda makes a free noise generator effect that can output pink noise and has a gain knob for the dry signal. I put this plugin on my master, set it to output pink noise at about -18 dB, and bring up each element one by one.
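If you’re curious what pink noise actually is, here’s one rough way to approximate it: shape white noise so its power falls off as 1/f. This is only a sketch for understanding, not how MNoiseGenerator works, and the -18 dB scaling at the end mirrors the reference level I use:

```python
import numpy as np

def pink_noise(n, seed=0):
    """Approximate pink noise by shaping white noise toward a 1/f power spectrum."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n)
    scale = np.ones_like(freqs)
    scale[1:] = 1 / np.sqrt(freqs[1:])    # amplitude ~ 1/sqrt(f) => power ~ 1/f
    pink = np.fft.irfft(spectrum * scale, n)
    return pink / np.max(np.abs(pink))    # normalize to full scale

noise = pink_noise(44100)
reference = noise * 10 ** (-18 / 20)      # roughly a -18 dB reference level
```

Because pink noise carries equal energy per octave, an element that just pokes through it tends to sit at a sensible level across the whole spectrum, which is why the technique works as a starting point.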
1. Gain Staging
Gain staging is a professional process that has been utilized for decades. Over time, the reasons for using it may have changed, but in our digital environment it is a great tool that will help to ensure our effects are actually introducing positive changes to our audio signals.
The human ear has a tendency to perceive louder audio as sounding better, but this is obviously not always accurate. A louder signal might be more distorted, have a worse spectral response, or any other number of issues. Gain staging helps us to see through this natural human tendency.
“Gain staging is a useful step because some effects such as distortion can modify the loudness of the signal dramatically, and we want to avoid being susceptible to the illusion of “louder = better”.”
Gain staging is implemented by ensuring that the loudness of a dry and wet signal are approximately the same. Some plugins do this automatically (to varying degrees of success), and some provide you with an “out gain” knob. Others require you to insert a gain plugin afterwards in the signal chain, which can act as an “out gain” knob.
Gain staging is a useful step because some effects such as distortion can modify the loudness of the signal dramatically, and we want to avoid being susceptible to the illusion of “louder = better”. If your signal chain is gain staged, you can A/B the audio with the effect on and off and get an accurate comparison of the dry and wet signals.
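Here’s a minimal sketch of the idea: match the wet signal’s RMS back to the dry signal’s, like setting an “out gain” knob for a fair A/B. Real plugins may match loudness more perceptually (e.g. LUFS), but RMS shows the principle, and the “distortion” here is a deliberately crude stand-in:

```python
import numpy as np

def rms(x):
    return float(np.sqrt(np.mean(x ** 2)))

def gain_stage(dry, wet):
    """Scale the wet signal so its RMS loudness matches the dry signal."""
    if rms(wet) == 0:
        return wet
    return wet * (rms(dry) / rms(wet))

t = np.linspace(0, 1, 44100, endpoint=False)
dry = 0.3 * np.sin(2 * np.pi * 220 * t)
wet = 2.0 * np.tanh(4 * dry)        # a crude "distortion" that also boosts gain
staged = gain_stage(dry, wet)       # same loudness as dry, so A/B compares tone
```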
2. Order Of Effects
There are many different schools of thought on the order of effects, and I believe that in the current electronic music scene nearly any order is an option; experimentation will help the most.
I tend to place EQs that cut sub-fundamental tones first, unless another plugin introduces more, in which case I’d put the EQ right after that plugin. Keep in mind that something like OTT might bring out the flavour of an effect like distortion when placed after it, but can also undo your EQ to some degree if placed after the EQ. Experimentation and exposure to various orders of effects is the best way to get a good handle on what you want to go where.
I use various busses and sends in my mix project template, as there are a handful of things that I do the same way every time.
- Sidechain effect bus: one common sidechain setup that allows me to route any signal into here and have it sidechained alongside other signals. I believe this helps to glue together multiple layers. I will, however, often sidechain signals on their own if I want a different curve. For example, I often sidechain leads and vocals more lightly, without using the bus.
- Instruments bus: one bus for all melodic instruments. This bus generally gets a little bit of compression, some subtle saturation, and minor EQ if needed.
- Drums bus: all drums get routed here and receive some parallel compression (typically using FabFilter Pro-C 2 with some dry signal mixed in), some saturation, and anything else that might be occasionally automated such as filters.
- Vocals bus: any vocals present in my song get treated with some specific effects in various mixer tracks, separated by song section, but then all go through this vocal bus to receive some minor final effects.
- Time based effect sends: I use four send mixer tracks for my sends: ping pong delay, hall reverb, room reverb, and a send bus to which the other three are routed.
Remember to mix your sends. I cut the lows from my reverb and delay by EQing my send bus. This allows me to be more free with what I send to my reverb bus, allowing me to send elements that might normally have frequencies too low without fear of muddying up the low end of the mix.
As I proceed through each mixer track in my song I add an EQ. I cut the lows from anything that has frequency content below the fundamental, which is made easier to see by SPAN. I also do some basic EQ work at the beginning of the effect chain to shape the sound a little and get it ready for the rest of the effects, rounding it out as best I can with just EQ for now.
There’s a great video by Razer where Noisia goes over some creative mixing techniques with EQ. This video is where I learned to EQ space for a choice pocket of harmonics as well as the fundamental, and I’d recommend checking it out.
Distortion comes next after EQ for me. I use distortion as a way to add more harmonics to a sound when it’s too soft. One of my best pieces of advice with distortion is to play with both the drive and also the dry/wet. I like to think of it as the drive determines how many harmonics are added, and the dry/wet determines how loud they are. By mixing some of the dry signal back in, you can push the drive harder without the signal losing integrity.
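Here’s a toy sketch of that drive/mix relationship using tanh soft clipping. Real distortion plugins are far more involved; this just shows the two controls interacting, with made-up signal values:

```python
import numpy as np

def distort(signal, drive=4.0, mix=0.5):
    """Soft-clip distortion: drive sets how many harmonics are added,
    mix (dry/wet) sets how loud they end up relative to the clean signal."""
    wet = np.tanh(drive * signal)
    return (1 - mix) * signal + mix * wet

t = np.linspace(0, 0.1, 4410, endpoint=False)
tone = 0.8 * np.sin(2 * np.pi * 220 * t)
# Heavy drive, but mixing half the dry signal back in preserves the tone's shape
heavy = distort(tone, drive=10.0, mix=0.5)
```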
Gain staging is very important when adding distortion, because without gain staging most distortion modules will increase the gain significantly. Most distortion plugins have an “out gain” knob, and Soundtoys Decapitator even has an auto-gain switch that will attempt to automatically gain stage for you.
Soundtoys Decapitator Plugin
Compression comes in two main flavours: multiband and single band. Multiband compression will compress various bands across the spectrum independently, while single band compression acts on the signal as a whole. This means that multiband compression will often modify the spectral balance of the sound, and single band compression will not.
Oftentimes with compression, your DAW’s default plugins will work just fine. If you want to check out some third party compression VSTs, my favourites are OTT for multiband and FabFilter Pro-C 2 for single band.
Lastly, just like distortion, parallel compression is a powerful tool. Mixing your dry signal back in can let you push your compression harder before the signal is perceived as overcompressed.
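The parallel idea can be sketched like this, with a deliberately simplified compressor (no attack or release smoothing, which every real compressor has) just to show the dry/wet blend:

```python
import numpy as np

def compress(signal, threshold_db=-20.0, ratio=4.0):
    """Deliberately simplified single-band compressor (no attack/release)."""
    level_db = 20 * np.log10(np.maximum(np.abs(signal), 1e-9))
    over = np.maximum(level_db - threshold_db, 0.0)   # dB above threshold
    gain_db = -over * (1 - 1 / ratio)                 # gain reduction to apply
    return signal * 10 ** (gain_db / 20)

def parallel_compress(signal, mix=0.5, **kwargs):
    """Blend the dry signal back in so heavy compression keeps its transients."""
    return (1 - mix) * signal + mix * compress(signal, **kwargs)
```

The blend means peaks are tamed by the wet path while the dry path preserves the original dynamics, which is why you can push the ratio harder before it reads as overcompressed.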
Once I’m done with the effects phase of my mix process, I move on to automation. I like to automate a few things fairly consistently, and here I’ll go over each one.
1. Volume Automation
Volume automation is often overlooked by hobbyist producers, but many people (myself included) credit volume automation as one of the crucial stepping stones toward sounding “professional”.
Automating the volume of various elements slowly and subtly downward toward the end of a phrase is a great way to smooth the transition, especially if that element is no longer present in the next phrase.
“There is one thing to keep in mind when automating volume of any kind, and that’s to not automate the mixer faders.”
Another way to automate volume is on the master. Often when I’m trying to make my drop hit just a little bit harder, automating the volume downwards just a little during a riser works wonders.
There is one thing to keep in mind when automating volume of any kind, and that’s to not automate the mixer faders. Automating the mixer faders means that later, any volume adjustments must be made to every automation point. Instead, use a gain VST like Ableton’s Utility or FL’s Fruity Balance. By using these instead, you’re able to make adjustments to the mixer fader without affecting the automation.
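Conceptually, the gain-plugin approach keeps the automated gain and the fader trim as two independent multipliers. A rough sketch, with made-up values:

```python
import numpy as np

def apply_gain_automation(signal, start_db, end_db):
    """Ramp gain across a clip, like automating a gain plugin
    (e.g. Fruity Balance or Ableton's Utility) rather than the fader."""
    curve_db = np.linspace(start_db, end_db, len(signal))
    return signal * 10 ** (curve_db / 20)

# The fader stays a free, static trim; only the plugin gain is automated
fader_trim = 10 ** (-6 / 20)                     # e.g. fader pulled down 6 dB
clip = np.ones(44100)
out = fader_trim * apply_gain_automation(clip, 0.0, -12.0)
```

Because the two gains multiply, you can change `fader_trim` at any time without redrawing a single automation point.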
2. Filter Automation
Filter automation is a great tool to use in similar ways to volume automation. Low passes or high passes automating on things like your drum bus, instrument bus, or even master can really liven up a mix and add some much needed variation and interest without the introduction of new elements that may have the potential to clutter up the mix.
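A toy sketch of an automated low-pass: a one-pole filter with its cutoff swept across the clip. Real filter plugins use steeper slopes and resonance; the coefficient formula here is just one common approximation:

```python
import numpy as np

def lowpass_sweep(signal, cutoff_start, cutoff_end, sr=44100):
    """One-pole low-pass whose cutoff frequency is automated across the clip."""
    cutoffs = np.linspace(cutoff_start, cutoff_end, len(signal))
    out = np.zeros_like(signal)
    y = 0.0
    for i, (x, fc) in enumerate(zip(signal, cutoffs)):
        a = 1 - np.exp(-2 * np.pi * fc / sr)   # smoothing coefficient per sample
        y += a * (x - y)
        out[i] = y
    return out

# e.g. sweep a bus from wide open down toward 200 Hz over a phrase:
# filtered_bus = lowpass_sweep(bus_audio, 20000.0, 200.0)
```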
3. Send Amount Automation
Automating the send amount between certain tracks and sends can also be useful. For example, I may automate my instrument bus to the reverb send so that my instruments have more reverb in the intro, automating up through risers, down into drops, and back up at the end of phrases. Automating the send amount of any track to your reverb sends is a great way to move elements backwards and forwards in your mix.
4. Send Volume Automation
My method for automating the volume of the reverb to lead the listener into different sections:
Another automation lane I often add is the volume of my send bus. I like to add a Fruity Balance at the end of my send bus, bring the volume down to about 30%, and then automate it up quickly whenever there is empty space or during fills. I find this is an effective way to pull the listener into new sections. When combined with automating up the send amount like I mentioned earlier, it only strengthens the effect.
This is, however, an automation that I often have to spend more time tweaking. The curve and length of the automation is more difficult to perfect, because as you’re bringing up the volume of the reverb/delay, it’s also decaying, so the automation line that you draw is not actually indicative of the volume of the reverb.
Reverse Reverb Effect
One alternative method that doesn’t have that issue is the reverse reverb technique. If you are so inclined, you may want to render out the 100% wet reverb tail that comes out of the very beginning of the next section. This means that you would have to reverse a short section of the next phrase, add 100% wet reverb, record what comes out, and then cut off the section that had dry signal being sent to it.
You can then reverse this reverb and have it lead into the next section. If done correctly, you’ll have reverb that hints at and leads you into the next section. Additionally, this method works with audio, which means that if you adjust reverb send amounts you will not have to reconsider the send volume automation as you’re working with rendered out audio.
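The whole trick can be sketched like this, with a toy convolution reverb (decaying noise as the impulse response) standing in for a real 100% wet reverb plugin:

```python
import numpy as np

def simple_reverb(signal, decay_seconds=0.5, sr=44100, seed=0):
    """Toy 100% wet reverb: convolve with an exponentially decaying noise tail."""
    rng = np.random.default_rng(seed)
    length = int(decay_seconds * sr)
    ir = rng.standard_normal(length) * np.exp(-5 * np.arange(length) / length)
    wet = np.convolve(signal, ir)
    return wet / np.max(np.abs(wet))

def reverse_reverb_tail(next_section_start):
    """Reverse the start of the next phrase, reverb it 100% wet,
    then reverse the result so the tail swells into the section."""
    wet = simple_reverb(next_section_start[::-1])
    return wet[::-1]
```

Placed so that it ends exactly where the next section begins, the rendered tail rises into the downbeat instead of trailing away from it.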
5. Other Automation
Finally, I’ll provide you with a few other types of automation that I tend to do. None of these are necessary, and they may not work with every genre, but I have found some success with them.
Automating distortion mix or drive is an effective way to add grit and tension. Pushing distortion harder adds aggression while simultaneously lowering sonic fidelity. It’s up to you where you’d want to use this, but I’ve had some success with reducing distortion amounts during risers only to reintroduce them in the drop, which gives the drop that much more energy.
Stereo width is something that can and should be automated occasionally. A very common use of this is to automate the stereo width of your master inwards during a riser, and then opening it back up for the drop. This method gives the drop more energy and width.
I’ve gone over a lot of steps in this article, but in reality the process is a fairly straightforward and logical system. Begin with dry stems, normalize them to destroy relative gains, adjust the levels, gain stage while adding effects one by one, then automate. I suggest revisiting your gains occasionally and especially after you’ve added effects, as the spectral balance of your elements will have almost certainly drifted from where it was when you first set the levels.
The most important thing when mixing is to consider energy. Try to consider effects not just literally, but philosophically as well. Consider what certain effects or automation do to the energy of those elements. Automating down a low pass filter has a distinct characteristic, but consider what that does to the energy. When higher frequency content is reduced, the energy of that element is reduced as well. Energy reduction is a great way to lead elements out of the arrangement.
There are other effects that shape energy in similar ways but might sound completely different, like automating down the bit depth with a bitcrusher. Lowering the bit depth also lowers energy, so it can sometimes be interchangeable with low-pass automation. When you consider how audio effects manipulate energy you’re working on a higher level of abstraction, not worried about specific effects, but rather the feel of the song. Try applying this concept to other effects, and consider how things like stereo width, distortion, filters, and reverb affect energy levels. Discovering this school of thinking was a major turning point for me.
Remember that music is art, and there are no rules to art. Any time you hear methods from other people, that’s just what they do. It’s not fact, it’s not something you need to do, it’s just how that person managed to accomplish what they did. Everything detailed in this article is simply my workflow and thought process, and other people may do things completely differently. I’m simply offering insight into how I work in hopes that it inspires and educates others. Remember to experiment, and take in information from as many different sources as possible. I hope this information helped you in some way.
My name is Daniel Crawford, and I’m a 21-year-old music producer based in Ontario, Canada. I’ve been producing various styles of music for seven years. I attended The Noize Faktory Academy in Toronto and spent Summer 2018 enrolled in their Master Audio & Recording Arts program. I am currently on hiatus from any official releases while I finish my last year of school as a programmer at Niagara College and I am currently developing my next music-based project that will be comprised of a new sound and a new alias.