7 Basic Music Production Terms You Must Know!

Have you ever felt that you were aware of a concept but didn’t know the term for it?

Did you ever feel left out in a conversation just because people were “throwing terms around”?

Happens to most of us and you know what? It’s perfectly fine!

With new terms getting added all the time, it’s impossible to keep up, let alone know all of them!

Then again, we shouldn’t leave it on that note. There are some terms we SHOULD know!

Especially in Music Production, there are terms that might sound complicated but are very easy to understand.

Here’s a list of commonly asked questions that’ll leave you with easy-to-understand meanings of some Basic Music Production Terms:

1. What is a DAW in Music Production?

DAW stands for Digital Audio Workstation. It is an umbrella term. A DAW encompasses some or all of the following:

  • Recording software.
  • Outboard Audio Gear.
  • Musical Instruments.
  • Synthesizers and Samplers.

In the Modern Music Production age, a DAW often just means the recording software. Why? Because we now have digital simulations (plugins) of Outboard Gear, Musical Instruments, Synthesizers and Samplers.

2. What are Plugins in Music Production?

2 x Burgers, 3 x Wraps, 2 x Fanta.

Fanta here is an add-on. Similarly, plugins are add-ons, i.e. another piece of software that can be used in addition to the main one.

A lot of the time Plugins are digital simulations of existing hardware (VST, AU, AAX). They enhance the existing functionality of the main software or add something new altogether.

Plugins can be broadly categorized into two: Stock Plugins and Third-Party Plugins.

Stock Plugins come with the main software and Third-Party Plugins don’t. The latter can be bought from their individual creators.

Why do we need a Third-Party Plugin at all? At times you'll come across a Stock Plugin that doesn't perform the way you want it to. It might add unnecessary artifacts you don't require. That's when Third-Party Plugins come to the rescue. Some of them will give you the output you need far better and faster than a Stock Plugin.

3. What is a VST in Music Production?

VST stands for Virtual Studio Technology. Plugins come in various formats, two of them being VST and AU (Audio Units). AU is Apple's format, recognized by macOS hosts, while VST, developed by Steinberg, is the most common format on Windows systems (and is supported by many macOS hosts too).

VST Effects are simulations of their hardware counterparts. Almost all hardware effects, such as Reverb, Delay, Chorus, Flanger, Compression and EQ, have VST equivalents that you can use multiple times within a DAW.

Gone are the days when you needed more hardware effect units every time the channel count grew. Now we can use a VST as many times as we want, irrespective of the number of channels in a project.

4. What is Reverb in Music Production?

In a natural environment, any kind of sound that we hear is not pure. It is a mixture of pure sound and the reflections generated in its vicinity. The artefact added because of reflections is known as Reverb.

Now those reflections vary according to the surroundings. A huge hall is going to create far more reflections than a small room. A room that has some furniture will have fewer reflections still.

A Studio is specially designed to avoid background noise and minimize reflections. But at times the recorded voice ends up sounding unnatural because it is too close to a pure sound. Our ears aren't accustomed to pure sounds, so it sounds artificial.

Reverb is used as an Audio Effect on both Voice and Instruments. It gives them a sense of space so that they don't sound dry (pure). There are various Plugins specifically designed to add space (reverb) to recorded audio so that it sounds more natural.
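As a rough sketch of the idea (the function name and numbers here are my own, purely for illustration): a single delayed, quieter copy of a signal is the crudest building block of a digital reverb. Real reverb plugins layer many such reflections with varying delays and gains to simulate a room.

```python
def add_reflection(dry, delay_samples, gain):
    """Mix one delayed, attenuated copy of the signal back into itself.

    This is a single 'early reflection' - reverbs combine many of these.
    """
    wet = list(dry)
    for i in range(delay_samples, len(dry)):
        wet[i] += gain * dry[i - delay_samples]
    return wet

# A single click (impulse) followed by silence:
impulse = [1.0] + [0.0] * 7
print(add_reflection(impulse, 3, 0.5))
# [1.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.0, 0.0]
```

The click reappears 3 samples later at half the level, just like a sound bouncing off a wall and coming back quieter.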

5. What is Phasing in Music Production?

Scenario 1 – Imagine a room that has two sets of speakers. Both are playing the same song that began at the same time. This is what is known as in-phase. The two signals reinforce each other, so you hear a noticeably louder sound.

Scenario 2 – Imagine the same room, two sets of speakers, playing the same song. But one of the sets started playing the song a fraction of a second later. This is what is known as out-of-phase. You'll hear some kind of interference because of the time difference.

Phasing in Music Production might occur when you’re layering tracks or using multiple mics to record the same kind of sound. The time difference (phase) will either accentuate the layered signal or the layers might cancel each other to some extent (or fully).

At times, phasing ends up affecting an entire section in the frequency spectrum. E.g. the low section might end up losing its punch. Or the voice might end up losing its body.

In an ideal lab-like scenario, if we consider two sine waves, this is how the In-Phase and Out-of-Phase scenarios would look:

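The same two scenarios can be sketched numerically with pure sine tones (a minimal illustration; the names and numbers are my own, not from any plugin):

```python
import math

def sine_wave(freq_hz, phase_rad, num_samples=1000, sample_rate=1000):
    """Generate a pure sine tone as a list of samples."""
    return [math.sin(2 * math.pi * freq_hz * t / sample_rate + phase_rad)
            for t in range(num_samples)]

a = sine_wave(5, 0.0)
b_in_phase = sine_wave(5, 0.0)          # identical timing
b_out_of_phase = sine_wave(5, math.pi)  # shifted by half a cycle (180 degrees)

summed_in = [x + y for x, y in zip(a, b_in_phase)]
summed_out = [x + y for x, y in zip(a, b_out_of_phase)]

print(max(summed_in))                   # ~2.0: the amplitudes add up
print(max(abs(s) for s in summed_out))  # ~0.0: complete cancellation
```

In-phase, the peaks line up and the combined wave is twice as tall; half a cycle out of phase, every peak meets a trough and the layers cancel each other almost entirely.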

6. What is Clipping in Music Production?

Clipping, whether Analog or Digital, occurs when an Audio Device is pushed to perform beyond its limits.

See a red light on a level metre? Chances are high that it’s clipping.

A speaker has a moving part called a ‘Cone’. It moves back and forth to create the pressure variations our ears recognize as sound. The amount of movement is determined by the voltage the cone is fed.

When the Cone is fed an excessive amount of Voltage, beyond what it can reproduce, you start to hear distortion. That distorted sound means your Amplifier is Clipping. If a speaker continues to stay in this state, it’ll probably end up permanently damaged.

Digital Clipping on the other hand is recognized by looking at a dBFS metre. FS stands for Full Scale.

The ceiling on a dBFS metre is 0 dBFS; the values under it are negative. If excessive input is fed, a dBFS metre will show a red signal, meaning the ceiling is being crossed. You’ll start to hear harsh distortion when that happens.
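A rough sketch of both ideas (function names are my own): digital hard clipping simply flattens any sample that tries to cross full scale, and dBFS is a logarithmic measure of level relative to that ceiling.

```python
import math

def hard_clip(samples, ceiling=1.0):
    """Flatten every sample beyond the full-scale ceiling - this is digital clipping."""
    return [max(-ceiling, min(ceiling, s)) for s in samples]

def peak_dbfs(samples):
    """Peak level in dBFS: 0 dBFS at full scale, negative values below it."""
    return 20 * math.log10(max(abs(s) for s in samples))

signal = [0.3, 0.9, 1.4, -1.7, 0.5]      # two samples exceed full scale
print(hard_clip(signal))                  # [0.3, 0.9, 1.0, -1.0, 0.5]
print(round(peak_dbfs([0.5, -0.25]), 1))  # -6.0 (half of full scale)
```

The flattened waveform is what produces that harsh distortion: the smooth tops of the wave become hard, square edges.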

7. What are Harmonics in Music Production?

Any sound we hear consists of:

  • A Fundamental Frequency
  • Harmonics

To explain this, I’ll be taking a simple example of the human voice but the concept stays the same for all sounds.

You must have noticed that a Female Voice is generally higher-pitched than a Male Voice, right?

The Fundamental Frequency of a Male Voice can lie between 100Hz and 125Hz, and that of a Female Voice can lie between 200Hz and 250Hz.

The human voice, just like all other sounds, is not only made of a fundamental frequency. There are positive integer multiples of that frequency known as harmonics. Simply put:

If the fundamental frequency is 100Hz, the 2nd Harmonic will be 200Hz, the 3rd Harmonic will be 300Hz and the 4th Harmonic will be 400Hz. The positive integer multiples are 1, 2, 3, 4 and so on.

The Fundamental Frequency is also known as the 1st Harmonic: multiply 100Hz by 1 and you get 100Hz back.
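The arithmetic above fits in a couple of lines (a toy illustration; the function name is my own):

```python
def harmonic_series(fundamental_hz, count):
    """The nth harmonic is n times the fundamental; the 1st harmonic IS the fundamental."""
    return [n * fundamental_hz for n in range(1, count + 1)]

print(harmonic_series(100, 4))  # [100, 200, 300, 400]
```

The relative strengths of these harmonics, not their frequencies, are what make a voice or an instrument sound the way it does.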
