17th December 2018 – Chris
“The trick is to ensure that you have control over as many variables as possible before recording.”
First things first, for those of you who didn’t know, our Alpha 3.0 Demo was released on November 30th, which I wholeheartedly suggest you check out here. It will be our last release until we launch on Steam for early access. So watch this space!
Lately I have been recording voiceovers with our female voice actor Shoana for the female characters and it’s been going amazingly. We have already managed to record 304 takes during the first session, and 280 in the second.
It is always better to record more content rather than less: you get more variety as a result, and spare takes cover you if certain recordings suffer from poor volume or disruptive pops and snaps (caused by plosives or noise). A wider pool of takes also gives you different tones and timbres to choose from, ready to be implemented within the game.
Those of you who have played the game since Alpha 2 will know the male dwarves have had voices for over a year now. However, as female dwarves are a recent addition, I now have the challenge of making the new voice-overs sound consistent with the old ones. Recording conditions are important, but as a small indie company we haven’t had a proper sound-proofed room to record in. Because the male and female voices were recorded in different environments with different levels of background noise, consistency is tricky to achieve.
In my profession it’s common knowledge that the more an audio clip needs to be manipulated, the greater the probability of quality issues. When you conduct noise removal on a recording, certain frequencies are removed or distorted in the process (the removed material is called the residue). The noise frequencies themselves are unique to each environment; this is what we call room tone. It is standard practice to record around ten seconds of room noise/tone before commencing a recording session, as shown below in Audacity.
In this case, from our most recent recording session, we can hear a computer fan, or some kind of electrical current: the ‘natural noise’ of the room. This acts as a baseline for Audacity when it is commanded to remove noise.
Let’s take this line from Shoana, and put it through the noise removal process. Below we can hear the raw file, complete with the room tone in the background.
The first step to remove noise is to get a noise profile of the room tone. Using our room tone recording, we select the whole track in Audacity, then click the ‘Effect’ menu button, and choose ‘Noise Removal’.
We are presented with this box.
By selecting ‘Get Noise Profile’, we are capturing the room tone, or the residue, so Audacity then knows what to filter out of our recordings.
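For the curious, capturing a noise profile essentially amounts to measuring the average level of each frequency in the room tone. Here is a minimal sketch of that idea in Python with NumPy; the frame size, hop size, and the synthetic stand-in for room tone (hiss plus a 120 Hz mains-style hum) are all illustrative assumptions, not Audacity’s actual implementation.

```python
import numpy as np

np.random.seed(0)  # fixed seed so the example is reproducible

def noise_profile(room_tone, frame_size=2048, hop=512):
    """Estimate a per-frequency noise floor from a room-tone recording.

    Splits the recording into overlapping windowed frames, takes the
    magnitude spectrum of each, and keeps the mean magnitude per
    frequency bin: this is the 'profile' a noise gate can later use.
    """
    window = np.hanning(frame_size)
    frames = []
    for start in range(0, len(room_tone) - frame_size + 1, hop):
        frame = room_tone[start:start + frame_size] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.mean(frames, axis=0)  # one noise-floor value per bin

# Synthetic stand-in for ~1 second of room tone at 44.1 kHz:
# low-level broadband hiss plus a 120 Hz electrical hum.
rate = 44100
t = np.arange(rate) / rate
room_tone = 0.01 * np.random.randn(rate) + 0.02 * np.sin(2 * np.pi * 120 * t)

profile = noise_profile(room_tone)
```

The hum shows up as a clear spike in `profile` around the 120 Hz bin, which is exactly the kind of fingerprint the removal step will hunt for later.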
Now we select our raw recording in the same way, highlighting the whole track. We perform the same action, navigating to ‘Effect’, then ‘Noise Removal’. This time there are some more options to consider.
These sliders control the final output. We want to make sure we give the audio the best chance for maintaining quality while removing all noise.
Noise reduction simply asks how much you want to remove. Remember, the room tone is already on the virtual clipboard as we retrieved the Noise Profile previously.
Sensitivity we leave at 0: anything lower allows noise to bleed through, and anything higher tends to distort the spoken word. This slider controls what is considered noise; the higher the value, the more of the audio will be affected.
Frequency smoothing is set to three by default, and spoken-word recordings tend to improve the higher it is set. The slider reaches 1000, but around 200 will provide much the same effect. Small changes are better for determining the optimum output.
Attack time dictates how quickly the effect will occur. We want the noise removal to be present from the start of the recording, so we will leave this slider at 0.
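To make those sliders a little more concrete, here is a heavily simplified spectral gate in Python with NumPy: any frequency bin quieter than the noise floor is attenuated by a fixed amount, and everything else passes through. The `reduction_db` and `sensitivity` parameters are loose analogues of Audacity’s sliders, not its real algorithm (which also smooths across time and frequency), and the pure sine-wave “voice” is a stand-in for a real take.

```python
import numpy as np

np.random.seed(0)  # fixed seed so the example is reproducible

def spectral_gate(audio, noise_mag, reduction_db=24.0, sensitivity=1.0,
                  frame_size=2048, hop=512):
    """Very simplified analogue of a noise-removal pass.

    Bins whose magnitude falls below sensitivity * noise floor are
    attenuated by reduction_db; louder bins pass untouched. Frames
    are recombined by windowed overlap-add.
    """
    window = np.hanning(frame_size)
    gain_floor = 10 ** (-reduction_db / 20)      # e.g. -24 dB -> ~0.063
    out = np.zeros(len(audio))
    norm = np.zeros(len(audio))
    for start in range(0, len(audio) - frame_size + 1, hop):
        frame = audio[start:start + frame_size] * window
        spec = np.fft.rfft(frame)
        mask = np.where(np.abs(spec) < sensitivity * noise_mag,
                        gain_floor, 1.0)         # gate per frequency bin
        cleaned = np.fft.irfft(spec * mask, frame_size)
        out[start:start + frame_size] += cleaned * window
        norm[start:start + frame_size] += window ** 2
    return out / np.maximum(norm, 1e-8)          # overlap-add normalise

rate = 44100
t = np.arange(rate) / rate
room_tone = 0.01 * np.random.randn(rate)                 # noise-only clip
voice = 0.5 * np.sin(2 * np.pi * 220 * t)                # stand-in 'voice'
noisy = voice + 0.01 * np.random.randn(rate)             # the raw take

# Profile: mean magnitude per bin of the room-tone clip.
window = np.hanning(2048)
noise_mag = np.mean([np.abs(np.fft.rfft(room_tone[s:s + 2048] * window))
                     for s in range(0, len(room_tone) - 2048 + 1, 512)],
                    axis=0)

cleaned = spectral_gate(noisy, noise_mag)
```

Even this toy version shows the trade-off described above: push the threshold or reduction too hard and the gate starts eating quiet parts of the voice along with the hiss.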
If we select the ‘Isolate’ radio button and press ‘Preview’, we can hear what is being taken out: the residue. We can hear this below.
We can hear that most of the room tone has been isolated and removed. We can also hear, however, that the lower frequencies of Shoana’s voice have been removed.
We will need to boost these lower frequencies with EQ later. This is another element that factors into the equation of consistency: by tailoring the EQ to suit this audio, the result will differ from the EQ applied to a recording made in a different environment.
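As a sketch of what that low-end boost looks like outside of Audacity’s EQ tools, here is a low-shelf biquad filter in plain NumPy, with coefficients from the well-known RBJ Audio EQ Cookbook. The 200 Hz corner and +6 dB gain are illustrative starting points I’ve picked for the example, not values from our actual sessions.

```python
import numpy as np

def low_shelf(audio, fs, f0=200.0, gain_db=6.0):
    """Boost frequencies below f0 with an RBJ-cookbook low-shelf biquad.

    Useful for restoring the low end of a voice that noise removal has
    thinned out. f0 and gain_db are illustrative starting points.
    """
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / 2 * np.sqrt(2)          # shelf slope S = 1
    cosw = np.cos(w0)
    b0 = A * ((A + 1) - (A - 1) * cosw + 2 * np.sqrt(A) * alpha)
    b1 = 2 * A * ((A - 1) - (A + 1) * cosw)
    b2 = A * ((A + 1) - (A - 1) * cosw - 2 * np.sqrt(A) * alpha)
    a0 = (A + 1) + (A - 1) * cosw + 2 * np.sqrt(A) * alpha
    a1 = -2 * ((A - 1) + (A + 1) * cosw)
    a2 = (A + 1) + (A - 1) * cosw - 2 * np.sqrt(A) * alpha
    b0, b1, b2, a1, a2 = b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0
    out = np.zeros_like(audio)
    x1 = x2 = y1 = y2 = 0.0
    for i, x in enumerate(audio):                # direct form I biquad
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        out[i] = y
        x2, x1 = x1, x
        y2, y1 = y1, y
    return out

fs = 44100
t = np.arange(fs) / fs
low = np.sin(2 * np.pi * 50 * t)      # below the shelf corner: boosted
high = np.sin(2 * np.pi * 2000 * t)   # above the corner: passes through

boosted = low_shelf(low, fs)          # roughly doubles in amplitude (+6 dB)
passed = low_shelf(high, fs)          # essentially unchanged
```

The point of a shelf (rather than a plain bass boost on everything) is that content above the corner frequency is left alone, so the correction only touches the range the noise removal damaged.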
Below we can see a comparison of the audio waveform before and after conducting noise reduction.
Because the noise was quite prominent to start with, the resultant audio will be affected differently than if the audio was recorded in a quieter environment. We can hear the output below.
If we listen carefully, we can still hear some frequencies that haven’t been effectively isolated. To eradicate any leftover noise, we need to isolate the residue present before or after the spoken words, capture a new Noise Profile, then apply Noise Removal again. The downside is that each pass further affects the quality of the recording.
The trick is to ensure that you have control over as many variables as possible before recording. Keep a little checklist: the room size, the microphone type, the level of background noise, and the level at which the voice actor delivers their lines. You can’t always control those elements before recording, so maintaining quality and consistency has its challenges, but it is by no means unachievable.
For the Nysko team, our last working day at the office will be Monday the 17th, after which we will be revelling in MALT BEER AND RED MEAT OFF THE BONE at our local tavern.
From all of us at Nysko Games, we wish you all a very Merry Christmas.
For further updates, keep a weather eye on your email notifications and our social media pages! There will be some exciting treats to be had! Also don’t forget our Alpha 3.0 demo is out now, and is our last demo release before we launch for early access on Steam. Sign up to our mailing list now to get your link today!