View Full Version : Both clips normalized... but somehow clipping when mixed?


Deke Ryland
June 17th, 2008, 03:13 PM
I'm using PPro CS3 and have two normalized audio clips in the timeline. Neither clipped in the original recordings, and when played in isolation, neither clip pushes the PPro VU meters into the red. However, when they play together, the meters occasionally clip into the red.

Now I'm no audio expert, but I thought that as long as neither clip was clipping, mixing perfectly normalized audio would never put the VU meters into the red? Am I wrong about this? One audio track is dialog, and it was recorded at near full normalization to provide maximum dynamic range. The other clip is a music track that was pre-normalized by the producer (and PPro shows it's -0.4 dB from the top... so it's not even normalized 100%).

Anyone know how I can get around this so I don't get clipping on the master render? Thanks for any help or suggestions.

Ben Moore
June 17th, 2008, 03:48 PM
Yeah, that happens. Just lower the volumes of the individual audio tracks in the audio mixer until the master is no longer clipping. Two tracks that don't clip on their own can clip when they're played/mixed together, because the master is the "sum" of the two tracks.
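If it helps to see the arithmetic, here's a quick Python sketch with made-up sample values (full scale taken as 1.0): two samples that are each safely below full scale can still add up to more than full scale on the master.

    import math

    # Full scale is 1.0 (0 dBFS). Two samples that don't clip on their own:
    a = 0.9   # a dialog sample, about -0.9 dBFS
    b = 0.9   # a music sample landing at the same instant

    master = a + b   # the master bus simply adds the tracks together
    print(master)                            # 1.8 -- over full scale, so it clips
    print(20 * math.log10(master), "dBFS")   # about +5.1 dB into the red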

Hope this helps

Ben

Deke Ryland
June 17th, 2008, 04:06 PM
Hey Ben... wow... I had no idea the master was the sum of all the tracks. The thing is, though... I don't want to lower the volume of the dialog track from full normalization. Or is that impossible if I want to lay music with it? It just seems there should be a way to keep the dialog track perfectly normalized and still have music embedded in it?? Again, I'm far from an audio editor, so I'm really seeking guidance here. Thanks for your help!

Shaun Roemich
June 17th, 2008, 04:20 PM
Drop both clips by the same amount and then their relationship to each other will remain constant. You only have SO much "volume" to work with.

Bill Ravens
June 17th, 2008, 08:43 PM
1+1 = 2

it's simple math

And how did you normalize? If you normalized to RMS, with no compression, then there's a good chance each channel's peaks are being clipped. If you normalized to peaks (PPM), then the original arithmetic applies, i.e. 1+1=2.
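To see why the RMS case is riskier, here's a small Python sketch (the test signal and the -20 dBFS RMS target are illustrative, not anything a particular editor exposes): peak normalization can never push a sample past its target, but hitting an RMS target can shove a transient well past full scale.

    import numpy as np

    def db_to_gain(db):
        return 10 ** (db / 20.0)

    def gain_to_db(gain):
        return 20.0 * np.log10(gain)

    # One second of quiet tone with a single loud transient in it.
    sr = 48000
    t = np.arange(sr) / sr
    signal = 0.05 * np.sin(2 * np.pi * 440 * t)
    signal[1000:1100] = 0.8   # the transient peak

    # Peak normalization: scale so the loudest sample lands exactly on target.
    peak_gain = db_to_gain(-0.4) / np.abs(signal).max()

    # RMS normalization: scale so the average level lands on target.
    rms = np.sqrt(np.mean(signal ** 2))
    rms_gain = db_to_gain(-20.0) / rms

    print("peak-normalized max: %+.1f dBFS" % gain_to_db(np.abs(signal * peak_gain).max()))  # -0.4
    print("RMS-normalized max:  %+.1f dBFS" % gain_to_db(np.abs(signal * rms_gain).max()))   # about +3.9: clipped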

Petri Kaipiainen
June 18th, 2008, 01:07 AM
The math is simple: if you mix two equally loud tracks, the result is 6 dB higher in volume. The "mixing" is done by simply adding the sample values together, so the result is twice as big a number, which in digital audio corresponds to a 6 dB higher volume. With 4 equal tracks the result is 12 dB louder, etc.

Statistically, of course, this seldom happens with real signals because levels vary widely, but it is a real possibility for several loud signals to occur at the same time.

So, as said above, lower the levels 3 dB for each track before mixing them together. If the end result is too low in volume, raise the master level accordingly.
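In Python, that arithmetic looks like this (the -0.4 dBFS peak is borrowed from the original post; everything else is illustrative). Note that -3 dB per track covers typical material where peaks rarely coincide, while -6 dB per track covers even the worst case of two full peaks landing on the same sample.

    import math

    def db_to_gain(db):
        return 10 ** (db / 20.0)

    peak = db_to_gain(-0.4)   # each track's worst-case peak, near full scale

    for trim_db in (-3.0, -6.0):
        summed = 2 * peak * db_to_gain(trim_db)
        print("%.0f dB per track -> summed peak %+.1f dBFS" % (trim_db, 20 * math.log10(summed)))
    # -3 dB: about +2.6 dBFS -- still clips if both peaks coincide exactly (rare).
    # -6 dB: about -0.4 dBFS -- safe even in that worst case.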

Seth Bloombaum
June 18th, 2008, 09:53 AM
...One audio track is dialog, and it was recorded at near full normalization to provide maximum dynamic range. The other clip is a music track that was pre-normalized by the producer (and PPro shows it's -0.4 dB from the top... so it's not even normalized 100%)...
Bring that music down. A mix that has dialog and background in the relative volume relationship you describe is not a mix, it's a hack job, even when the final result is not clipping.

It's difficult to imagine how you might be doing this - do you have good audio monitors? Most music at these levels will drown out dialog unless the dialog is extremely compressed in volume range (dynamic compression).

This is on the assumption that it is important for all the dialog to be heard and understood.

Shaun Roemich
June 18th, 2008, 02:33 PM
To further complicate the 1+1=2 point, only the transients where the audio files are BOTH approaching full saturation (0 dBFS) will cause the overmodulation distortion/clipping. Normalizing doesn't change the dynamics of a piece of music or dialog; it just applies gain so that the loudest section is adjusted to meet your Normalize target. Normalization UNIFORMLY raises the volume of the entire waveform by a set amount. Compression, on the other hand, coupled with normalization...
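A tiny Python illustration of that distinction (the peak figures are made up): normalization is one constant multiplier across the whole file, so the gap between loud and quiet sections, i.e. the dynamics, comes through unchanged.

    import math

    def to_db(x):
        return 20 * math.log10(x)

    quiet_peak, loud_peak = 0.1, 0.5   # loudest samples of a quiet and a loud section

    # Normalize: one uniform gain so the loudest section lands at -0.4 dBFS.
    gain = 10 ** (-0.4 / 20) / loud_peak

    print("gap before: %.1f dB" % (to_db(loud_peak) - to_db(quiet_peak)))                 # 14.0 dB
    print("gap after:  %.1f dB" % (to_db(loud_peak * gain) - to_db(quiet_peak * gain)))   # still 14.0 dB
    # A compressor would apply more gain reduction to the loud section than the
    # quiet one, shrinking that gap -- that's the part normalization never does.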

And yes, if you are mixing -0.4 dBFS music with -0.4 dBFS voice over, your music is too loud.

In my humble opinion, normalizing is for mastered tracks, to ensure that the output is sufficiently loud for your purposes while maintaining the dynamics of the mix. Normalizing too early in the process adds gain, which raises the noise floor, decreasing your ACTUAL dynamic range. That is why most of us mix soundtracks at -12 to -20 dBFS and THEN adjust the final mix for its intended purpose (for multimedia I get as close to 0 dBFS as I can, because everyone "knows" LOUDER IS BETTER; for broadcast or DVD I follow the established standards).
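Putting rough numbers on the noise-floor point (the -60 dBFS floor is an invented example): the gain that normalizes a raw track lifts its recorded hiss by exactly the same amount, so you gain nothing in signal-to-noise and spend your mix-buss headroom doing it.

    raw_peak_db = -18.0    # a track recorded with sensible headroom
    raw_noise_db = -60.0   # its recorded noise floor (preamp hiss, room tone)

    # Normalizing the raw track to -0.4 dBFS means applying +17.6 dB of gain:
    gain_db = -0.4 - raw_peak_db

    print("noise floor: %.1f -> %.1f dBFS" % (raw_noise_db, raw_noise_db + gain_db))
    print("peak-to-noise span: %.1f dB either way" % (raw_peak_db - raw_noise_db))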

Ty Ford
June 19th, 2008, 06:29 AM
In my humble opinion, normalizing is for mastered tracks, to ensure that the output is sufficiently loud for your purposes while maintaining the dynamics of the mix. Normalizing too early in the process adds gain, which raises the noise floor, decreasing your ACTUAL dynamic range. That is why most of us mix soundtracks at -12 to -20 dBFS and THEN adjust the final mix for its intended purpose (for multimedia I get as close to 0 dBFS as I can, because everyone "knows" LOUDER IS BETTER; for broadcast or DVD I follow the established standards).

Shaun's right. Somewhere, someone got the idea that finalizing every track before mixing was a good idea. As explained above, there's only so much room. If your tracks are peaking at -6 to -12 and you're recording at 24 bits, you'll be fine. I try to keep my voice tracks peaking at -3 to -6 when I'm cutting narration or commercials, but I usually record them with just a little compression or limiting to pack them.
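To put numbers on the 24-bit point, a quick Python sketch (using the rough 6 dB-per-bit rule of thumb, not an exact noise model):

    # Roughly 6 dB of dynamic range per bit of resolution:
    for bits in (16, 24):
        print("%d-bit quantization floor: about %.0f dBFS" % (bits, -6.02 * bits))
    # 16-bit: about -96 dBFS; 24-bit: about -144 dBFS. Peaking at -12 dBFS on a
    # 24-bit recording still leaves ~132 dB above the floor, so the headroom is free.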

"Embedding" is a little alien to me. Combining tracks (mixing them) is a tricky process. It's not just about level. EQ and stereo position have as much to do with it.

When I'm mixing music with vocals, I'm extremely careful about the level of the vocal. It's carrying the message. I usually write a lot of automation to keep the vocal exactly where I want it and then bring up the other elements of the mix around it. Narration voice levels are a lot more consistent and don't require as much attention.

Hope this helps.

Regards,

Ty Ford

Bill Ravens
June 19th, 2008, 06:56 AM
For reasons I can't seem to understand, the clips I receive from the production stage have RMS audio levels ranging from -48dBFS to -20dBFS. I've talked to the director, but, to no avail. Apparently, the sound engineer is asleep at the wheel. Ah well, such is the plight of a school production.

My point is, it's necessary to normalize each clip beforehand. Trying to normalize levels after the edits are assembled would be a nightmare. Or is there a way I'm missing here, Ty?

Ty Ford
June 19th, 2008, 07:19 PM
Well, RMS levels can run a lot lower than peak, but that does sound a bit low.

Anyway, in that case I'd normalize, but only up to maybe -6 dB on peaks. If you go much higher than that, you run out of headroom on the mix buss.
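A minimal sketch of that workflow in Python (the helper and the clip peaks are hypothetical, loosely matching the levels Bill describes): compute one gain per clip that brings its peak up to -6 dBFS, so everything arrives at a consistent level with headroom to spare on the mix buss.

    def normalize_gain_db(clip_peak_db, target_peak_db=-6.0):
        """Gain, in dB, that moves a clip's measured peak up to the target."""
        return target_peak_db - clip_peak_db

    # Clips arriving at wildly different levels, as in Bill's school production:
    for peak in (-48.0, -33.0, -20.0):
        print("clip peaking at %5.1f dBFS -> apply %+5.1f dB" % (peak, normalize_gain_db(peak)))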

Regards,

Ty Ford

Bill Ravens
June 19th, 2008, 07:40 PM
great. many thanx for your feedback.