June 17th, 2008, 03:13 PM | #1 |
Major Player
Join Date: Nov 2006
Location: Washington, USA
Posts: 213
|
Both clips normalized... but somehow clipping when mixed?
I'm using PPro CS3 and have two normalized audio clips in the timeline. Neither clipped originally, and when played in isolation, neither pushes the PPro VU meters into the red. However, when combined, they occasionally clip into the red.
Now I'm no audio expert, but I thought that as long as neither clip was clipping, mixing perfectly normalized audio would not put the VU meters into the red? Am I wrong on this? One track is dialog, recorded cleanly at near-full normalization to provide the maximum dynamic range. The other is a music track that was pre-normalized by the producer (and PPro shows it's -0.4 dB from the top... so it's not even normalized 100%). Anyone know how I can get around this so I don't get clipping on the master render? Thanks for any help or suggestions. |
June 17th, 2008, 03:48 PM | #2 |
Regular Crew
Join Date: Nov 2007
Location: Vancouver, WA
Posts: 190
|
Yeah, that happens. Just lower the volumes of the individual audio tracks in the audio mixer until the master is no longer clipping. Two tracks that don't clip on their own can clip when played/mixed together, because the master is the "sum" of the two tracks.
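If it helps to see the numbers, here's a quick sketch (illustrative Python with NumPy, nothing Premiere-specific) of two signals that each peak at -0.4 dBFS on their own but sum well past full scale on the master:

```python
import numpy as np

# Two 1 kHz test tones, each peaking at -0.4 dBFS, so neither clips on its own.
sr = 48000
t = np.arange(sr) / sr
peak = 10 ** (-0.4 / 20)                       # -0.4 dBFS as a linear amplitude (~0.955)
dialog = peak * np.sin(2 * np.pi * 1000 * t)
music = peak * np.sin(2 * np.pi * 1000 * t)    # worst case: their peaks line up exactly

mix = dialog + music                           # the master bus simply sums the tracks

print(f"dialog peak: {np.max(np.abs(dialog)):.3f}")   # ~0.955 -> legal
print(f"music  peak: {np.max(np.abs(music)):.3f}")    # ~0.955 -> legal
print(f"mix    peak: {np.max(np.abs(mix)):.3f}")      # ~1.910 -> past full scale, clips
```

The worst case is when the loud peaks of both tracks happen to line up, which is exactly when the master meter goes red.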
Hope this helps, Ben |
June 17th, 2008, 04:06 PM | #3 |
Major Player
Join Date: Nov 2006
Location: Washington, USA
Posts: 213
|
Hey Ben... wow... I had no idea the master was the sum of all the clips. The thing is, though... I don't want to lower the volume of the dialog track from full normalization. Or is that impossible if I want to lay music under it? It just seems there should be a way to keep the dialog track perfectly normalized and still have music mixed in with it?? Again, I'm far from an audio editor, so I'm really seeking guidance here. Thanks for your help!
|
June 17th, 2008, 04:20 PM | #4 |
Inner Circle
Join Date: Sep 2002
Location: Vancouver, British Columbia (formerly Winnipeg, Manitoba) Canada
Posts: 4,088
|
Drop both clips by the same amount and then their relationship to each other will remain constant. You only have SO much "volume" to work with.
__________________
Shaun C. Roemich Road Dog Media - Vancouver, BC - Videographer - Webcaster www.roaddogmedia.ca Blog: http://roaddogmedia.wordpress.com/ |
June 17th, 2008, 08:43 PM | #5 |
1+1 = 2
It's simple math. And how did you normalize? If you normalized to RMS, with no compression, then there's a good chance each channel's peaks are being clipped. If you normalized to PPM (peak), then the original arithmetic applies, i.e. 1+1=2.
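To illustrate the RMS-vs-peak point, a rough sketch with two hypothetical helpers (plain NumPy, not any editor's actual normalize function): RMS normalization can push individual peaks past full scale, while peak (PPM) normalization by definition cannot:

```python
import numpy as np

def normalize_peak(x, target_dbfs=-0.1):
    """Scale so the loudest sample lands at target_dbfs; peaks can never exceed it."""
    return x * (10 ** (target_dbfs / 20)) / np.max(np.abs(x))

def normalize_rms(x, target_dbfs=-12.0):
    """Scale so the average (RMS) level lands at target_dbfs; peaks fall wherever they fall."""
    rms = np.sqrt(np.mean(x ** 2))
    return x * (10 ** (target_dbfs / 20)) / rms

# A "peaky" signal: quiet overall, with occasional loud transients.
rng = np.random.default_rng(0)
x = 0.05 * rng.standard_normal(48000)
x[::4800] = 0.8                                 # sparse loud hits

print(np.max(np.abs(normalize_peak(x))))        # <= 1.0 by construction
print(np.max(np.abs(normalize_rms(x))))         # > 1.0 here -> those peaks get clipped
```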
|
June 18th, 2008, 01:07 AM | #6 |
Major Player
Join Date: Apr 2007
Location: Espoo Finland
Posts: 380
|
The math is simple: if you mix two equally loud tracks, the result is 6 dB higher in volume. The "mixing" is done by simply adding the sample values together, so the result is twice as big a number, which in digital audio corresponds to 6 dB more volume. With four equal tracks the result is 12 dB louder, and so on.
Statistically, of course, this seldom happens with real signals because levels vary widely, but it is a real possibility for several loud signals to occur at the same time. So, as said above, lower the level of each track by 3 dB before mixing them together. If the end result is too low in volume, raise the master level accordingly. |
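The decibel arithmetic behind that, for anyone who wants to check it (plain Python, nothing app-specific):

```python
import math

def to_db(amplitude_ratio):
    """Amplitude ratio -> decibels."""
    return 20 * math.log10(amplitude_ratio)

def to_ratio(db):
    """Decibels -> amplitude ratio."""
    return 10 ** (db / 20)

print(to_db(2))                    # ~ +6.02 dB: two equal tracks whose samples add in phase
print(to_db(4))                    # ~ +12.04 dB: four equal tracks
print(to_ratio(-3.0))              # ~ 0.708: a 3 dB trim scales each sample to about 71%
print(to_db(2 * to_ratio(-3.0)))   # ~ +3.01 dB: worst case if two trimmed peaks still line up exactly
```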
June 18th, 2008, 09:53 AM | #7 | |
Inner Circle
Join Date: Sep 2003
Location: Portland, Oregon
Posts: 3,420
|
It's difficult to imagine how you might be doing this - do you have good audio monitors? Most music at these levels will drown out dialog unless the dialog is heavily compressed in its volume range (dynamic compression). This is on the assumption that it's important for all the dialog to be heard and understood. |
|
June 18th, 2008, 02:33 PM | #8 |
Inner Circle
Join Date: Sep 2002
Location: Vancouver, British Columbia (formerly Winnipeg, Manitoba) Canada
Posts: 4,088
|
To further complicate the 1+1=2 point, only the transients where BOTH audio files are approaching full saturation (0 dBFS) will cause overmodulation distortion/clipping. Normalizing doesn't change the dynamics of a piece of music or dialog; it just applies gain so that the loudest section is adjusted to meet your Normalize target. Normalization UNIFORMLY raises the volume of the entire waveform by a set amount. Compression, on the other hand, coupled with normalization...
And yes, if you are mixing -0.4 dBFS music with -0.4 dBFS voice over, your music is too loud. In my humble opinion, normalizing is for mastered tracks, to ensure that the output is sufficiently loud for your purposes while maintaining the dynamics of the mix. Normalizing too early in the process adds gain, which raises the noise floor and decreases your ACTUAL dynamic range. That's why most of us mix sound tracks at -12 to -20 dBFS and THEN adjust the final mix for its intended purpose (for multimedia I get as close to 0 dBFS as I can, because everyone "knows" LOUDER IS BETTER; for broadcast or DVD I acknowledge the established standards).
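To put that in numbers, a small sketch (hypothetical NumPy helpers, not any editor's actual normalize): a single uniform gain cannot change the peak-to-RMS relationship, which is a rough stand-in for dynamics:

```python
import numpy as np

def peak_normalize(x, target_dbfs=0.0):
    """Apply one uniform gain so the loudest sample lands at target_dbfs."""
    gain = 10 ** (target_dbfs / 20) / np.max(np.abs(x))
    return x * gain

def crest_factor_db(x):
    """Peak-to-RMS distance in dB -- a rough stand-in for 'dynamics'."""
    return 20 * np.log10(np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2)))

rng = np.random.default_rng(1)
mix = 0.1 * rng.standard_normal(48000)       # a quiet stand-in for a low-level mix

print(crest_factor_db(mix))                  # roughly 12-13 dB for this noise
print(crest_factor_db(peak_normalize(mix)))  # exactly the same: uniform gain cancels out of the ratio
```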
__________________
Shaun C. Roemich Road Dog Media - Vancouver, BC - Videographer - Webcaster www.roaddogmedia.ca Blog: http://roaddogmedia.wordpress.com/ |
June 19th, 2008, 06:29 AM | #9 | |
Inner Circle
Join Date: Nov 2004
Location: Baltimore, MD USA
Posts: 2,337
|
"Embedding" is a little alien to me. Combining tracks (mixing them) is a tricky process. It's not just about level. EQ and stereo position have as much to do with it. When I'm mixing music with vocals, I'm extremely careful about the level of the vocal. It's carrying the message. I usually write a lot of automation to keep the vocal exactly where I want it and then bring up the other elements of the mix around it. Narration voice levels are a lot more consistent and don't require as much attention. Hope this helps. Regards, Ty Ford |
|
June 19th, 2008, 06:56 AM | #10 |
For reasons I can't seem to understand, the clips I receive from the production stage have RMS audio levels ranging from -48 dBFS to -20 dBFS. I've talked to the director, but to no avail. Apparently the sound engineer is asleep at the wheel. Ah well, such is the plight of a school production.
My point is, it's necessary to normalize each clip beforehand. Trying to normalize levels after the edits are assembled would be a nightmare. Or is there a way I'm missing here, Ty? |
|
June 19th, 2008, 07:19 PM | #11 |
Inner Circle
Join Date: Nov 2004
Location: Baltimore, MD USA
Posts: 2,337
|
Well, RMS levels can run a lot lower than peak levels, but that does sound a bit low.
Anyway, in that case I'd normalize, but only up to maybe -6 dB on peaks. If you go much higher than that, you run out of headroom on the mix buss. Regards, Ty Ford |
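A quick sketch of that -6 dB arithmetic (a hypothetical NumPy helper, not how any editor's normalize dialog actually works): the gain needed to land a clip's peaks at -6 dBFS:

```python
import numpy as np

def gain_to_reach(x, target_peak_dbfs=-6.0):
    """How many dB of gain would put this clip's loudest sample at the target level?"""
    current_peak_dbfs = 20 * np.log10(np.max(np.abs(x)))
    return target_peak_dbfs - current_peak_dbfs

# A clip whose loudest sample sits at about -20 dBFS:
clip = 0.1 * np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)
print(gain_to_reach(clip))    # ~ +14 dB of gain lands its peaks at -6 dBFS, leaving 6 dB of headroom
```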