August 25th, 2009, 10:53 PM | #1 |
Regular Crew
Join Date: Jul 2009
Location: Yorba Linda, CA
Posts: 35
I don't understand the fundamentals of stereo. Please explain.
First, I am familiar with the concepts of stereo imaging but I am not sure how it is applied to general video use. I have a hobbyist background with studio recording so that may help to give you a point of reference.
Specifically: I generally shoot documentaries with my rode NTG 3 attached to my Ex1 or on a boom (no mixer) if I am doing interviews. I will often use my second channel to have a wireless microphone on whoever I am doing a documentary on for when they are running around doing whatever it is they do. I may just have my internal microphone on auto for the 2nd channel just as a backup. In post, I generally just find whatever channel has the best quality audio and double it up on 2 separate tracks and pan one hard left and the other hard right. If I need to create the sensation that my subject is on one side of the screen or another, I mix it until the audio sounds like its coming from the direction of the speaker. Ive had some friends listen/watch my work and they say "why is it all in mono?" Clearly my audio has not stereo imaged. I just double up a track and pan it so its a stereo signal but without a stereo imaged. If there is background noise, it also appears to be coming from the direction I am trying to make the speaker come from. Am i doing something wrong? Does the average person expect a stereo image for all audio period? If so, how would I achieve it? Using a stereo setup for all applications? That doesn't seem practical and I only have 2 xlr inputs to work with as of now. Whenever I record music, I just do a XY or ORTF and its no big deal. Basically, whats standard operating procedure for general video dialogue? My NLE is vegas pro 9 if that helps. |
August 26th, 2009, 02:44 AM | #2 |
Inner Circle
Join Date: Sep 2007
Location: Cambridge UK
Posts: 2,853
You are doing OK! (and in fact are ahead of many as you already realise the necessity to boom the NTG-3 rather than leave it on camera - the worst place for anything other than run and gun!).
Talking heads should be in mono. Music, ambient sound and the like are best in stereo (there are exceptions where mono is fine, or even best). I suggest you do a bit of reading about audio for video (I'm sure the audio experts on here will suggest some good sources). In time, perhaps consider a small portable stereo digital audio recorder (Sony, Zoom, Edirol etc. - many options, just read around on here) for capturing ambient sound independently of your EX1/NTG-3 and radio-mic camera setup, for mixing into the video in post to create a more complex sound texture when it's required.
__________________
Andy K Wilkinson - https://www.shootingimage.co.uk Cambridge (UK) Corporate Video Production
August 26th, 2009, 02:52 AM | #3 |
Inner Circle
Join Date: Mar 2005
Location: Hamilton, Ontario, Canada
Posts: 5,742
As Andy said, SOP in both film and video is for dialog to be mono and centered, with music and some FX in stereo. You may be working too hard doubling your tracks and panning them hard left and right - just leave it as one mono track, pan it to the centre, and leave it there. Audiences can find it distracting when a dialog track is panned around to follow the speaker from side to side across the screen, but if you want to try it, a single track with the pan twisted a bit left or right of centre will do the job.
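For anyone curious what twisting the pan a bit off centre does under the hood, here is a minimal numpy sketch of a constant-power pan law (a generic illustration only - Vegas offers several pan modes, and the function name and the -0.2 example value are just placeholders):

```python
import numpy as np

def pan_mono(x, position=0.0):
    """Constant-power pan of a mono signal into a stereo pair.

    position runs from -1.0 (hard left) through 0.0 (centre) to +1.0
    (hard right); "a bit left of centre" might be around -0.2.
    """
    angle = (position + 1.0) * np.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    left = np.cos(angle) * x                 # centre gives -3 dB per side
    right = np.sin(angle) * x
    return np.stack([left, right], axis=-1)

# e.g. dialog nudged slightly toward screen-left (placeholder array):
# stereo = pan_mono(dialog_samples, position=-0.2)
```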
__________________
Good news, Cousins! This week's chocolate ration is 15 grams!
August 26th, 2009, 08:26 AM | #4 |
Trustee
Join Date: Jul 2003
Location: Burlington
Posts: 1,976
As Steve said, especially if you're working in Vegas, you don't have to double your tracks. Just right-click on an audio event and go down to "Channels", then select the appropriate choice for your situation. Preferably do this before you start chopping up a long event into many sections.
Then use the pan control on that track as needed. You can always switch an individual event to a different channel selection later.
August 26th, 2009, 12:26 PM | #6 |
Inner Circle
Join Date: Mar 2005
Location: Hamilton, Ontario, Canada
Posts: 5,742
Doubling a track can add some richness by introducing what is effectively a very short time-constant reverb.
__________________
Good news, Cousins! This week's chocolate ration is 15 grams!
August 26th, 2009, 12:31 PM | #7 |
Inner Circle
Join Date: Sep 2002
Location: Vancouver, British Columbia (formerly Winnipeg, Manitoba) Canada
Posts: 4,088
As long as the delay between tracks is minimal, you'll get "reverb" or "chorus" effects from phase cancellation. Strictly doubling SHOULD do nothing more than increase the overall output level by 3-6 dB (depending on the method used for calculating/monitoring dB). If the overall quality of the audio "drops out" after doubling, you've created excess phase cancellation.
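To put rough numbers on that, here's a small numpy sketch (a generic illustration, not anything Vegas- or camera-specific; the 1 kHz tone and the 0.5 ms slip are made-up values): an exact, sample-aligned double sums about 6 dB hotter, while a slipped copy can cancel badly.

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
voice = 0.1 * np.sin(2 * np.pi * 1000 * t)      # stand-in for a dialog track

def dbfs(x):
    """RMS level in dB relative to full scale (tiny offset avoids log(0))."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

doubled = voice + voice                          # exact duplicate, no offset
print(dbfs(voice), dbfs(doubled))                # doubled reads ~6 dB hotter

# Slip one copy by 0.5 ms (half a cycle at 1 kHz) and the sum collapses.
offset = int(0.0005 * sr)
slipped = voice[:-offset] + voice[offset:]
print(dbfs(slipped))                             # near-total cancellation
```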
__________________
Shaun C. Roemich Road Dog Media - Vancouver, BC - Videographer - Webcaster www.roaddogmedia.ca Blog: http://roaddogmedia.wordpress.com/
August 26th, 2009, 01:32 PM | #8 |
Trustee
Join Date: Jul 2003
Location: Burlington
Posts: 1,976
The great thing about the video editing revolution is that everyone gets to do it their own way.
I was mainly commenting on a way to reduce the effort of editing. Doubling the exact same audio onto a second track adds complexity to the edit in many subtle ways: the time it takes to duplicate the items, the added track count (often a visual handicap on your editing screen), the increased need to group items, the greater risk of mono incompatibility, and the danger of slipping some event by even one frame and causing a severe echo.

An accident like that can happen way at the other end of the editing timeline if you're using track ripple edits and some group or track doesn't shift exactly as you're expecting. Those errors may go undetected without closely inspecting the final timeline. A one-frame slip in lip-sync won't stand out the way a one-frame error between doubled audio tracks will.

There are lots of people who like to double for a variety of reasons, and that's their choice, so I won't argue against them - I'm simply playing Devil's Advocate for a moment. As with noise reduction and echo removal, I'd rather not need the fix in the first place: recording the original tracks so they're of high enough quality to stand on their own is what I strive for (most of the time!). It not only sounds better, it's just faster and easier to edit. That leaves more time for things like re-editing a paragraph someone in the legal department changed <groan>.
August 26th, 2009, 04:04 PM | #9 |
Inner Circle
Join Date: May 2006
Location: Camas, WA, USA
Posts: 5,513
Yep. Keep the dialog centered. Pan the music wide. In 5.1, limit music on the rear speakers to light ambiance. Place foley and sound design wherever you want.
Yeah, doubling tracks to create a mono mix isn't necessary. There is one situation where I do like to double tracks: narration.

1) Make a mono track.
2) EQ it as needed.
3) Mess with envelopes, if needed, to keep the level somewhat consistent.
4) Duplicate the track.
5) Compress the heck out of one of the tracks. Something like 20:1, -25 dB threshold.
6) Mix to taste.

This really fattens up a voice. The original track generally has detail, but can sound thin. The compressed track is full, but lacks dynamics and punch. Mix the two and get the best of both worlds - and keep them both mono and centered.

And, yes, make sure that they are perfectly aligned in time. You'll have no phase issues at all. Group them so they don't slip. Route them to a new "Narrator" bus. If you want another pass at levels, EQ, reverb, delay, etc., add it on that bus, so it affects the mixed tracks. Trying to go back and affect the original tracks will make your head explode. :)
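If it helps to see that recipe as signal flow, here is a rough numpy sketch (the 20:1 ratio and -25 dB threshold come from the steps above; the crude static compressor, the mix gains and the function names are placeholders of my own, and a real compressor would add attack/release smoothing):

```python
import numpy as np

def compress(x, threshold_db=-25.0, ratio=20.0):
    """Very crude static compressor: squash anything over the threshold.
    No attack/release - just enough to show the idea."""
    level_db = 20 * np.log10(np.abs(x) + 1e-12)
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio)     # 20:1 flattens the overs
    return x * 10 ** (gain_db / 20)

def fatten_narration(voice, dry_gain=1.0, squashed_gain=0.6):
    """Mix the original take with a heavily compressed duplicate.
    Both copies stay mono, centered and sample-aligned, so no phase issues."""
    squashed = compress(voice)                           # step 5: ~20:1, -25 dB
    return dry_gain * voice + squashed_gain * squashed   # step 6: mix to taste

# Usage: load a mono narration file however you like, then
# fattened = fatten_narration(narration_samples)
```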
__________________
Jon Fairhurst
August 28th, 2009, 11:44 PM | #10 |
Regular Crew
Join Date: Jun 2009
Location: Tampa, FL USA
Posts: 42
When an exact copy of a track is delayed by less than about 30 ms (thirty milliseconds), it will sound like one track/sound. Delaying a copy gives it a kind of chorus effect, but basically it muddies up the combined waveform to make it sound more 'fat' or 'rich'.

I completely agree with Jon about narration, except that if you're going to duplicate it and compress the heck out of it (parallel compression), just pop a de-esser on the second track and compress it - no need to kill yourself on EQ for the second track. I see the workflow Jon has laid out: it's simpler to EQ the main track and then duplicate, etc.

My personal trick (since I work in PT) is to bus the narration track to an aux track, compress the crap out of that, then add a 4-7 ms delay. This adds a fullness that doesn't chorus or cause phase issues. After you get your levels mixed right between the two, I send the outputs of those to a group track and control the level with just one fader (kind of like grouping). But who's counting.
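In rough numpy terms, that aux-bus move looks something like the sketch below (my own illustration, not a Pro Tools recipe; `bus` stands in for the heavily compressed aux return, and 5 ms is just one value from the 4-7 ms range). Note Mike's point a few posts down: even a few milliseconds of delay comb-filters the sum, so it is worth checking the result in mono.

```python
import numpy as np

def mix_delayed_bus(voice, bus, sr=48000, delay_ms=5.0, bus_gain=0.5):
    """Mix a processed parallel bus back under the dry voice after a short delay.

    `voice` is the dry mono narration; `bus` would be the heavily compressed
    aux return. A 4-7 ms offset keeps the two fused as one sound, though
    (as noted later in the thread) it still comb-filters the sum.
    """
    d = int(sr * delay_ms / 1000)
    delayed_bus = np.concatenate([np.zeros(d), bus[:-d]])   # pad front, keep length
    return voice + bus_gain * delayed_bus
```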
August 29th, 2009, 08:21 AM | #11 |
Inner Circle
Join Date: Nov 2004
Location: Baltimore, MD USA
Posts: 2,337
Some folks do this rather than boost gain on an audio clip, or when they run out of gain. I did that recently when shooting a pre-vis. I was booming to a mixer and fed both mixer outputs to the camera, a Sony EX3. In post, I had both audio tracks on the timeline. Because it was dialog and I wanted it centered, I raised both tracks when I wanted more gain.

As to Dan's concern: not everything on the timeline should be stereo. Dan, start thinking about what COULD be stereo - ambience, music, some sound effects depending on where they occur.

To do this correctly, you need a good audio monitoring environment. You need two good monitors set up in a pretty much equilateral triangle with your head, tweeters at ear height. If the distance between the monitors is much greater than the distance from your head to either monitor, you'll under-mix the stereo field. If the distance between the monitors is much less than the distance from your head to either monitor, you'll over-mix the stereo field.

You also need a monitor system with a solid low end down to at least 50 Hz; if you don't have that, you need a subwoofer. Genelec, ADAM, Meyer, (some) JBL and K&H are good brands. The powered Event Opals are remarkable - $3k a pair, no subwoofer needed. I am patiently (well, almost) waiting for a pair.

Regards,

Ty Ford

PS: Steve, will you be at AES in October?
August 29th, 2009, 08:26 AM | #12 |
Inner Circle
Join Date: Nov 2004
Location: Baltimore, MD USA
Posts: 2,337
If you don't have a really good ear for cancellation effects, I'd suggest you don't try this. And by all means have a good monitoring system and check everything in mono.

Regards,

Ty Ford
August 29th, 2009, 11:07 AM | #13 |
Inner Circle
Join Date: Mar 2005
Location: Hamilton, Ontario, Canada
Posts: 5,742
I'd sure like to get to AES this year but I don't know if I'll be able to make it.
__________________
Good news, Cousins! This week's chocolate ration is 15 grams!
August 29th, 2009, 06:44 PM | #14 |
Regular Crew
Join Date: Jan 2009
Location: Portland OR
Posts: 159
Short delays of a few milliseconds will cause phase cancellation at higher frequencies.
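As a rule of thumb, summing a signal with a copy delayed by tau seconds puts notches at f = (2k + 1) / (2 * tau). A quick back-of-the-envelope sketch (a generic calculation, not tied to any particular plugin):

```python
def comb_nulls(delay_ms, count=5):
    """First few comb-filter notch frequencies (Hz) for a delay in milliseconds."""
    tau = delay_ms / 1000.0
    return [(2 * k + 1) / (2 * tau) for k in range(count)]

print(comb_nulls(1.0))   # 1 ms -> [500.0, 1500.0, 2500.0, 3500.0, 4500.0]
print(comb_nulls(5.0))   # 5 ms -> [100.0, 300.0, 500.0, 700.0, 900.0]
```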
The classic 'fatten the voice' trick used to avoid this was to run two Eventide Harmonizers, one a quarter tone up (or so), the other a quarter tone down, panned left and right. That used to require about $7,000 worth of Eventide gear; nowadays you probably get a pitch-shift plugin for free.

-Mike
August 30th, 2009, 03:00 AM | #15 |
Major Player
Join Date: Jul 2007
Location: Bristol U.K.
Posts: 244
Doubling the tracks and panning them out... you won't find a pro doing it that way. Even in the old 1/4-inch days we only ever took one side of the recording and panned it centrally - the NLE just gives you a modern way to apply that old technique. Standard procedure should be to take one side and pan it centrally, as advised above.