View Full Version : Audio for Dance Recital and the Alesis FW Multimix 8
Bryan Daugherty January 27th, 2009, 10:53 PM Greetings! This conversation is a splinter from my OP http://www.dvinfo.net/conf/wedding-event-videography-techniques/141857-equipment-shooting-dance-recital.html . During that conversation, Garrett Low brought to my attention the Alesis Multimix 8 FW and I wanted to get some input from the audio people here, hence this thread.
Here is the scenario. I am shooting a 2-camera HDV dance recital. The studio owner has in the past mic'ed some of the dancers for speaking or singing parts, and the dances are mixed ballet and jazz/tap. Some of the choreography includes claps, floor slaps, and props hitting the floor. I will be taking a direct feed from the board for music and will be requesting copies of all music in case I need to mix in post. Both cameras will be recording with on-camera mics, and I have 2 additional boom-mounted shotguns (for mounting on C-stands) and a Sennheiser cardioid mic in my kit already.
I like the idea of capturing multiple streams to individual tracks for mixing in post. We will be shooting HDV, editing HDV, archiving BD-R-ready HDV, and delivering DVD. I want to have sound that I can offer as a 5.1 mix (great info in this thread http://www.dvinfo.net/conf/all-things-audio/105836-how-record-5-1-atmos-cheap-2.html BTW) or stereo.
I am thinking of using this setup. (Graphic in Link)
http://i121.photobucket.com/albums/o221/DaughertyVideography/Opera_hse_audio_ver_1.jpg
The hope is to use the shotguns' highly directional pickup, aimed toward the stage, to capture the taps, claps, and other stage ambience left and right, with the Sennheiser in the center to add supplemental support sound to the center channel. I am hoping the directional nature of the mics will pick up strong tap but weak music, so I can mix in the cleaner music from the board feed. The wirelessly mic'ed talent will be fed on a separate stream to the Alesis, centered and panned L or R depending on the talent's position. For the rear channels I will mix in crowd response, and the music feed is stereo and will be left on L and R as-is. The Alesis will be hooked up to a desktop and recorded on the PC via Vegas Pro 8, each mic having its own stream, the wireless feed its own stream, and the music its own stereo stream.
Does this sound like a good setup? Has anyone here tried the Alesis Multimix 8 FW and Vegas method?
Bryan Daugherty January 31st, 2009, 01:56 PM I see there have been a few views but no bites...would really love some input here if anyone has some thoughts... Mr. Ford are you available for comment? Thanks in advance...
Dave Stern January 31st, 2009, 10:20 PM Well Bryan, I too am interested in a response and have subscribed to your thread to see what comes back. To me, the possible weak link in the firewire mixer is the clock speed and how much it will get out of sync from your camera (apart from the 5.1 approach). I do some events too and have thought about something like that, but to be honest, recording on a PC at a live event scares me a little (it should be fine, but what if that's the moment Windows decides to lock up or crash?). With those firewire mixers, if you didn't have to sync to video they would probably be fine, but I wonder how much drift you would get. With Vegas you could correct it, but maybe a PITA?
Anyway, that's why I subscribed; I would be very interested in any feedback as well. Good post.
Bryan Daugherty January 31st, 2009, 10:40 PM Thanks Dave! Hopefully we can get some answers to these questions. Very good additional questions. I am planning on getting copies of all the music too, so worst case I will have that and the on-camera mics for ambience. I am also trying to arrange to borrow my second op's Marantz and record the stereo out of the Alesis FW board to CF, but we have had sync issues in the past with the Marantz... Garrett Low brought this device to my attention and he has used it; to see his take on it, click on the link in the OP to the original thread... Thanks!
I am theorizing that by recording in Vegas with the project set to drop frame, the mix should match, because it will be recording in the same format that I am going to capture in, but I don't really know...
Steve House February 1st, 2009, 06:32 AM Can't really address your specific questions, but recording to the laptop should work fine if you prep it a bit in advance by removing all unnecessary software and disabling background utilities like virus and spyware scanners before the gig. Scan everything first to make sure it's clean, of course, but during recording especially make sure your anti-virus is disabled. Also, make sure your hard drive is freshly optimized, recycle bin and temp files dumped, and defragged. If at all possible, a laptop used for serious recording should be dedicated to audio functions only and not serve double duty for surfing the web, email, gaming, etc. There are some good articles on optimizing Windows for audio in the technical library on Rain Recording's website - Pro | Rain Recording (http://rainrecording.com/pro/)
Your timecode settings aren't going to matter much one way or the other as far as preserving sync over the duration of the shot. Camera and audio recorder have to be talking to each other or jammed to each other for TC to be of much use. With file-based recording TC does not control playback speed anyway so while it can aid in lining up audio to picture, it doesn't do anything to prevent "sync drift" over time. Timecode settings are not a "recording format" setting and won't have any effect on your ability to mix tracks. OTOH, sample rate settings ARE recording format settings and all recording devices should be set the same, 48kHz since you're recording audio for video.
Just FYI ... drop frame is only needed when elapsed time viewed in the editing timeline needs to match real time as kept by stopwatch. In broadcast, where you need program length to fit a specific real-time window, it's used for the final delivery masters and it's often used for convenience when shooting short-form projects like commercials. But for long-form and non-broadcast applications 29.97 non-drop is frequently used for both the camera & audio originals during the shooting and editing stages of production. When you get right down to it, timecode is just a series of uniform arbitrary tick-marks indexing the positions inside the file and in most applications it doesn't matter if it matches real-world time or not.
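For the curious, the drop-frame bookkeeping described above can be sketched in a few lines of Python. This is only an illustration of the standard NTSC rule (frame labels 00 and 01 are skipped at the start of each minute, except every tenth minute, so the labels track wall-clock time at 29.97 fps); the function name is made up for the example and has nothing to do with Vegas itself:

```python
def frames_to_df_timecode(frame_count):
    """Convert a raw frame count to an NTSC drop-frame timecode label."""
    fps = 30          # nominal label rate; actual picture rate is 30000/1001
    drop = 2          # frame labels skipped at the start of most minutes
    frames_per_10min = fps * 600 - drop * 9   # 17982 actual frames per 10 min
    frames_per_min = fps * 60 - drop          # 1798 actual frames per minute
    tens, rem = divmod(frame_count, frames_per_10min)
    if rem < fps * 60:
        extra = 0     # first minute of each 10-minute block drops nothing
    else:
        extra = drop * (1 + (rem - fps * 60) // frames_per_min)
    frame_count += drop * 9 * tens + extra    # re-insert the skipped labels
    ff = frame_count % fps
    ss = (frame_count // fps) % 60
    mm = (frame_count // (fps * 60)) % 60
    hh = frame_count // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"

frames_to_df_timecode(1800)   # one nominal minute of frames -> "00:01:00;02"
frames_to_df_timecode(17982)  # ten real-time minutes -> "00:10:00;00"
```

The jump from ;29 to ;02 at most minute marks is exactly the "uniform arbitrary tick-marks" point: only the labels change, never the frames themselves.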
Bryan Daugherty February 1st, 2009, 03:13 PM ...recording to the laptop should work fine if you prep it a bit in advance...Your timecode settings aren't going to matter much one way or the other as far as preserving sync over the duration of the shot. Camera and audio recorder have to be talking to each other...to prevent "sync drift" over time...all recording devices should be set the same, 48kHz since you're recording audio for video...29.97 non-drop is frequently used for both the camera & audio originals during the shooting and editing stages of production...
Steve - WOW! What a plethora of information! You have helped me a lot here (maybe I should reword my questions a little bit.) I do want some specific answers to the OP but also insight from others, such as yourself, who have a lot of experience with audio recording.
As far as recording goes, I am planning on using a desktop instead of a laptop. We have access to power, and I figured I would bring along my UPS for backup. I had not planned on stripping it down (program-wise), but maybe I will go with my older computer so I can strip out some of the old software. My old Compaq is used primarily for running my LaserJet and for graphic design work when my primary system is rendering. It hasn't been on the web for almost a year now. I will have to give this some thought; it is not as robust as my primary system (the older computer has an AMD dual core and 2 GB of RAM), but that may not matter as much as I thought.
48kHz won't matter so much because that is the upper limit of the Alesis Multimix, so I was going to be in that mode anyway. I am surprised to hear your thoughts on the TC issues. I figured it would be more important (shows what I know) that the audio and video files are interpreted at the same TC. I figured that if the files were recorded in a non-video format, they might have a greater likelihood of being out of sync when imported into the editing timeline. Of course 29.97 drop is a video format, but I must say sync is a big issue for us, as we will be running continuous 90+ minutes and recording dance, so being off a little could show a lot. Thanks, Steve!
Dave Stern February 1st, 2009, 04:04 PM ....
I am theorizing that by recording in vegas with the project set to drop frame that the mix should match because it will be recording in the same format that i am going to capture in but I don't really know...
Hi Bryan, others are much more knowledgeable on this than me, but the one thing I would add here is that when your audio is first converted to digital (i.e., sampled at the frequency and bit depth you specify), that is where any clock error, drift, etc. will be introduced (and I am talking just about clock frequency, not timecode). If you are converting analog to digital in the Alesis, your recording will be only as accurate as the clock in the Alesis (which I don't think is an expensive piece of gear, which is good, but that also goes to how much $ is in that clock chip). Once you are recording on your video cam and separately on your audio recorder, the ability of those two clocks to tick at the same frequency is what will or won't introduce drift (independent of whether a specific timecode is on them, which would help you match the two together, but that can be done manually).
Once it's in the Alesis and headed to your PC over firewire, any (relative) error is already there. My suggestion would be to use whatever simple utility you can to capture it, although Vegas, with its roots in audio, should be fine I would think. You'll just have to tell Vegas (or have it detect, not sure) the sample rate and bit depth you have coming over firewire, and it will dutifully capture it (probably as a WAV file or other uncompressed format). I'm sure others can comment or expand/correct this (and I love Vegas, just have not used it this way). The good news is that Vegas can shrink or stretch your audio back to match the project (or the video to match the audio).
Please keep us posted as to how your project goes!! I am sure others will jump in too.
Steve House February 1st, 2009, 04:12 PM ...
48kHz won't matter so much because that is the upper limit of the Alesis Multimix so I was going to be in that mode anyways. I am surprised to hear your thoughts on the TC issues. I figured it would be more important (shows what I know) that the audio and video files are interpreted at the same TC. I figured if the files recorded in a non-video format that they might have a greater likelihood of being out of sync when imported into the editing timeline. Of course the 29.97 drop is a video format, but I must say sync is a big issue for us as we will be running continuous 90+ min and recording dance, so being off a little could show a lot. Thanks, Steve!
Just assumed you would be using a laptop for portability but the same principles would apply to a desktop.
Yep, it's a common misbelief that timecode can keep files in sync. In fact, timecode is not recorded continuously within the body of the audio file at all. In a timecode workflow, time is sent from the camera to the audio recorder so that the audio recorder's timecode register slaves to it. The timecode at the moment the first audio sample is recorded is placed in the file header as a timestamp. In editing, the timecode recorded with the video is aligned to the editor's timeline. When you drop the audio file into the editor, the editor reads the timestamp in the file header and uses it to align the first audio sample to that location in the timeline so it lines up with its matching video frame. But lining up the first audio sample to the corresponding video frame is all it does. What happens later as the files play in parallel isn't influenced at all by timecode or the lack of it. If there is any mismatch between the camera's sample clock rate and the audio recorder's sample clock rate, the two will play back at different speeds and the audio will drift out of sync as the scene plays. And that's the rub - while you'll be setting the Alesis to 48kHz to match the camera's fixed video rate, that 48kHz is the nominal frequency for both. In fact, due to manufacturing tolerances, temperature differences, etc., they may not be exactly on spec. If the camera's "48,000" is actually 48,005 samples per second and the audio recorder's "48,000" is actually 47,995 samples per second, there will be a difference of 10 samples in the number of samples played with each passing second (5 too many for one, 5 too few for the other). There are ~1600 samples per frame, so after playing for 160 seconds they will be misaligned by 1 frame, after 320 seconds 2 frames, and so forth. To prevent this, the camera and recorder must share a common timebase, meaning the camera and recorder clocks must be slaved together.
Of course these numbers are just for illustration and your gear might be better than that, but the principle remains, and over a 90-minute continuous take it's almost certain to be significant. Alas, timecode doesn't influence the number of samples per second being recorded or played back, so it's irrelevant.
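Steve's worked example can be sketched in a few lines of Python. The 48,005 and 47,995 Hz figures are the hypothetical offsets from the post, not measured values, and the function name is invented for the illustration:

```python
NOMINAL_HZ = 48_000
SAMPLES_PER_FRAME = NOMINAL_HZ / 29.97  # ~1601.6 samples per NTSC frame

def drift_frames(seconds, cam_hz=48_005, rec_hz=47_995):
    """Frames of misalignment after `seconds` of parallel playback,
    given the actual (not nominal) sample clock rates of two devices."""
    sample_gap = (cam_hz - rec_hz) * seconds  # 10 samples/sec in the example
    return sample_gap / SAMPLES_PER_FRAME

drift_frames(160)   # about one frame out after ~160 seconds
drift_frames(5400)  # a 90-minute continuous take: over 30 frames adrift
```

Timecode appears nowhere in this arithmetic, which is exactly the point: only the two clock rates matter.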
With two cameras and the audio recorder, to keep everything in sync you need some way to send clock to all of them, and that's going to be a problem. To sync the two cameras to each other you need genlock inputs, and I don't think any of those that you mentioned in your other thread have that. Also, the Alesis mixer doesn't have a video sync input, and from looking at the manual online it doesn't appear to have wordclock I/O either, so that's a dead end. How about renting a Sound Devices 788T for the shoot? It's an 8-track recorder that accepts video blackburst or HD tri-level sync and slaves to it. Designate one camera as B-camera recording wild sound. Designate the other A-cam, and all sync sound scenes are referenced to it. Send video from that camera to the recorder to keep their clocks locked together.
Bryan Daugherty February 1st, 2009, 04:44 PM Steve, Dave - I cannot tell you the number of books I have read on this matter, and never have I understood it as I do (I think) after reading your statements. Thanks for the input!
Since my cams don't have TC out and the board I am looking at does not have TC in or blackburst, I am running a risk of drift here, if I get your statements, even if the sample rates are the same and all other settings are equivalent, regardless of project settings. I wish I could afford better mixing gear, but this time around the funds are going into other equipment, mainly the main cam upgrade.
Steve House February 1st, 2009, 05:20 PM Steve, Dave - I cannot tell you the number of books I have read on this matter, and never have I understood it as I do (I think) after reading your statements. Thanks for the input!
Since my cams don't have TC out and the board I am looking at does not have TC in or blackburst, I am running a risk of drift here, if I get your statements, even if the sample rates are the same and all other settings are equivalent, regardless of project settings. I wish I could afford better mixing gear, but this time around the funds are going into other equipment, mainly the main cam upgrade.
You may set the sample rates to be the same, but the issue is... are they really and truly the same? With consumer gear, probably not. With prosumer gear you'll get closer. With professional gear you'll get closer still. For example, Nagra claims their new Nagra VI recorder's sample clock is accurate to 1 frame in 15 years. But considering the price, we're talking about a whole different ballgame and ballpark, and most cameras under a hundred kilobucks don't even come close to that accuracy anyway, so you just can't win. If one were using fully professional cameras with genlock and a recorder with wordclock it would be a piece of cake - a Lockit box for each and you're in business - but that's not in the cards here. Seriously, consider renting the SD recorder I mentioned.
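To put those accuracy claims in the parts-per-million terms that clock specs usually use, a quick conversion helps. The Nagra figure (1 frame in 15 years) and the 10-samples-per-second scenario come from the posts above; the helper name is invented for illustration:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def drift_ppm(seconds_off, over_seconds):
    """Express a clock error as parts per million of elapsed time."""
    return seconds_off / over_seconds * 1e6

# Nagra VI claim: one NTSC frame (~1/29.97 s) of error in 15 years.
nagra = drift_ppm(1 / 29.97, 15 * SECONDS_PER_YEAR)  # tiny fraction of a ppm

# The earlier hypothetical: 10 samples/sec of error at 48 kHz.
cheap = drift_ppm(10 / 48_000, 1.0)  # roughly 200 ppm
```

A few hundred ppm is plausible for an ordinary crystal oscillator, which is why the drift over a 90-minute take is measured in whole frames rather than samples.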
Ron Evans February 1st, 2009, 07:16 PM I find that for my multicam projects I correct all audio sync issues in Vegas. So video is edited in Edius and audio is fine-tuned in Vegas. Tracks can be stretched/shifted to bring these small errors back into sync. I use two FX1's, an SR11, and audio from a couple of Zoom H4 or Zoom H2 recorders. I designate one of the cameras as master and resync the other audio, if I have to use it, to get my best mix. This would also be true of audio from CD, which would first be resampled in Sound Forge to 48K and then brought into Vegas for sync. Especially with consumer DV, the audio can be out by 1/3 of a frame between cameras and can drift within this spec as long as it isn't more than this 1/3 frame out. I found that with DV it was always necessary to resync in this way, and though HDV and AVCHD seem to be better, they are still not that accurate!!! Recorders like the Edirol F-1 Video Field Recorder (http://www.edirol.com/index.php/en/products-mainmenu-421/field-recording-mainmenu-390/341-f-1-video-field-recorder) that record from iLink and have two extra audio inputs look attractive too, but I find the Vegas approach is just fine when dealing with consumer cameras like I have.
Ron Evans
Bryan Daugherty February 1st, 2009, 08:55 PM ... are they really and truly the same? With consumer gear, probably not. With prosumer gear you'll get closer. With professional gear you'll get even closer...Seriously, consider renting the SD recorder I mentioned.
Somehow, in my first read of your post I missed the reference to the 788T rental. I will look into it. For the most part, with all the gear I looked into renting before, the rental rates became prohibitive due to the need to rent for over a week. I am not looking at consumer gear (unless maybe the Alesis skirts the line) and will be using prosumer (at least by B&H standards) gear. The 788T looks to be an awesome piece of hardware, and I will need to give serious consideration to it as a future investment if rental is not possible. In reality, if I have to, I guess I could make any sync adjustments on a per-performance basis (more time in the editing bay). While each act (2 per night) lasts about 90 minutes, each performance is only one song, 1.5 - 8 minutes, with a fast scene break between each set.
Ron - I tend to edit all my video in Vegas and tweak audio in Audition 1.5. I keep telling myself I need to spend more time getting to know the audio capabilities of Vegas, but I just have more time in with Audition as an audio utility. I have never had issues syncing between my cameras (yet) and guess I have just been fortunate so far. I love the concept of the Edirol F-1, but the cost and proprietary drives have it on my someday list instead of my maybe list - it is a cool recorder for sure.
Thanks for your tips, guys. I am learning lots, and it is great to hear how other people are doing it.
Ron Evans February 1st, 2009, 10:02 PM Unless you are using genlocked pro cameras, the audio will be out of sync. To check this, line up two tracks of video in Vegas, expand the timeline scale to 1 frame, and look at the audio waveforms at the beginning and at the end of, say, 30 minutes. The first thing you may notice is that either the video or the audio is out of sync, and it may drift out even further over the 30 minutes. With frame sync on, it's not possible to move the tracks to get the audio to line up, because they snap to video frame boundaries: the tracks will only lock on a video frame pulse, and consumer and prosumer cameras normally do not have audio clocks locked to video. That means each camera has its audio lined up relative to the video clock in a different position, which is visible if you expand the timeline to show a frame. Turning frame sync off allows the audio to be dragged so the tracks can be lined up independent of the video, and by expanding or contracting one of the tracks they can be made to sync up precisely, independent of the video track. The NLEs effectively create the equivalent of pro video genlock in syncing cameras up to a frame, but they do not do that for the audio. I am not talking about big differences - within a frame - but there is potentially a reverb or echo effect if left uncorrected. Vegas actually started life as a multitrack audio editor, then added video; it was the companion to the Sound Forge stereo mastering product.
Ron Evans
Bryan Daugherty February 1st, 2009, 10:23 PM ...Vegas actually started life as a multitrack audio editor then added video and was the companion to Sound Forge stereo mastering product.
Yeah, I had heard that when I was first researching whether to go the Vegas or Adobe route with my NLE. I landed on Vegas and love it. I had 2 years' experience with Audition already at that time, helping one of my friends who is an indie singer/songwriter master some demos and self-published CDs, so I have a boatload of custom presets I created in Audition, which is why I lean on it so heavily. I am sure if I spent some time with Vegas I could probably recreate most of those presets, but no time... Thanks for all the shared knowledge on frame syncing/snapping - that is good stuff. I am so glad we have this forum to share knowledge and learn from each other.
Dave Stern February 2nd, 2009, 01:22 AM Bryan, one additional thought: the Tascam HD-P2 is a stereo recorder that accepts a video signal in and syncs its clock rate to that signal. I have used it to record 2+ hours with no drift between the audio and video... and with the two additional audio tracks on your camera, that gives 4. I do a mix of my live mics into the Tascam, take a feed from the board into one of the camera tracks, and use an additional mic for ambience/audience, and that has served me well. You'll get a stereo mix, not the 5.1, and you do have to do your stereo mix at the venue (although if you record mid/side and don't decode it at the venue (e.g. record the two M/S mics just as they are), you could control your stereo spread when you are editing)... anyway, food for thought.
Edit: by the way, my name probably doesn't belong in the same paragraph as Steve's... I am a rank amateur just having fun learning and doing some recording, and Steve House is a professional at this.
Dave Stern February 2nd, 2009, 06:40 AM ....Turning frame sync off allows the audio to be dragged so that they can be line up independent of the video and by expanding or contracting one of the tracks can be made to sync up precisely independent of the video track. ...
Ron - not to take this thread OT, but a quick question: how do you turn off the snapping of the audio to the video frame? I did have that problem and just couldn't find it...
Ron Evans February 2nd, 2009, 08:10 AM Dave, it's under Options > Quantize to Frames; snapping is also under the Options menu. The process I use is to export audio from the synced-up video in Edius into Vegas. Set the main camera audio as the master, expand the timeline so that I can see the waveform in detail, line up the waveforms of the remaining audio tracks at the beginning (you will have to shorten the beginnings to give room to move tracks around, then expand back when set), then scroll to the end of the track and expand or contract the track to line up the waveforms. With quantize to frames off, the audio tracks will move smoothly with the mouse in very fine increments. Then do an audio playback check to see if it sounds good, using the mixer to set levels, etc. With my older cheap DV cameras they would not hold sync all the way through, so I usually had to adjust a few times (I am really picky!!!). HDV and AVCHD are better at maintaining their relative position. In our case the Zoom H4 or H2 is set at stage level, the FX1's use shotgun mics, and the SR11 shoots in 5.1. The main centre FX1 is used as reference audio (the SR11 is in the same position); I rarely use the second FX1 audio, and the Zoom and SR11 audio provide stereo ambience in the mix.
Ron Evans
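The expand-or-contract step Ron describes boils down to a simple ratio: align the tracks at the start, measure how long each runs to the same matching point near the end, and stretch the slave track by master over slave. A hypothetical sketch (the function name and the 0.2-second figure are invented for illustration):

```python
def stretch_ratio(master_duration, slave_duration):
    """Time-stretch factor to apply to the slave track so that, after
    aligning the starts, its end point lands on the master's."""
    return master_duration / slave_duration

# e.g. a 30-minute take whose separately recorded audio runs 0.2 s long:
ratio = stretch_ratio(1800.0, 1800.2)  # slightly below 1: shrink the slave
```

Measuring the durations between two sharp transients (a tap, a clap) at each end of the take, as Ron does by eye on the waveforms, gives the two numbers directly.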
Bill Warshaw February 2nd, 2009, 05:30 PM Hi,
I use the MOTU 8pre Firewire audio interface with my primary camera (XH-G1) and record to a small, portable, USB HDD hooked up to my Laptop using Vegas 8 Pro. The G1 has timecode out, and the MOTU happily syncs to SMPTE timecode in one of the 8 input channels (a nice feature of the MOTU's). There is no significant drift over 1hr long recordings. My secondary camera (XH-A1) is simply genlocked with the G1. This is a more expensive setup (G1 & 8pre) but works really well. Recording using my laptop (plugged into AC) has never been an issue to date.
One trick (safety net) that I use is to record a submix of the important elements out to the primary & secondary cameras from the 8pre. I also tell Vegas to record this submix to the HDD. This makes lining up the clips in Vegas very easy, and once aligned I just delete/mute the camera audio tracks and the reference track from the HDD.
On one occasion, I made a mistake in setup and the MOTU wasn't looking for sync from the audio input - rather, it was running off its internal clock. Having the submixes on both camera tapes and the HDD really paid off: I aligned the camera clips as usual and then stretched all the MOTU-recorded audio tracks, using the reference submix from the MOTU as a guide, so it matched the camera audio. They should look exactly the same when sync'd. Took a couple of minutes and sounded fine. I guess I spent the extra $'s for the G1 for nothing <g>.
I'm not sure if the FW unit you're looking at has the capability to record a mix you're sending out to the camera back to the computer along with the individual tracks, but if so it will really help to get things in sync.
/BILLW
Steve House February 3rd, 2009, 04:26 AM Hi,
I use the MOTU 8pre Firewire audio interface with my primary camera (XH-G1) and record to a small, portable, USB HDD hooked up to my Laptop using Vegas 8 Pro. The G1 has timecode out, and the MOTU happily syncs to SMPTE timecode in one of the 8 input channels (a nice feature of the MOTU's). There is no significant drift over 1hr long recordings. My secondary camera (XH-A1) is simply genlocked with the G1. This is a more expensive setup (G1 & 8pre) but works really well. Recording using my laptop (plugged into AC) has never been an issue to date.
One trick (safety net) that I use is to record a submix of the important elements out to the primary & secondary cameras from the 8pre. I also tell Vegas to record this submix to the HDD. This makes lining up the clips in Vegas very easy, and once aligned I just delete/mute the camera audio tracks and the reference track from the HDD.
On one occasion, I made a mistake in setup and the MOTU wasn't looking for sync from the audio input - rather, it was running off its internal clock. Having the submixes on both camera tapes and the HDD really paid off: I aligned the camera clips as usual and then stretched all the MOTU-recorded audio tracks, using the reference submix from the MOTU as a guide, so it matched the camera audio. They should look exactly the same when sync'd. Took a couple of minutes and sounded fine. I guess I spent the extra $'s for the G1 for nothing <g>.
I'm not sure if the FW unit you're looking at has the capability to record a mix you're sending out to the camera back to the computer along with the individual tracks, but if so it will really help to get things in sync.
/BILLW
How are you genlocking the XH-A1 to the G1? I don't recall the A1 having an input for external sync.
Bill Warshaw February 3rd, 2009, 09:26 AM Sorry, other way around. BNC video out from the A1 to the BNC Genlock In on the G1.
Jay Massengill February 3rd, 2009, 03:29 PM Going back to your original post regarding the mics you hope to use and their setup locations: I don't think the side-positioned shotguns will give you the quality of audio that you're hoping for. In addition to their likely pickup of noticeable off-axis coloration, the timing differential will likely be very noticeable between these two mics and difficult to adjust, since the on-stage sources will be constantly moving.
I think your track isolation list is good, but I would substitute a stereo pair of cardioids or hypercardioids in X/Y configuration rather than the side positioned shotguns.
I would also think about substituting a boundary layer mic on the stage lip for your central Sennheiser dynamic, or mount the Senn dynamic as close to the stage as you can. (You may need to shift your ambient stereo pair forward in time to match the extremely short delay of this closely placed source when you edit.)
With my setup, the central mic gives the isolation of the stage sounds due to its closer proximity while the stereo pair adds ambience and some stereo imaging to augment the direct music and wireless feeds.
For all your acoustic sources, remember that distance equals time. Roughly 3 NTSC video frames per 100 feet if you're recording at 30 frames per second. (2.4 frames per 100 feet at 24fps). Even at very short distances, with music and tap, any delay between sources at equal volumes can be very noticeable. This delay will also come into play when using your on-camera audio as a line-up guide, even though you hope not to use the on-camera audio as a source.
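Jay's rule of thumb falls straight out of the speed of sound. A rough sketch, assuming sound travels about 1,125 feet per second at room temperature (the exact value varies with temperature, and the function name is made up for the example):

```python
SPEED_OF_SOUND_FT_S = 1125.0  # approximate, at room temperature

def delay_frames(distance_ft, fps=29.97):
    """Frames of acoustic delay for a mic `distance_ft` from the source."""
    return distance_ft / SPEED_OF_SOUND_FT_S * fps

delay_frames(100)        # roughly 2.7 frames at NTSC rates
delay_frames(100, 24.0)  # a bit over 2 frames at 24 fps
delay_frames(30)         # a mic 30 ft back is already ~0.8 frames late
```

The same arithmetic also gives the time-shift to apply to a distant ambient pair relative to a close mic: only the difference in distances matters.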
Bryan Daugherty February 3rd, 2009, 07:44 PM Jay, thank you for your input. I think either I have not been clear with my plans for using the mics or I am not understanding your post. Let me preface by saying that for this particular job I have to use the mics I have and do not have the budget to add additional mics this year. The main question would be deployment. That said, I am not planning to just drop the mics into the mix based on their location, but rather am trying to capture sound based on the environment that can be mixed to match the video. I am not hugely proficient in sound, so my terminology may be off a little, and I ask your patience if I don't say things correctly.
On the shotgun mics, I want to deploy them so that they record the sounds close to them, such as the tap shoes and other stage noise, but away from the speakers so they pick up less of the room mix. I want each one to pick up the sounds as heard from that part of the room, so that when I cut to the CU cam (cam 2 on the right of the stage) and we see the dancers farther from the camera, toward stage left, I would bring up the tap on the right shotgun mic so that the sound seems farther away. It would not be on the right speaker channel but distributed across all front channels; as that op zooms in to get a tighter shot, I would bring in more of the left shotgun sound that is closer to the dancers' actual position, etc. In other words, I am not wanting to use the mics to capture speaker channels but rather points of view. When I cut back to the wide shot I would pan the best sound to the dancers' position.
I get what you are saying about distance and sound, and I appreciate that I will need to sync my audio to video to account for when the mic hears vs. when the camera sees; my worry is not so much sync but sync drift. If I sync the sound at the beginning of the clip, will it still be in sync at the end of the clip? The consensus seems to be no (or maybe), and that I will have to manually adjust via stretch or shrink.
The production makes occasional use of the orchestra pit, so I won't have any mics at the stage edge but rather at the edge of the pit. (The last thing I want to be responsible for is a dancer tripping on a cable I ran.)
Thanks for your input, and I look forward to learning more from you fine folks.
Jay Massengill February 3rd, 2009, 11:35 PM My prediction is the shotgun mics won't be able to pick up the isolated sound that you're imagining they can.
They will have leakage from stage sounds on the opposite side, and from the PA and the crowd. What makes this even worse is that not only will the leakage generally not sound good, it will also have a varying time relationship as the dancers move. So it won't be just sync and sync drift; it will be the constantly shifting time relationship among the three mics as people move around the stage that you'll have to adjust as your shot changes.
But you'll be the only one to judge whether it's working satisfactorily. I'd suggest that you not only do a full recording test during tech week but that you also actually edit a typical dance number and see if it's possible to achieve.
Unless the performance space has outstanding acoustics and you're using the highest quality shotgun mics, I think it will be very difficult to record and edit the way you've stated, mainly due to the way shotgun mics behave with reverberant indoor sources.
What exact model of shotgun and Sennheiser dynamic do you have?
Bryan Daugherty February 4th, 2009, 12:10 AM Jay, thanks for the follow-up. The acoustics are outstanding in this theatre; I was quite surprised by the clarity my on-camera shotgun recorded last year during the test run. Since my shotguns will be in use on camera, my second op is lending his mics to the project. We are meeting (weather permitting) later this week and I will try to get the model numbers from him, but he recently upgraded them and my memory is that they were extremely high grade and quite expensive. He is the media director for a non-profit, and they take their equipment very seriously. I am limited in equipment placement for this production and am making do, so is there a better way to do it with this same equipment?
EDIT:
...the constantly shifting time relationship between the 3 mics as the people move on the stage that you'll have to adjust as your shot changes....
Are you referring to doppler shift or something else here?
Steve House February 4th, 2009, 04:41 AM ...Are you referring to doppler shift or something else here?
Sound travels at about 1 foot per millisecond. Let's say a dancer is on the right side of the stage, so she is closer to the right-hand mic than to the left-hand one by, let's say, 30 feet. That means the clack of a tap hitting the stage will arrive at the left mic 30 ms later than it arrives at the right, and 30 ms is roughly one full video frame at 30 fps.
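Steve's "1 foot per millisecond" is the usual rule of thumb (sound travels about 1126 ft/s at room temperature). A quick Python sketch of the same arithmetic, converting the path difference into milliseconds and video frames (function names are illustrative):

```python
SPEED_OF_SOUND_FT_PER_S = 1126.0  # ~1 ft/ms at room temperature

def arrival_delay_ms(extra_distance_ft):
    """Extra travel time for sound reaching the farther mic."""
    return extra_distance_ft / SPEED_OF_SOUND_FT_PER_S * 1000.0

def delay_in_frames(delay_ms, fps=29.97):
    """Convert an audio delay into video frames at the given rate."""
    return delay_ms / (1000.0 / fps)

d = arrival_delay_ms(30)  # ~26.6 ms for a 30 ft path difference
print(f"{d:.1f} ms ~= {delay_in_frames(d):.2f} frames")
```

Using the exact speed of sound, 30 feet works out to about 26.6 ms, or roughly 0.8 of a frame at 29.97 fps; the 1 ft/ms rule rounds that to a full frame, which is close enough for editing purposes.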
Jay Massengill February 4th, 2009, 08:49 AM That's good news about the performance space acoustics and the potential quality of the mics. However, for safety, I would use a stereo pair for your centrally located mic, even if you had to beg, borrow, or steal either a matching Senn dynamic or a low-cost matched pair of something like AT3031 cardioids.
You could then use this stereo pair in three different ways:
1. As two slightly directional mics to reinforce your stated editing goal (but without the large time shift and odd-sounding leakage of the shotguns);
2. Mixed to mono if needed;
3. Or most importantly as a normal stereo backup if it becomes obvious that editing with your shotguns is too time consuming or has too much leakage.
I think you have enough channels to do this: 2 for prepared music, 2 for centrally located stereo pair, 2 for your shotguns and 2 for isolated wireless feeds from the performers.
By using a stereo pair in the central position, you gain much more flexibility and safety without dramatically changing away from your original shotgun idea when you record.
After the fact you'll have the full 8 tracks and the time to determine what can be edited to best effect.
To add one thing to Steve's example: it's the leakage from the dancer on the right of the stage that will arrive at the left shotgun 30 ms late. I think you were hoping for enough isolation from the shotguns that the far side of the stage wouldn't be a factor, but I think that's impossible in this case.