Synch Drift in Zoom H2 PCM file? - Page 3 at DVinfo.net
September 13th, 2007, 03:23 PM   #31
Quote:
Originally Posted by Bob Huenemann
Seth, since you have an H2 and an H4, can you investigate the power glitch issue and tell us if the H2 has it also?
Sorry, did I mistype in my last post? I don't have an H2. I can confirm that in the studio I can find audible beeps down in the noise floor of the H4 that correspond with the record light blinking, but in practical recording applications I've never heard them.

Quote:
Originally Posted by David Ennis
Chew on this one, though: Why does the 44.1/16 file from my $300 HiMD recorder align perfectly (within detectability in Vegas) with my camera's 48/16 tracks for a full 90 minutes?
Luck. Gotta' be luck.

Quote:
Originally Posted by Steve House
...but it's still going to be a PITA to align the heads of video and audio for every one of hundreds of shots and then stretch or squeeze the audio so that the tails line up properly as well.
If you're working on hundreds of shots, then most likely they'll be short shots in which time-base errors won't be noticeable. But it will still be a ginormous syncing job for hundreds of shots even if you have full genlock and TC lock. That's a couple of dialog editors working full time for a week or more.

Some definitions:
Offset - an interval by which two timecode values differ (as Steve said); plug in the offset and the two sources should now sync.
Drift - non-linear error; the clock doesn't maintain constant intervals (we don't see this much in the digital world).
Genlock - the capability of one clock to be slaved to another. In theory, and in large studios, all clocks slave to a house clock; in practice, you feed composite video out of the camera into the genlock input of your audio recorder.
Jam-sync, jammable code - the capability of a TC generator to slave to another TC generator.
Timebase error - two or more clocks running at slightly different rates. This is what most of this thread is concerned with (a rough numeric sketch follows below).
Timecode generator - a circuit that turns a clock signal into individual frame addresses and embeds that info in each video frame. For an analog or DAT audio recorder it lays down the timecode in a separate stripe on the tape; for a file-based digital audio recorder it typically embeds (stamps) the starting timecode in the file header.
Seth Bloombaum
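To make the timebase-error idea concrete, here is a minimal Python sketch of how a constant clock-rate error accumulates into an offset. The 50 ppm figure is a hypothetical crystal tolerance, not a spec for any recorder mentioned here; the closing comment back-solves the error rate implied by the H2 figure quoted in this thread (about 10 frames in 45 minutes).

```python
# Hypothetical illustration of timebase error: two devices both "running at 48 kHz"
# whose clocks disagree by a small, constant amount.
NOMINAL_RATE = 48_000      # Hz, what both devices claim to run at
CLOCK_ERROR_PPM = 50       # assumed crystal tolerance (hypothetical, for illustration)
FPS = 29.97                # NTSC video frame rate

def drift_after(minutes, error_ppm=CLOCK_ERROR_PPM):
    """Accumulated offset after `minutes` of recording, in seconds and video frames."""
    seconds = minutes * 60
    drift_s = seconds * error_ppm / 1_000_000
    return drift_s, drift_s * FPS

secs, frames = drift_after(45)
print(f"after 45 min at {CLOCK_ERROR_PPM} ppm: {secs:.2f} s ~= {frames:.1f} frames")
# The H2 figure quoted in this thread (10 frames in 45 min) works out to roughly
# (10 / 29.97) / 2700 = ~124 ppm of disagreement between the two clocks.
```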
September 13th, 2007, 04:13 PM   #32
Okay, I understand what you meant now.

Quote:
Originally Posted by Steve House
it's still going to be a PITA to align the heads of video and audio for every one of hundreds of shots and then stretch or squeeze the audio so that the tails line up properly as well.
My point is that this shouldn't be necessary. The clips go out of sync so slowly that any particular shot of a few seconds up to a minute or more will be in sync. It's only very long shots that would be a problem.

For example, the H2 in the original post drifts only 10 frames in 45 minutes. A ten-second shot with its head in sync will certainly stay in sync for those ten seconds.
Vito DeFilippo
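Putting a number on that ten-second example, using the drift rate reported earlier in the thread (about 10 frames over 45 minutes), a quick sketch:

```python
# Drift accumulated within a single shot, at the rate reported for the H2 in this thread.
drift_frames_per_minute = 10 / 45              # ~0.22 frames per minute
for shot_seconds in (10, 60, 300):
    drift = drift_frames_per_minute * shot_seconds / 60
    print(f"{shot_seconds:>4d} s shot: ~{drift:.2f} frames of drift")
# A 10 s shot slips ~0.04 frames, a 1 min shot ~0.22 frames, and even a 5 min
# take only ~1.1 frames -- consistent with the posts above and below.
```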
September 13th, 2007, 04:52 PM   #33
Quote:
Originally Posted by Vito DeFilippo
My point is that this shouldn't be necessary. The clips go out of sync so slowly that any particular shot of a few seconds up to a minute or more will be in sync. It's only very long shots that would be a problem. For example, the H2 in the original post drifts only 10 frames in 45 minutes. A ten-second shot with its head in sync will certainly stay in sync for those ten seconds.
Right, but... say you're shooting a scene that plays for 5 minutes, and you're doing it right and going for coverage. That means you're going to shoot the same scene in its entirety over and over again from different setups: once in a long shot, again in an MS, again over the shoulder of A looking at B, again over the shoulder of B looking at A, again with A in ECU and no B visible, again with B in ECU and no A. Now pick 2-, 3-, 4-, or 10-second clips out of all that and cut them together into a cohesive scene.

What I'm getting at is that (in my example) you have six or more 5-minute takes, each of which must be adjusted into sync throughout its entire length. THEN you can slice and dice 'em into clips of the desired length. So I'm not talking about holding sync over a series of short clips - I'm thinking about achieving and maintaining sync throughout a set of relatively long source materials so you can then pull out pieces of synced audio and video and cut them into your final edit.

Now obviously this method of covering the same scene multiple times from different setups wouldn't be used in wedding/event video or ENG. But for indie film, documentary, dramatic narrative, theatrical film, and even corporate training, it may very well be used to a greater or lesser degree. Even concert footage is often done this way - I have a Talking Heads DVD on my shelf that appears to be a single performance from start to finish but was in fact assembled from footage shot at three or four different performances.
Steve House
September 13th, 2007, 05:32 PM   #34
Quote:
Originally Posted by Steve House
I'm thinking about achieving and maintaining sync throughout a set of relatively long source materials so you can then pull out pieces of synced audio and video and cut them into your final edit.
Absolutely, I see your point. But (there's always a but), even if you grabbed a shot, say, five minutes into a long clip, it might only be 1 frame out of sync by then. So you slip it one frame, and you're good. You don't have to do any time remapping/stretching to get the tail in sync as well.

Unless I'm just too dense to understand what you're getting at? Which is not far-fetched by any means.

Thanks for the replies. An interesting discussion for me...
Vito DeFilippo
September 13th, 2007, 08:17 PM   #35
Quote:
Originally Posted by Vito DeFilippo
Even if you grabbed a shot, say, five minutes into a long clip, it might only be 1 frame out of sync by then. So you slip it one frame, and you're good. You don't have to do any time remapping/stretching to get the tail in sync as well.
You're right ... but doing it for every cut will be time-consuming and error-prone, and many productions have a LOT of cuts - look at a typical TV commercial, where there's a cut every three seconds on average.

Say the audio has drifted a little late by the point in the take you're extracting your shot from. You cut the shot, but you have to remember to leave it longer than you actually need, because you're going to slip the audio toward the head to bring it back into line. But wait - your slate is 5 minutes back at the start of the take, so what do you use as a reference to line the audio up with the picture in the shot at hand? You can't look back up the timeline: even though the slates line up at the start of the take, sync has drifted apart by the time we reach the part we're cutting, and once we cut out the piece we want we lose the slates for reference anyway. So we do it the old-fashioned way, by trial and error, sliding the audio back and forth until it appears to match picture. Okay, now we've got 'em aligned, but there are extra frames of audio before the start of picture and extra frames of picture after the end of audio, so we have to trim again to get the sync-repaired shot to its final length.

If the entire take were in proper sync from start to finish - so that aligning the slates at the head, before you started cutting, guaranteed the whole take stayed locked in sync all the way through regardless of length - your job would be much easier. You could focus on the narrative and creative/expressive decisions in the edit without constantly being distracted by fixing technical hitches along the way.
Steve House
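One way to take some of the trial and error out of the realignment Steve describes is to cross-correlate the external recorder's track against the camera's scratch audio around the cut point. Nobody in the thread proposes this; it is just a rough sketch of the idea. The function name and sample rate below are assumptions, and both tracks are assumed to be mono and already at the same sample rate.

```python
import numpy as np
from scipy.signal import correlate

RATE = 48_000  # assumed common sample rate for both tracks

def estimate_offset(camera_audio: np.ndarray, recorder_audio: np.ndarray) -> float:
    """Estimate how late (positive) or early (negative) the recorder track is,
    in seconds, relative to the camera's scratch audio."""
    cam = camera_audio - camera_audio.mean()
    rec = recorder_audio - recorder_audio.mean()
    corr = correlate(rec, cam, mode="full", method="fft")   # FFT keeps it fast on long windows
    lag = int(np.argmax(corr)) - (len(cam) - 1)             # sample offset of the best match
    return lag / RATE

# Feed it a few seconds of audio around the cut point from each source, then nudge
# the recorder track by -estimate_offset(...) seconds before trimming to length.
```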
September 13th, 2007, 08:31 PM   #36
I hear you. You are right on all counts.

But really, I don't think productions such as the ones you describe will be using any H2s or the like. You would hope they wouldn't, and thus would avoid these problems.

For an event videographer such as myself, the problem is much less severe. I take an audio file, which is often a complete ceremony with no cuts, lay it under the complete video take, also no cuts, and sync them up. Then I start cutting.

Eventually, as the cuts progress, I may notice that the audio is now a frame out. So I slip the uncut remainder a frame to correct it, and all the uncut material is back in sync. I can continue cutting until the drift takes me out another frame. In an hour's material I may have to slip the audio perhaps twice, and correct the mismatched head and tail as you mention. But doing that twice is hardly a problem.

Your description, however, reminds me that most productions are not so simple, and thus are not so forgiving of non-pro equipment.

Thanks, Steve, for your explanations. I've always enjoyed, and learned from, your posts.
Vito DeFilippo
September 14th, 2007, 04:06 AM   #37
Would stretching or shrinking an audio track that was 10 or so frames out over 45 minutes (less than half a second) really change the pitch so much that it was noticeable?

If not, wouldn't stretching or shrinking the track before any work was done on it be the easiest way to go?
Renton Maclachlan
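For what it's worth, the pitch change from a stretch that small is far below anything audible. A rough calculation follows; the ~5-cent audibility figure used in the comment is a commonly quoted rule of thumb, not a hard threshold.

```python
import math

# Pitch change if 45 minutes of audio is stretched to absorb ~10 NTSC frames of drift.
original_s = 45 * 60                 # 2700 s
drift_s = 10 / 29.97                 # ~0.33 s
ratio = (original_s + drift_s) / original_s
cents = 1200 * math.log2(ratio)      # pitch shift if stretched without pitch correction
print(f"speed change {(ratio - 1) * 100:.4f}% -> pitch shift {cents:.2f} cents")
# ~0.012% speed change and ~0.2 cents of pitch shift, versus the ~5 cents often
# cited as the smallest difference most listeners can hear.
```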
September 14th, 2007, 08:47 AM   #38
Quote:
Originally Posted by Renton Maclachlan
Would stretching or shrinking an audio track that was 10 or so frames out over 45 minutes (less than half a second) really change the pitch so much that it was noticeable?

If not, wouldn't stretching or shrinking the track before any work was done on it be the easiest way to go?
This is exactly how a lot of people I know handle it. In fact, some software will take the pitch into consideration when you "stretch" the audio on the timeline.

Wayne
Wayne Brissette
September 14th, 2007, 10:50 AM   #39
I don't understand why you'd be altering the pitch at all.

The situation as I see it from the posts above:

Import the camera video and audio into your NLE. Even if the NLE 'clock' (i.e. the computer clock) is different from the camera clock, you leave this as it is - with the camera audio in sync with the video. This is the reference file, in other words.

Now, importing the remotely recorded audio into the NLE may give you an audio file of a slightly different length, because the audio recorder's clock may differ from the NLE clock by a different factor.

Once you know the correction factor that makes this audio file 'sync' with the camera audio (and thus the video), you apply it to the whole of the remote audio file before any editing.

This also restores the pitch to match the camera's audio - so there's no 'change' in pitch from the original (assuming the NLE clock is not too far removed from the camera clock!) - and the corrected audio file will now 'fit' the whole take.
It also follows that any segment of the file will now 'fit' as well, so adding the external audio to any segment only requires an initial 'clapper' reference - no additional stretching of each individual sequence.

On the other hand, if you don't apply the correction factor to the whole audio file but instead try to stretch each sequence separately, that's going to take forever...
Roger Shore
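A minimal sketch of the 'one correction factor for the whole file' approach described above: derive a single stretch ratio from two sync points (a clap near the head and another reference near the tail), apply it to the entire recorder file, and every segment cut from it afterwards lines up from a single head sync. All the timestamps below are hypothetical; in practice the ratio would be entered as a time-stretch percentage in the NLE or audio editor.

```python
# Two sync events as they appear in the CAMERA's timeline (the reference)...
cam_head, cam_tail = 12.40, 2712.40        # seconds (hypothetical)
# ...and where the same two events fall in the EXTERNAL RECORDER's file:
rec_head, rec_tail = 3.15, 2703.48         # seconds (hypothetical)

ratio = (cam_tail - cam_head) / (rec_tail - rec_head)
print(f"stretch the recorder file by {ratio:.6f} ({(ratio - 1) * 1e6:+.0f} ppm)")

# After the whole file is stretched by `ratio`, any point in it maps onto camera time
# with nothing more than the head offset:
t_rec = 600.0                              # e.g. a moment 10 minutes into the recording
t_cam = cam_head + (t_rec - rec_head) * ratio
print(f"{t_rec:.0f} s in the recording lands at {t_cam:.2f} s of camera time")
```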