September 13th, 2007, 03:23 PM | #31
Inner Circle
Join Date: Sep 2003
Location: Portland, Oregon
Posts: 3,420
Some definitions:

Offset - an interval by which two timecode values differ (like Steve said); you plug in the offset and the two sources should now sync.

Drift - non-linear errors; the clock doesn't maintain constant intervals (we don't see this much in the digital world).

Genlock - the capability of a clock to be slaved to another. In theory, and in large studios, all clocks would slave to a house clock. In practice, feed composite video out of the camera to the genlock input of your audio recorder.

Jam-sync, jammable code - the capability of a TC generator to slave to another TC generator.

Timebase error - two or more clocks running at slightly different rates. This is what most of this thread is concerned with.

Timecode generator - a circuit that turns the clock into individual frame addresses and embeds that info in each video frame. For an analog or DAT audio recorder, it lays down timecode in a separate stripe on the tape. For a digital audio recorder, it typically embeds (stamps) the starting timecode in the file header.
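If it helps to see the offset idea concretely, here's a minimal sketch, assuming non-drop-frame code and a whole-number frame rate (the 25 fps and the example timecode values are just illustrative, not tied to any particular camera or recorder):

```python
# Convert non-drop-frame timecode (HH:MM:SS:FF) to an absolute frame count,
# then express the difference between two sources as a constant offset.
FPS = 25  # assumed frame rate; use your project's rate

def tc_to_frames(tc: str, fps: int = FPS) -> int:
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def frames_to_tc(frames: int, fps: int = FPS) -> str:
    ff = frames % fps
    ss = frames // fps
    return f"{ss // 3600:02d}:{ss % 3600 // 60:02d}:{ss % 60:02d}:{ff:02d}"

# Offset: the (ideally constant) interval by which two timecode values differ.
camera_tc = "01:00:10:05"
recorder_tc = "01:00:12:17"
offset = tc_to_frames(recorder_tc) - tc_to_frames(camera_tc)
print(f"offset = {offset} frames ({frames_to_tc(abs(offset))})")
# Slide one source by this offset and the two should line up -- provided neither
# clock drifts or runs at a slightly different rate (timebase error).
```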
September 13th, 2007, 04:13 PM | #32
Trustee
Join Date: Mar 2006
Location: Montreal, Quebec
Posts: 1,585
Okay, I understand what you meant now.
For example, the H2 in the original post goes out only 10 frames in 45 minutes. A ten-second shot with the head in sync will certainly be in sync for that ten seconds.
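Just to put numbers on that (a quick sketch; the 29.97 fps is an assumption, the 10-frames-in-45-minutes figure is from the original post):

```python
# Rough drift arithmetic for the H2 example: 10 frames of drift in 45 minutes.
fps = 29.97                   # assumed NTSC frame rate
drift_frames = 10
interval_s = 45 * 60

drift_per_second = drift_frames / interval_s         # ~0.0037 frames/s
shot_length_s = 10
drift_over_shot = drift_per_second * shot_length_s   # ~0.037 frames

print(f"drift per second: {drift_per_second:.4f} frames")
print(f"drift over a {shot_length_s}s shot: {drift_over_shot:.3f} frames")
# About 1/27 of a frame over ten seconds -- far too small to see or hear.
```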
September 13th, 2007, 04:52 PM | #33
Inner Circle
Join Date: Mar 2005
Location: Hamilton, Ontario, Canada
Posts: 5,742
What I'm getting at is that (in my example) you have 6 or more 5-minute scenes, each of which must be adjusted into sync throughout its entire length. THEN you can slice and dice 'em into clips of the desired length. So I'm not talking about holding sync over a series of short clips - I'm thinking about achieving and maintaining sync throughout a set of relatively long source materials, so you can then pull out pieces of synced audio and video and cut them into your final edit.

Now obviously this method of multiple coverage of the same scene from different setups wouldn't be used in wedding/event video or ENG. But for indie film, documentary, dramatic narrative, theatrical film, and even corporate training etc., it may very well be used to a greater or lesser degree. Even concert footage is often done this way - I have a Talking Heads DVD on my shelf that appears to be a single performance start to finish but in fact was assembled from footage obtained over three or four different performances.
__________________
Good news, Cousins! This week's chocolate ration is 15 grams!
September 13th, 2007, 05:32 PM | #34
Trustee
Join Date: Mar 2006
Location: Montreal, Quebec
Posts: 1,585
Unless I'm just too dense to understand what you're getting at? Which is not far-fetched by any means. Thanks for the replies - an interesting discussion for me...
September 13th, 2007, 08:17 PM | #35
Inner Circle
Join Date: Mar 2005
Location: Hamilton, Ontario, Canada
Posts: 5,742
__________________
Good news, Cousins! This week's chocolate ration is 15 grams!
September 13th, 2007, 08:31 PM | #36 |
Trustee
Join Date: Mar 2006
Location: Montreal, Quebec
Posts: 1,585
I hear you. You are right on all counts.
But really, I don't think productions such as the ones you describe will be using any H2s or the like. You would hope they wouldn't, and thus would avoid these problems.

For an event videographer such as myself, the problem is much less severe. I take an audio file, which is often a complete ceremony with no cuts, lay it under the complete video take, also with no cuts, and sync them up. Then I start cutting. Eventually, as the cuts progress, I may notice that the audio is a frame out. So I slip it (the uncut remainder) a frame to correct, and all the uncut material is back in sync. I can continue to cut until the drift takes me out another frame. In an hour's material, I may have to slip the audio perhaps twice, and correct the mismatched head and tail as you mention. But slipping twice is hardly a problem.

Your description, however, reminds me that most productions are not so simple, and thus are not so forgiving of non-pro equipment.

Thanks, Steve, for your explanations. I've always enjoyed, and learned from, your posts.
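For what it's worth, here's a rough sketch of the arithmetic behind that slip-as-you-go approach (the two-slips-per-hour figure comes from the post above, the H2 number from earlier in the thread - both just illustrative):

```python
# How often do you need to slip the uncut remainder by one frame,
# given an observed drift rate? Illustrative numbers only.
slips_per_hour = 2.0                        # roughly what the poster sees per hour
minutes_between_slips = 60 / slips_per_hour
print(f"One-frame slip roughly every {minutes_between_slips:.0f} minutes.")

# With the H2's reported 10 frames per 45 minutes you'd be slipping
# every 4-5 minutes instead, which is where a one-time stretch of the
# whole file starts to look like the better workflow.
h2_minutes_between_slips = 45 / 10
print(f"H2 example: a slip every {h2_minutes_between_slips:.1f} minutes.")
```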
September 14th, 2007, 04:06 AM | #37 |
Trustee
Join Date: Jul 2007
Location: New Zealand
Posts: 1,180
Would stretching or shrinking an audio track that was 10 or so frames out over 45 minutes (less than half a second) really change the pitch so much that it was noticeable?
If not, wouldn't stretching or shrinking the track before any work was done on it be the easiest way to go?
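To put a number on it - a rough calculation, assuming 29.97 fps and a plain speed-change (resample-style) stretch:

```python
import math

# How much would correcting 10 frames of drift over 45 minutes shift the pitch,
# if the whole track were simply sped up or slowed down?
fps = 29.97                    # assumed frame rate
drift_s = 10 / fps             # ~0.33 s of accumulated drift
duration_s = 45 * 60           # 45-minute take

speed_ratio = (duration_s + drift_s) / duration_s   # ~1.000124
cents = 1200 * math.log2(speed_ratio)               # pitch shift in cents

print(f"speed ratio: {speed_ratio:.6f}")
print(f"pitch shift: {cents:.2f} cents")
# Roughly 0.2 cents -- a semitone is 100 cents, so this is far below
# anything a listener could notice.
```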
September 14th, 2007, 08:47 AM | #38
Major Player
Join Date: Aug 2007
Location: Austin, TX
Posts: 383
Wayne
September 14th, 2007, 10:50 AM | #39 |
Regular Crew
Join Date: Apr 2007
Location: UK
Posts: 113
I don't understand why you'd want to alter the pitch at all.
The situation as I see it from the posts above: import the camera video and audio to your NLE. Even if the NLE 'clock' (i.e. the computer clock) is different from the camera clock, you leave this as it is - with the camera audio in sync with the video. That's the reference file, in other words.

Importing the remotely recorded audio to the NLE may result in a different-length audio file, because the audio recorder clock may differ from the NLE clock by a different factor. Once you know the correction factor that makes this audio file sync with the camera audio (and thus the video), you apply it to the whole of the remote audio file, before any editing. This restores the pitch to match the camera's audio anyway - so there's no change in pitch from the original (assuming the NLE clock is not too far removed from the camera clock!) - plus the corrected audio file will now fit the whole take.

It also follows that any segment of the file will now fit as well - so adding the external audio to any segment will only require an initial 'clapper' reference, with no additional stretching of each individual sequence. On the other hand, if you don't apply the correction factor to the whole audio file, but try to stretch each sequence separately, that's going to take forever...
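A minimal sketch of working out that correction factor, assuming you can find the same two events (a clapper near the head and another sharp sound near the tail) in both the camera audio and the remote recording - the times below are made-up illustrative values:

```python
# Derive the speed-correction factor for the remote recorder's audio from two
# events identifiable in both the camera track and the remote track.
# Times are seconds from the start of each file -- illustrative values only.
cam_head, cam_tail = 12.40, 2712.40   # the two events as heard on the camera audio
rec_head, rec_tail = 3.15, 2703.48    # the same two events on the remote recorder

cam_span = cam_tail - cam_head
rec_span = rec_tail - rec_head
factor = cam_span / rec_span          # scale the remote file's length by this
print(f"camera span: {cam_span:.3f} s   recorder span: {rec_span:.3f} s")
print(f"scale the remote audio to {factor * 100:.4f}% of its length")
# Apply that one stretch (a resample-style speed change, so pitch comes back to
# match the camera) to the whole remote file before cutting; every segment then
# drops in with only a head 'clapper' reference.
```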