View Full Version : 24p Capture comes to FCP 6.02


Nathan Pawluck
November 14th, 2007, 05:55 PM
Here are the new capture specs for FCP 6.0.2:

Sony HVR-V1 HDV Tape-Based Camcorder Support
Final Cut Pro 6.0.2 is compatible with the Sony HVR-V1 HDV camcorder, which is capable of recording 1080p24, 1080p25, and 1080p30 footage. You can capture natively or capture to either the Apple Intermediate Codec or the Apple ProRes 422 codec. You can also output back to the Sony HVR-V1 camcorder using the Print to Video command.

To natively capture 1080p25 or 1080p30 footage, you should use the HDV 1080i50 and HDV 1080i60 Easy Setups, respectively. Your footage will retain its progressive scanning even though it will be stored in an interlaced format. You can capture 1080p24 footage using the 1080i60 Easy Setup, but your captured footage will retain 3:2 pull-down in this case.

For transcoded capture of 1080p24 footage, 3:2 pull-down is removed during transcoding, resulting in footage stored in the 1080p24 Apple Intermediate Codec format or the 1080p24 Apple ProRes 422 codec format. You can also capture 1080p25 and 1080p30 footage to either format, although Easy Setups are not included for these formats. In these cases, your captured footage is stored in the 1080p25 or 1080p30 Apple Intermediate Codec or Apple ProRes 422 codec format.

Here are the recommended workflows for capturing from and outputting to the Sony HVR-V1 camcorder with Final Cut Pro 6.0.2:

24p/60i on tape: Capture to the 24p Apple Intermediate Codec or Apple ProRes 422 codec, then output to the HVR-V1 camcorder in 24p/60i mode.
25p/50i on tape: Capture to the 25p Apple Intermediate Codec or Apple ProRes 422 codec, then output to the HVR-V1 camcorder in 25p/50i mode.
30p/60i on tape: Capture to the 30p Apple Intermediate Codec or Apple ProRes 422 codec, then output to the HVR-V1 camcorder in 30p/60i mode.
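
A quick aside on what that "3:2 pull-down" means in practice: each group of four 24p frames gets spread across five 60i frame slots (ten fields) by repeating fields in a 2:3 pattern, and removal just throws the repeats away. Here's a toy Python sketch of the cadence -- the arithmetic only, not Apple's actual removal code, and the exact field pattern can vary with the camera's 24p mode:

def add_23_pulldown(frames):
    """Spread progressive frames across interlaced fields, 2:3 cadence:
    the 1st frame fills 2 fields, the 2nd fills 3, the 3rd 2, the 4th 3."""
    fields = []
    for i, frame in enumerate(frames):
        fields += [frame] * (2 if i % 2 == 0 else 3)
    # pair the fields up into interlaced frame slots (2 fields per slot)
    return [(fields[i], fields[i + 1]) for i in range(0, len(fields), 2)]

def remove_23_pulldown(slots):
    """Recover the progressive frames by keeping the first copy of each."""
    seen, out = set(), []
    for pair in slots:
        for frame in pair:
            if frame not in seen:
                seen.add(frame)
                out.append(frame)
    return out

slots = add_23_pulldown(["A", "B", "C", "D"])
print(slots)                      # [('A','A'), ('B','B'), ('B','C'), ('C','D'), ('D','D')]
print(remove_23_pulldown(slots))  # ['A', 'B', 'C', 'D'] -- 4 frames back out of 5 slots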

Nathan Pawluck
November 14th, 2007, 07:15 PM
Here is the link to all the specs for FCP 6.0.2: http://www.apple.com/support/releasenotes/en/Final_Cut_Pro_6.0_rn/

Matt Devino
November 15th, 2007, 12:55 PM
Has anyone tried to capture to ProRes while removing pulldown? This would save me a ton of time, as I'm currently removing pulldown and transcoding to ProRes through Compressor. Is there an Easy Setup now that is specifically for the V1? Thanks,
Matt

Steve Mullen
November 15th, 2007, 03:28 PM
Here is the link to all the specs for FCP 6.0.2: http://www.apple.com/support/releasenotes/en/Final_Cut_Pro_6.0_rn/

Remember you must update to 10.4.11 BEFORE you try to download this update. I'm waiting a few days because the update to QT 7.2.3 broke JES.

"24p/60i on tape: Capture to the 24p Apple Intermediate Codec or Apple ProRes 422 codec, then output to the HVR-V1 camcorder in 24p/60i mode."

Is Apple saying there is now "24p/60i" Print-to-Tape that adds 2:3 pulldown?

Benjamin Eckstein
November 15th, 2007, 03:50 PM
Will you ever be able to capture from the V1U with the 1080p24HDV preset like you can with Canon 24F footage?

Steve Mullen
November 15th, 2007, 07:23 PM
Will you ever be able to capture from the V1U with the 1080p24HDV preset like you can with Canon 24F footage?

That's the point of the upgrade.

Nathan Pawluck
November 15th, 2007, 08:43 PM
I love people who comment without reading.

Matt Devino
November 15th, 2007, 10:00 PM
Hey Everyone,
So I got home and updated my home system to 6.0.2 on Leopard, ready to test out the new pulldown removal on capture.

The short answer is it works.

The long answer is it works, but with some annoying quirks Apple seems to have left out of its release info.


Here it goes:

I hooked up the V1U via FireWire and loaded a tape that was shot at 24pA. I first tried the Easy Setup HDV-1080P24, then HDV-1080P24 FireWire Basic. Both worked, and the pulldown was removed correctly. BUT when I jogged through my clips frame by frame there was a duplicate frame every 5 frames. I right-clicked on my clips and they were still at 29.97! WTF!

Now, I checked the Easy Setup and my sequence setting was for HDV 23.98, so I took one of these clips and dropped it into a new sequence. FCP prompted me to change my sequence settings to match the clip, I said NO, and to my delight, when I jogged through the clip in the timeline the extra frames were gone. It's real-time, needing no renders. So although it captures at 29.97, it's really 23.98 with duplicate frames added, and by putting it into an HDV 23.98 sequence the extra frames are properly removed.

Weird way to do things, but it works.
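
For anyone wondering why it's a duplicate exactly every 5 frames, it falls straight out of the frame-rate ratio -- a quick sanity check, nothing FCP-specific:

from fractions import Fraction

capture_rate = Fraction(30000, 1001)   # 29.97 -- what the captured clip reports
content_rate = Fraction(24000, 1001)   # 23.98 -- the actual progressive content

print(capture_rate / content_rate)     # 5/4: five stored frames for every four
                                       # unique ones, i.e. one duplicate per five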

Now, I'm not a fan of editing in HDV, so I was very excited to try out a conversion to ProRes HQ while removing pulldown on capture. So I opened the Easy Setup HDV-Apple ProRes 1080P24, went to open my Log and Capture window and... NO LOG AND CAPTURE WINDOW, WTF!!!! A window pops up that says "name HDV clip" or something like that. So I go along with FCP, name a test clip, and hit OK. It goes right to the capture screen and starts my camera rolling. It begins capturing footage off my tape, with a little warning saying "preview is 10 seconds behind, view camera for playback". I let it roll for a few seconds, it goes through a clip break, and I hit ESC.

Now 2 clips pop up in my Bin, both with the pulldown removed and at 23.98, and it successfully created a new clip at the clip break. So at least it works; now I don't have to go to Compressor to do ProRes/pulldown conversions of my HDV footage. But seriously, why can't I add logging info? I can't even add a reel name, or have playback controls. I have to switch between Easy Setups or (gasp) use my on-camera playback controls to cue up footage? That really, really, really sucks. It doesn't make any sense. It's like Apple brought in some of the Windows code writers to add these new functions to FCP.

Anyway, I guess I'll have to deal, it's better than recompressing everything I capture.


But one more thing: there is no Easy Setup to do the transcode to ProRes HQ, only to standard ProRes. Weird, right? So I made a custom Easy Setup where the compressor was ProRes HQ... and it worked exactly the same as the regular ProRes Easy Setup. So Apple seems to have left it out, but it works. The lag concerns me, though. I'll need to try a longer capture (the test was about 30 seconds); I'm worried that the lag happens because the transcoding eats up processor power and memory, and I could see a long capture (like a full tape) causing the system to run out of memory or lag to the point where it can't keep up anymore.
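
For what it's worth, a rough back-of-envelope check says the drive shouldn't be the bottleneck -- the data rates below are ballpark assumptions, not measurements:

hdv_in      = 25 / 8     # HDV comes over FireWire at ~25 Mb/s, so about 3 MB/s
prores_hq   = 200 / 8    # ProRes 422 HQ at this raster: very roughly 25 MB/s
fw800_drive = 60         # a FireWire 800 drive usually sustains 60+ MB/s

print(prores_hq < fw800_drive)   # True -- the disk keeps up easily, so any lag
                                 # would be the real-time transcode eating CPU,
                                 # not I/O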

Although I did do this test on a Dual 1.8 GHz G5 with 1.25GB of RAM, capturing to a FireWire 800 drive. So guys with shiny new 8-cores and RAIDs should be fine. I'll have to take my V1 to work tomorrow and try it on my 8-core there and see what happens :-).

Leslie Wand
November 15th, 2007, 10:52 PM
somewhat different setup - v1p to winxp - however, i was under the impression that there was NO 'batch capture' under hdv since its 15-frame gop would give inaccurate in/out points?

i hope someone will tell me there is, but from my experience with vegas 8, and other people i've talked to, tc capture is a non-starter. bugger.

leslie

Steve Mullen
November 15th, 2007, 11:14 PM
Now, I'm not a fan of editing in HDV, so I was very excited to try out a conversion to ProRes HQ while removing pulldown on capture. So I opened the Easy Setup HDV-Apple ProRes 1080P24, went to open my Log and Capture window and... NO LOG AND CAPTURE WINDOW, WTF!!!! A window pops up that says "name HDV clip" or something like that. So I go along with FCP, name a test clip, and hit OK. It goes right to the capture screen and starts my camera rolling. It begins capturing footage off my tape, with a little warning saying "preview is 10 seconds behind, view camera for playback". I let it roll for a few seconds, it goes through a clip break, and I hit ESC.

Now 2 clips pop up in my Bin, both with the pulldown removed and at 23.98, and it successfully created a new clip at the clip break. So at least it works ...
There is no advantage to editing with ProRes 422. It's a waste of drive space and time.

What's happening is HDV is converted to ProRes 422 -- which is why you don't get logging. Another reason to stay with HDV.

Matt Devino
November 15th, 2007, 11:56 PM
I think people will be interested to see this:

http://www.flickr.com/photo_zoom.gne?id=2037000334&size=o

This is a frame grab comparing the footage I captured using the on-the-fly conversion to ProRes HQ with HDV footage I had on my system that was captured as HDV in 6.0.1.

The top frame is HDV that was captured in FCP 6.0.1, with pulldown removed and converted to ProRes HQ in Compressor (at the time, the only good way to remove pulldown in Final Cut Studio). Even before the conversion I would see the same artifacting (mostly aliasing) you can see in the top half of the image. The raw HDV footage in FCP looked the same -- I did a bunch of side-by-side comparisons, since I wasn't happy with the amount of compression at all, but I figured that's what happens when you put HD onto a miniDV tape.
Now when I capture straight to ProRes in 6.0.2 I get what you see in the bottom half of the image, which is almost completely free of aliasing (most apparent on the black trim around the windows of the car).

Needless to say I'm a lot happier with my V1U (or should I say Final Cut's ability to capture HDV).


As far as what Steve said above:

- "There is no advantage to editing with ProRes 422. It's a waste of drive space and time."

This isn't true: it's a 10-bit codec (much better-looking renders), it's intra-frame (faster renders), and it's 4:2:2 instead of 4:2:0, so any kind of color effects/correction you add will look MUCH better. Sure it's bigger file-wise, but you can get 6 hours of footage on a 500GB drive, so who's worried about disk space?
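
Back-of-envelope on the drive-space point (the data rates here are rough assumptions, not Apple's official numbers):

drive_GB       = 500
prores_hq_MBps = 22     # ProRes 422 HQ around 1080p24: roughly 20-25 MB/s
prores_MBps    = 15     # standard ProRes 422: roughly 15 MB/s

for rate in (prores_hq_MBps, prores_MBps):
    print(round(drive_GB * 1000 / rate / 3600, 1))   # ~6.3 h for HQ, ~9.3 h for 422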

-"What's happening is HDV is converted to ProRes 422 -- which is why you don't get logging. Another reason to stay with HDV."

Ok, but why is there no deck control?

Steve Mullen
November 16th, 2007, 02:48 AM
As far as what Steve said above:

- "There is no advantage to editing with ProRes 422. It's a waste of drive space and time."

This isn't true: it's a 10-bit codec (much better-looking renders), it's intra-frame (faster renders), and it's 4:2:2 instead of 4:2:0, so any kind of color effects/correction you add will look MUCH better.

-"What's happening is HDV is converted to ProRes 422 -- which is why you don't get logging. Another reason to stay with HDV."

Ok, but why is there no deck control?

You don't understand how FCP works. HDV is automatically converted, whenever a render is performed, to 4:4:4. If you are working with a 10-bit Sequence -- HDV is converted to 10-bit.

IT MAKES NO DIFFERENCE "WHEN" HDV IS CONVERTED TO 10-BIT 4:4:4.

If you do it upon capture, you only waste disk space. There will be no difference in quality. None.

You have bought into one of the oldest anti-HDV myths! Shame on you. :)

The reason for no deck control is that there has never been deck control with AIC. All Apple has done is replace the AIC codec with the ProRes codec.

But, once you stop believing that you can use HDV, you'll have no need to use ProRes captures. Problem solved.

PS: the only time ProRes is needed is when Color renders. And, you can ask Color to do so.

Matt Devino
November 16th, 2007, 09:57 AM
You don't understand how FCP works. HDV is automatically converted, whenever a render is performed, to 4:4:4. If you are working with a 10-bit Sequence -- HDV is converted to 10-bit.


But Final Cut isn't an RGB system; it only works in YUV 4:2:2 unless you have Kona or Blackmagic codecs installed (and this is still completely un-native to how the guts of FCP work). Even then, for it to ever end up as 4:4:4, or anything 4:2:2 for that matter, you would need to take your final edited sequence, put it in a sequence that uses a 4:2:2 or 4:4:4 codec, and render your entire sequence. Seems like a lot of extra rendering when you can just convert everything to 4:2:2 in the first place, right?

And Final Cut won't render all HDV footage to 4:4:4 automatically; it may render it to 10-bit 4:2:2 IF you set your sequence's render options to use 10-bit YUV, super-white, etc.

Anyway, you're right, there's nothing "wrong" with editing in HDV native; it just may take a few extra steps at the end of your edit and some extra render time to get your final piece looking as good as it can.

Steve Mullen
November 16th, 2007, 01:18 PM
Seems like a lot of extra rendering when you can just convert everything to 4:2:2 in the first place, right?

There is NO rendering as HDV is edited in real-time. Only 3 things can happen to any frame of HDV or ProRes:

1) Nothing

2) FCP uncompresses the frame to 4:2:2 (8-bit) and displays it.

3) FCP uncompresses the frame to 4:4:4 YUV, either 8-bit or 10-bit (your choice), and uses the frame in some way -- for example, CC or FX. The 4:4:4 YUV 8-bit or 10-bit result is sent to a display. The result is NEVER saved or re-used! When you use the source frames again -- FCP always uncompresses ALL frames from scratch.

Note: even ProRes is "chroma up-sampled" from 4:2:2 YUV to 4:4:4 YUV before being used.

Obviously, it makes no difference if FCP decompresses HDV to 4:4:4 YUV (either 8-bit or 10-bit) during capture OR at the moment the HDV frame is needed. The only difference is how many bits you waste holding the HDV information on disk.

The actual source information is never other than 4:2:0 at 8-bits.

Your export is ONLY dependent on the codec you use when you export. All frames, including the "nothing" frames, are converted at this time.

PS1: if you MANUALLY render -- it is only for you to see the frames play in RT. When you export -- FCP uncompresses ALL frames from scratch. The rendered HDV frames are ONLY used to drive your display.

PS2: Color renders are THE exception. They go back to the timeline as ProRes, but this is done automatically.

PS3" Even Avid has no need for anything but HDV source files. Avid does REUSE renders -- which is where the myth started -- but Avid is smart. All renders are automatically DNxHD. Just like Color.

Matt Devino
November 16th, 2007, 05:06 PM
Ok, yes there is upsampling to 4:4:4 happening in order to display video on your computer's monitor, because computer monitors are RGB.

But this doesn't matter if you're trying to view your timeline on a broadcast monitor through SDI. Say I want to look at my timeline on a video scope through SDI. An HDV timeline would look slightly different than a ProRes timeline of the same video because of the different color space and 10-bit vs 8-bit.

Another thing to consider is graphics. Working in an HDV timeline, if you place a title into your timeline you will eventually need to render it, if not right away. When you render it, the HDV codec is used, and let's face it, HDV is pretty compressed: you get the aliasing and quantizing errors notorious to 8-bit (even "uncompressed" 8-bit), and adding the amount of compression HDV calls for on top of 8-bit is just yucky. Now, if you were working in ProRes, you would not get this problem, as ProRes is 10-bit (say goodbye to those quantizing errors) and much less compressed, which reduces aliasing. These errors will never be fixed simply by outputting to something that is 4:2:2 10-bit like a D5 tape; your sequence must be rendered in a 10-bit 4:2:2 codec before the output. Hence the reason to use ProRes in a situation like that. For an extreme example, drop a title into a DV timeline, turn on 10-bit renders in your timeline settings, and render your title. Still see all that aliasing? Yup, it's because it was rendered using the DV codec, not uncompressed 4:4:4 or 4:2:2.
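
To make the 8-bit vs 10-bit point concrete, here's a toy numpy illustration -- it isn't modelling FCP's render path, just the precision head-room a gradient gets at each bit depth:

import numpy as np

width = 1920
ramp = np.linspace(0.0, 1.0, width)    # an ideal smooth horizontal gradient

levels_8  = len(np.unique(np.round(ramp * 255)))    # distinct 8-bit code values
levels_10 = len(np.unique(np.round(ramp * 1023)))   # distinct 10-bit code values

print(levels_8, levels_10)   # 256 vs 1024 -- four times finer steps at 10-bit,
                             # which is what keeps a gradient from banding once
                             # a grade or a compressed codec stretches it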


Anyway, I think anyone reading our conversation is probably confused or annoyed by now. Everyone should just read this article by Philip Hodgetts: http://www.kenstone.net/fcp_homepage/when_to_stay_native.html

And I think after reading this article neither of us will be proven right or wrong, just maybe we're thinking of different workflows.

Most of the stuff I do will go to broadcast and never go back to HDV tape, it will go to D5 or HDCAM, and sometimes DigiBeta. This is why I like to convert to ProRes.

Thanks for the good geek debate, it was fun.

Steve Mullen
November 16th, 2007, 05:42 PM
Working in an HDV timeline, if you place a title into your timeline you will eventually need to render it, if not right away.

You will render to HDV ONLY if, for some odd reason, the part of the timeline where the graphic sits doesn't play back fast enough. You are missing the point that FCP is a RT NLE. Even when you add lots of FX, playback only slows down a bit. There's almost no need to render even with many streams.

And even if you like to wait for renders -- the render to HDV will only be for viewing faster. It has NOTHING to do with export. You will not render to HDV in the final export. You will lose NO quality. All that stuff about errors will never happen.

You seem to think FCP first compresses to HDV and then uncompresses and re-compresses to another codec. It does not. FCP throws away all renders on export and takes the 4:4:4 YUV of each frame -- ignores the Sequence codec -- and compresses it to the export codec.

The only time the HDV codec NEED be used during export is if you write back to HDV tape. And, that makes perfect sense.

PS: And most folks will not be monitoring via HD-SDI. Not when Apple provides a Cinema monitor connection. My laptop outputs DVI which connects via HDMI.

ANYWAY, if you have an AJA/BM card then you obviously will NOT be capturing HDV via FW. You'll connect via HD-SDI and then you'll not capture HDV. Now the HDV is converted to 4:2:2 10-bit INSIDE your VTR. That MAY look better, but I've never seen proof of this.

Frankly, the huge increase in storage (ProRes 422) and the possible need for a RAID (uncompressed) are so negative that I'll gladly accept a quality loss that a few anti-HDV types claim they can see.

PS: when you do CC or use Color the result is not compressed to HDV. It is fed directly to your monitor.

Matt Devino
November 16th, 2007, 06:31 PM
"You are missing the point that FCP is a RT NLE. Even when you add lots of FX, playback only slows down a bit."

Sure it's realtime, but not at 4:4:4. BTW, there's no such thing as 4:4:4 YUV; 4:4:4 is by definition RGB. When I was working on the 4:4:4 conform of The Kingdom, YUV never entered the equation.

"the render to HDV will only be for viewing faster. It has NOTHING to do with export. You will not render to HDV in the final export. You will lose NO quality."

What are you talking about? Final export to what? QuickTime? Tape? If I go to, say, an HDCAM tape from my HDV timeline over an SDI card, it doesn't magically re-render everything on the fly to "10-bit 4:4:4 YUV" as you've been saying. It just scales the video to the correct dimensions and puts it on tape. If you're going back to HDV tape, sure, then it doesn't matter, but what kind of company would accept HDV tape as a delivery format?


"You seem to think FCP first compresses to HDV and then uncompresses and re-compresses to another codec. It does not. FCP throws away all renders on export and takes the 4:4:4 YUV of each frame -- ignores the Sequence codec -- and compresses it to the export codec."

I've never said anything like this. What you just explained in the second sentence of that paragraph is exactly what you are accusing me of saying. If you work in HDV it stays in HDV, period, until it is on tape or you output a QuickTime movie in a different codec.

"PS: And most folks will not be monitoring via HD-SDI. Not when Apple provides a Cinema monitor connection. My laptop outputs DVI which connects via HDMI.

ANYWAY, if you have an AJA/BM card then you obviously will NOT be capturing HDV via FW. You'll connect via HD-SDI and then you'll not capture HDV. Now the HDV is converted to 4:2:2 10-bit INSIDE your VTR. That MAY look better, but I've never seen proof of this."

Your Apple Cinema Display is not a professional video monitoring tool. If you're doing stuff for YouTube or home movies, then fine, it's great. I own one too. Sure it's nice and pretty, but it's not even close to on par with a professionally calibrated tube monitor. I would NEVER analyze video on a Cinema Display, or any consumer LCD/plasma for that matter.

As for the SDI card, I'll still capture over FireWire; it's too much of a pain in the ass to capture HDV over SDI. Capturing through SDI isn't going to gain you a whole lot unless you want to convert to uncompressed on capture. But the only reason to do that is the exact same reason to transcode to ProRes: so you don't have to work in HDV space! And now that ProRes exists, there's a high-quality codec to work in that takes up next to NO drive space compared to uncompressed. The SDI card is there for monitoring on a pro monitor, and for outputting to a pro deck when you're done.


"PS: when you do CC or use Color the result is not compressed to HDV. It is fed directly to your monitor."

If you color correct in FCP and stay in an HDV timeline for finishing, then yes, it will be rendered to HDV. The Color app is a whole other architecture; it only operates in RGB 10-bit or ProRes and nothing else.




I'm done with the flaming. I just hope somebody doesn't think YUV 4:4:4 is something that exists in their FCP system after reading this.



Now, to get back on topic: yeah, pulldown removal over FireWire from a V1U works in 6.0.2.

Steve Mullen
November 16th, 2007, 09:11 PM
Sure it's realtime, but not at 4:4:4. BTW, there's no such thing as 4:4:4 YUV; 4:4:4 is by definition RGB. When I was working on the 4:4:4 conform of The Kingdom, YUV never entered the equation.


Finally I realize you do not know what 4:4:4 means. No wonder you think you weren't using it. The meaning of 4:4:4 is that both chroma components are sampled at the same rate as the luma. From the FCP manual:

"4:4:4 Each R, G, and B channel, or each Y´, CB, and CR channel, is sampled at the same rate. Maximum color detail is maintained."

And this from a review of FCP: "However, FCP 4's render engine processes video at 4:4:4:4, which makes sense because FCP 4 offers color-correction tools and compositing operations that make full use of the higher sampling rate."

RGB is inherently equally sampled. So, it IS often called 4:4:4.

But, YUV can also be 4:4:4. It's often called BASEBAND video. In a camera when you convert RGB to YUV -- the result is 4:4:4. Prior to processing or recording the two chroma components are decimated (down-sampled) to 4:2:2.

Likewise, before FCP does any computation on ANY video -- it CHROMA up-samples to 4:4:4 YUV. HDV is up-sampled from 4:2:0 while ProRes is up-sampled from 4:2:2.

Now hopefully you understand why nothing is improved by converting during capture from 4:2:0 to 4:2:2 and then to 4:4:4. In fact, quality may be improved by waiting until a frame needs to be processed where FCP does ONE conversion from 4:2:0 to 4:4:4.
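
If it helps to picture what that chroma up-sampling actually does, here is a minimal numpy sketch using nearest-neighbour replication (real applications use better filters, and this is obviously not FCP's code):

import numpy as np

# Toy 4:2:0 frame: luma at full resolution, chroma at half resolution both ways.
h, w = 4, 8
Y  = np.arange(h * w, dtype=np.uint8).reshape(h, w)    # luma stays untouched
Cb = np.array([[100, 110, 120, 130],
               [105, 115, 125, 135]], dtype=np.uint8)  # one chroma plane, (h/2, w/2)

def rep(chan, rows, cols):
    """Replicate chroma samples (nearest-neighbour up-sampling)."""
    return chan.repeat(rows, axis=0).repeat(cols, axis=1)

direct  = rep(Cb, 2, 2)              # 4:2:0 straight to 4:4:4
via_422 = rep(rep(Cb, 2, 1), 1, 2)   # 4:2:0 -> 4:2:2 (vertical) -> 4:4:4 (horizontal)

print(np.array_equal(direct, via_422))   # True -- the 4:2:2 stop adds no detail;
                                         # the real information never exceeds the
                                         # original 4:2:0 samples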

And now that you understand that FCP always computes in 4:4:4 at either 8-bits or 10-bits -- you can hopefully see your other ideas are baseless. They are ideas that editors often pick up from other editors.

For example: Sony makes only one HD tube monitor. It costs $40,000. So like it or not, the vast majority of FCP users will be using either LCD or Plasma displays. Like HDV -- this is "good enough" for most applications.

For example: I never said FCP did anything "magically." Unless you are exporting HDV -- YOU need to do something. You can export to a QT Movie where you specify a codec. You can export using Compressor where you specify a codec. You can export to DVDSP where you specify a codec.

Or, you can follow the procedure in your AJA/BM manual. This will simply convert every frame in a Timeline back to YUV 4:2:2.

For example: Color automatically creates a 4:2:2 render even when the source is native HDV. If you don't use Color, all computations are done in 4:4:4 YUV. There is NO rendering to HDV!!!

In FCP you almost never render, so I don't know why you keep talking about it. If you find playback not fast and smooth enough -- ONLY then do you MANUALLY render a small segment. Yes -- this will be to HDV. But, you are only doing this for a smooth PREVIEW. Not a final quality check.

As soon as you pause -- you are looking at 4:4:4.

I'm not flaming you; in fact, because of your post I just learned something: because of the new Open Timeline you can use ProRes 422 Sequences. Thus, all PREVIEW renders will be to ProRes 422 -- not HDV. Yet you can use native HDV clips freely in the Sequence. RT still works.

Mark OConnell
November 16th, 2007, 10:19 PM
"I think people will be interested to see this:

http://www.flickr.com/photo_zoom.gne?id=2037000334&size=o

This is a frame grab comparing the footage I captured using the on-the-fly conversion to ProRes HQ with HDV footage I had on my system that was captured as HDV in 6.0.1."


Wow. The difference in quality between the two parts of the frame is amazing.

Steve Mullen
November 17th, 2007, 12:18 AM
Wow. The difference in quality between the two parts of the frame is amazing.

If HDV really looked that bad:

1) no one would watch it.

2) if the source really looked that bad -- every bit of aliasing and MPEG-2 blocking would be copied to the ProRes image.

No one claims ProRes can magically remove aliasing and MPEG-2 blocking. That would be an absurd claim. The higher quality media would copy every bit of detail which would include the crap.

So what's going on? The top frame is not pure HDV. It is HDV that was put through Compressor to try and remove pull-down. Who knows what happened to it. It looks like it lost one field.

The bottom frame is HDV-to-ProRes without any additional processing. Bottom line: apples and oranges. It doesn't prove anything.

Hint -- when you saw how bad the output from Compressor was, you should have realized something was wrong with its output, NOT the input.

Matt Devino
November 18th, 2007, 01:03 PM
Hey Steve,
No, really. I opened up the original HDV image as well as the converted image in an HDV timeline, inverted one of the clips, and it produced a white screen, i.e. they matched pixel for pixel. (BTW, originally the only reason I was converting to ProRes was that Compressor would crash if I tried to remove the pulldown and compress back to HDV -- I had to go to another codec for it not to crash -- and Cinema Tools wouldn't remove pulldown from anything that wasn't intra-frame.)

Now, I'm not claiming that ProRes magically "fixes" an HDV image; that would be, well, magic. What I'm saying is that, at least on my FCP system, 6.0.1 was doing something on capture that screwed up my HDV image, and it's not doing it now that I've upgraded to 6.0.2. It could have been something wrong with my system, but it may have happened to other people as well, and 6.0.2 did fix the problem, so I thought people should be aware of it in case they had the same problem I was having. A regular HDV capture looks the same as a ProRes capture on my 6.0.2 system.

As per our earlier discussion, it seems I was wrong about YUV (Y'CbCr).

From wikipedia:

4:4:4 Y'CbCr
Each of the three Y'CbCr components have the same sample rate. This scheme is sometimes used in high-end film scanners and cinematic postproduction. Two links (connections) are normally required to carry this bandwidth: Link A would carry a 4:2:2 signal, Link B a 0:2:2, when combined would make 4:4:4.

4:4:4 R'G'B' (no subsampling)
Note that "4:4:4" may instead be referring to R'G'B' color space, which implicitly does not have any chroma subsampling at all. Formats such as HDCAM SR can record 4:4:4 R'G'B' over dual-link HD-SDI.

The term Y'UV refers to an analog encoding scheme while Y'CbCr refers to a digital encoding scheme. One difference between the two is that the scale factors on the chroma components (U, V, Cb, and Cr) are different. However, the term YUV is often (erroneously) used to refer to Y'CbCr encoding. Hence, terms like "4:2:2 YUV" always refer to 4:2:2 Y'CbCr since there simply is no such thing as 4:x:x in analog encoding (such as YUV).


Still, as far as I know, any NLE or camera that works in 4:4:4 is RGB (e.g. Avid DS Nitris, FCP with a Blackmagic or Kona card). The only 4:4:4 tape format I know of is HDCAM SR, which I frequently work in, and it's RGB. Of course, per the information above, it looks like they may use 4:4:4 Y'CbCr in 2K or 4K DIs, which I have no experience with. The only DI work I've done was a 4:4:4 HD DI (RGB), which we did for Bury My Heart at Wounded Knee.

Now, if FCP is sampling everything at 4:4:4 Y'CbCr (YUV) for renders, that's a little scary when working in RGB space: every time it renders, it samples at 4:4:4 Y'CbCr and then goes back to RGB? I haven't seen anything strange color-wise in a 4:4:4 render, but it's still a little weird. Do you know if FCP still acts this way when you select "render everything in RGB" in your timeline options?

Anyway, I suppose that if you work in HDV and export your entire timeline to, say, a 10-bit uncompressed QuickTime movie, then yes, you're right: there's no downfall to working in HDV if all of your renders are being sampled at 4:4:4 YUV and re-rendered at 10-bit uncompressed (although any color effects may look slightly different, maybe better, in the final QuickTime output). But if you are simply going to play your timeline out to a tape like a D5 or SR, it makes sense to work in 10-bit space the whole time so you don't need to output a QuickTime of your entire timeline before sending it out to tape (your renders would be sampled from your timeline's codec at 4:4:4 YUV like you said, then rendered in your timeline's codec), so being in a 10-bit timeline makes sense for this workflow. In the end, the final destination of your project determines where you start, and HDV and ProRes are both completely legitimate options depending on what your project calls for.

Steve Mullen
November 18th, 2007, 08:23 PM
Now, if FCP is sampling everything at 4:4:4 Y'CbCr (YUV) for renders, that's a little scary when working in RGB space: every time it renders, it samples at 4:4:4 Y'CbCr and then goes back to RGB? I haven't seen anything strange color-wise in a 4:4:4 render, but it's still a little weird. Do you know if FCP still acts this way when you select "render everything in RGB" in your timeline options?

Anyway, I suppose that if you work in HDV and export your entire timeline to, say, a 10-bit uncompressed QuickTime movie, then yes, you're right: there's no downfall to working in HDV if all of your renders are being sampled at 4:4:4 YUV and re-rendered at 10-bit uncompressed (although any color effects may look slightly different, maybe better, in the final QuickTime output). But if you are simply going to play your timeline out to a tape like a D5 or SR, it makes sense to work in 10-bit space the whole time so you don't need to output a QuickTime of your entire timeline before sending it out to tape (your renders would be sampled from your timeline's codec at 4:4:4 YUV like you said, then rendered in your timeline's codec), so being in a 10-bit timeline makes sense for this workflow.

I can't explain what you found with 6.0.1, but it sure did look bad. No wonder you were worried. I pulled lots of frame grabs from 24p for my book and I never saw anything that bad.

---------

I've never even thought of RGB with FCP because -- coming from Media 100 -- I always have tried to avoid working with RGB-based NLEs. I remember the FCP command, but that's it. Can you tell us when you use RGB?

In any case you are correct -- if you worked in 4:4:4 YUV and at the end rendered to RGB, the color space would be re-sampled. Perhaps that's why FCP has the "render in RGB" mode. I assume it forces HDV to be converted not to 4:4:4 YUV but to 4:4:4 RGB before rendering. In this way there would be no color changes during the final export.

Is there a real problem? Avid Liquid, for example, has some FX that are RGB and some that are YUV. I remember that some Matrox hardware FX are RGB and some YUV. (The RGB FX were processed by the graphics processor while the YUV FX were done by the CPU.) I don't remember any problems, but it points out that it's VERY hard to get details on the insides of our NLEs.

Matt, thanks to you -- I went back to my FCS2 docs and actually read about the new Open Timeline. It works! From now on I can choose ProRes as my Sequence Preset. Makes no difference what's coming in or out.

Unless I render -- everything is RT just like before. But, if I render it will be to ProRes not HDV. Magic -- renders will be in 422 space.

I wonder how many folks know this?

I don't know how I would send a Sequence to HDCAM or D5 or HDCAM SR. If one had an AJA/BM board with HD-SDI -- will they "convert" ProRes to uncompressed OR do you need to drop your Sequence into an uncompressed Sequence?

And, remember I said FCP always ignored renders at export -- now there is really no need. So, does FCP now use ProRes renders?

James McCrory
November 19th, 2007, 02:37 PM
I thought that editing HDV natively was nothing but bad, Bad, BAD? I've read in so many places that every time you make a cut you are degrading the image, and that if you care about image quality at all, you will NOT edit in HDV, but rather in another format. Now I'm reading here that there is absolutely nothing wrong with editing your film in HDV, and that transcoding to an intermediate codec is not necessary. So which is it?

Here is an excerpt from wikipedia:

"In HDV, splicing always introduces distortion at the splice points, due to the interdependencies between groups of video frames. Any editing of the video, whether it be a complex transition or a simple scene-change, requires a decompression and recompression of the entire HDV frame group... If HDV footage is converted (known as 'Transcoding') to a good intermediate format for editing, these considerations will not necessarily apply, and gradual degradation from generation to generation of edit may be avoided while substantial system performance gains are made."

http://en.wikipedia.org/wiki/HDV
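
A rough way to picture that interdependency (a toy model only -- real HDV uses open GOPs, so actual splices are generally messier than this, which is part of why even cuts-only edits get re-encoded):

GOP = 15   # a typical HDV 1080i GOP: one self-contained I-frame, then predicted frames

def frames_to_reencode(cut_frame):
    """In this simplified model, a cut that doesn't land on a GOP boundary breaks
    the rest of that GOP, so those predicted frames must be decoded and re-compressed."""
    offset = cut_frame % GOP
    return 0 if offset == 0 else GOP - offset

print(frames_to_reencode(105))   # 0 -- the cut lands on an I-frame
print(frames_to_reencode(100))   # 5 -- the tail of that GOP gets re-encoded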


And from Ken Stone's site:

"However, even a simple cut in a native HDV editing system will cause some kind of change to the video. This means that new video has to be generated which taxes your computer's CPU. More complicated things such as color correction, titles, and resizing video frames will cause all of the affected video to be re-generated. Therefore, in many situations, native HDV editing offers little or no advantage because so much new video needs to be created."

http://www.kenstone.net/fcp_homepage/hdvxdv_wright.html

James McCrory
November 21st, 2007, 10:42 AM
I see the conversations have gravitated back to bags and such. What happened? You were all so outspoken on this subject just a moment ago, now no one knows anything?

I have never edited any HD material before so I wanted to thoroughly research a complete workflow before I spent so much as a penny upgrading my editing system. But I keep encountering contradictory information. I'm just trying to get the story straight to avoid wasting money and/or irrevocably destroying my project due to my ignorance of the nature of HD formats. So far I've just been shooting with my V1, and watching my footage on an LCD monitor.

In the meantime, here's another quote from the good people at Cineform, to add fuel to the fire.

"There is no question these new HDV cameras acquire a great picture using MPEG2 compression, but in this analysis we will show that editing using the source (MPEG2) format usually does not result in the highest quality final result... Some proponents of "native" MPEG editing claim HDV editing in its native form can be lossless; this can only be achieved if you don't change anything (including cuts -- which still require rendering using MPEG editing workflows). In practice, editing-session recompression must occur for obtaining an output even in a cuts-only scenario."

http://www.cineform.com/technology/HDVQualityAnalysis/HDVQualityAnalysis.htm