April 7th, 2005, 11:52 AM | #1 |
Major Player
Join Date: Jul 2003
Location: Oklahoma
Posts: 424
RGB v. YUV
I heard that it is better to edit in the YUV color space. Is this true? What are the differences and benefits of editing in YUV vs. RGB? Which is better to edit with? What is the best way to convert between the two? Etc., etc.
April 7th, 2005, 09:02 PM | #2 |
Trustee
Join Date: May 2004
Location: Knoxville, Tennessee
Posts: 1,669
|
Assuming your raw footage is DV, it's in the YUV colorspace to begin with. So there's a modest quality advantage, and sometimes a speed advantage, in staying there during editing rather than converting to RGB and then back again for whatever your final product is (YUV again for DV or DVD, etc.).
April 7th, 2005, 11:38 PM | #3 |
RED Problem Solver
Join Date: Sep 2003
Location: Ottawa, Canada
Posts: 1,365
|
Actually, DV is in the Y'CbCr colour space. YUV is something completely different, and the term is often used inaccurately to refer to Y'CbCr. Even Apple and others use this incorrect naming.
So yes, both DV and DVD use a Y'CbCr colour space, and hence unnecessary conversions to RGB and back can cause issues. But: practically speaking, few effects or colour corrections work entirely in Y'CbCr space; even effects that are Y'CbCr native often involve some conversion to RGB; and Y'CbCr is effectively part of a compression scheme, since it underpins the 4:1:1 and 4:2:2 arrangements where chroma is spatially compressed compared to the luma. So, although it's wonderful in theory to stay in Y'CbCr, it's often very hard to in practice.
Graeme
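As a rough illustration (my own sketch, not anything out of the DV spec; real codecs differ in rounding, range handling and chroma subsampling), the luma/colour-difference split looks something like this with the BT.601 coefficients SD video uses:

```python
# Rough sketch only: 8-bit R'G'B' to studio-range Y'CbCr with BT.601 weights.
def rgb_to_ycbcr_601(r, g, b):
    r, g, b = r / 255.0, g / 255.0, b / 255.0       # normalise to 0..1
    y = 0.299 * r + 0.587 * g + 0.114 * b           # luma: weighted sum of R'G'B'
    cb = (b - y) / 1.772                            # scaled colour-difference signals
    cr = (r - y) / 1.402
    # studio range: Y' occupies 16..235, Cb/Cr occupy 16..240 centred on 128
    return round(16 + 219 * y), round(128 + 224 * cb), round(128 + 224 * cr)

print(rgb_to_ycbcr_601(255, 0, 0))   # pure red -> approximately (81, 90, 240)
```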
__________________
www.nattress.com - filters for FCP |
April 8th, 2005, 12:29 AM | #4 |
Trustee
Join Date: May 2004
Location: Knoxville, Tennessee
Posts: 1,669
|
Graeme - I'll have to admit to being confused! The YUV page of www.fourcc.org starts off with this introduction:
"YUV formats fall into two distinct groups, the packed formats where Y, U (Cb) and V (Cr) samples are packed together into macropixels which are stored in a single array, and the planar formats where each component is stored as a separate array, the final image being a fusing of the three separate planes" So when you say that YUV and Y'CbCr are "completely different", what are you getting at? And what are they getting at? Thanks! |
April 8th, 2005, 12:34 AM | #5 |
RED Problem Solver
Join Date: Sep 2003
Location: Ottawa, Canada
Posts: 1,365
|
YUV is an intermediate stage in turning component video into composite video or S-video. Y'CbCr are the components of digital video, often wrongly called YUV.
I recommend this as a highly technical guide to video: http://www.poynton.com/DVAI/index.html where he explains it much better than I can. Graeme
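To make the distinction a bit more concrete, here's a simplified sketch of my own (not taken from Poynton): U and V are just the B'-Y' and R'-Y' colour-difference signals, scaled down so that the chroma modulated onto the luma keeps the composite signal within its amplitude limits:

```python
# Simplified sketch: the classic YUV scaling used on the way to composite video.
def yuv_from_rgb(r, g, b):
    # r, g, b are gamma-corrected values in 0..1
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)    # scaled B'-Y', kept small enough for composite encoding
    v = 0.877 * (r - y)    # scaled R'-Y'
    return y, u, v

print(yuv_from_rgb(1.0, 0.0, 0.0))   # red -> Y' ~ 0.299, U ~ -0.147, V ~ 0.615
```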
__________________
www.nattress.com - filters for FCP |
April 8th, 2005, 12:37 AM | #6 |
Trustee
Join Date: May 2004
Location: Knoxville, Tennessee
Posts: 1,669
|
Oh, and ... my understanding has been that all the AviSynth 2.5 filters work in YV12, and that this is the FourCC for 4:2:0, so these filters would be particularly appropriate for HDV (and PAL DV) work, since both are 4:2:0 formats.
If I'm confused on this as well, it would be good to know!
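For what it's worth, here's a small sketch of my own (based on my reading of the FourCC layout, so treat it as an assumption) of how a YV12 frame sits in memory: a full-resolution Y' plane followed by two quarter-size chroma planes, with V before U (I420 swaps them):

```python
# Sketch only: byte offsets of the three planes in a YV12 (4:2:0 planar) frame.
def yv12_plane_offsets(width, height):
    y_size = width * height                  # full-resolution luma plane
    c_size = (width // 2) * (height // 2)    # each chroma plane is 2x2 subsampled
    y_offset = 0
    v_offset = y_size                        # V plane comes first in YV12
    u_offset = y_size + c_size
    total_bytes = y_size + 2 * c_size
    return y_offset, v_offset, u_offset, total_bytes

print(yv12_plane_offsets(720, 576))   # PAL DV frame -> (0, 414720, 518400, 622080)
```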
April 8th, 2005, 12:42 AM | #7 |
RED Problem Solver
Join Date: Sep 2003
Location: Ottawa, Canada
Posts: 1,365
|
I don't know about those specific formats. But MPEG-2 4:2:0 and DV 4:2:0 are different. I think it's where the Cb and Cr samples are sited with respect to the luma. There's a diagram, I think, in the Poynton book I mentioned. I don't know if the people who write the effects take this into account, or whether they even have to, if the capture process converts both into a more standard format.
Graeme
__________________
www.nattress.com - filters for FCP |
April 8th, 2005, 01:01 AM | #8 |
Trustee
Join Date: May 2004
Location: Knoxville, Tennessee
Posts: 1,669
|
>>http://www.poynton.com/DVAI/index.html
Well, there's bedtime reading for a good long while (I just "looked inside the book" on Amazon - yikes!). Thanks for the recommendation.
April 8th, 2005, 05:47 AM | #9 |
RED Problem Solver
Join Date: Sep 2003
Location: Ottawa, Canada
Posts: 1,365
|
Yes, it's very heavy going, but it's full of wisdom from someone who knows what they're talking about. I learned something from this book on every page, which, in these days of books and articles that can barely get basic facts right, is a most wonderful thing. However, you probably want a maths or electrical engineering degree under your belt to get the most out of it.
Graeme
__________________
www.nattress.com - filters for FCP |
April 8th, 2005, 07:25 AM | #10 |
Major Player
Join Date: Sep 2002
Location: Belgium
Posts: 804
|
YUV processing conserves color precision and luma resolution for most NLE processing routines and is thus preferable for DV video.
As far as I know, Charles Poynton only pinpoints the confusion between the gamma-corrected luma value (the prime ') and the linear version (he is a gamma fan!). This is only a mathematical issue when color processing (space conversion, etc.) is involved in graphics applications (he used to be a Silicon Graphics boy). In video, all signals are gamma precorrected, and YUV, Y'CbCr, even YCbCr, are the same for video people. They all belong to the same analog luma/color-difference signal protocols. For many years, professionals, professional companies and standardisation organisations have routinely used the "wrong" expressions.
April 8th, 2005, 07:33 AM | #11 |
RED Problem Solver
Join Date: Sep 2003
Location: Ottawa, Canada
Posts: 1,365
|
Well, I think Charles has it right on this one. If there's a proper name to call something, then that's what should be used, even if many people use the incorrect terminology as standard practice. In this context, read:
http://developer.apple.com/quicktime...spatch027.html
I'd disagree that "YUV processing conserves color precision and luma resolution for most of the NLE processing routines and is thus preferable in DV video", because most processing that you do to video is, by necessity, RGB-based. Although yes, the edit package should be aware that the video is Y'CbCr and that this needs to be taken into account. For instance, in FCP you'd make sure that the timeline is set to "YUV", even though 90% of the effects you apply will convert the video to RGB at some point.
AFAIK:
Y'PbPr is analogue component video.
YUV is an intermediary step in converting Y'PbPr to S-Video (Y/C) or composite video.
Y'CbCr is digital component video.
Three different names for three different things.
Graeme
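To put rough numbers on that last distinction (my own sketch, assuming the usual 8-bit studio ranges rather than any particular device's maths): Y'CbCr is essentially Y'PbPr quantised with standard offsets and scale factors:

```python
# Sketch: analogue-style Y'PbPr (Y' in 0..1, Pb/Pr in -0.5..+0.5) quantised to
# 8-bit Y'CbCr with the usual studio offsets and ranges.
def ypbpr_to_ycbcr(yp, pb, pr):
    y  = round(16 + 219 * yp)     # Y' occupies 16..235
    cb = round(128 + 224 * pb)    # Cb occupies 16..240, centred on 128
    cr = round(128 + 224 * pr)    # Cr occupies 16..240, centred on 128
    return y, cb, cr

print(ypbpr_to_ycbcr(1.0, 0.0, 0.0))   # reference white -> (235, 128, 128)
print(ypbpr_to_ycbcr(0.0, 0.0, 0.0))   # black           -> (16, 128, 128)
```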
__________________
www.nattress.com - filters for FCP |
April 8th, 2005, 11:41 AM | #12 |
Major Player
Join Date: Sep 2002
Location: Belgium
Posts: 804
|
See http://www.canopus.us/us/pdf/Storm_comparison.pdf and much more just by doing a Google search... but search for YUV instead of Y'CbCr, otherwise you will get no results.
April 8th, 2005, 11:47 AM | #13 |
Major Player
Join Date: Jul 2003
Location: Oklahoma
Posts: 424
|
I'm using Premiere Pro, which I assume will capture RGB (?), so I should just stay with RGB and not worry about YUV. I edit in Premiere and do most of my effects in AE.
April 8th, 2005, 01:04 PM | #14 |
RED Problem Solver
Join Date: Sep 2003
Location: Ottawa, Canada
Posts: 1,365
|
Every compositing application works purely in RGB, as there are a hundred times as many things you can do to an image in RGB space as in Y'CbCr space. Sometimes Y'CbCr space is useful too. The key to good effects software is clean conversions between the two without clipping or rounding errors.
Graeme
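To illustrate the kind of error involved (a toy example of my own, using full-range 8-bit values and BT.601 weights rather than any particular application's maths): every 8-bit round trip quantises the values, and anything that lands outside 0..255 has to be clipped:

```python
# Toy example: one 8-bit R'G'B' -> Y'CbCr -> R'G'B' round trip (full range,
# BT.601 weights). Values drift by a code or two from rounding, and anything
# pushed outside 0..255 must be clipped.
def clip8(x):
    return max(0, min(255, round(x)))

def to_ycbcr(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return clip8(y), clip8(128 + (b - y) / 1.772), clip8(128 + (r - y) / 1.402)

def to_rgb(y, cb, cr):
    r = y + 1.402 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return clip8(r), clip8(g), clip8(b)

for rgb in [(200, 30, 120), (17, 250, 3)]:
    print(rgb, "->", to_rgb(*to_ycbcr(*rgb)))   # e.g. (200, 30, 120) -> (200, 30, 119)
```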
__________________
www.nattress.com - filters for FCP |
April 8th, 2005, 03:54 PM | #15 |
Major Player
Join Date: Sep 2002
Location: Belgium
Posts: 804
|
One of the problems, even with an ideal YUV > RGB conversion, is that each of the converted R, G and B components is a linear combination of a full-resolution luma part and a 1/4-resolution color-difference part. So, e.g., R = Y (full res) + (R-Y) (1/4 res). So R in this example doesn't have the original Y bandwidth anymore. After processing, the result is converted back into YUV, but the Y value in this conversion is again made up of bandwidth-reduced RGB components. The issue is comparable to shooting a colour TV picture in B&W versus shooting the same picture in B&W when only the luma is taken (a fully desaturated picture). The "color leakage" (the bandwidth-reduced color-difference signals) is not present in the latter case, resulting in a "full bandwidth" photo.
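A quick numeric sketch of that point (a toy example of my own: one scan line, with the colour-difference signal reduced to 1/4 resolution by a crude average-and-hold, roughly as 4:1:1 does): a sharp edge that exists only in the colour-difference signal comes back smeared in the reconstructed R channel:

```python
# Toy example: luma at full resolution, (R-Y) at 1/4 resolution (as in 4:1:1).
# A purely chromatic edge is softened when R = Y + (R-Y) is rebuilt.
def quarter_res(signal, factor=4):
    # crude 1/4-resolution chroma: average each block of `factor` samples and hold it
    out = []
    for i in range(0, len(signal), factor):
        block = signal[i:i + factor]
        out.extend([sum(block) / len(block)] * len(block))
    return out

r = [255] * 6 + [0] * 10     # the red channel has a sharp edge at sample 6
y = [128] * 16               # luma is flat, so the edge is purely chromatic
r_minus_y = [ri - yi for ri, yi in zip(r, y)]              # full-res colour difference
r_rebuilt = [yi + d for yi, d in zip(y, quarter_res(r_minus_y))]
print(r_rebuilt)   # the 255 -> 0 step comes back smeared across a 4-sample block
```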