February 26th, 2009, 08:07 PM | #1 |
Regular Crew
Join Date: Jan 2009
Location: Manchester, UK
Posts: 53
Why I think the 5D is using BT.601
The general consensus seems to be that I should use 709 when converting 5D files to RGB because the format is HD. But in this post (entry #34), Keith shows the color parameter atom in the QuickTime file as follows:

primaries = 1 (ITU-R BT.709-2)
transferFunction = 1 (ITU-R BT.709-2)
matrix = 6 (ITU-R BT.601-4)

I've just had a quick flick through the QuickTime file format specification, and this suggests the data is encoded from RGB to YUV using BT.601. Don't be confused by the references to 709: 'primaries' indicates the RGB colorspace is sRGB (= BT.709-2), and 'transferFunction' indicates a gamma 2.2 curve with a linear segment in the lower range (= BT.709-2). Only the 'matrix' field selects the coefficients used to convert between RGB and YUV, and here it points to BT.601.

So, the question is: should I trust the QuickTime matrix value, or do I assume QuickTime is wrong and decode with BT.709 because it is an HD recording?

One way to test (other than phoning Canon) would be to shoot a color card in HD mode, then shoot under exactly the same conditions using the SD recording mode. If the theory HD = 709 and SD = 601 is true, then the camera should change colorspace to 601 when recording in SD mode, and the change would be visible when playing back the two files. If it doesn't, then the camera is using a non-conventional matrix for one of its recording modes (either 601 for HD or 709 for SD). I haven't tried this; I'm hoping somebody will know for sure and I won't need to go to this effort.

Last edited by Thane Brooker; February 26th, 2009 at 08:27 PM. Reason: Clarified transferFunction explanation
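To see how much hangs on that one field, here is a minimal Python sketch - an illustration only, using numpy with full-range normalized values and ignoring the 16-235/16-240 video-range scaling a real decoder must also apply - that decodes the same Y'CbCr triplet with each matrix:

Code:
import numpy as np

# Y'CbCr -> R'G'B' decode matrices, full-range, normalized 0..1.
# BT.601: Kr = 0.299, Kb = 0.114; BT.709: Kr = 0.2126, Kb = 0.0722.
M601 = np.array([[1.0,  0.0,       1.402],
                 [1.0, -0.344136, -0.714136],
                 [1.0,  1.772,     0.0]])
M709 = np.array([[1.0,  0.0,       1.5748],
                 [1.0, -0.187324, -0.468124],
                 [1.0,  1.8556,    0.0]])

ycbcr = np.array([0.35, -0.10, 0.45])   # an arbitrary reddish sample
print("601 decode:", M601 @ ycbcr)      # ~[0.981, 0.063, 0.173]
print("709 decode:", M709 @ ycbcr)      # ~[1.059, 0.158, 0.164]

The wrong matrix leaves luma untouched but rotates saturated colours, which is why a mismatch shows up most clearly on strong reds.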
February 26th, 2009, 09:00 PM | #2 |
Major Player
Join Date: Feb 2008
Location: Voorheesville, NY
Posts: 433
Try this yourself. Go in with a hex editor and change the matrix value from 6 to 1. I can't see any difference in any scopes by doing so, but maybe you'll have better luck.

I think it really depends on what a specific decoder does with the 3 values in the nclc atom.
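If you'd rather not count bytes in a hex editor, the same edit can be scripted. A rough Python sketch along these lines - it naively searches for the 'colr'/'nclc' signature rather than properly walking the atom tree, so work on a copy of the file; the filename below is just an example:

Code:
import struct

def patch_nclc_matrix(path, new_matrix=1):
    with open(path, 'rb') as f:
        data = bytearray(f.read())
    # Inside the 'colr' atom, the 'nclc' tag is followed by three
    # big-endian 16-bit indices: primaries, transferFunction, matrix.
    i = data.find(b'colrnclc')
    if i < 0:
        raise ValueError('no nclc color parameter atom found')
    offset = i + 8 + 2 + 2   # skip the two tags, primaries, transfer
    old = struct.unpack_from('>H', data, offset)[0]
    struct.pack_into('>H', data, offset, new_matrix)
    with open(path, 'wb') as f:
        f.write(data)
    return old

# patch_nclc_matrix('MVI_1234.MOV', new_matrix=1)   # 1 = 709, 6 = 601

Since the atom is pure metadata, the pixel data is untouched; what the experiment really probes is how each decoder honours (or ignores) the flag.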
February 26th, 2009, 09:08 PM | #3 |
Regular Crew
Join Date: Jan 2009
Location: Manchester, UK
Posts: 53
Quote:
February 27th, 2009, 03:21 PM | #4 |
Major Player
Join Date: Feb 2008
Location: Voorheesville, NY
Posts: 433
This is a little off topic, but have you played around with the most recent build of the Cineform NEO Scene demo? I have NEO HD, so I haven't tried to use NEO Scene with 5D2 MOV files. NEO Scene comes with a built-in H.264 decoder, which is licensed by Cineform from MainConcept and is a newer version than the decoders used in Premiere and Vegas 8 Pro.
February 27th, 2009, 03:56 PM | #5 |
Inner Circle
Join Date: Nov 2005
Location: Elk Grove CA
Posts: 6,838
Guys, I don't understand all of your numbers, etc., but I have been testing the 5D2 with Neo Scene and QuickTime. I noted a couple of things:

1. The Cineform and QuickTime versions are out of sync on sound.
2. On the Vegas timeline, they each preview differently, with QuickTime showing more shadow and highlight detail. (I must confess I updated QuickTime at some point here, but I can't recall if it was before I transcoded or after.)

I am attaching screen shots snipped from Vegas.
__________________
Chris J. Barcellos
February 27th, 2009, 04:26 PM | #6 |
Inner Circle
Join Date: May 2006
Location: Camas, WA, USA
Posts: 5,513
Chris,
When using the Vegas scopes, make sure you set the preview to Full/Best and 1920x1080; otherwise it smears the histogram. At full resolution, you will see the gaps and bumps in the QT histogram. BTW, that's definitely QT 7.6: it doesn't clip the blacks and whites, and it boosts the mid tones.
__________________
Jon Fairhurst
March 19th, 2009, 04:36 AM | #7 |
Regular Crew
Join Date: Jan 2009
Location: Manchester, UK
Posts: 53
Has anybody else reached a conclusion on this?
I logged a support incident with Canon two weeks ago, and although they have sent a number of "we're looking into it" replies, I haven't had an answer yet. So I set up the following experiment:

1) Calibrate an NEC SpectraView to sRGB mode, with the video card LUT set to default. This is as close as I can get to displaying video on my PC as true as possible.
2) Set the camera on a tripod.
3) Manually set the white balance on the camera to 6500K.
4) Video a GretagMacbeth ColorChecker chart under 6500K, CRI 98 lighting using the Standard, Neutral and Faithful settings. Set ISO 100 and note the shutter and aperture.
5) Photograph exactly the same scene using ISO 100, the same aperture and an equivalent shutter. Convert to sRGB.
6) Compare the videos to the photographs, ensuring the photo viewing application is not doing any monitor profile correction.

Decoded as 601, the video is almost identical to the photographs. Decoded as 709, reds turn to orange.

Am I the only one who thinks 5D files are encoded in 601, not 709? Has anybody else done color comparisons between video and picture?
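The red-to-orange shift reported above is exactly what the numbers predict when 601-encoded video is decoded with the 709 matrix. A small sketch under the same simplifying assumptions as before (numpy, full-range values, gamma ignored):

Code:
import numpy as np

# R'G'B' -> Y'CbCr encode matrix, BT.601 (Kr = 0.299, Kb = 0.114)
E601 = np.array([[ 0.299,     0.587,     0.114],
                 [-0.168736, -0.331264,  0.5],
                 [ 0.5,      -0.418688, -0.081312]])

# Y'CbCr -> R'G'B' decode matrix, BT.709
D709 = np.array([[1.0,  0.0,       1.5748],
                 [1.0, -0.187324, -0.468124],
                 [1.0,  1.8556,    0.0]])

red = np.array([0.7, 0.1, 0.1])     # a strong, ColorChecker-like red
print(D709 @ (E601 @ red))          # ~[0.752, 0.158, 0.092]

The green channel rises by more than half while blue falls, pulling the hue toward orange - consistent with the chart comparison.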
March 19th, 2009, 12:58 PM | #8 |
Major Player
Join Date: Feb 2008
Location: Voorheesville, NY
Posts: 433
The one variable that needs to be controlled in any experiment is which H.264 decoder is being used to display the file on the monitor. If the decoder reads the file metadata and uses it correctly, then you will get one result. If it ignores it, you will get a different result. I posted elsewhere that the new version of the CoreAVC H.264 decoder (1.95) now allows you to choose 601, choose 709, or read the metadata header information ("auto detect").
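For what it's worth, that "auto detect" policy is easy to picture in code. This sketch is purely illustrative - it is not CoreAVC's actual logic, and the fallback rule is just the HD-vs-SD convention discussed in this thread:

Code:
# nclc matrix index -> (Kr, Kb) luma coefficients,
# per the QuickTime file format specification
NCLC_MATRIX = {
    1: (0.2126, 0.0722),   # ITU-R BT.709-2
    6: (0.299, 0.114),     # ITU-R BT.601-4
}

def pick_coefficients(matrix_index, frame_height=None):
    # Honour the metadata when it names a matrix we know...
    if matrix_index in NCLC_MATRIX:
        return NCLC_MATRIX[matrix_index]
    # ...otherwise fall back on the "HD means 709" convention.
    if frame_height is not None and frame_height >= 720:
        return (0.2126, 0.0722)
    return (0.299, 0.114)

The whole 5D debate comes down to which branch a given decoder takes: trust the tag (and get 601) or apply the convention (and get 709).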
March 19th, 2009, 04:36 PM | #9 |
Regular Crew
Join Date: Jan 2009
Location: Manchester, UK
Posts: 53
Quote:
For the record, I used three methods to convert from YUV to RGB:

1) FFDShow to convert to RGB, sending RGB directly to EVR. FFDShow is switchable.
2) CoreAVC, sending YUV direct to Haali. Haali is switchable.
3) AviSynth to tweak the YUV levels from 601 to 709, then brought into Premiere Pro as 601 (I documented the procedure for this in another thread).

All 3 results matched as expected, so I know my method is good. My tests conclude 601 is accurate, 709 is not. This is contrary to the general consensus that one should use 709 "because it is HD". As Canon seem unable to confirm either way, I wanted to know if anybody else agrees that 601 is the more accurate, and therefore correct, standard to use.
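Method 3 amounts to a single 3x3 correction applied in Y'CbCr space: decode with the matrix the footage was actually encoded with, then re-encode with the one the downstream tool assumes. A numpy sketch of deriving that combined matrix (full-range values again; a real AviSynth filter must also handle video-range levels and chroma siting):

Code:
import numpy as np

# Y'CbCr -> R'G'B' decode matrices (full-range)
D601 = np.array([[1.0,  0.0,       1.402],
                 [1.0, -0.344136, -0.714136],
                 [1.0,  1.772,     0.0]])
D709 = np.array([[1.0,  0.0,       1.5748],
                 [1.0, -0.187324, -0.468124],
                 [1.0,  1.8556,    0.0]])

# Re-express 601-encoded Y'CbCr as the values a 709 decoder expects:
# decode as 601, then re-encode as 709.
C = np.linalg.inv(D709) @ D601
print(np.round(C, 4))

Applying C once up front lets a 709-only tool produce the same RGB as a proper 601 decode, which is why this method can be made to agree with the switchable decoders.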
March 19th, 2009, 06:14 PM | #10 |
Inner Circle
Join Date: May 2006
Location: Camas, WA, USA
Posts: 5,513
One clue is this: when you use QT to rewrap the MOV file as an MP4 and open it in Vegas, the RGB histogram is perfectly smooth - there are no gaps or bumps. That confirms that we are getting all of the levels from the camera properly. Any other 8-bit interpretation loses information.
Given this starting point, we can then re-color to taste. If you use 32-bit processing in your project, you will retain the quality. IMHO, 709 vs. 601 isn't such a big deal on the capture side. Just make the decision to get all the data you can and grade to taste while matching scenes. It's more art than science.

On delivery, it's important to get the color space correct. Aside from monitor variations, that's what your audience will see. On the capture side? Don't sweat it - just make sure that you get all the data that the camera can deliver.
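That smooth-histogram test can also be run programmatically. A sketch (numpy; the decoded 8-bit frame and how you obtain it are left as assumptions) that counts empty and over-full bins inside the occupied code-value range - gaps and spikes are the fingerprint of an 8-bit levels rescale:

Code:
import numpy as np

def levels_remap_artifacts(channel):
    # channel: 2-D uint8 array (one colour plane of a decoded frame)
    hist = np.bincount(channel.ravel(), minlength=256)
    occupied = np.flatnonzero(hist)
    inner = hist[occupied[0]:occupied[-1] + 1]
    gaps = np.count_nonzero(inner == 0)
    spikes = np.count_nonzero(inner > 3 * np.median(inner[inner > 0]))
    return gaps, spikes

# e.g., for a hypothetical decoded frame 'y':
# gaps, spikes = levels_remap_artifacts(y)

A rescaled 8-bit image can never refill the codes it skipped, so a gap-free histogram is good evidence the decode preserved every level the camera wrote.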
__________________
Jon Fairhurst
March 19th, 2009, 07:21 PM | #11 |
Regular Crew
Join Date: Jan 2009
Location: Manchester, UK
Posts: 53
Jon, I agree for most projects 601/709 on the capture side isn't important. As you say, as long as all the 'bits' are properly in the NLE, one can do what is necessary to make the footage look good. From an artistic point of view, whether you start with "technically correct" colours or "technically incorrect" colours, if you're going to re-grade by eye in 32-bit/10-bit space it doesn't really matter: just tweak the colours and don't worry about the actual numbers. But for some workflows where colour accuracy is more important than looks, starting with technically correct colours is easier and requires less correction in post.

The actual reason I raised this point is not because I'm having problems editing or creating output, but because I'm documenting a workflow. I need to be technically accurate, so if the DIGIC 4 processor in the 5D is encoding RGB data to YUV using standard 601, my workflow needs to state that the .MOV files should be decoded back to RGB using the same standard. I guess most people would prefer to set up their H.264 decoder correctly, even if they were going to regrade in post. And I know for some personalities, doing a transcode or initial playback with the wrong setting ticked would be sacrilege!
March 20th, 2009, 12:31 AM | #12 |
Inner Circle
Join Date: May 2006
Location: Camas, WA, USA
Posts: 5,513
I can relate, Thane. Sometimes getting it technically correct is paramount. As a matter of fact, I produced a ten-minute video loop on Blu-ray that is the IEC international standard for measuring television power consumption. Another member of the project team had used SD versions of the content and encoded a DVD with a cheap encoder. The standards committee voted to approve that rough version. I then acquired the HD footage and the EDL and had to reverse engineer the thing. I had content of all stripes (1080i, 720p, 480p, 480i, 50 Hz, 60 Hz...), and the differences between the HD versions and the SD source were staggering. And it all had to meet our target APL' (average picture level) histogram.

During the development, I created 601-to-709 and 709-to-601 matrices for Vegas. Truth be told, the problems were never that simple. On a number of clips, I had to hand-roll conversions that were close, but never perfect. It was within the margin of error, though, and approved.

The standard was published in October last year, Energy Star started using it on November 1st, and the EU and Australia are in the process of adopting it. So far, there have been no technical complaints. (Yay!)

IEC Webstore | Publication detail > IEC 62087-BD Ed. 2.0 English

(No, I don't profit from it...)

So yeah, for creative work, just make sure you get all the bits out of the camera, work in 10 bits per color or better, and grade to taste. You'll have fewer gray hairs that way. But sometimes, you've just got to get it technically perfect.

Speaking of which... did you know that when you encode simple test patterns, such as colorbars, with MPEG-2, you get offsets of one or two bit levels? Different encoders and decoders give different errors. I've been told by experts that one of the things they fixed when developing MPEG-4 is that it's bit accurate for flat levels - or at least repeatable from decoder to decoder.
__________________
Jon Fairhurst
March 20th, 2009, 12:56 AM | #13 |
Regular Crew
Join Date: Jan 2009
Location: Manchester, UK
Posts: 53
Quote:
Quote:
So all these seemingly 'impossible' variances, which shouldn't be there as we're dealing with standards, formulas and simple maths (it's not like I'm dealing with analogue signals here), make it all the more important for me to be technically consistent and accurate in my workflow. |
March 20th, 2009, 01:50 PM | #14 |
Major Player
Join Date: Dec 2008
Location: Laguna Niguel, CA
Posts: 277
Quote:
I come from the analog video days. As they said in Ghostbusters, "It's more of a guideline than a rule," when they crossed their streams.