December 29th, 2007, 10:29 AM | #46 |
Inner Circle
Join Date: Sep 2004
Location: Sacramento, CA
Posts: 2,488
I like the EX1 and would pick it over an XL-H1 for my purposes, but I will say that some of the EX1's controls are clumsy compared to the way Canon does things. In particular, I found the menu-driven shutter-speed controls on the EX1 annoying, so that's something to think about before deciding between these two cameras.
February 25th, 2009, 07:29 AM | #47 |
Trustee
Join Date: Mar 2007
Location: Sherman Oaks, CA
Posts: 1,259
Quote:
http://www.videoscope.com/pdf_files/...mats_Guide.pdf This jibes with your resolution and image comparisons. The advantages seem to be solely the result of larger and newer sensors.
February 25th, 2009, 08:52 AM | #48 |
Major Player
Join Date: Oct 2001
Location: Washington D.C. Metro Area
Posts: 384
Quote:
The facts are clear and simple. The Sony EX series cameras have several ways to output footage. The most desirable is the 10-bit-per-channel 4:2:2 output of the HD-SDI port. Of course, the problem there is that the camera requires a third-party solution to record this output. I've used Blackmagic, AJA, and Matrox MXO2 with EX1 and EX3 footage now. The camera is definitely outputting 10-bit data over SDI. I've pushed it around in Shake and Color, and compared it to RED and Genesis footage, as well as F900. I am not the only one; the most authoritative person I remember testing this was Dave at Cineform, who posted on these boards, I thought in early 2008. I consider the matter of the HD-SDI output bit level to be completely closed: it's a 10-bit-per-channel output with 4:2:2 color sampling.

There are two codecs that the camera records: 35 Mbps VBR XDCAM EX, and a 25 Mbps CBR XDCAM format that is very similar to HDV. Both of these are quantized at 8 bits with 4:2:0 sampling. So long as we are talking about the encoded recordings the EX series creates, what Mr. Moretti says is absolutely true.

Although the signal chain within the EX1 uses hardware to reduce the 10-bit signal to 8 bits before the compressor gets to work, this is a non-issue: if a 10-bit signal were passed to the codec directly, the codec would begin its work by reducing bit depth anyway. I am not one of the Sony hardware engineers, but putting my engineering and computer science backgrounds to work, I can offer an educated guess at the reason: removing the data in hardware pre-codec is probably faster and uses less power. There are almost certainly edge cases where it's theoretically possible this could matter, but it's safe to say that, with the exception of tests designed specifically to look for it, the point at which the signal is reduced to 8 bits just doesn't matter. In practice I don't think any modern production codec is subject to any practical effect from this choice.

I have never had any reason to test the analog outputs of the camera. I've used them exactly once, on Star Trek, and I'm not even sure how to do it; the camera assistants handled it for me. All I cared about was that the producers stopped trying to stand on the dolly with me. For what it's worth, we used a BNC analog composite connection to an old 720p HDTV, which was displaying SD. It looked horrid and you could clearly see 4:2:0 sampling artifacts. The lesson I learned there is that if I want SD output from the EX series, I output to a box like AJA's ioHD via HD-SDI, then use its downconversion features. It looks about 5 trillion times better.
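To make the point about where the truncation happens concrete, here is a minimal NumPy sketch. This is my own illustration, not Sony's actual pipeline: the sample values are random stand-ins, and the "codec" is modeled as nothing more than an 8-bit front end.

import numpy as np

rng = np.random.default_rng(0)
signal_10bit = rng.integers(0, 1024, size=100_000, dtype=np.uint16)

# Path A: hardware reduces to 8 bits, and the codec receives 8-bit data.
hw_8bit = (signal_10bit >> 2).astype(np.uint8)
codec_sees_a = hw_8bit

# Path B: the codec receives the 10-bit signal and its first internal
# step is the very same bit-depth reduction.
codec_sees_b = (signal_10bit >> 2).astype(np.uint8)

assert np.array_equal(codec_sees_a, codec_sees_b)
print("Identical 8-bit data reaches the encoder either way.")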
February 25th, 2009, 09:06 AM | #49 |
Major Player
Join Date: Sep 2004
Location: New York, NY
Posts: 775
Well, I suppose this "debate" continues because of the constant conflict between facts and opinions, especially when the company making these cameras releases an official whitepaper stating as much.
According to this official Sony whitepaper, it seems what Peter Moretti has stated is correct: HDCAM SR aside, every other Sony camera quantizes the image data to 8 bits before it even reaches the color-subsampling stage (which has to happen before the signal goes out the HD-SDI port). People I've talked to who have recorded the EX1 via HD-SDI to a 10-bit codec have concluded that it is outputting 8 bits. This official Sony paper only seems to back that up.
February 25th, 2009, 09:37 AM | #50 |
Trustee
Join Date: Mar 2007
Location: Sherman Oaks, CA
Posts: 1,259
Alexander,
I suggest you read pages 4, 7, 16 and 17. Sony makes it pretty clear that the EX1 is quantized to 8 bits before any compression takes place. Apparently this happens with the F900 as well, so if you're getting similar results, that's not surprising.

I don't KNOW the definitive answer to this question, but there is still clearly reason for debate. I believe you are assuming that the initial digital image coming off the sensor is 10-bit. But according to Sony, it's either 14- or 12-bit. (I would imagine in the case of the EXs, 12-bit.) So to supply 10-bit color to the HD-SDI output, there would have to be a stage where the image is requantized to 10 bits. This additional stage is never mentioned (but that doesn't mean it doesn't exist). Perhaps Sony doesn't want people knowing it's possible to get 10-bit color out of a $6K camera. Or perhaps it's the other way around: the last two bits are padded and the camera really kicks out 8-bit color.

The latter makes more sense to me. Why bother putting in an intermediary 10-bit quantization stage but leave it unmentioned, and not explicitly advertise that the camera is capable of 10-bit color? That seems like a waste of time and money. It would make more sense to do what Canon does and supply 8 bits of useful data.

P.S. AFAICT, Cineform has not confirmed that the EXs spit out ten significant bits. Read post #14: http://www.dvinfo.net/conf/sony-xdca...tml#post937396

Last edited by Peter Moretti; February 25th, 2009 at 10:58 AM.
February 25th, 2009, 10:08 AM | #51 |
Trustee
Join Date: Jun 2004
Location: Denver, Colorado
Posts: 1,891
Peter, from page 4 it's also "clear" that color sampling is 4:2:0 pre-compression, yet it's been widely stated that SDI is 4:2:2.
If SDI is in fact 4:2:2, why can't quantization be 10-bit as well? Or, conversely, maybe the chroma isn't really 4:2:2 either. Confusion reigns.
February 25th, 2009, 10:25 AM | #52 |
Trustee
Join Date: Mar 2007
Location: Sherman Oaks, CA
Posts: 1,259
Tom,
But the heading at the top of page 9 says "Recorded H X V Samples, by Channel," so Sony is indicating that color subsampling varies by recording type. They make no such indication when it comes to color bit depth. Also, Canon supplies 4:2:2 color out of its HDV cameras' HD-SDI and HDMI ports. I would imagine it's much easier to supply full-horizontal-resolution color samples than it is to requantize all three channels to a new bit depth. So true 4:2:2 color subsampling seems reasonable, while true 10-bit color depth seems less so, IMHO.

Last edited by Peter Moretti; February 25th, 2009 at 01:06 PM.
February 25th, 2009, 01:53 PM | #53 |
Major Player
Join Date: Oct 2001
Location: Washington D.C. Metro Area
Posts: 384
Quote:
They talk about quantization and "requantization" in the same step on page 7. The last two paragraphs on page 7 should be in their own section, entitled "Requantization," at the top of page 10. Requantization occurs after signal processing but before compression/recording. It even says as much in the last paragraph on page 7: "Prior to recording, signals are requantized with fewer bits per sample." Throughout, the whitepaper keeps talking about what is RECORDED.

First off, just about every Sony tech rep, technician, engineer and marketing person has said for the last year and a half that the camera outputs full 10-bit data over HD-SDI. I think the manufacturer has been as clear as possible about this matter. I've already said that I've monitored with 10-bit monitors and worked with EX1 footage captured over SDI. I'm not the only one. It's 10-bit 4:2:2 data at full raster. I've analyzed it in Shake and Color. It's 10-bit 4:2:2.

I can also say it looks rather different than the recorded output off SxS cards. Just zoom in so you can see the pixels on fine lines, especially diagonal or curved ones with saturated reds or blues. The difference is immediately obvious even to people who have no training whatsoever. 10-bit is harder to see, because there are so few displays that support it. The easiest way I've seen to show the difference is to pull a key on the sky, as you might for sky replacement. See the banding in 8-bit... don't see it in 10-bit. Pretty simple, really. I wish I still had some of that footage; I could post it and end this topic. I don't shoot EX1 anymore though; I'm shooting RED presently, the cheapest way to get 4:4:4 recorded.

As a final point... if you can't tell the difference after all this time, then maybe it's a difference that doesn't matter to your productions. Get back to the real work and stop fiddling with unimportant digits. ;)
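If anyone wants to see the sky-key banding argument in numbers rather than footage, here is a rough NumPy stand-in. The gradient range is invented, and real cameras add noise that partially dithers these steps, so treat it as an illustration only.

import numpy as np

# A smooth sky-like gradient across a narrow tonal range.
sky = np.linspace(0.40, 0.60, 1920)

lv8  = np.round(sky * 255) / 255     # quantized to 8 bits per channel
lv10 = np.round(sky * 1023) / 1023   # quantized to 10 bits per channel

print("distinct 8-bit levels: ", len(np.unique(lv8)))    # ~52
print("distinct 10-bit levels:", len(np.unique(lv10)))   # ~206

# A keyer or grade that stretches this range turns the coarse 8-bit
# steps into visible bands; the 10-bit steps are four times finer.
print("step size, 8-bit: %.5f  10-bit: %.5f"
      % (np.diff(np.unique(lv8)).max(), np.diff(np.unique(lv10)).max()))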
February 25th, 2009, 02:00 PM | #54 |
Major Player
Join Date: Oct 2001
Location: Washington D.C. Metro Area
Posts: 384
Quote:
That's the same dumb paragraph, along with the one immediately before it, that should be in its own section on page 10. Apply a little logic: why would you do quantization in two steps, one immediately after the other? You wouldn't; you'd do it in one step in that case. They do it in two steps so that signal processing can happen at high bit depth, and then they requantize for compression and recording.
February 25th, 2009, 03:00 PM | #55 |
Trustee
Join Date: Mar 2007
Location: Sherman Oaks, CA
Posts: 1,259
Alexander,
The Sony document may be misleading, or it may indeed be accurate. But it clearly states that the bit depth is lowered for the first time during the analogue-to-digital conversion stage, leaving either 14 or 12 bits depending upon the camera. It then states that a second bit-depth reduction takes place. So for the EX1/3 to output true 10-bit color, this stage would have to lower the bit depth from either 14 or 12 bits to 10 bits. But then yet another bit-depth conversion would also have to take place to bring the signal to 8 bits prior to recording, because the codec itself does not do bit-depth conversions. Having an intermediary, unmentioned conversion to ten bits makes little sense to me. It is of course possible.

As for Sony reps, there are quotes from reps who say it's 4:2:0 out of the HD-SDI port (see the first page of the thread I linked). So which rep should be listened to? You reference David from Cineform, but he seems to believe it's 8-bit, not 10-bit, and he has done no testing. You reference your own experience with the footage, which mostly supports that it's 4:2:2, not 4:2:0; I never believed it was 4:2:0. But you also say it works like F900 footage, which is probably HDCAM 3:1:1 8-bit, so what does that prove? Your sky test is the most convincing, but it doesn't give a definitive answer.

Does all this matter? For people equipped to do HD-SDI capture, yes it does. For those who aren't, no it does not.
February 25th, 2009, 05:32 PM | #56 |
Inner Circle
Join Date: Jan 2006
Posts: 2,290
Has anyone done a footage comparison of EX1/3 SDI versus SxS? Is there significant real-world image improvement? No need for number slinging; what about the actual images side by side?
February 25th, 2009, 09:45 PM | #57 |
Major Player
Join Date: Oct 2001
Location: Washington D.C. Metro Area
Posts: 384
Quote:
Later the paper talks about "requantization," which is poppycock. The data is simply rounded to the nearest 8-bit value and the extra bits are truncated; that isn't quantization in any real sense.

Why do you think HD-SDI ports are so expensive? It isn't merely some conspiracy to keep the good stuff for the "big boys." Every SDI port has the "intelligence" to take the "raw" signal from the internal processing chain, convert it to conform to the appropriate SDI format, and send it out. Let me get a bit more technical: every version of the SDI standard includes a set of video signal generators. At this stage the generators are still handling the "raw" internal signal at the full bit depth of the first quantization at or near the sensor. Then the signal passes to an SMPTE SDI encoder that takes a raw 74 MHz signal, at whatever bit depth you send it, and converts it to the SDI signal specification. HD-SDI encoders are made, essentially, of two SD-SDI encoders. (Which, by the way, means that while SD-SDI handles 10 bits per clock, HD-SDI handles 20 bits per clock.) Only then is it passed to the transceiver and sent down the wire.

Quote:
There is a conversion at the HD-SDI port from a 14-bit source to 10 bits. There is a conversion before the compressor from a 14-bit source to 8 bits. These sit on separate signal paths. How else could you get the HD-SDI signal out in real time and still deliver near-real-time performance on the compression recorder? Let me put that differently... if you instead sent the signal through the compression encoder and then out the SDI port, you'd get serious lag on the SDI monitoring. The Sony paper we are discussing entirely ignores the live video outputs of the cameras. It is focused on, and discusses, the recording path. (And does so poorly at that.)

Quote:
I said I compared it to RED, Genesis and F900 footage. I am sorry I even said that; I had no idea how far it would get taken out of context. For the record, RED and Genesis are 4:4:4 and 16-bit. They blow the EX out of the water. The F900 is more complicated. Yeah, its data kind of sucks these days, and yes, it's 3:1:1. It's much lower compression though, and that matters. The camera is also much better than an EX1/EX3. It ends up being about on par for most uses: the EX series is better for compositing and still scenes, but the F900's CCDs still kick a$$ compared to the EX1/EX3, especially on fast-motion shots or rapid flashing. The point of value to THIS conversation is merely that the camera has been compared to other digital cinema systems and understood relative to them. I specifically reject your misinterpretation that somehow all those cameras are equivalent. In fact I count it as absurd.

Quote:
From the command line on a Mac with Shake 4 or 4.1 installed, enter the following:

shake filename.mov -info

That should get you something like this:

[athena:Psyche/Alex Dynamic Range Tests 1-23-2009/ProRes HQ] aibrahim% shake "DR TEST AGI Red Dynamic Range Tests.mov" -info
info: Node: SFileIn1
info: FileName: //Athena/Volumes/Psyche/Alex Dynamic Range Tests 1-23-2009/ProRes HQ/DR TEST AGI Red Dynamic Range Tests.mov
info: Type: RGBA
info: Size: 2048x1024
info: Depth: 16 bits
info: Z-Buffer: none
info: Format: QuickTime movie
info: Duration: 1-7917

That represents a file with 16 bits per channel available. (It's actually a 14-bit log file, but that's another matter.) If you have an 8-bit file you'll see this:

info: Depth: 8 bits

Quote:
I think it matters to people who are equipped for HD-SDI capture AND monitoring 10-bit video. I think it matters to the colorist and compositor if you can capture HD-SDI. If you are either of those people, you can already see the difference in your day-to-day work. If you aren't one of those folks, then no matter what you do, you are actually getting 8-bit results. That isn't intrinsically bad; if nothing else, you still get 4:2:2 over HD-SDI as opposed to 4:2:0 on the card.
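That last 4:2:2-versus-4:2:0 point is easy to simulate. Here is a toy NumPy sketch using nearest-neighbour decimation and replication on a made-up chroma plane; real codecs filter the chroma, but the loss of vertical chroma resolution is the same idea.

import numpy as np

# Hypothetical Cr (red-difference) plane: a one-pixel-high saturated
# red line, like the fine red detail discussed above.
cr = np.zeros((8, 8))
cr[3, :] = 0.5

# 4:2:2 halves chroma horizontally only; every line keeps its own chroma.
cr_422 = np.repeat(cr[:, ::2], 2, axis=1)

# 4:2:0 halves chroma vertically as well; odd lines borrow from above.
cr_420 = np.repeat(np.repeat(cr[::2, ::2], 2, axis=0), 2, axis=1)

print("4:2:2 mean chroma error:", np.abs(cr - cr_422).mean())  # 0.0
print("4:2:0 mean chroma error:", np.abs(cr - cr_420).mean())  # the line is gone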
February 26th, 2009, 01:32 AM | #58 |
Trustee
Join Date: Mar 2007
Location: Sherman Oaks, CA
Posts: 1,259
Alexander,
Your test, I imagine, cannot tell the difference between ten bits of significant data and eight bits of significant data padded with two zeros. The bottom line is that neither of us can prove it either way. You bring up some good points, but clearly this question is far from answered.

It seems that not all HD-SDI encoders work as you stated, though; otherwise Canon's XHs and XL-Hs would provide ten-bit color out of their HD-SDI ports. They don't: they provide eight-bit color in a ten-bit wrapper. Perhaps the Canon chips quantize the color to eight bits during the first go-around, so that's all that can be had off the sensor. IDK.

I DO think it's very fair to say the EX's HD-SDI data is 4:2:2, not 4:2:0. But truly ten-bit? The jury is still out.

Last edited by Peter Moretti; February 26th, 2009 at 05:35 AM.
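For what it's worth, the padded-zeros case is the one thing that is cheap to check, assuming you can extract the raw 10-bit sample values from a capture (how you do that depends on your capture SDK; the arrays below are synthetic stand-ins). Here is a sketch of the idea, with the caveat that padding done by rounding or dithering would defeat it.

import numpy as np

def lsb_report(samples_10bit):
    # Histogram the two least-significant bits of 10-bit samples. If an
    # "HD-SDI" capture is really 8-bit data shifted into a 10-bit
    # wrapper, the two LSBs of every sample are 00.
    counts = np.bincount(samples_10bit & 0b11, minlength=4)
    for value, count in enumerate(counts):
        print(f"LSBs {value:02b}: {count}")

rng = np.random.default_rng(1)
true_10bit = rng.integers(0, 1024, 10_000, dtype=np.uint16)
padded_8bit = rng.integers(0, 256, 10_000, dtype=np.uint16) << 2

print("genuine 10-bit capture:")
lsb_report(true_10bit)
print("8-bit padded to 10 bits:")
lsb_report(padded_8bit)

If the histogram piles up entirely on 00, the "10-bit" capture is almost certainly eight significant bits in a ten-bit wrapper.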
February 26th, 2009, 06:27 AM | #59 |
Trustee
Join Date: Nov 2005
Location: Sydney Australia
Posts: 1,570
I don't think it's that hard to test whether it's 10-bit or 8-bit. Underexpose a wedge by one stop and see how much of the blacks you can recover.
As to the whitepaper, it does seem to refer only to recording, and from memory the HD-SDI signal was stated as being derived from the component outputs. Then again, that could raise some questions too.
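A rough numerical version of that wedge test, for anyone who wants to see why it discriminates (NumPy; linear-light values with no gamma or sensor noise, so treat it purely as an illustration):

import numpy as np

wedge = np.linspace(0.0, 0.25, 4096)   # a dark shadow ramp
under = wedge / 2.0                    # underexposed by one stop

rec8  = np.round(under * 255) / 255    # recorded at 8 bits
rec10 = np.round(under * 1023) / 1023  # recorded at 10 bits

# "Recover" the lost stop in post by doubling.
push8, push10 = rec8 * 2, rec10 * 2

print("shadow levels after the push, 8-bit: ", len(np.unique(push8)))   # 33
print("shadow levels after the push, 10-bit:", len(np.unique(push10)))  # 129

The 10-bit recording keeps roughly four times the tonal steps, so the pushed blacks band far less, which is exactly what the practical test looks for.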
February 26th, 2009, 08:02 AM | #60 |
Inner Circle
Join Date: Dec 2002
Location: Augusta Georgia
Posts: 5,421
Quote:
Paul Cronin performed that test. He recorded with an EX1 or EX3 (I cannot remember which) to the SxS card internally and to a Flash XDR via HD-SDI. He found the same frame in both recordings and displayed them on two identical monitors. The difference was dramatic; 4:2:2 should always beat 4:2:0. His wife came into the room and asked, "Why did you color correct one image and not the other?" He had not corrected either image: the 4:2:2 HD-SDI output, as recorded by the Flash XDR, was simply much better visually.

Disclaimer: I work for Convergent Design.
__________________
Dan Keaton
Augusta Georgia