December 2nd, 2007, 02:37 AM | #1 |
HDV Cinema
Join Date: Feb 2003
Location: Las Vegas
Posts: 4,007
EXMOR Technology
Each CMOS capture cycle begins when a pulse is sent to the Reset transistors within a row to prepare the photodiodes to capture light. After a row is reset, the amount of light falling on each photodiode determines how much charge accumulates in that element's Potential Well. Technically, the reset begins an "integration" period.
When a CMOS row is read, each element sends its signal down a column bus (each column has its own bus) to a Sample & Hold circuit that briefly stores the value. The read-out clears the Potential Well. Next, a second cycle is performed on the same row, and all row elements are immediately sent down their column buses to a second set of Sample & Hold circuits. The second set of values measures each element's inherent noise, while the first set measures signal plus noise. Subtracting the noise values from the stored signal-plus-noise values yields a row of clean signal values. This system is called Correlated Double Sampling (CDS).
The analog signal values are then shifted to A/D converter(s). The more A/D converters, the faster each row can be read out; the faster each row is read, the faster ALL rows are read out; and the faster all rows are read out, the less rolling-shutter effect. The A/Ds can be built into the CMOS chip, or they can sit in an external chip, in which case the analog signals must pass from one chip to another, adding noise.
The output from a V1 3ClearVid chip is an analog signal that is converted to digital by a 14-bit A-to-D converter in its Digital eXtended Processor (DXP) chip. According to Sony Japan, four elements -- from four columns -- are read simultaneously from each ClearVid chip through the DXP into each of the EIP's three 2-million-cell buffers. Thus there are four EXTERNAL A/Ds for each CMOS chip, and 960 elements are processed in 120 slices of four samples each. Faster than conventional CMOS, but not as fast as one would like.
According to Sony Japan, ""Exmor" is a trademark of Sony Corporation. This "Exmor" is the column parallel AD converter with by the high-speed processing, low noise performance, low power consumption of CMOS sensor excellent." To me this sounds like there is an on-board A/D for each column. If that is true, then in the time one A/D conversion could occur, all 1920 elements in a row are converted from analog to digital, and pure digital values can be output. Assuming 10-, 12-, or 14-bit converters, there would need to be 10, 12, or 14 pins that output the 1920 values as they are shifted out of the A/D converters. Digital values can be shifted out far faster than analog values, so a row can be processed far faster in the digital domain -- likely many times faster than a row in the V1. (It's possible to increase this performance further by outputting multiple digital values per clock tick.)
So not only can't the EX1's EXMOR chips be fairly compared to the V1's CMOS chips, the new Z7/S270 will use 3ClearVid CMOS chips that incorporate EXMOR technology, which should greatly improve the S/N ratio. It's not known whether more than four values will be fed to the EIP chip per clock tick. Likely not.
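To put rough numbers on that readout-speed chain of reasoning, here is a quick Python sketch. The per-conversion time, the 4-ADC figure, and the 1920x1080 grid are illustrative assumptions chosen only for comparison, not Sony specifications.
Code:
import math

T_CONV = 1e-6        # assumed time for one A/D conversion, in seconds (illustrative only)
COLS, ROWS = 1920, 1080

def frame_readout_time(num_adcs: int) -> float:
    """Total frame readout time if each row needs ceil(COLS / num_adcs) serial conversions."""
    cycles_per_row = math.ceil(COLS / num_adcs)
    return ROWS * cycles_per_row * T_CONV

for adcs in (4, COLS):   # four shared converters vs. one converter per column
    t = frame_readout_time(adcs)
    print(f"{adcs:4d} A/Ds -> frame readout ~{t * 1000:7.2f} ms "
          f"(rolling-shutter skew scales with this time)")
With these made-up but plausible numbers, the column-parallel case finishes a frame hundreds of times sooner, which is the whole rolling-shutter argument in one line of arithmetic.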
__________________
Switcher's Quick Guide to the Avid Media Composer >>> http://home.mindspring.com/~d-v-c |
December 2nd, 2007, 04:05 AM | #2 |
Trustee
Join Date: Mar 2007
Location: Sherman Oaks, CA
Posts: 1,259
Does EXMOR use ClearVid's pixel pattern, with its considerably higher ratio of green sensors to blue and red than you'd find in a Bayer pattern?
December 2nd, 2007, 04:34 AM | #3 | |
HDV Cinema
Join Date: Feb 2003
Location: Las Vegas
Posts: 4,007
Quote:
Both systems get 1920x1080 images at 60Hz.
__________________
Switcher's Quick Guide to the Avid Media Composer >>> http://home.mindspring.com/~d-v-c |
December 2nd, 2007, 07:19 AM | #4 |
Trustee
Join Date: Nov 2005
Location: Sydney Australia
Posts: 1,570
Something doesn't quite ring true in that explanation. Firstly, noise is random by definition. You cannot remove the noise from one sample by subtracting the value from a second sample; it's going to be different. You can, however, take two complete samples and average them to reduce noise.
Secondly, if you're trying to read the dark-current noise from that second sample, then because the camera doesn't have a shutter and the photodiodes are still exposed to light, something like a flash could give way-off-kilter noise values and lead to a completely mangled frame. Using multiple A/D converters is one way to speed up reading the sensor, but it can have problems; just look at the HD100's early issues.
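A small numerical sketch of the statistical point being argued here (pure illustration, not a model of any particular sensor): subtracting two independent noise samples makes things worse, while averaging them helps.
Code:
import random, statistics

random.seed(1)
N = 100_000
a = [random.gauss(0, 1) for _ in range(N)]    # random noise in sample 1 (std dev = 1)
b = [random.gauss(0, 1) for _ in range(N)]    # independent random noise in sample 2

diff = [x - y for x, y in zip(a, b)]          # "subtract the second sample"
avg  = [(x + y) / 2 for x, y in zip(a, b)]    # average the two samples instead

print("std of one sample    :", round(statistics.pstdev(a), 3))     # ~1.00
print("std after subtracting:", round(statistics.pstdev(diff), 3))  # ~1.41 (worse, variances add)
print("std after averaging  :", round(statistics.pstdev(avg), 3))   # ~0.71 (better)
The reply below turns on the distinction between this kind of independent random noise and a per-element offset that is identical in both samples, which subtraction does cancel.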
December 2nd, 2007, 07:50 AM | #5 | |
HDV Cinema
Join Date: Feb 2003
Location: Las Vegas
Posts: 4,007
Quote:
2) Obviously, an amplifier's resting (no-light) voltage is not itself "noise" -- but when all the tiny per-element variations in those voltages are imposed on a light capture (say, an even gray) they create "fixed pattern" noise, which has traditionally been a CMOS weakness. That's why these voltages are called "noise."
3) The Potential Well requires TIME to accumulate a signal from light. Since the second sample is taken immediately after the Row Reset, there is NO TIME -- hence no signal other than from the element's own amplifier. CDS is used by all modern CMOS chips, including your V1.
4) As you say, the HD100 had early PROCESS problems that were fixed in a short time. The second-generation HD250 didn't have them. Anyway, EXMOR technology is already in use in Sony DSLR cameras.
PS: The other way to speed up a chip is a faster clock rate, which requires much greater power consumption and therefore more heat dissipation, hence other problems. Not the way to go.
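A minimal sketch of the point in 2) and 3), with made-up numbers: the per-element amplifier offset appears in both reads, so CDS subtracts it out exactly, while the accumulated light signal survives.
Code:
import random

random.seed(2)
num_elements = 8
amp_offset  = [random.uniform(5, 15) for _ in range(num_elements)]  # fixed per-element offset
true_signal = [100.0] * num_elements                                # an even gray patch

first_read  = [s + o for s, o in zip(true_signal, amp_offset)]      # signal + offset (after integration)
second_read = amp_offset[:]                                         # reset read: no integration time, offset only
cds_output  = [a - b for a, b in zip(first_read, second_read)]

print("raw read:", [round(v, 1) for v in first_read])   # varies element to element -> fixed-pattern noise
print("CDS read:", [round(v, 1) for v in cds_output])   # flat 100.0 -> pattern removed, signal kept
Any truly random noise riding on top of both reads would of course remain, as noted above; CDS only removes the repeatable, element-to-element part.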
__________________
Switcher's Quick Guide to the Avid Media Composer >>> http://home.mindspring.com/~d-v-c Last edited by Steve Mullen; December 2nd, 2007 at 06:01 PM. |
December 2nd, 2007, 09:23 AM | #6 | |
Contributor
Join Date: Apr 2005
Location: Tucson, AZ
Posts: 128
Quote:
http://ieba.wordpress.com/2007/11/29...-s270-hvr-z7u/ Carroll Lam |
December 2nd, 2007, 09:28 AM | #7 |
Inner Circle
Join Date: Mar 2003
Location: Ottawa, Ontario, Canada
Posts: 4,222
This diagram in the Sony information seems to show an A/D per column; it's on page 4.
http://www.sonybiz.net/res/attachmen...3315642481.pdf Ron Evans |
December 2nd, 2007, 05:57 PM | #8 | |
HDV Cinema
Join Date: Feb 2003
Location: Las Vegas
Posts: 4,007
Quote:
"The pixel shift interpolation technique has been traditionally used in small 3CCD camcorders. However, it normally requires the combination of all three colour element (RGB) signals to maximise resolution. If an object lacks one or more colour elements, the resolution of the object may be degraded." From my V1 book: "By combining output from all three CCDs, both horizontal and vertical resolution is increased up to 150-percent on B&W static images. Naturally, that leads to the question of how real is the extra resolution obtained by using pixel-shift technology. The complex answer is that with pixel-shift technology, effective resolution is a function of the colors, the color patterns, and the motion of objects in a scene. The claimed increase of 1.5X is far too generous. The typical value is only 115-percent. Therefore, while pixel-shift is a good solution, it is not as good as using higher resolution CCDs." From my Z1 book: "The White and Black Fence (see the attached very crude diagram) has a luma signal that varies from 2 to 6: a range of 4 that indicates a maximum resolution—as shown in the Figure 5.1. A Green and Black Fence has a luma signal that is constant at 2; a range of zero that indicates resolution will be far more limited—as shown in the Figure 5.2." Sony also adds an important addition to pixel shift: "The 3 ClearVid CMOS Sensor system is different. It can always produce maximum resolution, regardless of the balance between colour elements, thanks to its unique and sophisticated interpolation technology." There is a critical difference between "passive" pixel shift (Canon and Panasonic) and "active" -- DSP-based 2D FIR -- interpolation of "pixel shifted" signals as provided by Sony. The V1's EIP chip does the interpolation. Other cameras that use "pixel shift" do not have an "EIP" interpolation chip.
__________________
Switcher's Quick Guide to the Avid Media Composer >>> http://home.mindspring.com/~d-v-c Last edited by Steve Mullen; December 3rd, 2007 at 05:25 PM. |
December 3rd, 2007, 11:20 AM | #9 |
Trustee
Join Date: Mar 2004
Location: Milwaukee, WI
Posts: 1,719
I would even go as far as to say that if your RGB color lacks any green component, then pixel shift isn't going to do anything at all. There are a lot of colors in the world that can be made up of only blue and red components. In fact, 2/3rds of the colors.
December 4th, 2007, 05:13 PM | #10 | |
Trustee
Join Date: Mar 2007
Location: Sherman Oaks, CA
Posts: 1,259
Quote:
Maybe I'm missing something, but your response, while interesting, doesn't seem to answer the question of whether the EX1's EXMOR chip uses a higher ratio of green pixels to blue and red pixels than a Bayer pattern does.
December 4th, 2007, 05:18 PM | #11 |
Major Player
Join Date: Feb 2006
Location: Philadelphia
Posts: 795
|
The EX1 has three chips with 1920x1080 photosites each, so the ratio of green pixels to red and/or blue pixels is 1:1 - I believe that is always the case with 3-chip cameras.
__________________
My latest short documentary: "Four Pauls: Bring the Hat Back!" |
December 4th, 2007, 05:21 PM | #12 | |
Wrangler
Quote:
Want more proof? In the NTSC system, when the color TV standards were devised, it was determined that green is the predominant color in most average scenes. Green is not actually transmitted, because that would take up too much bandwidth. Instead, color-difference signals are created for the smaller red and blue components, and whatever portion of the luma is left over must be green. It was a clever way to create color TV without using a larger bandwidth.
-gb-
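Here is a sketch of that "whatever is left over must be green" idea, using Rec. 601 / NTSC-style luma weights. The encode/decode functions only illustrate the arithmetic, not any broadcast spec in full.
Code:
KR, KG, KB = 0.299, 0.587, 0.114     # Y = KR*R + KG*G + KB*B

def encode(r, g, b):
    y = KR * r + KG * g + KB * b
    return y, r - y, b - y           # transmit luma Y plus (R-Y) and (B-Y) only

def decode(y, r_minus_y, b_minus_y):
    r = y + r_minus_y
    b = y + b_minus_y
    g = (y - KR * r - KB * b) / KG   # the "left over" part of luma must be green
    return r, g, b

rgb = (0.25, 0.60, 0.10)
print(decode(*encode(*rgb)))         # ~(0.25, 0.60, 0.10): green recovered without ever being sent
Only Y and the two difference signals travel; the receiver solves for green, which is why the biggest luma contributor never needs its own channel.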
December 4th, 2007, 06:55 PM | #13 |
Inner Circle
Join Date: Jan 2006
Posts: 2,699
|
Thomas's point is an interesting one, but in practice the response curves of the sensors have considerable overlap in spectral response - it would be wrong to think of the responses as three mutually exclusive blocks. Hence, very few light sources are ever likely to stimulate only one of the three colours. (And that is why I understand green to be pixel shifted with relation to red/blue - its spectral response overlaps both of the others.) That said, the advantages of pixel shift will undoubtedly be at their best for black-and-white images (such as a test chart.........), and at their worst when highly saturated colours are involved. (Which may lessen its benefits for green-screen shooting.)
I agree strongly with Steve's previous post: "By combining output from all three CCDs, both horizontal and vertical resolution is increased up to 150-percent on B&W static images. .......The complex answer is that with pixel-shift technology, effective resolution is a function of the colors, the color patterns, and the motion of objects in a scene. The claimed increase of 1.5X is far too generous. The typical value is only 115-percent." But putting numbers to the increase is of limited value. Even if you accept it will be between 115% and 150%, that says nothing about the MTF value at that detail level - the contrast of the fine detail.
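The resolution-versus-contrast distinction fits in one line of arithmetic. A quick sketch using Michelson contrast, the usual way contrast at a given detail frequency is expressed; the two cameras and their readings are entirely made up.
Code:
def michelson_contrast(i_max: float, i_min: float) -> float:
    """Contrast of a fine bar pattern: 1.0 = full black/white swing, 0.0 = detail gone."""
    return (i_max - i_min) / (i_max + i_min)

# Two imaginary cameras that both "resolve" the same fine pattern (made-up readings):
print("camera A, MTF at that frequency:", michelson_contrast(0.60, 0.40))  # ~0.20 -> visible detail
print("camera B, MTF at that frequency:", michelson_contrast(0.52, 0.48))  # ~0.04 -> barely-there mush
Camera A would look crisper even though both could quote the same limiting-resolution number.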
December 5th, 2007, 02:01 AM | #14 |
Trustee
Join Date: Mar 2007
Location: Sherman Oaks, CA
Posts: 1,259
Of course that seems absolutely logical. Except that the FX1 and FX7 both use three ClearVid sensors, purportedly the same sensor used in the HDR-HC3. (And one of the design characteristics of ClearVid is more green pixels than found in other sensor designs.) So yes, you illuminate a point that has me scratching my head... and also feeling like I worry about ClearVid too much.
December 5th, 2007, 02:51 AM | #15 | |
Major Player
Join Date: Nov 2007
Location: Stockholm, Sweden
Posts: 462
Quote: