March 22nd, 2006, 07:42 PM | #31
Regular Crew
Join Date: Dec 2003
Location: Los Angeles, CA
Posts: 69
I'm cool with not using the term 'green shifting', but I'm not sure you get that I was using it to capture the fact that the green CCD would be clocked out of phase with the red and blue CCDs. That's not pixel shift; it's a different thing. I'm fine not coining a phrase for it. It wasn't my idea, though it is a very interesting and cool one.

Are you sure the absence of green doesn't affect resolution? If the green CCD sees no light, how can the fact that it's offset by half a pixel offer any luma resolution gain? The whole idea of pixel shift is that the offset grid of information allows you to derive additional luma. I'm fine not using 'green shift' anymore, but 'sophomoric' doesn't quite capture the thinking behind it.
March 22nd, 2006, 08:06 PM | #32
Obstreperous Rex
Well, I'm just making a strong suggestion, certainly not a mandate. Call it whatever you want to call it. What I meant about the color green not affecting it is that it doesn't affect it as much as one might think. Not every pixel is a chroma pixel; you're not getting color information out of every pixel.
If you wanted to represent a curve on a piece of graph paper, and if you were limited to putting say twenty pencil points on that graph to draw the curve, you could do it, but it would be somewhat stair-stepped when you connect those dots. If you had forty pencil points to put on that graph, then you'd get a more accurate curve... in fact you could say a higher resolution curve, because you've got more information going into the representation of that curve. That's what Pixel Shift does: it gives you a much smoother curve because you've got more dots to put on that graph (sorry for the overly simplified description).

Never mind that it's the green CCD. That's not the point. The point is what Pixel Shift does, not the color of the CCD that's offset. That's all I'm trying to get across. Many folks don't realize that a CCD (charge-coupled device) can't see color anyway and isn't digital to begin with. A CCD is a monochromatic, analog device.

The people who aren't aware of that, those who simply use these cameras to create compelling and meaningful content and who prefer to talk about that instead of tiny electronic innards, are the ones I wish I had more of around here.
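The graph-paper analogy is easy to put in code. Here is a small Python sketch (purely illustrative; the test curve and sample counts are arbitrary choices, not anything a camera actually does) that rebuilds a smooth curve from 20 versus 40 evenly spaced samples and measures how far the connect-the-dots version strays from the original:

```python
import numpy as np

def reconstruction_error(n_points: int, n_eval: int = 2000) -> float:
    """RMS error when a smooth curve is rebuilt by connecting
    n_points evenly spaced samples with straight lines."""
    x_dense = np.linspace(0.0, 1.0, n_eval)
    truth = np.sin(2 * np.pi * x_dense)           # the "real" curve
    x_samp = np.linspace(0.0, 1.0, n_points)      # pencil points on the graph
    y_samp = np.sin(2 * np.pi * x_samp)
    rebuilt = np.interp(x_dense, x_samp, y_samp)  # connect the dots
    return float(np.sqrt(np.mean((rebuilt - truth) ** 2)))

print(f"20 points: RMS error {reconstruction_error(20):.5f}")
print(f"40 points: RMS error {reconstruction_error(40):.5f}")
```

For a smooth curve, doubling the sample count roughly quarters the error, which is the intuition behind the extra sample grid that pixel shift provides.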
March 22nd, 2006, 10:36 PM | #33
Regular Crew
Join Date: Dec 2003
Location: Los Angeles, CA
Posts: 69
I have read, by the way, that because of green's special status in how the human eye processes color, it is the best of the three colors to shift, and that choosing to offset the green isn't a random choice, although luma gains are to be had by shifting any of the three. One thing I've never seen addressed is what the theoretical gain would be if all three CCDs were shifted relative to each other by 1/3 of a pixel.

On a side note, I would guess that green is special because it's the color of chlorophyll; no doubt our early ancestors needed to be good at spotting predators in dense foliage, or maybe just needed to be good at figuring out which leaves to eat!
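Green's special status shows up directly in the standard luma equations, which weight the channels by the eye's sensitivity; the green channel carries well over half the luma signal in both SD and HD video, which is why offsetting the green CCD buys the most luma detail. A quick check in Python:

```python
# Standard luma weights: the eye's sensitivity peaks in green,
# so green carries the majority of perceived brightness (luma).
REC601 = {"R": 0.299, "G": 0.587, "B": 0.114}     # SD video
REC709 = {"R": 0.2126, "G": 0.7152, "B": 0.0722}  # HD video

for name, w in (("Rec. 601", REC601), ("Rec. 709", REC709)):
    print(f"{name}: green carries {w['G']:.0%} of luma")
```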
March 22nd, 2006, 10:38 PM | #34
HDV Cinema
Join Date: Feb 2003
Location: Las Vegas
Posts: 4,007
I had hoped I could simply clean up my first post, which has several errors. Nevertheless, here are my corrections. Thankfully, essentially everything I said was true; it was the numbers that were wrong.
1) Sony: Contrary to everything I've read, the Sony does NOT simply discard one field and "bob" interpolate a new 1080-line frame in its CineFrame modes. If it did, its CineFrame vertical resolution would be only about 405 TVL, not 540 TVL. I believe a "2D FIR (Finite Impulse Response) filter" is applied to one field of each frame as part of the deinterlace process. These filters can have a small or large number of "taps," where each tap is a sample. Current filters (interpolators) have up to 1024 taps, which would support a 32x32 window around each target pixel. The filter vertically scales a 960x540 field to a 960x1080 frame. Effective vertical resolution is increased by 1.4X. The result is 1080-line "interlace" video (without interlace artifacts) with an effective vertical resolution of 540 TVL.

2) Canon: Canon's 24F function likely uses Motion Adaptive deinterlacing. The CCDs are interlace-scanned at 48Hz. Because row-pair summation is employed, CCD sensitivity remains constant, and a pair of 405 TVL fields is sent to the deinterlacer, where logic measures motion between the fields. For static frames, "weave" is employed, combining both fields and thus yielding up to 810 TVL. (Shannon measured 800 TVL.) Because information from different moments in time may be combined, moving objects will have combing on their edges. A second-stage, isotropic filter is necessary to blend pixels at the edges of moving objects in order to reduce combing. The eye will likely not notice the blend, because we expect moving objects to be slightly blurred. For dynamic frames, a "2D FIR" filter is employed that scales a single 1440x540 field to a 1440x1080 frame. In the process, effective vertical resolution is increased by approximately 1.4X. The result is 540 TVL video.

Either Frame Adaptive or Region Adaptive deinterlacing could be used; resolution measurements will not reveal which. A frame-based system makes each deinterlace-mode decision for an entire frame. A region-based deinterlacer is far more complex: the smaller the region, the more of the image's resolution is preserved. Under real-world conditions a region-based deinterlacer delivers an image where only regions with movement lose vertical resolution. The eye will likely not notice this effect, because we expect moving objects to be slightly blurred.

NOTE: Because of pixel shift, for both 60i and 24F, and under both static and dynamic conditions, horizontal resolution is, according to my model, 820 TVL/ph. The lens MTF and CCD MTF, plus the anti-aliasing filter, appear to limit horizontal resolution to about 800 TVL/ph. Or the charts are limiting the measurements.

Canon's deinterlace process generates 24fps video with as much quality as is possible (within a mobile device) given that the video is obtained from interlace scanning. Deinterlacing technology, like most video processing such as pixel shift, digital noise reduction, and compression, cannot deliver consistently optimum quality under ALL conditions. Nevertheless, the more sophisticated the process, the greater the level of quality and the more consistent the results. Understanding HOW the H1 deinterlacer works fully supports subjective reports that the Canon "looks better" than the Sony (in CF25), although they both have 1080-row CCDs and both measure the same on Adam's tests.
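To make the weave-versus-bob decision concrete, here is a toy motion-adaptive deinterlacer in Python/NumPy. It is a sketch of the general technique only: the threshold, the per-pixel motion test, and the crude 2-tap vertical interpolator are stand-in assumptions, not the long multi-tap FIR filters or the proprietary logic described above.

```python
import numpy as np

def deinterlace_motion_adaptive(prev_field, curr_field, threshold=12.0):
    """Toy motion-adaptive deinterlacer (illustrative sketch only).

    prev_field, curr_field: (540, W) arrays holding the two fields of
    one interlaced frame. Returns a (1080, W) progressive frame:
    'weave' where the image is static, vertical interpolation ('bob')
    where it moves.
    """
    h, w = curr_field.shape
    frame = np.zeros((2 * h, w), dtype=np.float64)

    # Weave: interleave the two fields line by line (full vertical detail).
    frame[0::2] = curr_field
    frame[1::2] = prev_field

    # Motion measure: where the fields disagree, weaving would comb.
    # (Comparing fields directly confounds motion with vertical detail;
    # real deinterlacers use far more sophisticated tests.)
    motion = np.abs(curr_field.astype(np.float64) - prev_field) > threshold

    # Bob: rebuild moving lines from the current field alone by averaging
    # the lines above and below (a 2-tap stand-in for a long FIR filter).
    above = curr_field.astype(np.float64)
    below = np.vstack([above[1:], above[-1:]])
    interpolated = (above + below) / 2.0
    frame[1::2][motion] = interpolated[motion]
    return frame
```

Under this scheme, static regions keep the woven full vertical resolution while moving regions fall back to single-field interpolation, matching the 810 TVL / 540 TVL split described above.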
__________________
Switcher's Quick Guide to the Avid Media Composer >>> http://home.mindspring.com/~d-v-c
March 22nd, 2006, 11:30 PM | #35
Trustee
Join Date: Mar 2004
Location: Milwaukee, WI
Posts: 1,719
Hey, here's an interesting thought to build on the HVX200.

Maybe somebody should make an HD camera with six CCDs. Yes, I said six: two for green, two for blue, two for red. One chip in each R, G, B pair is pixel-shifted by 1/2 pixel horizontally and vertically. You now have an HD camera using 960x540 chips but yielding 1920x1080 unique points for R, G, and B. No variation in detail based on green chroma, and no debates about whether you can get a 4:2:2 image this way. All full-raster points have an exact RGB match no matter what color the pixel is. I think six 960x540 chips would be cheaper than three 1920x1080 chips, not to mention help with the limitations of 1/3" chips. Anyway, sorry for getting OT.
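The geometry of such a design is easy to sanity-check. A quick Python sketch (hypothetical dimensions, just enumerating sample sites in half-pixel units) counts the unique positions produced by a pair of 960x540 chips offset by half a pixel in both axes:

```python
# Count unique sample sites from two 960x540 grids, the second offset
# by half a pixel horizontally and vertically. Positions are kept in
# half-pixel units so everything stays an integer.
W, H = 960, 540
sites = set()
for chip_dx, chip_dy in ((0, 0), (1, 1)):   # chip offsets in half-pixels
    for y in range(H):
        for x in range(W):
            sites.add((2 * x + chip_dx, 2 * y + chip_dy))

print(len(sites))   # 1,036,800 sites: a quincunx (diagonal) lattice,
                    # versus 2,073,600 in a full 1920x1080 raster
```

Two diagonally offset chips interleave into a quincunx lattice of about a million unique sites per color, so filling the full orthogonal 1920x1080 raster from that lattice would still involve an interpolation step, much as pixel shift does today.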
March 23rd, 2006, 12:47 AM | #36
Major Player
Join Date: Sep 2005
Location: Pleasanton, CA
Posts: 258
|
Best, Christopher
March 23rd, 2006, 01:38 PM | #37
Trustee
Join Date: Jan 2006
Location: New York, NY
Posts: 1,267
|
I believe JVC uses a split-readout CCD technique on their HD100 camera. The artifact they have had to deal with is called SSE (split-screen effect): on some shots there is a difference in exposure between the left and right halves of the frame. Some people say the latest update has solved this. It certainly is easy to manufacture lower-pixel-count chips, but it is not so easy to make the halves act as a single one. Of course, computers have been moving to dual processors for some of the same reasons.
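Conceptually, correcting that kind of left/right mismatch is a gain-matching problem at the seam. Here is a toy Python sketch for a grayscale frame (entirely hypothetical; this is not JVC's actual firmware fix, and real corrections would be far more careful):

```python
import numpy as np

def match_halves(frame: np.ndarray, band: int = 8) -> np.ndarray:
    """Toy fix-up for a split-readout exposure mismatch: scale the
    right half so the mean level in a narrow band at the seam matches
    the left half's band. Expects a 2D (grayscale) frame."""
    h, w = frame.shape
    seam = w // 2
    left_band = frame[:, seam - band:seam].mean()
    right_band = frame[:, seam:seam + band].mean()
    gain = left_band / max(right_band, 1e-6)  # avoid divide-by-zero
    corrected = frame.astype(np.float64)
    corrected[:, seam:] *= gain               # lift or lower right half
    return np.clip(corrected, 0, 255)
```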