View Full Version : Clearvid is pretty darn sweet!
Thomas Smet October 3rd, 2006, 10:09 AM I have been putting a lot of thought into the Sony ClearVid method, and it seems pretty darn good; I can see why it works so well.
My first impression of ClearVid is that it looks a lot like the green pattern on a Bayer filter.
With a Bayer filter the colors sit in a pattern like this (R=red, G=green, B=blue):
R G R
G B G
R G R
On ClearVid the real angled pixels fall in roughly that same checkerboard pattern (I=interpolated, R=real):
I R I
R I R
I R I
So based on this I think we can expect video from the V1 to have green resolution equal to, if not better than, what a single 1920x1080 Bayer CCD could do.
What's great about the V1 is that the red and blue channels also have just as much detail, whereas on the Bayer pattern the red and blue have fewer pixels and are harder to interpolate. This should mean much more accurate color than a Bayer-pattern single-chip camera.
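The pattern comparison above can be checked with a quick count. This is purely illustrative arithmetic (the function names are mine, not Sony terminology): a Bayer mosaic puts green on half of all photosites, which happens to equal the real-photosite count of a 960x1080 rotated array.

```python
# Illustrative count only; function names are assumptions, not Sony's terms.
# A Bayer mosaic puts green on half of all photosites (a checkerboard),
# while a 45-degree "ClearVid-style" array of 960x1080 diamonds has that
# many real photosites in total.

def bayer_green_count(width, height):
    """Green occupies two sites of every 2x2 Bayer cell: half of all sites."""
    return width * height // 2

def rotated_real_count(cols, rows):
    """A cols x rows array of rotated diamonds has cols*rows real sites."""
    return cols * rows

print(bayer_green_count(1920, 1080))  # 1036800
print(rotated_real_count(960, 1080))  # 1036800, the same sample density
```

So the green sample density of the two layouts matches exactly, which is the "equal" part of the claim; any "better" would have to come from the interpolation step.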
Marvin Emms October 3rd, 2006, 10:43 AM "green resolution equal if not better than what a single 1920x1080 bayer"
The sensor has the same number of green pixels as a 1920x1080 bayer sensor and they are centered in the same positions as a bayer sensor, so I would agree with equal, but I cannot follow how you might expect either to be better than the other.
Overall resolution may be poorer than a 1920x1080 bayer, unless there is pixel shifting and post processing to take advantage of this.
Not that this is really an issue, as the ClearVid cameras currently sold only output 1440x1080, and no one is making a true 1920x1080 Bayer camcorder in any similar price range.
Thomas Smet October 3rd, 2006, 12:26 PM I do not know, but maybe overlapping pixels from ClearVid allow a better interpolation than just math done on a Bayer pattern. I do not know what method is used to figure out the in-between pixels.
How do you figure ClearVid will be lower than a 1920x1080 Bayer pattern? On a 1920x1080 Bayer array only half the sites are green (and red and blue each get just 960x540). I would think ClearVid at 960x1080 real pixels would be at least equal to that, if not slightly sharper.
Thomas Smet October 3rd, 2006, 12:35 PM The only thing I am a little confused about right now is how the chips deal with the 2:1 horizontal scaling. 960x1080 is not 16:9 but 8:9. The horizontal has to be doubled but not the vertical. From the way ClearVid has been shown to us by Sony, I would think the square shape of the pixels in the diagram would work best if the ClearVid chips were 960x540. It was pointed out in another thread that perhaps the pixels on the CMOS chips are either anamorphic or spaced apart horizontally.
From the way Sony has shown ClearVid, I would think the chips would give more of a 1920x2160 pixel array. That sure would mean a lot of detail in the vertical when it gets downsampled to 1920x1080. Downsampling 1920x2160 to 1920x1080 would also fix the aspect ratio and make it 16:9.
Marvin Emms October 3rd, 2006, 12:52 PM The information contributed by one pixel is a single value with an effective location at the middle of the pixel. Size reduces the effect of aliasing and affects light sensitivity but contributes no more actual information.
For a 2Mpixel Bayer sensor you have physically 2Mpixels of potential resolution, though luma and chroma are intermingled. If you make assumptions about chroma you can boost the luma resolution. The performance of a 2Mpixel sensor is generally somewhere between 1 and 2Mpixels of real resolution.
The performance of a 1Mpixel 4:4:4 array will only ever be 1Mpixel of real resolution.
Clearly this does not tell you which looks better, but there is much less of a technical difference than would be normal between a bayer and 3 chip camera.
If Sony pixel shifts, then the merits change. So far there has been no mention of it, and this is strange for a camera that only needs 4:2:0.
Edit to reply to above post.
"The only thing I am a little confused about right now is how the chips deal with the 2:1 horizontal scaling. 960x1080 is not 16:9 but 8:9."
The 960x1080 number is a product of how Sony counts pixels; it does not say anything about the nature of the array. Boyd has posted pictures of this. The pixels are square, the array is 16:9.
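Marvin's geometric point can be sketched with simple arithmetic. The half-pitch row spacing below is my assumption about how a 45-degree array interleaves, not a Sony specification:

```python
# Hypothetical geometry check: 960 diamond centres per row, with 1080 rows
# that interleave vertically at half the horizontal pitch (the assumption).
# Under that layout the active area comes out 16:9 even though each
# individual (rotated) pixel is square.
pitch = 1.0                    # horizontal centre-to-centre spacing, arbitrary units
width = 960 * pitch            # 960 diamond centres per row
height = 1080 * (pitch / 2)    # interleaved rows sit half a pitch apart
print(width / height)          # 960/540 = 1.777... = 16:9
```

On this reading, "960x1080" counts diamond centres, and the 16:9 shape falls out of the interleaved row spacing rather than from anamorphic pixels.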
Stu Holmes October 3rd, 2006, 06:33 PM "If Sony pixel shifts, then the merits change." I recall reading somewhere recently (a quick search has failed to nail it down) that Sony said there is no pixel-shifting going on with the V1 sensor.
Just telling it as I remember it.
Douglas Spotted Eagle October 3rd, 2006, 07:11 PM Sigh...Sony does not pixel shift. Consider this a statement, not an opinion. The Sony V1 web page very clearly spells this out, displays how it works, and explains it clean and plain.
Thomas Smet October 3rd, 2006, 07:46 PM DSE is correct. What you may have read was somebody badly using the term pixel shift when it isn't really pixel shift. Pixel shift is a very specific form of interpolation, and the V1 does not do it at all. The V1 uses the most clever form of raising the effective pixel count of any camera I have ever seen. I really do give Sony props for this one.
I would still like to know how a square arrangement of pixels can get to a 16:9 ratio, but so far I like what I see.
Ron Evans October 3rd, 2006, 08:26 PM Thomas, this is a very simple explanation, BUT: the CMOS imager is characterised by an individual address for each pixel. This is different from a CCD, which passes the charge from each pixel down to a shift register for reading.

In the Sony ClearVid 45-degree arrangement, the horizontal total pixel count is comprised of the whole pixels plus the half pixels between them, from the line above and the line below (if you follow what I am saying). When you do this the pixel count is doubled from the whole-pixel arrangement: 960 becomes 1920, yielding the 16:9 arrangement. I expect the output from the whole pixels is used directly, and the pixels above and below are added then divided by two. All this is possible because the pixels are addressed individually and the DSP has the opportunity to process this information.

Since this uses bigger pixels, you get better low-light performance and better horizontal resolution too. Square pixels of the same size in the traditional 1920x1080 arrangement would require a much bigger sensor, likely greater than 1/3 inch, inferring a larger lens etc. and greater cost. It would be interesting to know the size of the pixels in the Canon HV10.
Ron Evans
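Ron's description of the DSP creating in-between pixels can be sketched as a neighbour average. The checkerboard layout and the plain mean are my assumptions for illustration; Sony has not published the actual interpolation:

```python
# A sketch of the "created pixel" idea Ron describes. Assumptions (mine,
# not Sony's): real photosites sit on a checkerboard of the final dense
# grid, and each created pixel is the mean of its available real neighbours.

def fill_checkerboard(dense):
    """dense: 2D list with real values where (row+col) is even, None elsewhere.
    Returns a copy with each None replaced by the mean of its in-bounds
    north/south/east/west real neighbours (the diamond's four corners)."""
    h, w = len(dense), len(dense[0])
    out = [row[:] for row in dense]
    for y in range(h):
        for x in range(w):
            if dense[y][x] is None:
                nbrs = [dense[ny][nx]
                        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                        if 0 <= ny < h and 0 <= nx < w]
                out[y][x] = sum(nbrs) / len(nbrs)
    return out

# 3x3 toy example: real values at checkerboard positions, None between.
grid = [[1, None, 3],
        [None, 5, None],
        [7, None, 9]]
filled = fill_checkerboard(grid)
print(filled[0][1], filled[2][1])  # 3.0 and 7.0
```

Note that every north/south/east/west neighbour of a created site is a real site, which is why a simple mean works without chaining interpolated values.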
Brent Ethington October 3rd, 2006, 09:05 PM I'm still puzzled as to why Sony didn't just go with a 1440x1080 sensor instead of the 960x1080 that they're using - having the real pixels there instead of making up (interpolating) the missing ones seems more logical. Is it a cost or processing-power issue? Or are they still trying to differentiate from their higher-end pro cams? One explanation could be that the larger pixels yield better light-gathering characteristics, but this goes to sensor size too.
I think it's pretty obvious that, all things being equal, the FX7/V1 will outperform the FX1 and Z1 (low light TBD), but is it as good as it could've been?
Thomas Smet October 4th, 2006, 07:47 AM [quoting Ron Evans' explanation above in full]
But isn't the same true for the vertical as well as the horizontal? If you look at some of the pictures of the ClearVid chips, you can see that the in-between pixels are in the vertical as well as the horizontal position. This would then mean the chips are sampling 1920x2160 points. The diamond-shaped pixels are shown as square, and we know there are 960x1080 of them. Squares rotated 45 degrees in a 960x1080 grid give close to a 1:1 ratio. In order to get 16:9 you would either need to skip every other vertical line, or downsample the 2160 in the DSP, or use some other method I do not know about.
Ron Evans October 4th, 2006, 08:59 AM Thomas, the number of horizontal lines in the array, based on the centers of the pixels, is 1080. Put a square inside each of the diamonds and draw horizontal lines across at top and bottom: you have 1080 horizontal lines, each one square pixel high. Sony clearly states that the DSP uses information from the four surrounding photodiodes to create this in-between pixel. If you look at the following site you will see that ALL the pixels are reprocessed to a 1920x1080 square array: http://bssc.sel.sony.com/BroadcastandBusiness/minisites/HDV1080/HVR-V1U/index.html
Ron Evans
Thomas Smet October 4th, 2006, 09:38 AM You mean vertical resolution, not horizontal resolution. 1080 goes up and down.
That is the exact image I am looking at. Notice that new pixels are created at all four corners. That means not only does the 960 get doubled, but so does the 1080. If you create new pixels at all four corners, you are doubling the horizontal and vertical pixels, not just the pixels in one direction. In the image you start with a 3x3 grid at a 45-degree angle; vertical and horizontal both have 3 diamond-shaped pixels running through the middle. After the ClearVid interpolation, notice there are now 5 pixels running through the center, not only horizontally but vertically as well. The images on Sony's site clearly show the pixels as square and ClearVid doubling both horizontal and vertical. If we take these images as true, and we know there are 960x1080 pixels, that means the chips output 1920x2160 unique points. That is the only way I can see it done with the images we have so far.
The 1920x1080 4:2:2 could just be listed as what the DSP works in once the array is down sampled.
The way Sony has shown it in every ClearVid image, the chips would have to be 960x540 to keep a 16:9 ratio when ClearVid doubles the number of pixels. If the pixels are anamorphic, then why are the images Sony is giving us showing square pixels? If the space between the pixels in the horizontal direction is wider, then why are we shown images with the same distance in both directions?
I think sampling at 1920x2160 would be a great thing, because downsampling back to 1920x1080 would help reduce aliasing. It would also have a lot more detail than if 960x540 chips had been used.
I could be wrong, but if I am, then the images from Sony are not very accurate.
Stu Holmes October 4th, 2006, 09:40 AM "One explanation could be that the larger pixels yield better light-gathering characteristics, but this goes to sensor size too."
Yes, I think this is highly likely to be a significant factor, if not THE significant factor. Perhaps along with a limit on the pixel count that can be processed by the EIP engine in the time available.
And the second principal factor, in my opinion, is as you said: probably product-positioning issues with a future multi-CMOS model.
Just my opinion as ever.
Douglas Spotted Eagle October 4th, 2006, 09:43 AM [quoting Thomas Smet's post above in full]
It's an illustration, not a scientific explanation. The graph may not be accurate to the finest point; does it need to be? You aren't seeing images of pixels, you're seeing images that have representations of pixels. I don't think anyone at Sony, in their wildest dreams, would have imagined that their illustration would keep people up at night, deeply troubled, or seriously concerned about the finer points.
Relax. Worry about something more important in life, like choosing what to eat for breakfast.
Thomas Smet October 4th, 2006, 10:07 AM DSE, I'm not worried about it, and it doesn't keep me up at night. I was just curious; I had mentioned that I was slightly curious and that's all. I find nothing wrong with that.
I started this thread in support of ClearVid and hope to keep it that way.
Ron Evans October 4th, 2006, 10:10 AM My last comment!!! In the Sony illustration there are 5 horizontal lines in a diamond pattern before DSP processing, with 9 real sensor pixels. After the created pixels fill the gaps there are STILL ONLY 5 horizontal lines, now with 13 pixels that are square, with the created pixels added only to the horizontal. The array started at 960x1080 and now has 1920x1080. Could there be other options for the DSP? Of course. But this is Sony's way of explaining what they have done.
Ron Evans
Thomas Smet October 4th, 2006, 10:51 AM Dude, where the heck are you seeing these lines? Can you not see that both the vertical and horizontal are increased in the same way? You are explaining to me how ClearVid works by interpolating the in-between pixels, and I already know that part; I would not have started this thread talking about how great it is if I didn't have an idea of how it worked. You are not explaining why you think this only works in the horizontal and not the vertical. Clearly the image on Sony's website, illustration or not, is all we have to go on, and it shows equal precision in both directions. Because of the way the corners sit and make up the in-between pixels, there is no way you can boost horizontal without boosting vertical as well, because they sit together. When you create the new horizontal pixel between two real pixels, you automatically also create the vertical pixel, because the same pixel becomes a vertical pixel for another row.
Look at that same image made up of 13 new pixels. Isn't it true that the vertical is also more precise than it was before? Therefore you now have 1920x2160 new points, depending on whether this illustration is accurate; I'm just going off what is in the pictures. You really seem to be pulling a lot of info from these few images that I am just not seeing. I'm not even sure why you keep going on; DSE has even said not to take these images as 100% accurate, which I knew they were not. This thread is mostly about what is great about ClearVid, and it would be nice if we could get back to that now.
Steve Mullen October 4th, 2006, 12:07 PM It's an illustration.
1) Correct. A Japanese engineer writes up something with a diagram, which is given to marketing, who give it to the art department.
You know these are rough illustrations because the fill factor of the photodiodes is not shown!
Taking these drawings literally will give you nightmares. Start with a deep understanding of CMOS -- then flip the elements 45-degrees.
2) Contrary to many statements, one cannot read out pixels randomly -- it would screw up exposure. Rows are captured from top to bottom. It is, however, possible to OUTPUT a window from the CAPTURED pixels and read out only those.
3) Yes -- you can capture 2M from 1M if you do it right.
4) Yes -- you can create a 16:9 chip array of 960x1080 "diamonds" if you don't take the drawings of pretty diamonds too literally.
5) Remember this rule -- interpolation cannot create resolution!
Ron Evans October 4th, 2006, 01:32 PM Thomas, again this is a simplistic response, and it is the way I understand what has been described by Sony, who may have used some marketing license with the description!!!

If you look at the images, there is no increase in the number of rows in the matrix after the creation of new pixels. Extra pixels are being placed, but only in the horizontal line, the row of the matrix. No extra horizontal rows are created. You are looking at the column pixels before and after; that is not the number of horizontal rows, which remains the same for both images, passing through the centers of the original pixels horizontally. The number of rows of pixels remains the same (1080), but there are now more pixels in each row, and these extra pixels are placed on rows that already existed. There were 1080 rows before and 1080 after.

However, you are correct that the pixels in any particular column have increased, because before processing there were 960 zig-zag columns 1080 high. Think of it as 1920 vertical columns that only see a pixel on every other row: there are 1080 vertical pixels, but any one of the 1920 columns would hold only 540 without the DSP-created pixels. Another way to look at it is to say the imager has 1080 rows and 1920 columns populated by real and created pixels; the extra pixels fill in the needed vertical and horizontal positions to complete the matrix. Because of the 45-degree arrangement, the 1080 column pixels are not in one column but zig-zag. When the horizontal is changed from 960 to 1920, the extra vertical pixels are needed to maintain the 1080 rows, not to create more vertical resolution.

This is where I think Sony has used some license, and where you have your question; is this your point? I took it as I think Sony would like us to think of it: 1080 rows, with created pixels added in the horizontal direction. Rows remain the same; horizontal pixels are doubled by the DSP. Centers of pixels define rows, and centers plus the in-between positions define columns.
I understand your confusion; just draw it out on some paper.
Ron Evans
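Ron's zig-zag-column argument can be restated numerically. The checkerboard placement of real photosites on the final 1920x1080 grid is an assumption for illustration, not a Sony statement:

```python
# A numeric restatement of the zig-zag-column point, assuming real
# photosites occupy a checkerboard of the final 1920x1080 grid.
ROWS, COLS = 1080, 1920

# In any single column, real sites appear only on every other row:
real_in_one_column = sum(1 for y in range(ROWS) if (y + 0) % 2 == 0)

# Total real sites over the whole grid equal Sony's quoted 960x1080 count:
total_real = sum(1 for y in range(ROWS) for x in range(COLS) if (x + y) % 2 == 0)

print(real_in_one_column)  # 540 real sites per column, DSP fills the other 540
print(total_real)          # 1036800 = 960 x 1080 real photosites
```

This is consistent with both descriptions in the thread: 1080 rows with doubled horizontal pixels, and 1920 columns each holding 540 real plus 540 created values.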