November 10th, 2005, 03:56 PM | #1 |
Regular Crew
Join Date: Jul 2005
Posts: 62
Understanding pixel count - chip vs. image
I've tried to figure this question out, but I must not be thinking about it correctly, and I'm sure a number of you know the answer to this question, so hopefully you can help me out. My thoughts are wrong in a number of places here, so please correct me.
Do the pixels in the final image NOT match up to the pixels on the chip? Because in my head it works like this: an SD camera uses chips with a certain number of pixels that line up with SD DV (480i/p). When an HD camera comes around, to fit the higher number of pixels (1080 or 720) on the chip it has to make each pixel smaller, which theoretically means lower light sensitivity, etc.

BUT, the HVX200 can do both 1080 and 720. The Sony is just 1080. The JVC is just 720. How does the HVX do both? It seems to me that if it can do 1080, there must be a certain pixel count on the chip to make that possible. When it's in 720, how does it use the same number of pixels that it uses for the 1080 image? Does it combine pixels, does the 1080 interpolate up from 720, or is it something in the middle? Or is something totally different going on?

In my simple head, I wish it would work like this in the camera:
1080 - smallest pixel size on chip, highest resolution, lowest light sensitivity
720 - bigger pixel size on chip, lower resolution, lower light sensitivity
480 - biggest pixel size on chip, lowest resolution, highest light sensitivity

Obviously the chips in the camera don't change their pixel count depending on which mode they're in, so how does it do it? When shooting in DV mode, for example, you probably aren't getting the image and light sensitivity AS IF the chips were pixel-counted for DV resolution. Or am I just thinking about this the wrong way?

I've heard people say that the pixel count on chips shouldn't be called pixels, because people like me confuse them with the pixels in the final image. How should I think about it then? Please help. Thanks.
November 10th, 2005, 04:20 PM | #2 |
Major Player
Join Date: Nov 2004
Location: Canada
Posts: 547
Here is my attempt at an explanation. I would love to have it clarified and all its faults illuminated.
So far as I understand it, we can call the discrete imaging elements of a CCD "photosites", while the elements of the final digital image can be called "pixels".

CCD technology requires that the vertical lines be read off the chip in sequence. The horizontal dimension, on the other hand, is read off as an analog signal, as the voltage from each photosite is passed along a chain to the read-out register. The sampling frequency for the horizontal dimension will dictate the number of horizontal pixels stored, but may have little or nothing to do with the actual number of horizontal photosites on the CCD itself. However, because there is a discrete number of lines, the number of vertical samples of the image is discrete.

If this interpretation is correct, then in order to achieve both 1080 and 720 lines, the HVX must do digital interpolation in at least the vertical dimension (a rough sketch of that kind of resampling follows below). Alternatively, if the chip's "horizontal" dimension were really the "lines" in the above argument, then it's possible it's the vertical dimension of the image on the HVX that is sampled with variable frequency, and the horizontal dimension that is discrete. In that case, interpolation would occur in the orthogonal direction.

This much is certain: the "1080P" and "720P" specified in the HVX200 literature strictly refer to the DVCPRO-HD output format. Neither refers to the effective resolution of the images, which may be significantly less. The P indicates that the images were acquired from a progressive chipset.

It is interesting to note that the first working model of the HVX200 only functioned as a 1080i camcorder.

-Steve
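This isn't the HVX200's actual pipeline (Panasonic hasn't published it), just a minimal sketch of the kind of vertical resampling Steve is describing, with a made-up native row count purely for illustration:

Code:
# Illustration only: a hypothetical native row count resampled to 1080 or 720
# output lines by simple linear interpolation along the vertical axis.
def resample_rows(rows, out_count):
    """Linearly interpolate a list of row values to out_count rows."""
    n = len(rows)
    if out_count == n:
        return list(rows)
    out = []
    for i in range(out_count):
        # Map output row i back to a fractional position among the source rows.
        pos = i * (n - 1) / (out_count - 1)
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        frac = pos - lo
        out.append(rows[lo] * (1 - frac) + rows[hi] * frac)
    return out

native = list(range(810))  # 810 is an invented native row count, not a spec
print(len(resample_rows(native, 1080)))  # 1080 lines, interpolated up
print(len(resample_rows(native, 720)))   # 720 lines, interpolated down

The same routine serves for going up to 1080 or down to 720; only the target line count changes, which is one way a single chipset could feed both output formats.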
November 10th, 2005, 05:47 PM | #3 |
RED Problem Solver
Join Date: Sep 2003
Location: Ottawa, Canada
Posts: 1,365
CCDs are indeed analogue devices. The voltage off them is proportional to the amount of light that hits the photosites, and that voltage needs to be digitized, just as you digitize an analogue video signal.
However, the photosites dictate the max rez you can pull off the CCD. In the case of the HVX200 or Z1, they're also using pixel shift to get some extra real luma resolution: although the sampling off the CCDs will be at their native rez, in the DSP, in the digital domain, you can get some extra real rez out of the pixel shift, up to about a 1.414 factor theoretically, probably a bit less in practice (a rough sketch of where that figure comes from follows below). Graeme
__________________
www.nattress.com - filters for FCP
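A back-of-the-envelope for that 1.414 figure, under the common assumption that one chip (typically the green one) is offset by half a photosite pitch both horizontally and vertically; the grid numbers are for illustration only, not a published HVX200 spec:

Code:
# Illustration only; not Panasonic's published math.
native_h, native_v = 960, 540   # hypothetical native photosite grid
# Offsetting one chip by half a pitch on both axes interleaves two such grids,
# doubling the luma sample density per unit area.
density_gain = 2.0
# Linear (per-axis) resolution scales with the square root of area density:
linear_gain = density_gain ** 0.5
print(round(linear_gain, 3))            # 1.414
print(round(native_h * linear_gain))    # ~1358 "effective" luma samples wide
print(round(native_v * linear_gain))    # ~764 "effective" luma samples tall

Doubling the sample density only raises resolution on each axis by the square root of two, which is why the theoretical ceiling is about 1.414 rather than 2, and real-world optics and processing keep it a bit lower still.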