Brian Petersen
November 10th, 2005, 03:56 PM
I've tried to figure this out, but I must not be thinking about it correctly. I'm sure a number of you know the answer, so hopefully you can help me out. My thoughts are probably wrong in a number of places, so please correct me.
Do pixels in the final image NOT match up with pixels on the chip? Because in my head it works like this:
An SD camera uses chips with a certain number of pixels that line up with SD DV resolution (480i/p). When an HD camera comes along, to fit the higher pixel count (1080 or 720) onto the chip it has to make each pixel smaller, which theoretically means lower light sensitivity, etc.
BUT, the HVX200 can do both 1080 and 720. The Sony is just 1080. The JVC is just 720. How does the HVX do both? It seems to me that if it can do 1080, then there is a certain pixel count on the chip that makes that possible. When it's in 720, how does it use the same number of pixels that it uses for the 1080 image? Does it combine pixels, does it interpolate 1080 up from 720, or is it something in the middle? Or is something totally different going on?
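Just so it's clear what I mean by "combine pixels" versus "interpolate," here's a rough Python sketch. The native raster size, the fake grayscale frame, and both resampling routines are made-up illustrations of the two ideas, not anything I know about what the HVX200 actually does internally.

import numpy as np

# Hypothetical native chip raster -- NOT a real photosite count for any of
# these cameras, just round numbers to illustrate the idea.
NATIVE_W, NATIVE_H = 1920, 1080

def fake_sensor_frame():
    """A made-up frame of photosite readings (grayscale for simplicity)."""
    return np.random.rand(NATIVE_H, NATIVE_W)

def bin_down(frame, out_w, out_h):
    """'Combining pixels': average whole blocks of photosites into one
    output pixel. Only works cleanly when the native size divides evenly."""
    h, w = frame.shape
    fy, fx = h // out_h, w // out_w
    trimmed = frame[:out_h * fy, :out_w * fx]
    return trimmed.reshape(out_h, fy, out_w, fx).mean(axis=(1, 3))

def interpolate(frame, out_w, out_h):
    """'Interpolating': resample to an arbitrary size (nearest neighbour
    here, just to keep the sketch dependency-free)."""
    h, w = frame.shape
    ys = (np.arange(out_h) * h / out_h).astype(int)
    xs = (np.arange(out_w) * w / out_w).astype(int)
    return frame[np.ix_(ys, xs)]

frame = fake_sensor_frame()
print(bin_down(frame, 960, 540).shape)      # clean 2x2 binning -> (540, 960)
print(interpolate(frame, 1280, 720).shape)  # 1080-height scaled to 720
print(interpolate(frame, 720, 480).shape)   # scaled down to SD

The thing I notice from playing with this: going from 1080 lines down to 720 doesn't divide into whole blocks of photosites (1080/720 = 1.5), so simple "combining" can't be the whole story there; some kind of resampling has to happen.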
In my simple head, I wish it worked like this in the camera (I've sketched some rough numbers right after this list):
1080 - smallest pixel size on chip, highest resolution, lowest light sensitivity
720 - bigger pixel size on chip, lower resolution, higher light sensitivity
480 - biggest pixel size on chip, lowest resolution, highest light sensitivity
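Here are the rough numbers I mean, just treating the chip as a fixed width and dividing it up by the three horizontal pixel counts. The chip width is a made-up figure (roughly 1/3"-class), not the real spec of any of these cameras; the only point is that, for a fixed chip size, each photosite's area, and so roughly the light it gathers, goes up as the pixel count goes down.

# Back-of-the-envelope: light per photosite scales roughly with photosite area
# for a FIXED chip size. The chip width below is a hypothetical figure.
CHIP_WIDTH_MM = 4.8   # made-up 1/3"-class sensor width

for label, h_pixels in [("1080 (1920 wide)", 1920),
                        ("720  (1280 wide)", 1280),
                        ("480  (720 wide) ", 720)]:
    pitch_um = CHIP_WIDTH_MM * 1000 / h_pixels   # photosite pitch in microns
    rel_area = (1920 / h_pixels) ** 2            # area relative to the 1080 case
    print(f"{label}: ~{pitch_um:.1f} um pitch, ~{rel_area:.1f}x the light per photosite")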
Obviously the chips in the camera don't change their pixel count depending on which mode they're in, so how does it do it?
When shooting in DV mode, for example, you probably aren't getting the image and light sensitivity AS IF the chips were pixel-counted for DV resolution. Or am I just thinking about this the wrong way?
I've heard people say that what gets counted on the chips shouldn't be called pixels, because people like me confuse them with the pixels in the final image. How should I think about it, then? Please help.
Thanks.