DV Info Net (https://www.dvinfo.net/forum/)
-   Sony HVR-V1 / HDR-FX7 (https://www.dvinfo.net/forum/sony-hvr-v1-hdr-fx7/)
-   -   CMOS and 24p (https://www.dvinfo.net/forum/sony-hvr-v1-hdr-fx7/76376-cmos-24p.html)

Chris Hurd October 2nd, 2006 01:01 PM

There were, I think, three different iterations of the old Hi-8 Canon A1 Digital. Using it was my first experience with Canon video. And I agree that the chosen nomenclature is already confusing enough... now we'll have to be sure to specify Canon A1 vs. Sony A1.

Stuart Mannion October 2nd, 2006 06:05 PM

CMOS latitude
 
Sony's Bob Ott implied the CMOS sensor could increase the latitude of the image by somehow treating each pixel separately.

Very exciting - latitude is the biggest issue with current digital video (especially when shooting in the harsh Australian sun).

I'd like to know if treating the pixels individually can actually increase the latitude of the sensor. If there were a way to simulate something like a tiny ND filter for each pixel, that would be very cool. Could this be what Bob was talking about? Can anyone who has played with the camera comment on the latitude? From the paraglider footage it seems to be better than normal.

Marvin Emms October 2nd, 2006 06:22 PM

I can think of one theoretical method. If you read (and reset the value of) the pixels you know are going to be bright more often than the ones you know are going to be dark, and then add together all the reads within one frame period, that will increase the total latitude.

A good idea of which pixels are going to be bright could be had from the previous frame.

This sounds a bit implausible in a camera, but if it were only done for small blocks of pixels, like highlights, it might work.
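
A minimal Python sketch of the multi-read scheme Marvin describes; the well depth, frame rate, and flux figures are invented for illustration. Eight reads buy log2(8) = 3 extra stops of highlight headroom:

Code:

# Toy simulation of the multi-read idea: bright pixels are read (and
# reset) several times per frame and the reads are summed, so their
# effective full-well capacity grows without touching the dark pixels.
# All numbers are invented for illustration.

FULL_WELL = 1000          # electrons a single read can hold before it clips

def single_read(flux, exposure):
    """One exposure: charge accumulates until the well clips."""
    return min(flux * exposure, FULL_WELL)

def multi_read(flux, exposure, reads):
    """Split the frame into several shorter reads and sum them."""
    sub = exposure / reads
    return sum(single_read(flux, sub) for _ in range(reads))

FRAME = 1 / 24            # seconds per frame

for flux in (10_000, 50_000, 200_000):   # photoelectrons per second
    print(f"flux {flux:>7}: one read = {single_read(flux, FRAME):6.0f} e-, "
          f"8 reads = {multi_read(flux, FRAME, 8):7.0f} e-")

Dark pixels give the same total either way; only the clipped highlights gain range.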

Stuart Mannion October 2nd, 2006 10:49 PM

Marvin, you're right! In fact the 'bright' pixels wouldn't even have to be read multiple times if there were a way to sense how much light had already accumulated in the sensor.

So, for example, once a pixel's 'light pot' was almost full it could switch off, stopping the light from 'spilling over' and overexposing the pixel. If you could monitor each pixel like that, you could stop a sensor from ever over-exposing... thereby making a camera with enormous latitude (if not light sensitivity)! Wow, should I patent this idea? : )

Marcus Marchesseault October 2nd, 2006 10:52 PM

There is a better way to do it, but my lawyer said not to elaborate on it with anybody, including him, until I can put together a patent. Seriously. How would you like a camera with another 5 f-stops of latitude? I've done the math, but don't have the technology to make a prototype.

Stuart Mannion October 2nd, 2006 11:02 PM

Really, Marcus, a better way... hmmm, are you sure it's not just the method cited above, and now you're slapping a patent on it?

: )

Marvin Emms October 3rd, 2006 06:35 AM

Provided you don't lose the timing of when each pixel switches off, that method would indeed work for latitude. Without the timing you just get a whole load of pixels all at the same value, with no way of working out what value they would have had. Given the timing, it would work.
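
A toy Python sketch of that reconstruction, assuming the sensor records the switch-off time for each saturated pixel; all constants and names here are invented:

Code:

# Toy reconstruction for the switch-off idea: a pixel that saturates
# records *when* it shut off, and the true brightness is extrapolated
# by scaling the full-well charge to the whole frame time.

FULL_WELL = 1000           # electrons
FRAME_TIME = 1 / 24        # seconds

def expose(flux):
    """Return (charge, cutoff_time); the pixel stops integrating at full well."""
    t_full = FULL_WELL / flux                  # time needed to fill the well
    if t_full >= FRAME_TIME:
        return flux * FRAME_TIME, FRAME_TIME   # never saturated
    return FULL_WELL, t_full                   # saturated early; keep the timing

def reconstruct(charge, cutoff_time):
    """Scale the charge as if the pixel had integrated the whole frame."""
    return charge * (FRAME_TIME / cutoff_time)

for flux in (10_000, 200_000):                 # photoelectrons per second
    charge, t = expose(flux)
    print(f"flux {flux:>7}: stored {charge:6.0f} e-, "
          f"reconstructed {reconstruct(charge, t):9.0f} e-")

Without the cutoff time, both bright pixels would just read 1000 e- and be indistinguishable.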

One of the disadvantages of this method is that the degree of motion blur would, on a per-pixel basis, depend on brightness. You could have an object with a highlight moving in front of the camera with the correct level of motion blur for the frame rate, but the highlight would be sharp and move in jagged steps.

Reading multiple times would avoid this, as every pixel is exposed for the same length of time, and it also improves the SNR. The disadvantage, apart from electronics that have to be very clever, is that the sensor needs unused pixel bandwidth.

Marcus,

5 stops is a lot more latitude, and this is actually easy if you design the sensor to need more than 30 times the amount of light to fully expose it. You may find it useful to calculate the ISO needed to fully expose the sensor, as well as the signal-to-noise ratio of the resulting image. There are intrinsic trade-offs between the light level, the quantum efficiency of the sensor, the amount of stored charge, and the noise level. Lastly, a lawyer who tells you to write a patent yourself is not your friend. If what you have is valuable enough to justify a patent, then you need a dedicated patent lawyer.
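
To make the "more than 30 times" figure concrete (this back-of-envelope arithmetic is an illustration, not Marcus's design): five stops means 2^5 = 32x the light, and a shot-noise-limited SNR grows as the square root of the collected electrons:

Code:

import math

# Each stop doubles the light, so 5 extra stops needs 2**5 = 32x the
# charge capacity. The well depth below is an invented, merely
# plausible figure for a small sensor.

stops = 5
ratio = 2 ** stops
print(f"{stops} extra stops -> {ratio}x the light to reach full exposure")

base_well = 20_000         # electrons, assumed for illustration
for well in (base_well, base_well * ratio):
    snr = math.sqrt(well)  # shot-noise-limited SNR at full well
    print(f"full well {well:>7} e-: SNR = {snr:5.1f} "
          f"({20 * math.log10(snr):.1f} dB)")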

Edit,

Additionally, you should try the method with discrete components, say a single photosensor. Almost anything that can be done on silicon can be demonstrated with real components, even if it's a one-pixel model.

David Ziegelheim October 5th, 2006 09:32 AM

Quote:

Originally Posted by Jason Strongfield
Well, just by the physical size of the sensor alone, 1/4" vs. 1/3", the Canon will give a shallower depth of field. Here's a good CCD vs. CMOS article: http://www.dalsa.com/shared/content/..._Litwiller.pdf

I hope that these cams will have a "flip" feature, so that I don't have to get an external LCD monitor to use with the Redrock M35 adapter.

Another issue is that the Canon's 24F is not true 24p progressive. Ironically, the Sony is pure 24p progressive.

Also, the Sony is lighter and smaller than the Canon.

That's a pretty old CMOS vs. CCD article (2001). All of the Canon SLRs are CMOS. The SI 1920 is CMOS. It's only a draw here because Sony, for some strange reason, used a 1/4" sensor instead of a 1/3" one. That made the viewing angle smaller and added depth of field.
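
For reference, a quick Python comparison of the nominal 1/3-inch and 1/4-inch sensor types; these are the standard nominal dimensions, and actual active areas vary a little by camera:

Code:

import math

# Nominal active areas for the sensor "type" designations, in mm (w, h).
SENSORS = {'1/3 inch': (4.8, 3.6), '1/4 inch': (3.6, 2.7)}

diag = {name: math.hypot(w, h) for name, (w, h) in SENSORS.items()}
for name, d in diag.items():
    print(f"{name}: diagonal {d:.1f} mm")

# At the same focal length, the smaller chip sees a narrower angle:
crop = diag['1/3 inch'] / diag['1/4 inch']
print(f"the 1/4-inch chip crops the view {crop:.2f}x; matching the framing "
      f"needs a shorter lens, which deepens depth of field at a given f-stop")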

While the Canon is using an interlaced imager, I believe it is not de-interlacing. Therefore, for practical purposes, it is progressive, albeit at a lower resolution. It is in resolution, especially in 24p/24F, that the Sony has an advantage. The pics on their website highlight this.


