October 2nd, 2006, 01:01 PM | #16 |
Obstreperous Rex
There were, I think, three different iterations of the old Hi-8 Canon A1 Digital. Using it was my first experience with Canon video. And I agree that the chosen nomenclature is already confusing enough... now we'll have to be sure to specify Canon A1 vs. Sony A1.
October 2nd, 2006, 06:05 PM | #17 |
New Boot
Join Date: Jun 2006
Location: Brisbane, Australia
Posts: 10
CMOS latitude
Sony's Bob Ott implied the CMOS sensor could increase the latitude of the image by somehow treating each pixel separately.
Very exciting - latitude is the biggest issue with current digital video (especially when shooting in the harsh Australian sun). I'd like to know whether treating the pixels individually can actually increase the latitude of the sensor. If there were a way to simulate a tiny ND filter for each pixel, that would be very cool. Could this be what Bob was talking about? Can anyone who has played with the camera comment on the latitude? From the paraglider footage it seems to be better than normal.
October 2nd, 2006, 06:22 PM | #18 |
Major Player
Join Date: Apr 2006
Location: UK
Posts: 204
I can think of one theoretical method. If you read (and reset the value of) the pixels you know are going to be bright more often than the ones you know are going to be dark, then add together all the reads within one frame period, that will increase the total latitude.
A good idea of which pixels are going to be bright could be had from the previous frame. This sounds a bit implausible in a camera, but if it were only done for small area blocks, like highlights, it might work.
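A minimal sketch of this multi-read idea in Python, with toy numbers that aren't taken from any real sensor (FULL_WELL, FRAME_TIME and the read count are all illustrative): bright pixels are read and reset several times within one frame and the partial reads summed, so the combined signal can represent more charge than a single read could hold.

Code:
# Toy model of multi-read exposure. All numbers are illustrative.

FULL_WELL = 10_000        # electrons a pixel can hold before it clips
FRAME_TIME = 1.0 / 25     # seconds per frame

def single_read(photon_rate):
    """One exposure for the whole frame; clips at the full well."""
    return min(photon_rate * FRAME_TIME, FULL_WELL)

def multi_read(photon_rate, reads):
    """Read and reset the pixel `reads` times per frame, then sum.
    Each sub-exposure clips independently, so the summed signal can
    represent up to `reads` times the full-well charge."""
    sub_time = FRAME_TIME / reads
    return sum(min(photon_rate * sub_time, FULL_WELL) for _ in range(reads))

bright = 1_000_000                 # photons/s - would clip a single read
print(single_read(bright))         # 10000 (clipped at the full well)
print(multi_read(bright, 8))       # 40000.0 - the headroom is recovered

Note the cost: the sensor has to support eight reads per frame here, i.e. spare pixel bandwidth.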
October 2nd, 2006, 10:49 PM | #19 |
New Boot
Join Date: Jun 2006
Location: Brisbane, Australia
Posts: 10
Marvin, you're right! In fact, the 'bright' pixels wouldn't even have to be read multiple times if there were a way to sense how much light had already accumulated in the sensor.
So, for example, once the sensor's pixel-light-pot was almost full it could switch off, stopping the light from 'spilling over' and overexposing the pixel. If you could monitor each pixel like that, you could stop a sensor from ever over-exposing... making a camera with enormous latitude (if not light sensitivity)! Wow, should I patent this idea? : )
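A rough sketch of how that switch-off could actually preserve the value, assuming (hypothetically) that the sensor records the moment each pixel's well fills: the fill time itself encodes the brightness, so the true exposure can be extrapolated from a clipped pixel. Same toy numbers as before.

Code:
# Toy "time-to-saturation" pixel. All numbers are illustrative.

FULL_WELL = 10_000      # electrons before the pixel switches off
FRAME_TIME = 1.0 / 25   # seconds per frame

def expose(photon_rate):
    """Return (charge, fill_time or None) for one frame. If the well
    fills early the pixel switches off, but the time is recorded."""
    fill_time = FULL_WELL / photon_rate
    if fill_time >= FRAME_TIME:
        return photon_rate * FRAME_TIME, None   # never saturated
    return FULL_WELL, fill_time                 # clipped, but timed

def reconstruct(charge, fill_time):
    """Extrapolate what the pixel would have collected in a full frame."""
    if fill_time is None:
        return charge
    return (charge / fill_time) * FRAME_TIME    # rate x full frame time

charge, t = expose(2_000_000)      # a pixel far beyond normal clipping
print(reconstruct(charge, t))      # 80000.0 electrons-equivalent

Without the recorded fill time, every clipped pixel would read the same full-well value and the true brightness would be unrecoverable - the timing caveat raised in the next reply.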
October 2nd, 2006, 10:52 PM | #20 |
Trustee
Join Date: Nov 2005
Location: Honolulu, HI
Posts: 1,961
|
There is a better way to do it, but my lawyer said not to elaborate on it with anybody, including him, until I can put together a patent. Seriously. How would you like a camera with another 5 f-stops of latitude? I've done the math, but don't have the technology to make a prototype.
October 2nd, 2006, 11:02 PM | #21 |
New Boot
Join Date: Jun 2006
Location: Brisbane, Australia
Posts: 10
Really Marcus, a better way... hmmm, are you sure it's not just the way cited above and now you're slapping a patent on it?
: )
October 3rd, 2006, 06:35 AM | #22 |
Major Player
Join Date: Apr 2006
Location: UK
Posts: 204
Provided you don't lose the timing of when the pixel switches off, that method would indeed work for latitude. Without the timing you just get a whole load of pixels all at the same value, with no way of working out what value they would have had. Given the timing, it would work.

One of the disadvantages of this method is that the degree of motion blur would, on a per-pixel basis, depend on the brightness. You could have an object with a highlight moving in front of the camera with the correct level of motion blur for the frame rate, but the highlight would be sharp and move in jagged steps. Reading multiple times would avoid this, as every pixel is exposed for the same length of time, and it also improves the SNR. The disadvantage to that, apart from electronics that have to be very clever, is that the sensor needs to have unused pixel bandwidth.

Marcus, 5 stops is a lot more latitude, and this is actually easy if you design the sensor to need more than 30 times the amount of light in order to fully expose it (see the arithmetic sketched below). You may find it useful to calculate the ISO needed to fully expose the sensor, as well as the signal-to-noise ratio of the resulting image. There are intrinsic trade-offs between the light level, the quantum efficiency of the sensor, the amount of stored charge and the noise level.

Lastly, a lawyer who tells you to write a patent yourself is not your friend. If what you have is valuable enough to justify a patent, then you need a dedicated patent lawyer.

Edit: Additionally, you should try the method with discrete components, say a single photosensor. Almost anything that can be done on silicon can be demonstrated with real components - even if it's a one-pixel model.
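For the arithmetic behind that "30 times" figure, a back-of-envelope in Python (the base ISO here is purely hypothetical): each extra stop doubles the light required, so 5 extra stops of headroom means about 2^5 = 32 times more light to fully expose the sensor, with a matching cost in effective sensitivity - which is exactly why the SNR calculation matters.

Code:
extra_stops = 5
light_factor = 2 ** extra_stops       # each stop doubles the light: 2^5 = 32

base_iso = 320                        # hypothetical base sensitivity
effective_iso = base_iso / light_factor

print(light_factor)     # 32 - roughly the "30 times" above
print(effective_iso)    # 10.0 - a very slow camera unless noise is tiny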
October 5th, 2006, 09:32 AM | #23 |
Major Player
Join Date: Jul 2003
Location: Warren, NJ
Posts: 398
Quote:
While the Canon is using an interlaced imager, I believe it is not de-interlacing. Therefore, for practical purposes, it is progressive, albeit at a lower resolution. It is in resolution, especially in 24p/24f, that the Sony has an advantage. The pics on their website highlight this.