May 10th, 2014, 05:15 PM | #1 |
Major Player
Join Date: Nov 2005
Location: Silver City, NM
Posts: 385
Sensor readout question
With CMOS sensors, a camera's sensor readout (still or video) obviously has to be fast enough to keep up with the highest available frame rate; in other words, the time to read out a frame has to be shorter than the frame period. It would seem that the readout time should be shorter for sensors having fewer pixels, and that the shorter the readout time, the less likely there will be rolling shutter artifacts due to the motion of the subject(s) being recorded. Are these assumptions generally true?
May 10th, 2014, 09:41 PM | #2 |
Major Player
Join Date: Oct 2008
Location: Whidbey Island
Posts: 873
Re: Sensor readout question
I think your statement is generally true. The readout rate has to be at least the product of the number of pixels being read out per frame and the frame rate.

I have just been testing my Sony FDR-AX100, which shoots 4K (3840x2160) at 30fps. It also has a high-speed mode which shoots 720p HD (1280x720) at 120fps.

4K mode: frame size = 3840 x 2160 = 8,294,400 pixels. 8,294,400 pixels x 24 bits (8 x 3 for RGB per pixel) x a frame rate of 30 = 5,971,968,000 bits/second (746,496,000 bytes/s).

HS mode: frame size = 1280 x 720 = 921,600 pixels (9 times fewer than 4K mode). 921,600 pixels x 24 bits (8 x 3 for RGB per pixel) x a frame rate of 120 (4 times that of 4K mode) = 2,654,208,000 bits/second (331,776,000 bytes/s), or 2.25 times less than 4K mode.

So in the case of this camera, the sensor readout has to be capable of about 746 MB/s. The data then gets processed and compressed for storage to the memory card at a rate of about 60 MB/s.

I have many questions about all this as well, but this is my understanding of this part of the process.

Mark
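A minimal sketch of that arithmetic in Python, assuming the 24 bits per pixel (8 x 3 RGB) figure used above; as the next reply points out, what actually comes off the sensor is usually narrower, single-channel raw data:

# Uncompressed readout data rate = pixels per frame x bits per pixel x frames per second.
# The 24 bpp figure is the assumption from the post above, not a sensor spec.

def readout_rate_bytes_per_s(width, height, bits_per_pixel, fps):
    """Return the uncompressed readout rate in bytes per second."""
    return width * height * bits_per_pixel * fps / 8

# 4K mode: 3840 x 2160 at 30 fps, 24 bpp
print(readout_rate_bytes_per_s(3840, 2160, 24, 30))    # 746,496,000 bytes/s (~746 MB/s)

# High-speed mode: 1280 x 720 at 120 fps, 24 bpp
print(readout_rate_bytes_per_s(1280, 720, 24, 120))    # 331,776,000 bytes/s (~332 MB/s)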
May 11th, 2014, 02:12 AM | #3 |
Inner Circle
Join Date: Aug 2006
Location: Efland NC, USA
Posts: 2,322
Re: Sensor readout question
To add some info to help with the data rate calculations: the image has only one channel (pure luminance) when it comes directly off the sensor; there is no color information yet. Also, many sensors are read out at 12 bits of luma data per photosite. This raw readout is just the resolution times the bit depth times the frame rate, with no extra padding for color. The data rate increases when the image is debayered and the color information is added as part of its processing further down the chain.

It's still a lot of data, but the rates are lower than the numbers above.
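As a rough comparison, here is the same calculation with a 12-bit, single-channel (pre-debayer) readout, which is closer to what comes directly off the sensor. This is a sketch only; it ignores blanking and other readout overhead that real sensors add:

# Raw (pre-debayer) readout rate, assuming 12 bits of luma per photosite
# and one photosite read per output pixel; no padding or blanking overhead.

def raw_readout_rate_bytes_per_s(width, height, bits_per_photosite, fps):
    return width * height * bits_per_photosite * fps / 8

# The 4K AX100 example from above: 3840 x 2160 at 30 fps, 12-bit raw
print(raw_readout_rate_bytes_per_s(3840, 2160, 12, 30))    # 373,248,000 bytes/s (~373 MB/s)
# i.e. half the 24-bit RGB figure; the data rate grows once the image is debayered.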
__________________
http://www.LandYachtMedia.com
May 11th, 2014, 05:03 PM | #4 |
Inner Circle
Join Date: Jan 2006
Posts: 2,699
|
Re: Sensor readout question
But the camera may not be up to reading all of that at frame rate, and typically may only read half the lines to keep the data rate manageable. A side effect is to make the rolling shutter less severe than might be predicted. (The evidence is that such a camera may also then discard half the photosites per row, but that won't affect rolling shutter.) So it's less a case of "sensors having fewer pixels" than of sensors making use of fewer pixels. In the example above, the sensor may have about 16 million photosites in total, of which only about 6 million get read each frame (and only about 3 million are actually made use of).

Secondly, it is possible for CMOS sensors to have a two-stage readout that avoids rolling shutter altogether. The charge-accumulation photosites are all read out at the same time into buffers (one buffer for each photosite), and the buffers are then read during the frame. The downside is increased complexity (hence cost), and the extra silicon on the sensor is likely to mean a smaller light-gathering area per photosite, so some impact on sensitivity. (The bigger the sensor, the less of an issue it will be.)
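A back-of-the-envelope way to see why reading fewer lines reduces rolling shutter. The per-line readout time here is a made-up illustrative value, not a figure for any specific sensor:

# Rolling-shutter skew is roughly (number of lines actually read) x (time to read one line),
# since the top and bottom rows of the frame are captured that far apart in time.

def rolling_shutter_skew_ms(lines_read, line_time_us):
    return lines_read * line_time_us / 1000.0

line_time_us = 7.0  # hypothetical per-line readout time, in microseconds

print(rolling_shutter_skew_ms(2160, line_time_us))   # all 2160 lines: ~15.1 ms of skew
print(rolling_shutter_skew_ms(1080, line_time_us))   # every other line: ~7.6 ms of skew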