Mark Donnell
May 10th, 2014, 05:15 PM
With CMOS sensors, the sensor readout time on a camera (still or video) obviously has to be shorter than the frame period at the highest available frame rate. It would seem that the readout time should be shorter for sensors having fewer pixels, and that the shorter the readout time, the less likely there will be rolling shutter artifacts due to the motion of the subject(s) being recorded. Are these assumptions generally true?
Mark Watson
May 10th, 2014, 09:41 PM
I think your statement is generally true. The readout rate has to keep up with the product of the number of pixels being read out and the frame rate, in whichever mode that product is highest.
I have just been testing my Sony FDR-AX100, which shoots 4K (3840x2160) at 30fps. It also has a high-speed mode which shoots 720p HD (1280x720) at 120fps.
4K mode: sensor area = 3840 x 2160 = 8,294,400 pixels. 8,294,400 pixels x 24 bits (8 bits x 3 RGB channels per pixel) x 30 frames/s = 5,971,968,000 bits/s (746,496,000 bytes/s).
HS mode: sensor area = 1280 x 720 = 921,600 pixels (one ninth of the 4K mode). 921,600 pixels x 24 bits x 120 frames/s (four times the 4K frame rate) = 2,654,208,000 bits/s (331,776,000 bytes/s), so the 4K mode's rate is 2.25 times higher.
So in the case of this camera, the sensor readout would have to be capable of about 746 MB/s. The data then gets processed and compressed for storage to the memory card, which is written at a rate of 60 MB/s.
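For anyone who wants to check the arithmetic, here is a quick Python sketch (just plain arithmetic, nothing camera-specific) that reproduces the numbers above:

    # Rough data-rate check for the two AX100 modes discussed above.
    # Assumes 24 bits per pixel (8 bits x 3 RGB channels), as in this post;
    # see the next reply for why the raw off-sensor figure is lower.

    def data_rate(width, height, bits_per_pixel, fps):
        """Return (bits/s, bytes/s) for an uncompressed stream."""
        bits = width * height * bits_per_pixel * fps
        return bits, bits // 8

    uhd_bits, uhd_bytes = data_rate(3840, 2160, 24, 30)   # 4K mode
    hs_bits,  hs_bytes  = data_rate(1280, 720, 24, 120)   # high-speed mode

    print(uhd_bits, uhd_bytes)   # 5971968000 746496000
    print(hs_bits,  hs_bytes)    # 2654208000 331776000
    print(uhd_bytes / hs_bytes)  # 2.25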
I have many questions about all this as well, but this is my understanding of this part of the process.
Mark
Chris Medico
May 11th, 2014, 02:12 AM
To add some info to help with the data-rate calculations: the image coming directly off the sensor has only one channel per photosite (a single luminance value behind the Bayer filter); there is no separate colour information yet. Also, many sensors are read out at 12 bits per photosite. The RAW readout rate would therefore be the resolution times the bit depth times the frame rate, with no extra padding for colour. The data rate increases when the image is debayered and the colour information is reconstructed further down the chain.
It's still a lot of data, but the rates are lower than the numbers above.
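Redoing the 4K-mode sum on that basis (assuming one 12-bit value per photosite and no colour channels yet; the actual bit depth will vary by sensor):

    # Raw off-sensor rate for the 4K mode, assuming a single 12-bit
    # value per photosite (one Bayer sample, not yet debayered to RGB).
    bits_per_photosite = 12
    raw_bits = 3840 * 2160 * bits_per_photosite * 30
    print(raw_bits)        # 2985984000 bits/s
    print(raw_bits // 8)   # 373248000 bytes/s, i.e. about 373 MB/s
    # Exactly half the 746 MB/s figure computed with 24-bit RGB above.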
David Heath
May 11th, 2014, 05:03 PM
It would seem that the readout time should be shorter for sensors having fewer pixels, ...
A big caveat there (as you do say "still or video") is the assumption that all the photosites are being read. In the case of many still cameras with a video mode, that's not the case. In a typical case, the whole 4:3 sensor may have about 16 megapixels, of which the area used for 16:9 video may be cropped to about 12 megapixels for the aspect ratio.
But the camera may not be up to reading all of that at frame rate, and typically may only read half the lines to keep the data rate manageable. A side effect is to make the rolling shutter effect smaller than the full pixel count would predict, since the skew depends on how many rows are scanned per frame. (The evidence is that such a camera may also discard half the photosites per row, but that won't affect rolling shutter.)
So it's less a case of "sensors having fewer pixels" than of sensors making use of fewer pixels. In the example above, the sensor may have about 16 million photosites in total, of which only about 6 million get read each frame (and only about 3 million are actually made use of).
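To put rough numbers on that, here is a quick sketch; the 4608x3456 geometry is just an assumed example of a 16-megapixel 4:3 chip, not any particular camera:

    # Illustrative pixel counts for a hypothetical 16 MP 4:3 stills sensor
    # used in video mode, following the logic above.
    full_w, full_h = 4608, 3456          # ~15.9 MP, 4:3 (assumed geometry)
    crop_h = full_w * 9 // 16            # 16:9 crop -> 2592 rows, ~11.9 MP
    rows_read = crop_h // 2              # line skipping: every other row
    pixels_read = full_w * rows_read     # ~6 MP actually read per frame
    pixels_used = pixels_read // 2       # half the photosites per row kept
    print(crop_h, rows_read, pixels_read, pixels_used)
    # 2592 1296 5971968 2985984
    # Rolling-shutter skew scales with the number of rows scanned per frame,
    # so reading 1296 rows instead of 2592 roughly halves the skew.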
Secondly, it is possible for CMOS sensors to have a two-stage readout that avoids rolling shutter altogether. The charge-accumulating photosites are all transferred at the same instant to buffers (one buffer per photosite), and the buffers are then read out during the frame. The downside is increased complexity (hence cost), and the extra silicon on the sensor is likely to mean a smaller light-gathering area per photosite, so some impact on sensitivity. (The bigger the sensor, the less of an issue that will be.)
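If it helps to visualise the difference, here is a toy Python sketch (all parameters invented) of why a rolling readout skews a moving vertical bar while a buffered two-stage readout does not:

    # Toy model: a vertical bar moving right at `speed` pixels per row-time.
    # A rolling readout samples row y at time y; a buffered two-stage
    # (global) readout samples every row at time 0.
    H, W, speed, bar_w = 8, 32, 1, 3

    def frame(global_shutter):
        img = [['.'] * W for _ in range(H)]
        for y in range(H):
            t = 0 if global_shutter else y   # when row y is captured
            x = 5 + speed * t                # bar position at that time
            for dx in range(bar_w):
                img[y][x + dx] = '#'
        return img

    for row in frame(global_shutter=False):  # rolling: bar comes out skewed
        print(''.join(row))
    print()
    for row in frame(global_shutter=True):   # global: bar stays vertical
        print(''.join(row))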