What camera has the biggest sensor today?

Peter Berger
January 7th, 2012, 08:46 AM
Is there something bigger than a full-frame sensor today?
R Geoff Baker
January 7th, 2012, 09:49 AM
The Red Epic offers a 60mm x 45mm sensor vs the 36mm x 24mm size of a 'full frame' sensor. If Hasselblad (or any other maker of 2 1/4 square format cameras) increases their frame rates to meet video specs (and I suppose it's safe to say over time they will) then we might see cameras with sensors as large as 60mm x 70mm ... the Red Epic 617 describes a sensor 168mm x 56mm, though I don't know if such a beast is actually available.

Note that there are not a lot of lenses to use on such large imaging chips -- and prices are sure to stagger those not used to paying tens of thousands of dollars per ...

Cheers, GB

Arnie Schlissel
January 7th, 2012, 03:54 PM
> The Red Epic offers a 60mm x 45mm sensor vs the 36mm x 24mm size of a 'full frame' sensor.

Sorry, Geoff, but that's not correct. If you look at the current specs on the Red Epic page, the version that finally made it to market has a sensor that's 27.7mm x 14.6mm, close to the size of a Super 35 frame.
Epic (http://www.red.com/products/epic)

The Phantom 65 has a sensor that's close in size to the negative used in the old 65/70mm system: it's 52.1mm x 30.5mm.
Phantom 65 (http://www.visionresearch.com/Products/High-Speed-Cameras/Phantom-65/)

R Geoff Baker
January 7th, 2012, 07:57 PM
Thanks for the correction -- I admit I'm not a close follower of Red, I was just recalling their original announcements. So I guess we'll have to wait on Hasselblad ...

Cheers, GB

Arnie Schlissel
January 8th, 2012, 11:14 AM
> So I guess we'll have to wait on Hasselblad ...

Be prepared for a very long wait! ;)

Shaun Roemich
January 8th, 2012, 09:10 PM
Oh joy... I look forward to even less footage being in focus... (tongue planted FIRMLY in cheek)

Mark Donnell
May 7th, 2012, 12:41 PM
This is an interesting subject that I have just recently been researching. For a long time I couldn't figure out why lenses for 35mm cameras were so much larger and more expensive than the lenses included with many under-$10,000 camcorders. The answer is sensor size. Most video cameras in this price range use 1/2" or smaller CCD or CMOS sensors. These are much smaller than the Four Thirds, APS-C or full-frame (35mm) sensors used in digital still cameras, so lenses for these camcorders can be much smaller and easier to make. If you go to Wikipedia and search for APS-C, there is a very good chart of relative sensor sizes. It is an unfortunate truth that if you want a large sensor, with its low noise and better definition, you will pay dearly for the necessary lenses.

Steve Game
May 7th, 2012, 01:38 PM
> It is an unfortunate truth that if you want a large sensor, with its low noise and better definition, you will pay dearly for the necessary lenses.

Is it true in practice that large sensors give better definition? For instance, the EX1 generally gives a sharper image than the FS100. They are similarly priced cameras, the half-inch EX sensors using cell sizes well within current CMOS manufacturing capability. As for the pixel-skipping SLRs, they struggle to provide much above 600 lines pph. That includes the 5D2 with its ridiculous full 35mm stills-frame sensor much beloved by the bokeh fans. They also add noise in the form of moire and resolution downconversion artifacts.

Brian Drysdale
May 7th, 2012, 01:52 PM
The Phantom 65 could be the largest at 52.1mm x 30.5mm according to the spec sheets.
Vision Research's Phantom 65 Camera - Vision Research (http://www.visionresearch.com/Products/High-Speed-Cameras/Phantom-65/)
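[Editor's note: for anyone trying to keep the dimensions straight, here is a rough back-of-envelope sketch in Python comparing the sensor sizes quoted so far. The 60mm x 70mm entry is only Geoff's hypothetical medium-format video chip, not a shipping product.]

```python
import math

# Sensor sizes quoted in the thread, in mm (width, height).
sensors = {
    "'Full frame' 35mm stills":        (36.0, 24.0),
    "Red Epic (as shipped)":           (27.7, 14.6),
    "Phantom 65":                      (52.1, 30.5),
    "Geoff's hypothetical 2 1/4 chip": (60.0, 70.0),  # not a real product
}

ff_diag = math.hypot(36.0, 24.0)  # full-frame diagonal, the usual reference

for name, (w, h) in sensors.items():
    diag = math.hypot(w, h)
    print(f"{name:34s} {w:5.1f} x {h:4.1f} mm  "
          f"diagonal {diag:5.1f} mm  crop factor {ff_diag / diag:.2f}")
```

Crop factors below 1.0 simply mean the chip is larger than a full-frame stills sensor.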
Chris Medico
May 7th, 2012, 01:55 PM
> Is there something bigger than a full-frame sensor today?

Sorry to answer a question with a question, but... why are you looking for a larger than full-frame 35mm sensor? Can you share any information about your need for such a large imager?

Mark Donnell
May 7th, 2012, 09:01 PM
Interesting points, Steve. I had always thought that the reason the more expensive digital cameras could operate at very high ISOs was their larger sensor size. As for resolution, I don't really know what the most important factors are. Is a larger sensor better for increased resolution?

Steve Game
May 8th, 2012, 03:15 AM
> I had always thought that the reason the more expensive digital cameras could operate at very high ISOs was their larger sensor size. As for resolution, I don't really know what the most important factors are. Is a larger sensor better for increased resolution?

Mark,
Light sensitivity is largely dependent on the size of the photon sensor sites, i.e. the pixels. Therefore it stands to reason that for a given number of sensor sites, a larger sensor will have larger sites and therefore greater sensitivity.

The problem is that the market does not offer the same technology in all cameras. Most current 1/2 and 2/3 inch pro cameras have three sensors, each with 1920x1080 resolution. This means that the optical resolution is a good match to the output format. Three large sensors would be prohibitively expensive and totally impractical for lens design, so single sensors are the norm, with Bayer filters or similar colour matrixing filters printed on the front of the sensor. The native resolution needs to be greater than the desired output to allow both luminance sampling at full resolution and chroma at 1/2 that or less. This immediately results in a compromise in the design of the optical low pass filter, as it has to let the maximum resolution through, which creates artifact issues with the lower-resolution colour. With purpose-designed sensors and processing this can be controlled, e.g. the F3 makes a good job of the sensor's output, but that comes at a high (money) cost. The net result is that the output resolution is a compromise.

This performance is further degraded in most SLRs that offer a video capability. Here, the sensor is designed for the camera's primary role, i.e. stills photography. So the only practical way to reduce the typically 10-20M pixel images to the roughly 2M pixels required for HD video is to pixel-skip both columns and lines. This increases artifacts and tends to introduce severe moire patterning from any regular patterns in the scene. Manufacturers try to control this by reducing the sharpness in the subsequent digitisation, which is why many of the current SLRs struggle to give a genuine output of much above 600 lines pph. The reason why Vimeo and other video sites are crowded with blurry, shallow-DOF clips from large-sensor cameras is that real detail of the quality produced by three-chip cameras would be compromised by the poor detail performance of single-chip SLR cameras.
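[Editor's note: Steve's point about pixel skipping and moire is easy to demonstrate numerically. A minimal sketch (Python/NumPy) with made-up chart and photosite numbers, showing how detail above the skipped readout's Nyquist limit folds back as a false coarse pattern:]

```python
import numpy as np

# One row of photosites looking at a chart of fine vertical lines.
# The counts are illustrative only (5184 is a typical stills-sensor width).
n_photosites = 5184
x = np.arange(n_photosites)
chart_cycles = 2000                       # line pairs across the frame width
signal = np.sin(2 * np.pi * chart_cycles * x / n_photosites)

def dominant_cycles(samples):
    """Strongest spatial frequency present, in cycles across the frame width."""
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    return int(np.argmax(spectrum[1:]) + 1)

full_readout = signal                     # read every photosite, downconvert later
skipped      = signal[::2]                # read every other photosite ("pixel skipping")

print("full readout sees detail at ~", dominant_cycles(full_readout), "cycles")  # ~2000
print("skipped readout sees it at  ~", dominant_cycles(skipped), "cycles")       # aliased, ~592
```

The 2000-cycle chart detail reappears as a coarse false pattern of roughly 592 cycles in the skipped readout; that is the moire Steve describes, and once it is in the data no later processing can tell it apart from real scene content.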
Mark Donnell
May 8th, 2012, 02:37 PM
Other things are starting to make sense. A number of users have complained that cameras with full-size 1920x1080 chips are not so good at recording 720p video. Ideally, it seems that you should be shooting video at the same frame size that your chip (or chips) was designed to shoot. That may explain why my HPX-170 does such an outstanding job with 720p. It also explains the higher 1080p resolution of 3-chip camcorders using 1920x1080 chips.

Kevin McRoberts
May 8th, 2012, 03:38 PM
Canon develops world's largest CMOS sensor: Digital Photography Review (http://www.dpreview.com/news/2010/8/31/canonlargestsensor)

Andrew Bower
May 8th, 2012, 08:43 PM
> Is it true in practice that large sensors give better definition? For instance, the EX1 generally gives a sharper image than the FS100. They are similarly priced cameras, the half-inch EX sensors using cell sizes well within current CMOS manufacturing capability.

Steve, I am intrigued by your comment. Would this still be true if one captured the FS100's output via HDMI? I find that the EX compression algorithm gets slightly soft when dealing with lateral movement; is this evident on the FS100 as well? I guess my question is really whether the 'sharpness' or lack thereof is due to the sensor, the internal compression, or perhaps both (or just that I need new glasses).

Andrew

David Heath
May 9th, 2012, 04:06 AM
> Steve, I am intrigued by your comment. Would this still be true if one captured the FS100's output via HDMI?

Yes, the EX is likely to be sharper. It's difficult (for a 1080 output) for a single-chip design to equal, let alone better, a design of three chips, each of 1920x1080. There, for every output pixel, unique red, green and blue values are available. Use a single chip (as is necessary once you start getting much larger than 2/3") and you're into the whole world of deBayering, and different colours having differing resolutions. It could theoretically be overcome by chips with large numbers of photosites, but then it becomes increasingly difficult to read out the whole chip at up to 60 times a second. That's why still cameras have to resort to reading out only a percentage of the total available photosites every frame.

> I find that the EX compression algorithm gets slightly soft when dealing with lateral movement; is this evident on the FS100 as well?

Is it really the compression algorithm, or just the natural effect of motion blur?

As far as lenses, big sensors and sensitivity go, the ONLY factor that ultimately determines how a camera works in low light is the physical size of the front element of the lens - which may not sound logical until you think about it. (Assuming a given basic technology and resolution.) The reason is that if you quadruple the area of a sensor, it may well lead to an increase of 2 stops in the ISO rating, keeping all else equal. But think of the other implications. For the same angle of view, it will mean the focal length of the lens must be doubled. If the same diameter of lens is kept, that means that when expressed in f-stops, the max aperture of the lens will be two stops down on the first (smaller-sensor) case. Keep the same diameter of lens and you're back to square one - two stops better camera sensitivity, but two stops less lens. Same overall performance.

It's usual to think of a doubling of ISO rating meaning much better low-light performance. It's true - but ONLY at the same f-stop in each case. In the above example, that would mean quadrupling the area of the lens as well as the sensor. That means a lot of extra glass, and I'll leave you to guess how much extra weight and cost...
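[Editor's note: David's argument can be checked with a few lines of arithmetic. A sketch with purely illustrative numbers (the 20mm focal length and 20mm front element are made up):]

```python
import math

# Compare a camera to one whose sensor has 4x the area, per David's example.
area_ratio = 4.0                               # big sensor has 4x the area
iso_gain_stops = math.log2(area_ratio)         # bigger photosites: ~2 stops more ISO

# Same angle of view: focal length scales with the sensor's linear size.
linear_scale = math.sqrt(area_ratio)           # 2x
focal_small, focal_big = 20.0, 20.0 * linear_scale   # hypothetical focal lengths, mm

# Keep the same physical front-element diameter on both lenses.
front_element = 20.0                           # hypothetical diameter, mm
f_small = focal_small / front_element          # f-number = focal length / diameter
f_big   = focal_big / front_element
lens_loss_stops = 2 * math.log2(f_big / f_small)   # light gathered scales with 1/f-number squared

print(f"ISO advantage of the big sensor:  +{iso_gain_stops:.0f} stops")
print(f"Lens penalty at the same diameter: -{lens_loss_stops:.0f} stops")
print(f"Net low-light difference:          {iso_gain_stops - lens_loss_stops:+.0f} stops")
```

The two effects cancel exactly, which is David's point: to actually gain in low light, the front element has to grow along with the sensor.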
Steve Game
May 9th, 2012, 04:28 AM
Andrew,
The sensor in the FS100 (the same as the F3) is stated to be 2440x1373 (effective). (The Alexa is similar at 2880x1620, so it's in the right ballpark.) So the FS100 luminance resolution will be largely determined by the green content, which is half the total, i.e. 1220x690, augmented by the red and blue content. The first stage of processing is to 'demosaic' the sensor data to produce the raw video, which can be made available at the HDMI port. This is subsampled to 4:2:2 (4:4:4 on the F3, but as the chroma source data is lower resolution than the luminance, it cannot provide genuine full resolution). The EX1R will provide 4:2:2 video at its HDMI port based on chroma sourced from three full 1920x1080 sensors. So in simple terms, the luminance on the FS100 can represent 610x345 cycles and the chroma can be somewhere between 610x345 and 305x173 cycles depending on the proportions of primary colours. The EX1R represents 960x540 cycles equally for all colours.

Now, practice is not that simple, as there are other issues, and of course both cameras are commercial products, each designed to address (and protect) a particular market. All digital cameras should have an optical low pass filter to limit the detail arriving at the sensor to below the Nyquist limit of the sensor. That's fine for a three-chipper like the EX1R, but the FS100 has a chroma resolution ranging between 1/2 and 1/4 of the luminance, so a single filter will be a compromise between maintaining the luminance detail and preventing excessive chroma sampling artifacts (the so-called Bayer pattern signature of single-chip cameras).

The other issue is one of noise. In more recent times, some of the shine has rubbed off the EX cameras' reputation as newer products have shown better low-light performance. That's where large sensors really score: with less noise, DCT and interframe compression doesn't have to sacrifice detail to stay within the available bandwidth. So for low-light clips, the large-sensor cameras may well record better detail despite their theoretically lower-resolution front ends.

These are just my thoughts so they could be wrong - stands back for flaming from the experts here. :¬}

Oops - this post was started before David commented, so it may repeat some of his comments, or contradict them.

Steve Game
May 9th, 2012, 07:41 AM
> As far as lenses, big sensors and sensitivity go, the ONLY factor that ultimately determines how a camera works in low light is the physical size of the front element of the lens - which may not sound logical until you think about it. (Assuming a given basic technology and resolution.) ...

David,
I've now had time to read your post. One question: if I read you correctly, the same 'basic technology and resolution' means that each photosite is about 3 microns square on a 1/2 inch sensor and about 10 microns square on an S35 sensor (of the 2440x1373 resolution used to get a 'full HD' output). That's about three times the surface area, so the SNR would be that much better. If the same 'basic technology' meant 3 micron sites, then the basic resolution would be about 7963x4480, which is over 35M pixels. The SNR would be no better, but there would be enormous processing overheads to get it down to 1920x1080, unless the inadequate pixel-skipping techniques of SLRs were used. Have I understood your comment correctly?
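[Editor's note: the arithmetic behind Steve's cycle counts can be laid out in a few lines. A rough sketch (Python) that treats luminance as coming only from the green photosites, which, as David notes in his reply below, is a pessimistic assumption; small rounding differences from Steve's figures are expected.]

```python
# Back-of-envelope version of Steve's figures (photosite counts from the thread).
fs100_w, fs100_h = 2440, 1373          # quoted effective photosites (Bayer)
ex1r_w, ex1r_h   = 1920, 1080          # three chips, one full grid per colour

# Half of a Bayer sensor's photosites are green; two photosites per cycle.
green_w, green_h = fs100_w // 2, fs100_h // 2
fs100_luma_cycles  = (green_w // 2, green_h // 2)    # ~610 x ~343 (Steve rounds to 345)
fs100_chroma_floor = (green_w // 4, green_h // 4)    # worst case, roughly 305 x 172

ex1r_cycles = (ex1r_w // 2, ex1r_h // 2)             # 960 x 540 for R, G and B alike

print("FS100 green-only luma cycles:", fs100_luma_cycles)
print("FS100 chroma, worst case:    ", fs100_chroma_floor)
print("EX1R cycles (all colours):   ", ex1r_cycles)
```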
David Heath
May 9th, 2012, 08:04 AM
> The sensor in the FS100 (the same as the F3) is stated to be 2440x1373 (effective). (The Alexa is similar at 2880x1620, so it's in the right ballpark.) So the FS100 luminance resolution will be largely determined by the green content, which is half the total, i.e. 1220x690, augmented by the red and blue content. ... So in simple terms, the luminance on the FS100 can represent 610x345 cycles and the chroma can be somewhere between 610x345 and 305x173 cycles depending on the proportions of primary colours. The EX1R represents 960x540 cycles equally for all colours.

In practice, the F3 and FS100 manage a lot better than the numbers you quote (and 610x345 cycles is another way of saying 690 lpph). The reason is that whilst green is indeed the largest contributor to luminance, red and blue do both contribute. Without going into too much technicality, it means that a good deBayering algorithm should give luminance resolution corresponding to about 80% of the sensor's linear dimensions. (Practically, it tails off, so it's not possible to give an exact number.)

So if you want 1920x1080, working backwards means that you'd expect a sensor of about (1.25x1920)x(1.25x1080) - or about 2400x1350 - to give that sort of resolution. Use less, and the resolution will be less than optimum; use more and there are improvements, but at the expense of reading extra data off the chip, with power, heat etc. penalties. It's no accident that Sony and Arri chose the figures they did!

As you say, this is true for LUMINANCE - chrominance is another matter, and it's inherent in a Bayer sensor that chrominance resolution will be less than luminance. But resolution is one thing, aliasing is another. It follows from luminance resolution being higher than chrominance that you may get chrominance aliasing at resolutions not much higher than the wanted luminance band - and as such it will be impossible to design an OLPF good enough to get rid of it.

Assuming a 2400x1350 sensor, think of a set of white/black horizontal lines, 1350 in total - so 675 white, 675 black - effectively one line per sensor row. In the extreme case you will get alternate rows of the sensor white, alternate rows black. But alternate rows are R,G,R,G,R and G,B,G,B,G. Hence only red and green photosites will give a response - OR only blue and green if the pattern is shifted by a row. It's pretty easy to see why you'll get significant coloured aliasing around this spatial frequency - and it will be irrespective of the deBayering method used.

And look at a chart of the F3 and that's exactly what you do get. Look at a chart from the FS100, and you don't. You see aliasing at the corresponding point - but it's not coloured. I've said in another thread that (contrary to popular belief) the only explanation that makes sense to me is that the F3 and FS100 don't share the same sensor - that the FS100 must have one with a much higher photosite count than the F3. If anyone can explain to me a mechanism whereby the FS100 can avoid coloured aliasing around 1350 lpph if it does have the same sensor as the F3, I'd be very interested to hear it. I don't believe it would be physically possible without severely reducing the overall resolution. (Which measurements don't show.)
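[Editor's note: David's alternate-row argument can be spelled out with a toy Bayer patch. A small NumPy sketch; the 8x8 patch and the idealised black/white rows are illustrative only.]

```python
import numpy as np

# A toy RGGB Bayer patch: which colour filter sits over each photosite.
rows, cols = 8, 8
cfa = np.empty((rows, cols), dtype='<U1')
cfa[0::2, 0::2] = 'R'; cfa[0::2, 1::2] = 'G'   # even rows: R G R G ...
cfa[1::2, 0::2] = 'G'; cfa[1::2, 1::2] = 'B'   # odd rows:  G B G B ...

# Horizontal black/white lines, one line per sensor row (David's extreme case),
# in both possible alignments relative to the sensor rows.
for phase, lit_rows in (("white lines on even rows", slice(0, None, 2)),
                        ("white lines on odd rows",  slice(1, None, 2))):
    lit = cfa[lit_rows, :]
    counts = {c: int(np.sum(lit == c)) for c in "RGB"}
    print(phase, "-> photosites receiving light:", counts)

# One alignment excites only R and G sites, the other only G and B, so the raw
# data looks like a colour change rather than fine monochrome detail: coloured
# aliasing that no deBayering algorithm can undo.
```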
David Heath
May 9th, 2012, 08:28 AM
> David, I've now had time to read your post. One question: if I read you correctly, the same 'basic technology and resolution' means that each photosite is about 3 microns square ... Have I understood your comment correctly?

By "same basic resolution" I meant (say) 1920x1080 (or whatever) in each case; by "same basic technology" I meant not comparing (say) CMOS with CCD etc. Hence, yes, for the larger sensor the photosites will be bigger, yes, you'd therefore expect better SNR, and yes, you'd therefore expect a higher ISO rating.

The important part is that that translates to correspondingly better low-light performance if - and only if - the comparison is done at THE SAME F-STOP. Now, because the sensor is bigger, to maintain the angle of view the focal length of the lens must be longer, so by the laws of optics, TO KEEP THE SAME F-STOP the diameter of the lens must be correspondingly bigger as well. (If l = focal length and d = diameter of the lens, then f-number = l/d by definition - F-number - Wikipedia, the free encyclopedia (http://en.wikipedia.org/wiki/F-number).) That's why to get better low-light performance you need a bigger-diameter lens, period. Keep it the same, and optically, as you vary sensor size, the ISO rating will go up and the f-stop go down in proportion - and performance stays the same!

Incidentally, in your example (3 micron v 10) those represent linear distances. For area, the corresponding differences are the squares - 9 versus 100 - so the difference is well over 10x, not 3x.

Steve Game
May 9th, 2012, 04:46 PM
Interesting info. Thanks. So the case of pixel-skipping SLRs is different, as although the total sensor size is larger than a conventional camera's, only a portion of the total light falling on it is used for video. I have a Canon 550D which has an effective resolution of 5184x3456. If Canon skip every other pixel of the 16:9 area of the frame, then the sensor looks like 2592x1458 before debayering. That means that less than half the total light is used, and the light sites are around 1/4 the area of those on the FS100, thus increasing the noise. Incidentally, that would also explain why the setup struggles to reach 600 lines pph and still creates a mess of artifacts from any detail in the scene.

And yes, I did forget the area comparison rather than the linear sizes. It took so long to find out the resolution of the FS100 sensor. Despite Sony's hints about the sensor being the same in the FS100 and F3, they don't seem to have confirmed it in any publicly released specifications, and some commentators seem to suspect differences (yourself included).
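[Editor's note: as a sanity check on Steve's 550D numbers, here is the arithmetic in a few lines of Python. The 22.3mm sensor width is an assumed figure for a typical APS-C Canon chip; everything else comes from the posts above.]

```python
# Rough numbers behind the 550D pixel-skipping example.
still_w, still_h = 5184, 3456            # 550D effective photosites (from the thread)
video_rows = still_w * 9 // 16           # 16:9 slice of the frame -> 2916 rows

skipped_w = still_w // 2                 # every other column -> 2592
skipped_h = video_rows // 2              # every other row    -> 1458

used_fraction = (skipped_w * skipped_h) / (still_w * still_h)
pitch_um = 22.3 * 1000 / still_w         # approx. photosite pitch (22.3 mm width assumed)

print(f"video readout: {skipped_w} x {skipped_h} photosites")
print(f"fraction of all photosites actually read: {used_fraction:.0%}")   # ~21%
print(f"approx. photosite pitch: {pitch_um:.1f} microns")                 # ~4.3 um
```

Against the roughly 10-micron sites Steve estimates for the FS100-class sensor, a ~4.3-micron site has about a fifth of the area, broadly in line with his "around 1/4" estimate.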
David Heath
May 10th, 2012, 02:54 AM
> If Canon skip every other pixel of the 16:9 area of the frame, then the sensor looks like 2592x1458 before debayering. That means that less than half the total light is used, and the light sites are around 1/4 the area of those on the FS100, thus increasing the noise. Incidentally, that would also explain why the setup struggles to reach 600 lines pph and still creates a mess of artifacts from any detail in the scene.

Yes to the point about sensitivity. It's also the case that there is an unused "blank" portion around each photosite. Increase the number of photosites for a given sensor area and the percentage "wasted" by such boundary areas will also increase.

Sensitivity aside, IF your Canon was still able to look like 2592x1458 Bayer after skipping every other pixel, then it could still give comparable results to the F3 and Alexa - theoretically! That's the same number as they have in the first place. Trouble is that simply skipping every other pixel on a Bayer sensor will give only a single colour! (If the used sites are shown as capitals, you'd get something like g,R,g,R,g,R,g etc. - only the red sites actually get read.) Hence it needs to be a bit more complicated.

Even allowing for that, full deBayering and high-quality downconversion (from the 2592x1458 matrix to 1920x1080) at full frame rates is processor intensive and power hungry - witness the power consumption of such as the F3 and Alexa. Early DSLR systems used to skip several whole lines, then do something complicated within each line - it gave severe colour aliasing and asymmetrical results on charts. It was really intended to give live viewfinding initially - the idea of video from a still camera followed on.

I believe more recent systems use a form similar to what Canon do with the C300: basically reading out blocks of 2x2 and getting direct R,G,B values - no deBayering, and as the C300 chip has 1920x1080 blocks, no downconversion either. Hence the low power consumption. For "designed for stills" chips, it seems they skip BLOCKS rather than photosites, and that's why the fundamental resolution then becomes a quarter that of the sensor - so in your case it would be 1458/2, or 729 lpph, as a theoretical maximum. Apart from reducing the amount that needs to be read every frame, this also means the matrix that results will be 1296x729 (using your figures), which will need UPCONVERSION, not downconversion. Which is easier to do. All this helps explain why the power consumption of DSLRs etc. is less than such as the F3.

> Despite Sony's hints about the sensor being the same in the FS100 and F3, they don't seem to have confirmed it in any publicly released specifications, and some commentators seem to suspect differences (yourself included).

At the end of the day it's the results that are important, and the FS100 is pretty good. How it gets them is really of academic interest. My own suspicion is that the FS100 uses the same sensor as the FS700, and if that is true it's then pretty obvious why Sony would want to be coy about the details originally - it would have been evident that something like the FS700 was bound to follow... (As the C500 was an inevitability given the C300 sensor details.)
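[Editor's note: to make the 2x2-block idea concrete, here is a minimal sketch of that style of readout in NumPy. Random numbers stand in for sensor data, and the real operation happens on-chip and is certainly more sophisticated than this.]

```python
import numpy as np

def block_readout(bayer):
    """Read an RGGB mosaic in 2x2 blocks, one R,G,B triple per block.
    No deBayering step and, if the chip has 1920x1080 blocks, no downconversion."""
    h, w = bayer.shape
    blocks = bayer.reshape(h // 2, 2, w // 2, 2)         # group photosites into 2x2 blocks
    r = blocks[:, 0, :, 0]                               # top-left site of each block
    g = (blocks[:, 0, :, 1] + blocks[:, 1, :, 0]) / 2.0  # average the two green sites
    b = blocks[:, 1, :, 1]                               # bottom-right site
    return np.stack([r, g, b], axis=-1)

# A hypothetical chip with 1920x1080 blocks, i.e. 3840x2160 photosites.
mosaic = np.random.rand(2160, 3840).astype(np.float32)   # stand-in for real sensor values
rgb = block_readout(mosaic)
print(rgb.shape)          # (1080, 1920, 3) - full HD with no deBayer and no scaler

# The stills-chip shortcut David describes skips alternate blocks instead:
skipped = rgb[::2, ::2]   # a quarter of the linear resolution of the sensor
print(skipped.shape)      # (540, 960, 3) - which then needs upconversion, not downconversion
```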