View Full Version : Why not 3-ccd for ex. latitude?
John Dovydenas December 9th, 2003, 07:50 PM I'm not knowledgeable about video theory, or the challenges confronting its evolution, but I have two related questions. This would seem an appropriate place to post them.
1-Why do video CCDs lag so far behind digital still cameras? The Nikon D1, with a single CCD, produces a picture far superior to that of any video camera. Are the problems of achieving a certain frame rate the cause? Or is entrenchment within industry standards to blame?
2-There is so much talk of color in video and so little of exposure latitude, yet the problem seems to be the other way around. Video CCDs, even high-quality ones, don't match digital still cameras or film. Why not have a 3-CCD block with three color chips recording the shadows, midrange, and highlights (that is, for example, two chips recording two or three stops above and below normal) to create a sort of bracketing system, which is combined into one image?
I myself make documentaries without lights, so the problem may be a non-problem for one who would light regardless. However, my interest was more speculative than practical.
Rob Lohman December 10th, 2003, 06:08 PM I'll try to answer some of your questions. First, as you pointed out, these cameras need to take frame rate into consideration, which is a HUGE burden. But there are also things like:
- industry compliance (tape, formats & NLE software)
- storage requirements
- costs
DV is already running @ 3.6 MB/s, and tapes can "only" hold 60 minutes. Now if you were going to capture at 1600 x 1200, for example, the bandwidth would have to increase to at least 20 MB/s (with the current compression scheme), which would cut the tape length to about 10 minutes. Not to mention that your computer most probably won't be able to handle the bandwidth or the file sizes (72 GB per hour, for example); the arithmetic is sketched below.
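A quick back-of-the-envelope check of those figures (the 3.6 MB/s DV rate and the 1600 x 1200 frame size come from the post above; the assumption that the data rate scales linearly with pixel count at DV's compression ratio is mine, not a spec):

```python
# Rough bandwidth scaling, assuming the data rate grows linearly with
# pixel count at DV's compression ratio (an assumption, not a spec).
dv_rate_mb_s = 3.6           # DV data rate, MB/s
dv_pixels = 720 * 480        # NTSC DV frame size
hi_pixels = 1600 * 1200      # hypothetical higher-res frame

hi_rate = dv_rate_mb_s * hi_pixels / dv_pixels
print(f"Estimated data rate: {hi_rate:.1f} MB/s")                     # ~20 MB/s

tape_mb = dv_rate_mb_s * 60 * 60                                      # one 60-minute DV tape
print(f"Tape length at that rate: {tape_mb / hi_rate / 60:.1f} min")  # ~10.8 min

print(f"Storage per hour: {hi_rate * 3600 / 1000:.0f} GB")            # ~72 GB
```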
The problem with exposure latitude is that current CCDs just aren't sensitive enough to cover a wider range. There probably are some CCDs out there with better ranges, but those are just way too expensive.
Having 3 full-color CCDs won't do you much good since they all have the same sensitivity. The only way this would work is if they each had their own optical path looking at the exact same thing (with different iris / gain settings but the same shutter speed, to avoid weird motion). I don't think this will be physically possible to pull off. So we'll just have to wait for better CCDs, or shoot all the pieces separately with multiple exposure passes and then composite those together.
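For what it's worth, the compositing step Rob describes at the end can be sketched in a few lines. This is only an illustration of the general multiple-exposure idea, not any camera's actual pipeline; the two-stop spacing and the clip/noise-floor thresholds are assumptions:

```python
import numpy as np

def merge_exposures(frames, stops):
    """Merge bracketed frames (float arrays, 0..1) shot `stops` apart.

    Each frame is scaled back to a common linear exposure, then
    averaged, ignoring pixels that clipped or sank into the noise
    floor (the thresholds below are assumed, not measured).
    """
    total = np.zeros_like(frames[0], dtype=np.float64)
    weight = np.zeros_like(frames[0], dtype=np.float64)
    for frame, stop in zip(frames, stops):
        usable = (frame > 0.02) & (frame < 0.98)
        total += np.where(usable, frame * 2.0 ** -stop, 0.0)
        weight += usable
    return total / np.maximum(weight, 1)   # linear radiance estimate

# Simulate three passes of the same scene at -2, 0 and +2 stops.
rng = np.random.default_rng(0)
scene = rng.random((4, 4)) * 4.0                          # linear radiance
frames = [np.clip(scene * 2.0 ** s, 0.0, 1.0) for s in (-2, 0, 2)]
merged = merge_exposures(frames, stops=[-2, 0, 2])
```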
John Dovydenas December 10th, 2003, 07:29 PM Why not physically possible? All it would require is modifying the current manner of arranging a 3-CCD block but, as you say, changing the gain on the individual CCDs, or using some sort of ND filter in front of two of them. This would have the advantage of maintaining the same gamma. It seems very possible.
Glenn Chan December 10th, 2003, 08:38 PM Current 3-CCD prisms split light up by color. I don't think it's possible to have the prism split all colors equally.
Exposure latitude: CCD makers are definitely working on this, although at the consumer level they get more credit for making higher-megapixel CCDs (even though those are bad for DV video).
Resolution: The industry is working on this too. More high-definition cameras and more 2K/4K cameras are on the way. One problem is storage: a larger tape format is needed to record all the extra information, which would make cameras too big for consumer use. At the professional level, there still needs to be a way for all that video to be edited. HD material takes RAID arrays and lots of hard drive space to edit (though that is becoming less of a problem now).
These technical problems prevent manufacturers from getting economies of scale.
Helen Bach December 10th, 2003, 11:45 PM Well, you could have a beam splitter in the light path, and it could split the light disproportionately - e.g. 10% to one CCD, 90% to the other. That would be very similar to the viewfinder system in a 16 mm Bolex Rex, for example (which has a more even split).
The twin scan technique is quite common in still photography to enable film scanners to read the full tonal range of film: you do one scan for the shadows and one for the highlights.
Best,
Helen
John Dovydenas December 11th, 2003, 12:00 AM <<<-- Originally posted by Helen Bach : Well, you could have a beam splitter in the light path, and it could split the light disproportionately - e.g. 10% to one CCD, 90% to the other. That would be very similar to the viewfinder system in a 16 mm Bolex Rex, for example (which has a more even split).
The twin scan technique is quite common in still photography to enable film scanners to read the full tonal range of film: you do one scan for the shadows and one for the highlights.
Best,
Helen -->>>
Exactly so. You could have just such a system for video, it would seem. This is a brilliant idea, a beam splitter just as in the Bolex. If the light were split three ways, with one CCD getting 5%, another 20% and another 75%, this would create a roughly four-stop spread. If current CCDs have a latitude of 7 stops, this system would then create an image with a range of about 11 stops, on par with film. The three CCDs' images would be combined as in a dual-pass film scanner.
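The arithmetic behind that spread: each doubling of light is one stop, so the spread between the brightest and dimmest paths is log2 of their ratio. A quick check of the figures above (the 7-stop single-CCD latitude is the post's assumption):

```python
import math

splits = [0.05, 0.20, 0.75]     # beam-splitter ratios from the post
spread = math.log2(max(splits) / min(splits))
print(f"Spread between paths: {spread:.1f} stops")   # ~3.9, call it four

ccd_latitude = 7                # assumed latitude of a single CCD, in stops
print(f"Combined range: ~{ccd_latitude + spread:.0f} stops")   # ~11 stops
```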
Jeff Donald December 11th, 2003, 12:04 AM Find the processor to do that in real time, with no heat (you don't want all those chips to overheat and cause noise). Interesting idea, but not really feasible in an ENG/EFP camera design.
John Dovydenas December 11th, 2003, 12:29 AM I don't think processing power would be a problem. It seems to be a simple combination. Say you have three chips reading the light from the beam splitter described above. On the CCD reading the shadows, i.e. the darker information (and therefore receiving the most light, 75% in the above example), all the pixels registering 0-60% illumination (say) are compressed to register only 0-30% in the new image. The middle CCD would have all illumination from 20-70% (say) compressed into 30-60%. Anyway, what I mean to say is that while the calculations needed to combine the images would seem impossible in an analog environment, the operation doesn't appear to require much computation in a digital environment, and in fact seems much less processor-intensive than the compression of DV or Betacam SX, which is very involved mathematically.
The real reason I don't think it would work for an industrial camera is that the color of the image would be mediocre, as each CCD would have to capture full RGB on its own, like a single-chip camera, rather than one chip per color.
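A minimal sketch of the combination step described above, mapping each chip's usable band into a slice of the output range. The band edges are the illustrative figures from the post; the highlight band and the rule for selecting between chips are assumptions added to complete the scheme:

```python
import numpy as np

def remap(chip, src_lo, src_hi, dst_lo, dst_hi):
    """Linearly remap chip values from [src_lo, src_hi] to [dst_lo, dst_hi]."""
    t = np.clip((chip - src_lo) / (src_hi - src_lo), 0.0, 1.0)
    return dst_lo + t * (dst_hi - dst_lo)

def combine(shadow, mid, highlight):
    """Fold three CCD readouts (float arrays, 0..1) into one image."""
    out = remap(shadow, 0.0, 0.6, 0.0, 0.3)       # shadow chip: 0-60% -> 0-30%
    use_mid = shadow >= 0.6                        # shadow chip saturating
    out = np.where(use_mid, remap(mid, 0.2, 0.7, 0.3, 0.6), out)
    use_high = mid >= 0.7                          # middle chip saturating too
    out = np.where(use_high, remap(highlight, 0.0, 1.0, 0.6, 1.0), out)
    return out
```

As the post argues, this is just clipping, scaling and selection per pixel, nothing like the transform math inside a DV codec.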
Rob Belics December 11th, 2003, 08:20 AM You're assuming CCDs can record both low and high light levels, which they can't. This couldn't happen without amplification of the light, requiring more optics, plus the cost of the extra circuitry for processing all this.
At least at this time, I don't think you will ever see a film-quality video camera for less than $125,000; probably a lot more than that.
Helen Bach December 11th, 2003, 11:47 AM Rob suggested: "You're assuming CCDs can record both low and high light levels, which they can't."
Absolutely right, they can't. But that isn't how it would work if it ever got used. As I understand John's idea, the method would work by dramatically decreasing the effective speed. The chip that received the least light would be the only one that didn't clip; that one would record the highlights. The one(s) receiving the most light would record the shadow detail, with their highlights blown out.
Hey, I'm not suggesting that this is a good idea for production cameras, but there's no harm in kicking a few ideas around.
Best,
Helen
John Dovydenas December 11th, 2003, 12:00 PM Helen, why wouldn't it be a good idea for production cameras? The only problem with the scheme is the issue of color. Otherwise, why wouldn't it be effective?
Helen Bach December 11th, 2003, 08:41 PM John,
Perhaps I misrepresented myself. What I was trying to say is that I feel unable to judge whether it would be the best way forward, but I find your idea interesting nonetheless. I don't know what chips are on the horizon*, etc. My personal view is that the 'best' camera for HD origination is a 16 mm film camera, but that is just my personal view and there are many qualifications about what I mean by 'best'.
Best,
Helen
*Film is still improving as well, and there is potential for more improvement.
Mike Rehmus December 11th, 2003, 09:54 PM Just to inject a note about sensors.
Back in 1981, when my company was building the first digital cameras, there were (and I suppose still are) sensors with much more dynamic range than CCDs.
The downside was that they took a lot more light, a lot more light than a CCD. In fact, to take a picture inside a room, we had to scan an arc light right where the camera was looking.
It was a single line array, 1728 pixels long, mechanically scanned across a 35 mm focal plane, using Nikon 35 mm lenses.
I look just to my right and see one of the cameras now.
John Dovydenas December 11th, 2003, 10:05 PM Mike, did the sensors work in a different fashion than current video CCDs? It sounds like a flatbed-scanner kind of mechanism.
Mike Rehmus December 13th, 2003, 09:47 PM They were called Photo Diode sensors.
It was indeed the flatbed idea, but it predated the flatbed by several years.
We also introduced the first flatbed scanners. They were originally designed by Ricoh as scanners for a facsimile machine. We were in Japan looking for another sensor supplier since Fairchild Semi wanted out of the business.
We saw those flatbeds and ordered 100 of them on the spot in spite of the protests from Ricoh.
Sold those suckers for $7800 each. Monochrome, 8-bit, 200 dots per inch. Very little software, very little use for the images once they were captured. We even had to build the first desktop publishing software (a real kludge) because PageMaker and Ventura were just glimmers in their engineers' eyes at the time.
But boy did they sell!
Marc Young December 17th, 2003, 06:13 AM <<<-Why not have a 3-CCD block with three color chips recording the shadows, midrange, and highlights (that is, for example, two chips recording two or three stops above and below normal) to create a sort of bracketing system, which is combined into one image? -->>>
There are probably several reasons, and not just cost, although cost is the big killer. One is the problem of calibrating the different sensors to one another so that, ignoring the quantization issue, they all measure the same for a common input that does not saturate the two sensors being compared. The second must have to do with flare from the lenses or prisms, and the diffuse nature of light. Unlike a film scanner, which uses a controlled beam of light, the light coming into your camera lens is anything but controlled and spot-accurate. This straddling issue must vex the logic that decides how to combine the outputs of multiple sensors of different resolutions. It takes a lot of DSP power, even in existing products that do pixel resampling, i.e., your typical consumer and prosumer cameras, video or still. The end result in either case: a fuzzier picture, because you have to hedge your bets in compositing the output.
This seems to be an idea that has already been patented or disclosed, and that either proved to be impractical or prone to anomalies. I'll give you an example from the audio domain. Suppose you want to build a high-resolution, high-sampling-rate A/D converter. Why not combine two inexpensive 16-bit, 1 MHz sampling A/Ds to get an 18-bit or 20-bit 1 MHz converter? Because of the cross-monotonicity issue and the extraordinary difficulty of ensuring the two converters track each other. In the FFT domain, the mismatch shows up as distortion and non-linear spurs.
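To make the ADC analogy concrete, here is a small simulation of the tracking problem. The range-split arrangement, the 0.1% gain mismatch and the test frequency are all illustrative assumptions:

```python
import numpy as np

fs, n = 1_000_000, 4096
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 10_337 * t)      # full-scale test tone

def quantize(sig, bits, gain):
    """Ideal uniform quantizer over [-1, 1] with a gain error."""
    step = 2.0 / (2 ** bits)
    return np.round(sig * gain / step) * step

# Range-split scheme: one 16-bit converter handles the negative half of
# the span, the other the positive half, with a 0.1% gain mismatch.
lo = quantize(np.clip(x, -1.0, 0.0), 16, 1.000)
hi = quantize(np.clip(x, 0.0, 1.0), 16, 1.001)
y = lo + hi

# The crossover discontinuity shows up as harmonic spurs well above the
# ideal 16-bit noise floor in the windowed spectrum.
spectrum_db = 20 * np.log10(np.abs(np.fft.rfft(y * np.hanning(n))) + 1e-12)
```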
The final reason must have to do with being a purist. Engineers would rather wait for technology to catch up (and produce a sensor with high dynamic range) than settle for an ad hoc approach using multiple sensors. You only use the combinative approach when you have exhausted the non-combinative approaches, as with full-range vs. multi-driver speaker technology. (Electrostatics still can't achieve deep bass.)
Robert Silvers December 26th, 2003, 07:18 PM This could be done more easily with 2 CCDs than with 3. Yes, it is a good idea and not a new one, but I am happy with my Canon EOS 10D, and if a video camera could capture that quality, even at a lower resolution, I would be happy. I would expect Canon to lead the pack with this, since they are doing so well with digital SLRs.
This has also been done with a single CCD at half the frame rate, by changing the gain of the chip every other frame; see the sketch below. It is used by security cameras.
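That single-chip trick can be sketched simply: alternate the gain every other frame and fuse each pair into one output frame at half the rate. The gain values and the blend rule below are assumptions, not any vendor's algorithm:

```python
import numpy as np

def fuse_alternating(frames, low_gain=1.0, high_gain=4.0):
    """Fuse a stream shot with gain alternating every other frame.

    Even frames are low-gain (protecting highlights), odd frames are
    high-gain (lifting shadows); each pair yields one output frame.
    """
    fused = []
    for low, high in zip(frames[0::2], frames[1::2]):
        shadows = np.clip(high / high_gain, 0.0, 1.0)   # unclipped shadow detail
        merged = np.where(high < 0.98, shadows, low / low_gain)
        fused.append(merged)
    return fused
```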
If you go to a video trade show you see tons of this stuff.
http://www.extremetech.com/article2/0,3973,893218,00.asp