December 9th, 2003, 07:50 PM | #1 |
New Boot
Join Date: Dec 2003
Location: Chicago
Posts: 10
Why not 3-CCD for ex. latitude?
I'm not knowledgeable about video theory or the challenges confronting its evolution, but I have two related questions. This seems an appropriate place to post them.

1. Why do video CCDs lag so far behind digital still cameras? The Nikon D1, with a single CCD, produces a far superior picture to any video camera. Are the demands of achieving a certain frame rate the cause? Or is entrenchment within industry standards to blame?

2. There is so much talk of color in video and so little of exposure latitude, yet the problem seems to be the other way around. Video CCDs, even high-quality ones, don't match digital still cameras or film. Why not have a 3-CCD block with three color chips recording the shadows, midrange, and highlights (that is, for example, two of the chips recording two or three stops above and below normal) to create a sort of bracketing system that is combined into one image?

I make documentaries without lights myself, so the problem may be a non-problem for someone who would light regardless. However, my interest is more speculative than practical.
December 10th, 2003, 06:08 PM | #2 |
RED Code Chef
Join Date: Oct 2001
Location: Holland
Posts: 12,514
I'll try to answer some of your questions. First, as you pointed out, these cameras need to take frame rate into consideration, which is a HUGE burden. But there are also things like:

- industry compliance (tape, formats & NLE software)
- storage requirements
- costs

DV already runs at 3.6 MB/s, and tapes can "only" hold 60 minutes. If you were capturing at 1600 x 1200, for example, the bandwidth would have to increase to at least 20 MB/s (with the current compression scheme), which would shrink a tape to roughly 10 minutes. Not to mention that your computer most probably won't be able to handle the bandwidth or the file sizes (72 GB per hour, for example).

The problem with exposure latitude is that current CCDs just aren't sensitive enough to cover a wider range. There probably are CCDs out there with better range, but those are just way too expensive. Having 3 full-color CCDs won't do you much good since they all have the same sensitivity. The only way this would work is if they each had their own optical path looking at exactly the same thing (with different iris / gain settings but the same shutter speed, so you don't get weird motion). I don't think this will be physically possible to pull off. So we'll just have to wait for better CCDs, or shoot all the pieces separately with multiple exposure passes and then composite those together.
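A quick back-of-the-envelope sketch (in Python, just to check the arithmetic; it assumes NTSC DV's 720 x 480 frame and keeps the same compression ratio) reproduces those figures:

```python
# Rough check of the bandwidth and storage numbers above.
# Assumes DV's compression ratio and bit depth stay unchanged; the
# 3.6 MB/s rate and 60-minute tape come from the post itself.

DV_RATE_MB_S = 3.6                           # standard DV data rate
DV_PIXELS = 720 * 480                        # NTSC DV frame size
TAPE_CAPACITY_MB = DV_RATE_MB_S * 60 * 60    # one 60-minute MiniDV tape

hires_pixels = 1600 * 1200                   # hypothetical higher-resolution sensor
scale = hires_pixels / DV_PIXELS

hires_rate = DV_RATE_MB_S * scale                  # ~20 MB/s
tape_minutes = TAPE_CAPACITY_MB / hires_rate / 60  # ~10.8 minutes per tape
gb_per_hour = hires_rate * 3600 / 1000             # ~72 GB per hour

print(f"data rate: {hires_rate:.1f} MB/s")
print(f"tape length: {tape_minutes:.1f} min")
print(f"storage: {gb_per_hour:.0f} GB/hour")
```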
__________________
Rob Lohman, visuar@iname.com DV Info Wrangler & RED Code Chef Join the DV Challenge | Lady X Search DVinfo.net for quick answers | Buy from the best: DVinfo.net sponsors |
December 10th, 2003, 07:29 PM | #3 |
New Boot
Join Date: Dec 2003
Location: Chicago
Posts: 10
|
Why not physically possible? All it would require is modifying the current manner of arranging a 3-CCD block but, as you say, changing the gain on the individual CCDs, or using some sort of ND filter in front of two of them. This would have the advantage of maintaining the same gamma. It seems very possible.
December 10th, 2003, 08:38 PM | #4 |
Inner Circle
Join Date: Jun 2003
Location: Toronto, Canada
Posts: 4,750
|
Current 3-CCD prisms split light up by colour. I don't think it's possible to have the prism split all colours equally.

Exposure latitude: CCD makers are definitely working on this, although at the consumer level they get more credit for making higher-megapixel CCDs (even though those are bad for DV video).

Resolution: the industry is working on this too. They are going to be coming out with more high-definition cameras and more 2K/4K cameras. One problem is storage: a larger tape format is needed to record all the extra information, which would make cameras too big for consumer use. At the professional level, there still needs to be a way for all that video to be edited. HD material takes RAID arrays and lots of hard drive space to edit (becoming less of a problem now). These technical problems prevent manufacturers from getting economies of scale.
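As a rough illustration of the editing problem (a sketch only; the 8-bit 4:2:2 sampling and 1080-line frame size are my assumptions, not figures from the post), uncompressed HD works out to well over 100 MB/s sustained, which is why a single drive of that era couldn't keep up:

```python
# Rough estimate of why uncompressed HD editing needed a RAID around 2003.
# Assumes 8-bit 4:2:2 sampling (2 bytes per pixel) and the usual 1080i
# frame size and rate; none of these numbers come from the post itself.

width, height = 1920, 1080
bytes_per_pixel = 2            # 8-bit 4:2:2
fps = 29.97

rate_mb_s = width * height * bytes_per_pixel * fps / 1e6   # ~124 MB/s
gb_per_hour = rate_mb_s * 3600 / 1000                      # ~447 GB/hour

print(f"{rate_mb_s:.0f} MB/s sustained, {gb_per_hour:.0f} GB per hour")
```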
December 10th, 2003, 11:45 PM | #5 |
Regular Crew
Join Date: Nov 2003
Location: New York, NY
Posts: 120
|
Well, you could have a beam splitter in the light path, and that could split the light disproportionately, e.g. 10% to one CCD and 90% to the other. That would be very similar to the viewfinder system in a 16 mm Bolex Rex, for example (which has a more even split).
The twin scan technique is quite common in still photography to enable film scanners to read the full tonal range of film: you do one scan for the shadows and one for the highlights. Best, Helen |
December 11th, 2003, 12:00 AM | #6 |
New Boot
Join Date: Dec 2003
Location: Chicago
Posts: 10
|
<<<-- Originally posted by Helen Bach : Well, you could have a beam splitter in the light path, and that could split the light disproportionately, e.g. 10% to one CCD and 90% to the other. That would be very similar to the viewfinder system in a 16 mm Bolex Rex, for example (which has a more even split).
The twin scan technique is quite common in still photography to enable film scanners to read the full tonal range of film: you do one scan for the shadows and one for the highlights. Best, Helen -->>>

Exactly so. You could have just such a system for video, it would seem. This is a brilliant idea: a beam splitter, just as in the Bolex. If the light were split three ways, with one CCD getting 5%, another 20%, and another 75%, this would create roughly a four-stop spread. If current CCDs have a latitude of 7 stops, this system would then create an image with a range of about 11 stops, on a par with film. The three CCDs' images would be combined as in a dual-pass film scanner.
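A quick check of that arithmetic (a sketch only; the 7-stop single-CCD latitude is the figure assumed in the post above, and any overlap between the three ranges is ignored):

```python
import math

# Stop spread implied by the proposed 5% / 20% / 75% light split.
splits = [0.05, 0.20, 0.75]          # fraction of light reaching each CCD
ccd_latitude_stops = 7               # assumed latitude of a single CCD

spread = math.log2(max(splits) / min(splits))   # ~3.9 stops between extremes
total = ccd_latitude_stops + spread             # ~10.9 stops combined

print(f"spread between brightest and dimmest path: {spread:.1f} stops")
print(f"combined latitude: {total:.1f} stops")
```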
December 11th, 2003, 12:04 AM | #7 |
Warden
Join Date: Mar 2002
Location: Clearwater, FL
Posts: 8,287
|
Find the processor to do that in real time, with no heat (you don't want all those chips to overheat and cause noise). Interesting idea, but not really feasible in an ENG/EFP camera design.
__________________
Jeff Donald Carpe Diem Search DVinfo.net for quick answers | Where to Buy? From the best in the business: DVinfo.net sponsors |
December 11th, 2003, 12:29 AM | #8 |
New Boot
Join Date: Dec 2003
Location: Chicago
Posts: 10
|
I don't think processing power would be a problem; it seems to be a simple combination. If you have three chips reading the light from the beam splitter described above, then on the CCD reading the shadows, or the darker information (and therefore receiving the most light, 75% in the example above), all the pixels registering 0-60% illumination (say) are compressed to register only 0-30 in the new image. The middle CCD would have all illumination from 20-70% (say) compressed into 30-60. What I mean to say is that while the calculations needed to combine the images would seem impossible in an analog environment, the operation doesn't appear to require much calculation in a digital environment; in fact it seems much less processor-intensive than the compression of DV or Betacam SX, which is very involved mathematically.

The real reason I don't think it would work for an industrial camera is that the color of the image would be mediocre, as each CCD would have to capture full RGB on its own.
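For what it's worth, here is a minimal sketch of the kind of merge being described (hypothetical function and made-up pixel values; it rescales each chip onto a common scale rather than compressing fixed bands, but the point is that the per-pixel arithmetic is trivial):

```python
import numpy as np

def merge_brackets(readings, splits, clip=0.98):
    """Combine per-chip readings into one extended-range image.

    readings: list of same-shaped float arrays (0.0-1.0), one per CCD
    splits:   fraction of light each CCD received
    For each pixel, use the most-exposed chip that has not clipped, and
    divide by its split factor so all chips land on a common scale.
    """
    order = np.argsort(splits)[::-1]          # most light first
    out = np.zeros_like(readings[0])
    filled = np.zeros(readings[0].shape, dtype=bool)
    for i in order:
        usable = (readings[i] < clip) & ~filled
        out[usable] = readings[i][usable] / splits[i]
        filled |= usable
    # Pixels clipped on every chip: fall back to the least-exposed reading.
    out[~filled] = readings[order[-1]][~filled] / splits[order[-1]]
    return out

# Example: three fake 2x2 frames from the 75% / 20% / 5% paths.
splits = [0.75, 0.20, 0.05]
frames = [np.array([[0.30, 1.00], [0.50, 1.00]]),   # 75% path, two pixels clipped
          np.array([[0.08, 0.90], [0.13, 1.00]]),   # 20% path
          np.array([[0.02, 0.23], [0.03, 0.60]])]   # 5% path
print(merge_brackets(frames, splits))
```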
December 11th, 2003, 08:20 AM | #9 |
Major Player
Join Date: Apr 2003
Location: St. Louis, MO
Posts: 581
|
You're assuming CCDs could record low and high light levels, which they can't. This couldn't happen without amplification of the light, requiring more optics, plus the cost of extra circuitry for processing all this.

At least at this time, I don't think you will ever see a film-quality video camera for less than $125,000; probably a lot more than that.
December 11th, 2003, 11:47 AM | #10 |
Regular Crew
Join Date: Nov 2003
Location: New York, NY
Posts: 120
|
Rob suggested: "You're assuming CCDs could record low and high light levels, which they can't."

Absolutely right, they can't. But that isn't how it would work if it ever got used. As I understand John's idea, the method would work by dramatically decreasing the effective speed. The chip that received the least light would be the only one that didn't clip; that one would record the highlights. The one(s) receiving the most light would record the shadow detail, and their highlights would be blown out. Hey, I'm not suggesting that this is a good idea for production cameras, but there's no harm in kicking a few ideas around. Best, Helen
December 11th, 2003, 12:00 PM | #11 |
New Boot
Join Date: Dec 2003
Location: Chicago
Posts: 10
|
Helen, why wouldn't it be a good idea for production cameras? The only problem with the scheme is the issue of color. Otherwise, why wouldn't it be effective?
December 11th, 2003, 08:41 PM | #12 |
Regular Crew
Join Date: Nov 2003
Location: New York, NY
Posts: 120
|
John,

Perhaps I misrepresented myself. What I was trying to say is that I feel unable to judge whether it would be the best way forward, but I find your idea interesting nonetheless. I don't know what chips are on the horizon*, etc. My personal view is that the 'best' camera for HD origination is a 16 mm film camera, but that is just my personal view, and there are many qualifications about what I mean by 'best'. Best, Helen

*Film is still improving as well, and there is potential for more improvement.
December 11th, 2003, 09:54 PM | #13 |
Wrangler
Join Date: May 2002
Location: Vallejo, California
Posts: 4,049
|
Just to inject a note about sensors.

Back in 1981, when my company was building the first digital cameras, there were (and I suppose still are) sensors with much more dynamic range than CCDs. The downside was that they took a lot more light, a lot more than a CCD. In fact, to take a picture inside a room, we had to scan an arc light right where the camera was looking. It was a single line array, 1728 pixels long, mechanically scanned across a 35 mm focal plane, and it used Nikon 35 mm lenses. I can look just to my right and see one of the cameras now.
__________________
Mike Rehmus Hey, I can see the carrot at the end of the tunnel! |
December 11th, 2003, 10:05 PM | #14 |
New Boot
Join Date: Dec 2003
Location: Chicago
Posts: 10
|
Mike, did the sensors work in a different fashion than current video CCDs? It sounds like a flatbed scanner kind of mechanism.
December 13th, 2003, 09:47 PM | #15 |
Wrangler
Join Date: May 2002
Location: Vallejo, California
Posts: 4,049
|
They were called photodiode sensors.

It was indeed the flatbed idea, but it predated the flatbed by several years. We also introduced the first flatbed scanners. They were originally designed by Ricoh as scanners for a facsimile machine. We were in Japan looking for another sensor supplier, since Fairchild Semi wanted out of the business. We saw those flatbeds and ordered 100 of them on the spot, in spite of the protests from Ricoh. Sold those suckers for $7,800 each. Monochrome, 8-bit, 200 dots per inch. Very little software, very little use for the images once they were captured. We even had to build the first desktop publishing software (a real kludge) because PageMaker and Ventura were just glimmers in their engineers' eyes at the time. But boy did they sell!
__________________
Mike Rehmus Hey, I can see the carrot at the end of the tunnel! |