July 26th, 2004, 02:01 PM | #31 |
Inner Circle
Join Date: Jun 2003
Location: San Jose, CA
Posts: 2,222
> But, I can't help but think that there is something to the idea of having a higher bit DSP. Sony uses higher bit DSPs in some of their cameras. And, as you pointed out, Panasonic uses a 12 bit DSP in the DVX100a. Even more interesting to me is that they had a 10 bit DSP in the DVX100.
Higher-bit DSPs are more efficient at their native word length, but they may also consume more power, depending on the design. At the very least, their local high-speed memories are delivering greater memory bandwidth. Even on a lower-bit DSP, the code can be written to do higher-precision arithmetic; an 8-bit DSP can do 16-bit operations at roughly 1/4 speed.

Digital signal processing involves gain changes, so noise (clipping at the top of the dynamic range, quantization noise at the bottom) will be introduced unless there is headroom for high values and footroom for low values that exceed the dynamic range in intermediate operations. While the noise from one operation may be negligible, noise from successive operations becomes easier to notice. Many have seen noise artifacts in low-light video; I am quite sure that some of this is attributable to the shorter word lengths of the DSPs, although this is only one contributor of noise.

One fellow commented on inspecting images at the 8-bit level vs. the 16-bit level. The cameras in question aren't able to deliver real 16-bit images, and the algorithms that convert RAW files to screen pixels don't operate at even an 8-bit noise level for most of the picture, because they are simple interpolators: fast, but noisy. What I don't understand is why high-end digital camera people aren't complaining about this, other than the tradeoff being a much longer wait for a RAW conversion. Hints of these differences are dropped in, say, the Nikon D70 group, where the color performance of Nikon Capture is better than that of Photoshop CS, although the latter finishes the job faster. I bet we would all be happy with a super-fast but low-quality RAW previewer.

Also, our eyes can't see more than a 10- to 11-bit range (a contrast ratio of 1:1000 to 1:2000) in a single scene, although the dynamic range of our eyes is WAY larger given time to adjust.
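As a sketch of the "roughly 1/4 speed" point: a narrow DSP can emulate wider arithmetic by splitting each value into native-width words and propagating carries by hand. This is an illustrative Python sketch, not any camera's firmware; it shows why one emulated 16-bit add costs several native 8-bit operations.

```python
def add16_on_8bit(a: int, b: int) -> int:
    """Add two 16-bit values using only 8-bit-wide operations.

    One 16-bit add becomes: two word splits, a low-word add, a carry
    extraction, and a high-word add -- several native ops instead of one,
    which is where the speed penalty comes from.
    """
    a_lo, a_hi = a & 0xFF, (a >> 8) & 0xFF   # split into 8-bit words
    b_lo, b_hi = b & 0xFF, (b >> 8) & 0xFF
    lo = a_lo + b_lo                          # low-word add
    carry = lo >> 8                           # did the low word overflow?
    hi = (a_hi + b_hi + carry) & 0xFF         # high-word add with carry in
    return (hi << 8) | (lo & 0xFF)

print(hex(add16_on_8bit(0x01FF, 0x0001)))  # 0x200
```

Multiplication is worse: a 16x16 product needs four 8x8 partial products plus shifted adds, which is roughly where the "1/4 speed" rule of thumb comes from.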
July 26th, 2004, 04:58 PM | #32 |
Wrangler
Join Date: Sep 2001
Location: Northern VA
Posts: 4,489
Where did the Canon marketing manager come back with the 8-bit answer? Last I saw from Chris, he was still looking for the answer.
I've heard that on normal displays/monitors, the limit of most people's eyes is about 6- to 7-bit gray scale. Anyone have firm data on this?
__________________
dpalomaki@dspalomaki.com
July 26th, 2004, 08:55 PM | #33 |
Trustee
Join Date: Apr 2002
Location: Auckland, New Zealand
Posts: 1,727
Don, even though our eyes might only be able to see 6-7 bits in the final output, the quantisation that happens during processing (assuming it's all done in 8 bits and truncated, etc.) could still cause banding that is quite noticeable.
Aaron
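Aaron's point is easy to demonstrate numerically. The Python sketch below is illustrative only (not any camera's actual pipeline, and the 0.25 gain is an arbitrary choice): darken a smooth ramp, then brighten it back. Truncating to whole 8-bit codes between the two gain stages collapses groups of input levels into one output level, which shows up as banding, while a higher-precision intermediate preserves every level.

```python
ramp = list(range(256))  # a smooth 8-bit gradient: 256 distinct levels

# 8-bit intermediate: truncate to an integer code between the gain stages
truncated = [int(v * 0.25) * 4 for v in ramp]

# wider intermediate: carry full precision, round only at the very end
precise = [round(v * 0.25 * 4) for v in ramp]

print(len(set(truncated)))  # 64 -- three quarters of the levels are gone
print(len(set(precise)))    # 256 -- every level survives
```

With only 64 surviving levels, the gradient renders as visible steps even on a display that can only resolve 6-7 bits, because neighbouring bands can differ by several display-resolvable codes.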
July 26th, 2004, 09:12 PM | #34 |
Major Player
Join Date: May 2003
Location: Chicago, IL
Posts: 991
What's with all the fuss about this 8-bit DSP? We all know that the DVX100 was (before the DVX100A) the benchmark for 1/3-inch MiniDV image quality, and that camera was 8-bit. We all know the results that can be achieved with an 8-bit DSP, so why are people all of a sudden appalled that the XL2 doesn't have 12-bit processing?
July 26th, 2004, 10:27 PM | #35 |
Trustee
Join Date: May 2003
Location: Atlanta GA
Posts: 1,427
Yang,
Someone more knowledgeable than me can correct me if I'm wrong, but it was my understanding that most cameras' CCDs are 12-bit, and when the signal is taken into the camera it gets converted down to 8 bits. So I think the DVX100 was in fact 12-bit, with an algorithm or something converting it to the DV format's 8 bits (which may have been the source of the original Canon person's quote, who knows?)
July 26th, 2004, 10:27 PM | #36 |
Major Player
Join Date: Apr 2004
Location: Austin, Texas
Posts: 704
Actually Yang, I believe the DVX100 had a 10-bit DSP.
In the end, of course, the image will hopefully speak for itself. I can't see anyone buying, or not buying, a camera based on its DSP bit depth.
__________________
Luis Caffesse
Pitch Productions
Austin, Texas
July 26th, 2004, 10:31 PM | #37 |
Trustee
Join Date: May 2003
Location: Atlanta GA
Posts: 1,427
Yeah, I mean, er, 10-bit like Luis said.
July 26th, 2004, 10:50 PM | #38 |
Obstreperous Rex
<< Where did the Canon marketing manager come back with the 8-bit answer? >>
David Ziegelheim's claim is incorrect. The Canon product manager (not the marketing manager) has most definitely not come back with that answer. Looks like they're leaving it in the air for now.
July 27th, 2004, 03:55 AM | #39 |
Wrangler
Join Date: Sep 2001
Location: Northern VA
Posts: 4,489
Thanks for the info, Chris. As Dilbert shows, marketing types rarely have solid technical information.
As points of reference, the Canon A1 Digital and L2 Hi8 camcorders used 8-bit DSPs, but used 8-bit A/D on the "Y" signal and 6-bit on the "C" signal. For Nick: CCD pixels have analog output. The analog voltage is read from the CCD pixel and fed to amplifiers that provide the gain, white balance, and pedestal, and then it is converted to a digital value (10-bit A/D in the XL1). The least significant bit is truncated prior to the 9-bit DSP.
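For what it's worth, the "truncate the least significant bit" step described above is just a one-bit right shift. A hypothetical Python one-liner (assuming a 10-bit A/D code in the range 0-1023 feeding a 9-bit DSP, as described for the XL1):

```python
def truncate_lsb(adc_code: int) -> int:
    """Drop the least significant bit of a 10-bit A/D code (0-1023),
    yielding the 9-bit word (0-511) handed to the DSP."""
    return adc_code >> 1

print(truncate_lsb(733))   # 366
print(truncate_lsb(1023))  # 511 -- full scale still maps to full scale
```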
__________________
dpalomaki@dspalomaki.com
July 27th, 2004, 06:57 AM | #40 |
Trustee
Join Date: May 2003
Location: Atlanta GA
Posts: 1,427
Don,
Thanks. I should have known better than to get involved in the whole DSP thing; I'll leave it to you guys.
July 27th, 2004, 05:29 PM | #41 |
Wrangler
Join Date: Sep 2001
Location: Northern VA
Posts: 4,489
Nick - not wrong to get involved, that is how we all learn.
__________________
dpalomaki@dspalomaki.com
July 28th, 2004, 10:34 AM | #42 |
Regular Crew
Join Date: Apr 2004
Location: ocho rios
Posts: 45
Hi everybody,
The Canon XL2 is definitely 8-bit A/D digital quantization. It's specified on simplydv.co.uk:
http://www.simplydv.co.uk/docs/CanonXL2_specifications.pdf
Thanks
July 28th, 2004, 12:21 PM | #43 |
Major Player
Join Date: Sep 2002
Location: Belgium
Posts: 804
If setup and WB are performed in analog, a 10-bit quantization depth for the analog CCD signals is OK (8 bits plus 2 extra bits for standard gamma correction, which is difficult to do in analog) for not-too-complicated digital processing (excluding cine gamma, knee processing, and so on). Fewer bits of depth result in banding and/or spatial dithering noise. Of course, the final quantization after processing always remains 8-bit for DV.
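The "8 + 2 bits" headroom claim can be sanity-checked numerically. The Python sketch below is illustrative only, using a plain power-law gamma of 0.45 rather than any standard's exact transfer curve: gamma correction steepens the curve near black, so the darkest tenth of an 8-bit linear input can only land on a sparse set of 8-bit output codes, while a 10-bit linear input fills in far more of them.

```python
GAMMA = 0.45  # approximate standard video gamma (assumption: pure power law)

def shadow_codes(in_bits: int) -> int:
    """Count distinct 8-bit output codes produced by the darkest 10%
    of an in_bits-wide linear input range after gamma correction."""
    levels = 1 << in_bits
    return len({round(255 * (v / (levels - 1)) ** GAMMA)
                for v in range(levels // 10)})

print(shadow_codes(8))   # sparse: a coarse linear input skips output codes
print(shadow_codes(10))  # denser: the extra 2 bits fill in the shadows
```

The skipped codes in the 8-bit case are exactly the missing gradient steps that show up as shadow banding; quantizing the linear signal 2 bits deeper closes most of the gaps.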
July 28th, 2004, 08:20 PM | #44 |
Wrangler
Join Date: Sep 2001
Location: Northern VA
Posts: 4,489
DV is by definition an 8-bit quantized signal; that is what goes to tape and out the FireWire port. The question at hand is the internal processing before the video reaches the tape.
I would like to see the 8-bit A/D and DSP front-end specification from an authoritative Canon voice rather than a third-party website. Per the Canon service manuals, in the XL1 and the GL1 gamma correction takes place in the analog section before the initial A/D conversion and 9-bit DSP. It drops to 8-bit when it leaves the DSP for the recorder section. I find it difficult to believe that Canon would dumb it down in the XL2.
__________________
dpalomaki@dspalomaki.com
August 12th, 2004, 09:38 AM | #45 |
Obstreperous Rex
I have received notification from Canon USA that the XL2 has a 12-bit DSP. This applies of course to both NTSC and PAL versions.