July 29th, 2012, 03:12 PM | #16
Inner Circle
Join Date: Jan 2006
Posts: 2,699
Re: 8bit vs 10bit Acquisition
It's difficult to be specific, since with broadcast encoders all sorts of things can vary: the ratio of data allocated to the luminance and chroma channels is one, and the variation in allocation between I-frames and difference frames is another. What is key is the point you made before - that the banding issues are mainly noticeable on gradients with saturated colours. That's largely because the bitrate allocated to chroma compression is low compared with luminance, and because chroma block sizes are large compared with luminance blocks due to subsampling.
In the acquisition world, systems tend to have defined bandwidths, and if those are restricted (as with AVC-HD) designers have to decide how to balance the compromises. Had they gone for 10bit/4:2:2, it would mean more data to be compressed and hence far higher compression - likely worse overall than 8bit/4:2:0! If you see problems, the likelihood is that the prime cause is too low a bitrate and too much compression. Moving to 10 bit may just make matters worse unless the bitrate is increased proportionately - otherwise it simply means even higher compression.
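To put rough numbers on that trade-off, here is a back-of-the-envelope sketch in Python. The sampling arithmetic is standard; the fixed-bitrate framing is only illustrative:

    # Uncompressed bits per pixel for two common sampling schemes.
    # Per 2x2 block of pixels: 4:2:0 carries 4 luma + 1 Cb + 1 Cr samples;
    # 4:2:2 carries 4 luma + 2 Cb + 2 Cr samples.
    def bits_per_pixel(bit_depth, luma, cb, cr, pixels=4):
        return bit_depth * (luma + cb + cr) / pixels

    bpp_8bit_420 = bits_per_pixel(8, 4, 1, 1)    # 12.0 bits per pixel
    bpp_10bit_422 = bits_per_pixel(10, 4, 2, 2)  # 20.0 bits per pixel

    # At a fixed recording bitrate, the encoder must squeeze roughly
    # 20/12 = 1.67x more source data - i.e. compress that much harder.
    print(bpp_10bit_422 / bpp_8bit_420)

So at the same channel bitrate, moving from 8bit/4:2:0 to 10bit/4:2:2 means roughly two-thirds more source data fighting for the same bits.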
July 29th, 2012, 03:12 PM | #17
Inner Circle
Join Date: Dec 2005
Location: Bracknell, Berkshire, UK
Posts: 4,957
Re: 8bit vs 10bit Acquisition
Saaresh:
You won't see a difference between 10 bit and 8 bit sampling on a vectorscope or waveform monitor, because those instruments measure amplitude and phase, and there is no amplitude or phase difference between 8 bit and 10 bit. A waveform monitor rarely has the resolution to show the 220 grey shades in an 8 bit signal, let alone the 877 shades in a 10 bit signal, and if you're looking at a camera's output, noise will diffuse any steps that might otherwise be visible. A normal waveform monitor/vectorscope is completely the wrong tool for trying to find any difference between 8 bit and 10 bit - it's like using a VU meter or audio level meter to determine audio frequency.

Some histograms will tell you whether a signal is 8 bit or 10 bit by the number of steps from left to right across the histogram. Some NLEs and grading tools can return the data value for points within recorded images; this may also tell you whether the signal is 8 bit or 10 bit.

The Alan Roberts reference is a bit of a red herring. The reason the RED's output was not deemed suitable for broadcast has nothing to do with bit depth: it is because the real-time de-Bayering employed by RED introduces significant artefacts into the image. RED is designed around its raw workflow; the HD-SDI output is for on-set monitoring and not really meant to be used for off-board recording. Engineers don't just look at a monitor and trust their eyes - if it were that simple there would be no need for engineers.

One test you can do with almost any NLE to assess the practical, real-world difference between 8 bit and 10 bit acquisition with your camera is to record the same scene at both bit depths. Try different scenes to see how different subjects are handled - blue sky and flat walls can be very revealing. Bring the clips into the NLE or grading package and use a gain/brightness effect to reduce the image brightness by 50%, then render out that now-dark clip as an uncompressed 10 bit file. Next, apply a gain/brightness filter to the new uncompressed file to return the video levels to those of the original. By layering the original over the now-corrected uncompressed clip and using a difference matte, you can see the differences between the 8 bit and 10 bit performance. How much or how little difference there is depends on many factors, including subject, noise and compression artefacts. It is best to view the pictures on a large monitor. For this test to be meaningful it is vital to ensure the NLE is not truncating the clips to 8 bit: while Edius may be 10 bit, I think you still need to check whether QuickTime on a PC is 10 bit. If QuickTime on a PC still truncates to 8 bit, having a 10 bit edit package won't help.

Bruce: excessive compression will absolutely cause banding in an image. Most banding artefacts that people see are down not to bit depth but to quantisation noise caused by insufficient data to record subtle image changes: perhaps there isn't enough data to record 10 shades in a gradient, so the 10 shades get averaged together into 4 and the end result is steps. Another issue is that the OTA signal is at best 8 bit; this is then passed to the TV's processing circuits, which do all kinds of image manipulation in an attempt to make the pictures look good on screen. That processing is commonly done at 8 bits, and 8 bit processing of an 8 bit signal can lead to further issues.
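For anyone who wants to see the round-trip test in numbers before firing up an NLE, here is a minimal sketch using Python and numpy. The synthetic gradient stands in for real recordings, and the matte is taken between the two pipelines rather than viewed on a monitor - treat it as an illustration of the principle, not a substitute for the real test:

    import numpy as np

    # A smooth gradient stands in for the captured scene.
    scene = np.linspace(0.0, 1.0, 1920)

    def quantize(signal, bits):
        # Round a 0..1 signal onto an integer grid of the given bit depth.
        levels = 2 ** bits - 1
        return np.round(signal * levels) / levels

    results = {}
    for bits in (8, 10):
        clip = quantize(scene, bits)          # the "recording"
        darkened = quantize(clip * 0.5, 10)   # -50% gain, 10-bit uncompressed render
        results[bits] = np.clip(darkened * 2.0, 0.0, 1.0)  # gain restored

    # Difference matte between the two pipelines: anything non-zero is
    # detail the 8-bit acquisition lost but the 10-bit one kept.
    matte = np.abs(results[8] - results[10])
    print("max difference:", matte.max())
    print("distinct levels, 8-bit path:", len(np.unique(results[8])))
    print("distinct levels, 10-bit path:", len(np.unique(results[10])))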
__________________
Alister Chapman, Film-Maker/Stormchaser http://www.xdcam-user.com/alisters-blog/ My XDCAM site and blog. http://www.hurricane-rig.com
July 29th, 2012, 03:44 PM | #18
Inner Circle
Join Date: Jan 2006
Posts: 2,699
Re: 8bit vs 10bit Acquisition
Replying to Alister's earlier point:
The extra bits are needed for the processing - after which 8 bits will normally be adequate. S-log and 10 bit will certainly give more scope for post processing than 8 bit - but it's the combination that makes the difference, not the 10 bit factor alone. Processed 10 bit video is not the same as 10 bit S-log.
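A quick sketch of why that combination matters: count how many code values land in each photographic stop. A true log curve spreads codes evenly across the stops, while a linear encoding spends half of all its codes on the brightest stop. (S-log is not a pure log curve, and the 13-stop range below is just an assumed figure.)

    # Code values available per photographic stop: linear vs. pure-log encoding.
    def linear_codes_in_stop(bits, stops_below_peak):
        # A linear signal halves with each stop down, so the codes between
        # 2^-(n+1) and 2^-n of full scale number total / 2^(n+1).
        return 2 ** bits // 2 ** (stops_below_peak + 1)

    def log_codes_per_stop(bits, stops_of_dynamic_range):
        # An idealized log curve gives every stop an equal share.
        return 2 ** bits / stops_of_dynamic_range

    for bits in (8, 10):
        print(f"{bits}-bit linear: {linear_codes_in_stop(bits, 0)} codes in the top stop, "
              f"{linear_codes_in_stop(bits, 6)} codes six stops down")
        print(f"{bits}-bit log over 13 stops: {log_codes_per_stop(bits, 13):.0f} codes per stop")

With 8 bits, a log curve leaves only about 20 codes per stop - thin enough to band once you start grading - which is why S-log pairs naturally with 10 bit.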
July 29th, 2012, 08:18 PM | #19
Major Player
Join Date: Oct 2009
Location: Reno, NV
Posts: 553
Re: 8bit vs 10bit Acquisition
Note that computer displays are typically 8-bit and distribution is 8-bit, while many flat-screen TVs display only 6 bits of color. Acquisition and post processing can benefit from higher color depths, but your final product never needs more than 8-bit color.

Last edited by Eric Olson; July 30th, 2012 at 01:09 AM.
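The 6-bit point is easy to demonstrate: quantize a smooth ramp at each depth and count the steps. A rough sketch (real panels often add dithering/FRC, which this ignores):

    import numpy as np

    # One scanline's worth of smooth gradient, quantized at display depths.
    ramp = np.linspace(0.0, 1.0, 1920)

    for bits in (6, 8, 10):
        levels = 2 ** bits - 1
        quantized = np.round(ramp * levels) / levels
        # Fewer, coarser steps across the same ramp read as visible banding.
        print(f"{bits}-bit: {len(np.unique(quantized))} shades, "
              f"step size {100 / levels:.3f}% of full scale")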
July 29th, 2012, 10:07 PM | #20
Trustee
Join Date: Jan 2008
Location: Mumbai, India
Posts: 1,385
Re: 8bit vs 10bit Acquisition
I learnt this lesson while programming an image processing engine (similar to Photoshop) 12 years ago for my college final-year project - I worked specifically with BMP, TIFF and JPEG. To be honest, I don't care about the numbers anymore - what I really learnt was that my eyes were the best judge. Manufacturers hide too many things, marketing is very powerful, and who has the time to sit and analyze each camera system - especially when it will be obsolete by the next trade show?
__________________
Get the Free Comprehensive Guide to Rigging ANY Camera - one guide to rig them all - DSLRs to the Arri Alexa.
July 30th, 2012, 12:17 AM | #21
Trustee
Join Date: Jan 2008
Location: Mumbai, India
Posts: 1,385
Re: 8bit vs 10bit Acquisition
By the way, a waveform monitor displays voltage over time. An oscilloscope (a digital waveform monitor will do as well) can freeze a wave for further study. A vectorscope plots two simultaneous signals against each other - even within a complex signal such as video. In video, I could display Cb against Cr on a vectorscope. For 292M (SMPTE's HD-SDI standard) I will need a scope designed to test that particular signal. It will show me the wave pattern in relative amplitude and relative phase, from which I can derive the wave function of that particular wave. The wave function tells me everything I need to know.
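To make the Cb-versus-Cr point concrete: a vectorscope is essentially an X-Y plot of the two chroma components. A toy sketch in Python/matplotlib, plotting the six 75% color-bar colors with the Rec.709 coefficients - the spots it draws are the targets a real vectorscope's graticule boxes mark:

    import matplotlib.pyplot as plt

    # Toy vectorscope: plot Cb against Cr for the 75% color bars,
    # derived with the Rec.709 luma coefficients.
    bars = {"yellow": (0.75, 0.75, 0.00), "cyan":    (0.00, 0.75, 0.75),
            "green":  (0.00, 0.75, 0.00), "magenta": (0.75, 0.00, 0.75),
            "red":    (0.75, 0.00, 0.00), "blue":    (0.00, 0.00, 0.75)}

    for name, (r, g, b) in bars.items():
        y = 0.2126 * r + 0.7152 * g + 0.0722 * b     # Rec.709 luma
        cb, cr = (b - y) / 1.8556, (r - y) / 1.5748  # Rec.709 chroma scaling
        plt.scatter(cb, cr)
        plt.annotate(name, (cb, cr))

    plt.xlabel("Cb")
    plt.ylabel("Cr")
    plt.gca().set_aspect("equal")
    plt.show()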
From such a device, by studying the signals and Y'CbCr values and cross-referencing them against test patterns, I can reverse engineer the sampling process. By comparing this data with other signals (test, random and actual) I can tell very easily the 'quality' of the color information present in a signal. If I felt particularly loony, I could also reverse engineer the tristimulus values from the chrominance information and derive the sensor and Rec.709 color spaces, just to show off. This is how the scope knows whether you are within the bounds of a particular gamut or not - except that I might do the calculations manually, because I don't trust my scope either! It all depends on how paranoid I am on any given day.

Once I know which color space I'm in, I know how many colors I need - and from that I know whether the data needs 8-bit or 10-bit word lengths to be represented accurately. I don't care what the data already is - you can put a scooter engine, an elephant or a Ferrari engine in a Ferrari body. What I really want to know is how it was sampled. Guess what I learnt? No matter what the color space, I always need 32-bit (or the maximum possible) words - every gamut has infinite potential combinations; it's like Zeno's paradox. But since 292M can only output a 10-bit signal, I have to use my eyes and judge for myself whether I can live with it.

8-bit is minimum wage. 10-bit is a pat on the back with your minimum wage. The difference between 8-bit and 10-bit in practice is negligible - both in signal characteristics and visually. But this is my opinion, for my own workflow, based on my training and experience. I would like to believe I am right, but I might be totally wrong, and I might be the weakest link in my workflow.
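One practical version of that reverse engineering, sketched in Python: an 8-bit source carried MSB-aligned in a wider container leaves its low bits permanently stuck, and that is detectable. (The frame data here is simulated; on real footage, noise, dithering and filtering can blur the result.)

    import numpy as np

    def effective_bit_depth(samples):
        # samples: uint16 code values, MSB-aligned in 16-bit words.
        # OR together every bit that differs anywhere in the frame,
        # then see how many low bits never vary.
        varying = int(np.bitwise_or.reduce(samples ^ samples[0]))
        if varying == 0:
            return 0  # a completely flat frame tells you nothing
        lowest_active_bit = (varying & -varying).bit_length() - 1
        return 16 - lowest_active_bit

    # Simulated frame: an 8-bit signal padded into 16-bit words.
    frame = np.random.randint(0, 256, 100_000, dtype=np.uint16) << 8
    print(effective_bit_depth(frame))  # -> 8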
E.g., in digital photography, many high-end cameras only show JPEG histograms with clipping warnings. When one pulls those files into a processing engine, one is surprised to see that the histogram was not really accurate. Whom should I believe - the sensor manufacturer, the signal processing engineer, the compression engineer or the software developer who coded the RAW engine? I'm totally for simple tools to understand data - histograms, waveforms, vectorscopes, etc. - these tell me what ballpark I'm in. In the field they are a great help. But I still prefer a good monitor as the easiest way to get where I want to go. The eye is just another tool - one of my favorites! As a side note, I love the fact that BM has decided to ship the Ultrascope free with their camera, using Thunderbolt.
It's a sampling problem, caused by the compressed RAW scheme employed by RED: they probably had to resample an already-sampled image for HD-SDI. I'm not sure how many have wondered why RED can't give out an uncompressed 4K/5K REDCODE stream. The sampling of the sensor signals, combined with the sensor's gamut, Bayer pattern and filtering process, determines everything.
E.g., a RAW file is just data - open it in different RAW processing engines and you will get different results; apply different algorithms and you'll get different results. Two suspects explain this: 1. Patents. 2. Subjectivity. If I'm looking at a signal and doing my math based on what I know, I'll arrive at a certain conclusion; another engineer will see it a totally different way. The variety of electronic devices and software programs in the world shows that clearly. You can interpret results differently, and change the world based on those interpretations. The only way I can know I'm still sane at the end of the day is by looking at the result like a layperson. Does red look red? Does the music note sound the way I want it to sound? Only then is the math worth it. Don't you think? Anyway, I don't speak for all engineers, only myself!
__________________
Get the Free Comprehensive Guide to Rigging ANY Camera - one guide to rig them all - DSLRs to the Arri Alexa.
July 30th, 2012, 10:42 AM | #22
Inner Circle
Join Date: Dec 2004
Location: Arlington, TX
Posts: 2,231
Re: 8bit vs 10bit Acquisition
Thanks, everybody, for your input on this thread. As usual I have learned a lot. I called Dan Keaton today and he pointed out that HD-SDI signals are always 10-bit, but the Varicam is still an 8-bit camera.

This is what I had always thought, but I misread some information recently that led me to think the camera was actually 10-bit. So my decision to stay with the Nanoflash is easy.
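Dan's point is easy to verify with the standard mapping: an 8-bit value rides in a 10-bit SDI word MSB-aligned, with its two lowest bits padded - which is just a left shift. A tiny sketch:

    # HD-SDI words are 10 bits wide even when the source is 8-bit:
    # the 8-bit value is MSB-aligned, leaving the two low bits empty.
    def eight_bit_to_sdi_word(value8):
        return value8 << 2

    print(eight_bit_to_sdi_word(16))   # 8-bit black ->  64, the 10-bit black level
    print(eight_bit_to_sdi_word(235))  # 8-bit white -> 940, the 10-bit white level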