8bit vs 10bit Acquisition


Tim Polster
July 27th, 2012, 08:34 AM
Hello,

I wanted to ask the community about a decision I am looking to make regarding external recorders. I have a Varicam "H" model camera that has 10bit HDSDI output. For a year now I have been using a Nanoflash to record my footage which is an 8bit recorder. For the same money I could be using a 10bit Samurai recorder.

In your real world usage (not tech specs please) how much of a difference does one notice between 8bit and 10bit acquisition?

The Nanoflash is usually set at 50mbps 4:2:2 and I would probably use the 100mbps 4:2:2 ProRes setting on the Samurai.

Another twist - I use Edius 6.5 which can now color correct in 10bit. How much does this affect my decision?

a) Does the 10bit processing in Edius polish up 8bit footage so well that 8bit vs 10bit does not matter?

b) Or does the 10bit processing make 10bit acquisition really come into its own?

Thanks for your input!

Sareesh Sudhakaran
July 27th, 2012, 11:14 PM
In your real world usage (not tech specs please) how much of a difference does one notice between 8bit and 10bit acquisition?

Hardly worth the trouble for normal broadcast work. Not even for keying. Unless you have a strict acquisition and delivery requirement for 10-bit, it's not worth it, in my experience.

To truly start seeing a difference, you'll have to compare 8-bit to 12-bit. Or at least record the bit stream to uncompressed video instead of transcoding it.

And after that, you'll need to grade in a 32-bit float environment, regardless of the bit depth of your footage, hardware, driver or monitor. When you are working in 32 bit mode the hardware is always playing catch-up. Whatever is actually being done is done by the software. I have no experience with Edius so I can't help you with that.

But doesn't Edius have a 16-bit environment at least?

Tim Polster
July 28th, 2012, 07:51 AM
Thanks for your reply.

I am surprised there is not much of a benefit. Could I ask for more information regarding your experiences? Why was it not worth it?

Since my camera is outputting 10bit and the Nanoflash and Samurai are pretty close in price, I could go either way without impact.

Well, how about the comparison between using the 50mbps 4:2:2 mpeg-2 vs the 100mbps ProRes LT codecs?

I don't know what is going on under the hood, but Edius just added the ability to color correct 10bit footage with 10bit effects. It may process more behind the scenes.

Bruce Watson
July 28th, 2012, 08:41 AM
In your real world usage (not tech specs please) how much of a difference does one notice between 8bit and 10bit acquisition?

Like almost everything related to photography or video, it depends. In this case, it depends on what you are trying to capture, and on what you want to do with it after you capture it.

More bits means more shades of any given color. Given an individual hue and a range of tones of this color, from black to white, 8 bit can theoretically capture as many as 256 separate tones, while 10 bit can capture four times as many, at 1024.

In the real world, 8 bit capture won't really capture 256 because of inefficiencies. More like 240 or so. And 10 bit more like 960.

So what's it mean? It means the difference between a beautiful smooth blue sky, and a beautiful banded blue sky.
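
To make the step-count argument concrete, here is a small Python sketch (illustrative only, assuming numpy is available) that quantizes the same smooth ramp at 8 and 10 bits and counts the surviving tones:

```python
import numpy as np

# An ideal smooth gradient across a 1920-pixel-wide frame.
ramp = np.linspace(0.0, 1.0, 1920)

eight_bit = np.round(ramp * 255) / 255    # 8-bit quantization: at most 256 levels
ten_bit = np.round(ramp * 1023) / 1023    # 10-bit quantization: at most 1024 levels

print(len(np.unique(eight_bit)))          # 256 distinct tones
print(len(np.unique(ten_bit)))            # 1024 distinct tones

# The size of each step is what shows up as banding on a slow gradient like a sky:
print(1 / 255, 1 / 1023)                  # ~0.0039 vs ~0.0010 of full scale per step
```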

I actually see this all the time. I use a Panasonic plasma screen for TV, with an over the air antenna. I get excellent reception, which means I can see everything, every bit, that they broadcast, for good or ill. And some of it is pretty ugly. In commercials I often see banding in the backgrounds. Seems like the current vogue is to have a very saturated background that darkens toward the corners. This typically results in a nice set of oval banding (from the oval vignette mask's gradient), because the ads are often made locally, and the local companies are using less expensive equipment and are therefore making 8 bit captures.

Everything else being equal (never happens, but still), 10 bit captures have the potential to look better than 8 bit captures.

And I should note that applying that gradient against an 8 bit capture in a 10 bit editing space will help. A little. What it means is that you have more headroom to push the color around a little while color grading without crushing the dark end or clipping the light end. But it won't magically pull tonal information out of the air -- an 8 bit capture is still an 8 bit capture.
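
A quick numeric illustration of that last point (a sketch, not anything from the original posts): grading an 8-bit capture in a deeper working space gives headroom, but the number of recorded tones does not increase.

```python
import numpy as np

scene = np.linspace(0.0, 1.0, 4096)        # the "real" smooth scene
capture = np.round(scene * 255) / 255      # what an 8-bit recorder kept

# Push the shadows up 2x in a high-precision (float) editing space,
# the way a 10-bit or 32-bit float grade would.
graded = np.clip(capture * 2.0, 0.0, 1.0)

# The grade itself loses nothing in float, but the shadows still contain
# only the tones the 8-bit capture recorded - now spread twice as far apart.
shadows = graded[scene < 0.5]
print(len(np.unique(shadows)))             # ~128 source levels, no new ones invented
```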

Tim Polster
July 28th, 2012, 10:49 AM
Thanks Bruce. So you would lean towards the 10bit capture? This would be an (almost) equal situation...plug in a Nano or a Samurai to the back of the camera. Resulting images would reflect the recorder's potential.

I enjoy proper color and use secondary color correction often as Edius makes it real-time.

Eric Olson
July 28th, 2012, 11:40 AM
I enjoy proper color and use secondary color correction often as Edius makes it real-time.

From your original post it sounded like you already owned a Nanoflash and wanted to replace it with a Samurai. I've never used either, but given your workflow, 10-bit seems the obvious choice for a new purchase.

I have a little question concerning 10-bit video source. I know with audio there are encoders that work with 24-bit audio directly so there is no need to dither it down to 16-bits before encoding. Are there video encoders that work directly with 10-bit source, or does one have to downsample it to 8-bit before encoding?

Tim Polster
July 28th, 2012, 12:43 PM
I would sell the Nano if 10bit is the better option as I recently learned the Varicam had 10bit output.

Good question about encoding. I am assuming the encoder would take a 10bit source and create the 8bit final. It would be good to hear from those more in the know.
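
For what it's worth, the 10-bit-to-8-bit step itself is just arithmetic. The toy sketch below (not tied to any particular encoder, which may truncate, round or dither) shows the options on synthetic 10-bit code values, loosely analogous to dithering 24-bit audio down to 16-bit:

```python
import numpy as np

rng = np.random.default_rng(0)
ten_bit = np.arange(0, 1024)               # every possible 10-bit code value

truncated = ten_bit >> 2                   # simply drop the two extra bits
rounded = np.clip(np.round(ten_bit / 4), 0, 255).astype(int)

# Dithered version: add a little noise before rounding so gradients
# don't land on hard steps (the audio-style approach).
dither = rng.uniform(-0.5, 0.5, ten_bit.shape)
dithered = np.clip(np.round(ten_bit / 4 + dither), 0, 255).astype(int)

print(truncated[:8])
print(rounded[:8])
print(dithered[:8])
```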

David Heath
July 28th, 2012, 01:23 PM
So what's it mean? It means the difference between a beautiful smooth blue sky, and a beautiful banded blue sky.

I actually see this all the time. I use a Panasonic plasma screen for TV, with an over the air antenna. I get excellent reception, which means I can see everything, every bit, that they broadcast, for good or ill. And some of it is pretty ugly. In commercials I often see banding in the backgrounds.
I very much doubt that what you are seeing has anything to do with bit depth. If you went to the studio and saw it in the gallery (also 8 bit) you wouldn't see that banding.

Banding CAN be caused by insufficient bitdepth, but can also be caused by too heavy compression, and if you're talking about home off-air viewing and see banding it's pretty certain to be a compression problem.

It's not at first obvious why it should be so, but you can easily demonstrate it in Photoshop. Make a blank canvas, then apply the gradient tool left to right - you should see a nice smooth gradient with no obvious banding, despite being 8 bit. Now save the image as a JPEG at varying compression settings, but including max compression. Lo and behold, severe banding, but as the bitdepth has stayed the same it is solely down to compression.
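
The same experiment can be scripted outside Photoshop; this is a stand-in sketch (it assumes the Pillow and numpy packages are installed) that builds an 8-bit gradient and crushes it with heavy JPEG compression:

```python
import numpy as np
from PIL import Image

# A smooth 8-bit horizontal gradient, 1920x1080.
ramp = np.tile(np.linspace(0, 255, 1920).astype(np.uint8), (1080, 1))

Image.fromarray(ramp, mode="L").save("smooth.jpg", quality=95)   # looks clean
Image.fromarray(ramp, mode="L").save("banded.jpg", quality=1)    # heavily compressed

# Reload the crushed file and count the distinct values along one row:
# far fewer than 256 means the gradient has collapsed into bands,
# even though the file is still 8 bit throughout.
reloaded = np.asarray(Image.open("banded.jpg"))
print(len(np.unique(reloaded[540])))
```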

"Why?" is quite difficult to answer without getting too technical, but as simply as possible is down to most compression systems using a block basis. Individual pixel values are given by having an average value for the block, then giving the values of the pixels as differences relative to the average. Harder you compress, the more mangled these differences get until in an extreme case they all become zero - all pixels then just have the average value. Hence banding - what you are seeing as bands are the coarse differences between block average values.

Secondly, it's common for codecs to compress the chroma signals more than luminance. The fact that you mention seeing the problem more often on coloured gradients reflects that.

Now if we're talking about a defined bitrate (say 10 Mb/s) then 10 bit compression quality will be the same as 8 bit at 8 Mb/s. Arguably, it's better (less likely to generate banding) if that extra 2 Mb/s is used to give lower overall compression - not 10 bit.

But Tim's question was about acquisition, and here bitrates should be far higher. Fundamentally, 10 bit will only give an advantage for very low noise camera front ends. If the noise is above a certain level, you're better off using the bitrate to more accurately encode 8 bit.

The other point is what is being encoded? 10 bit can have real advantages for signals like s-log, far less so for "ready to view" video. Move into the RAW world and better than 8 bit is essential, probably 12 bit.

Tim Polster
July 28th, 2012, 02:26 PM
This is where the 8bit vs 10bit gets confusing for me. 10bit seems to represent better color definition. But if the signal is noisy, the extra information is compromised at lower bitrates.

So what to do? I shoot a lot of events so I need the recording time. 100mbps is stretching it, but with a hard drive in the Samurai it would be very easy to deal with.

On the outside it seems a waste to have a 10bit signal and knowingly record 8bit.

The more help the better if I need to fix exposure in post, and I admit to enjoying pushing files around in post to enhance the image.

David Heath
July 28th, 2012, 02:44 PM
This is where the 8bit vs 10bit gets confusing for me. 10bit seems to represent better color definition.
Nothing to do with colour or definition in its normal meaning. It's the difference between adjacent shades that the eye can distinguish. (Think of a greyscale with the step differences getting smaller and smaller.) And most of the time 7 bit offers better than the eye can resolve.
But if the signal is noisy, the extra information is compromised at lower bitrates.
If the signal is noisy, all that 10 bit will do is more accurately define the noise! With unlimited bitrate that doesn't matter, but for a fixed bitrate you're better off using the bits to lower compression at 8 bit - not record the noise more accurately!
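
A rough numpy sketch of that noise argument (the noise figure here is invented purely for illustration): once the camera noise is bigger than one 8-bit step, both quantization errors disappear under it.

```python
import numpy as np

rng = np.random.default_rng(1)
clean = np.full(100_000, 0.42)                         # a flat mid-grey patch
noisy = clean + rng.normal(0, 1.5 / 255, clean.size)   # assumed noise of ~1.5 8-bit steps

q8 = np.round(noisy * 255) / 255
q10 = np.round(noisy * 1023) / 1023

print(np.std(noisy - clean))    # the noise itself, ~0.0059 of full scale
print(np.std(q8 - noisy))       # 8-bit quantization error, ~0.0011
print(np.std(q10 - noisy))      # 10-bit quantization error, ~0.0003 - both buried in the noise
```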

Alister Chapman
July 28th, 2012, 02:59 PM
I think you might be disappointed by the samurai. Not saying the Samurai isn't a good device, I have one and I do like it, but I also like the NanoFlash.

It may seem trivial, but the lack of tally lights on the Samurai is really frustrating. It means that you must mount it somewhere where you can see the LCD screen to know whether it's recording and the LCD is not good in sunshine. HDD's are risky, SSD's are fine, but CF cards are so much more convenient. 50Mb/s material needs less long term storage space than 100Mb/s ProRes. The noise reduction inherent with the XDCAM mpeg encoding results in a cleaner looking image than you'll get with the Samurai. No cache record or over cranking with the Samurai. I would not want to use a hard drive. I've experienced recording corruption caused by vibration from loud music, bumping the camera and rapidly turning or panning the camera with hard drives. SSD's are fine but more expensive.

As for 10 bit over 8 bit? For regular gammas visually you probably won't ever see a difference. Heavy grading or post work might start to reveal some tiny differences between 50Mb/s mpeg 2 and ProRes 422. Move up to ProRes HQ and then there may be a more noticeable difference, but it's still only small. Unless the camera is very low noise, the noise in the image will tend to limit how far you can push it before the bit depth does. Also do your homework on your workflow. Many edit applications truncate 10 bit to 8 bit, especially QuickTime on a PC (last time I checked this includes Edius and Avid). Most of FCP's effects will truncate the clip to 8 bit. So it is a bit of a nightmare. If you already have the NanoFlash I would suggest there is very little to be gained. S-Log and other Log formats designed for 10 bit, now that's another story all together.

Tim Polster
July 28th, 2012, 03:54 PM
Thanks for your opinions. The Nano is a great unit. I also like that it has an HDMI output which opens up monitor choices.

What started this was the change in Edius to a true 10bit program in v6.5 with most of its filters. That change alone helps 8bit footage as well so it looks like the Nano is getting the nod. The Varicam H model is not the cleanest camera around either.

I am still curious though so I will try to seek out a 10bit capture device and test them side by side.

Thanks again!

Sareesh Sudhakaran
July 29th, 2012, 01:42 AM
Thanks for your reply.
I am surprised there is not much of a benefit. Could I ask for more information regarding your experiences? Why was it not worth it?

I couldn't see any substantial differences from the internal recording, either visually or through a scope - both on a waveform monitor and on a vectorscope. Honestly, I have no idea why that was. And I'm an electrical engineer, so I like to believe I know how to use the damn things.

The rest of my answer is my rationalization of why I failed to see any differences (Warning: It's technical and long-winded):

1. The HD-SDI base specification - the lowest it can go - is SMPTE 292M. A single SDI link allows (maximum):

10-bit word size
1.5 Gbps throughput (190 MB/s)
Single SDI link
4:2:2 chroma subsampling, Rec. 709, encoding gamma applied
720p60
1080i59.94 (50p and 59.94p not supported in 292M)


Every professional broadcast system built on HDSDI must adhere to 292M. To increase bandwidth (for 4:4:4 and 1080p59.94), two SDI cables are used under the specification 372M, or one 3G-SDI link is used under 424M. For your purposes, we don't need to worry about the latter two. But I mentioned them because there is one important consideration - both dual-link HD-SDI and 3G-SDI have to follow the 292M protocol when required.

Point being: 292M (10-bit) is the minimum standard for broadcast systems. You could throw 8-bit or 1-bit in there, but your system is already ready for 10-bit - it's a one size fits all shoe.

To keep up with this, using the MPEG-4 compression standard, Sony developed HDCAM SR - the gold standard for delivery. The base data rate is 440 Mbps (55 MB/s) (SQ). 64 minutes @59.94i 4:4:4 10-bit is about 206 GB. 4:2:2 is about 138 GB.

1080i59.94 RGB (4:4:4) uncompressed calculations:

8-bit is about 6 MB/frame. @59.94i, data rate is 178 MB/s. 60 minutes cost me 625 GB.
10-bit is 7.4 MB/frame. @59.94i, data rate is 222 MB/s. 60 minutes cost me 780 GB.


1080i59.94 RGB (4:2:2) uncompressed calculations:

8-bit is about 4 MB/frame. @59.94i, data rate is 120 MB/s. 60 minutes cost me 417 GB.
10-bit is 5 MB/frame. @59.94i, data rate is 150 MB/s. 60 minutes cost me 520 GB. (Note: Along with audio this falls within the 190 MB/s limit of 292M)


The difference? A mere 20% in actual data rate. But is it? Look at it this way, if I'm shooting at 15:1 ratio for a 2 hour project, how many HDCAM SR tapes can I save if I stick to 8-bit over 10-bit? Drum roll...none. A 64 minute HDCAM SR tape in SQ mode can give 64 minutes of both 10-bit 4:4:4 and 4:2:2 - there is no financial penalty for going 10-bit over 8-bit.

In your case, you want to compress to Prores LT - 100 Mbps - 12.5 MB/s. For 1,800 minutes of footage the difference in data between 8-bit and 10-bit is about 250 GB - that's it. Total project data is 1.3 TB, so if you have a 2 TB drive for each backup you won't notice the extra 250 GB.
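
Those uncompressed figures are easy to re-check; a small Python helper (using binary megabytes and gigabytes, which the figures above appear to assume, and 29.97 frames per second for 1080i59.94) reproduces them:

```python
def uncompressed(width, height, bits, samples_per_pixel, fps=30000 / 1001):
    """Rough uncompressed video data rates, in binary MB (MiB) and GB (GiB)."""
    bytes_per_frame = width * height * samples_per_pixel * bits / 8
    mb_per_frame = bytes_per_frame / 2**20
    mb_per_sec = mb_per_frame * fps
    gb_per_hour = mb_per_sec * 3600 / 2**10
    return round(mb_per_frame, 1), round(mb_per_sec), round(gb_per_hour)

print(uncompressed(1920, 1080, 8, 3))    # 4:4:4  8-bit: ~5.9 MB/frame, ~178 MB/s, ~625 GB/hour
print(uncompressed(1920, 1080, 10, 3))   # 4:4:4 10-bit: ~7.4 MB/frame, ~222 MB/s, ~781 GB/hour
print(uncompressed(1920, 1080, 8, 2))    # 4:2:2  8-bit: ~4.0 MB/frame, ~119 MB/s, ~417 GB/hour
print(uncompressed(1920, 1080, 10, 2))   # 4:2:2 10-bit: ~4.9 MB/frame, ~148 MB/s, ~521 GB/hour
```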

Long story short - if the minimum HD-SDI standard is 10-bit, and a professional broadcast HDSDI backbone already supports 10-bit by default, and HDCAM SR forces no preference on the videographer - then why the hell isn't everyone shooting 10-bit?

Now why would any global camera manufacturer waste time and effort in providing 8-bit on camera when they could keep everything in 10-bit mode without any penalty? It causes no pain to the workflow, so it must be the acquisition:

2. Live broadcast is on a 10-bit backbone (that can stream 8-bit DTH). So who shoots 8-bit in camera?

Answer: Only those who are not connected to the live HDSDI backbone. Who are these people? The ones in the field - documentary and news cameramen - but their 8-bit footage has to match (meaning it has to be visually indistinguishable from) their channel's 10-bit studio/live output.

Guess what? The difference is visually lossless. When was the last time anybody on earth looked at a video in a studio environment and guessed what bit depth it was? And even if somebody thought something was 'off', how do they know the guilty party is not the chroma sampling, the luma sampling, the file color space, the signal processing, the sampling, the software, the LUT, the gamma, the monitor, card, calibration, viewing environment, etc?

There is one final hurdle:

3. Many people use the 256 vs 1024 tonal range comparison to explain why 10-bit is better than 8-bit - four times the tonal values, in theory. Why isn't this difference seen in practice?

E.g., The BBC white paper on the Red One (by Alan Roberts) - http://thebrownings.name/WHP034/pdf/WHP034-ADD32_RED_ONE.pdf - concludes that even though the 4K image is exceptional and acceptable for broadcast, the 10-bit 1080i from the HD-SDI feed is unacceptable for broadcast. Red's mistake? Either poor sampling engineering or a sensor that cannot conform to the traditional HDSDI format. To know how sampling can hide a lot of sins, you might find my blog about it interesting: Driving Miss Digital | wolfcrow (http://wolfcrow.com/blog/driving-miss-digital/)

The Red One has 12-bit processing. The C300 has 8-bit processing. Yet the latter is fine for broadcast, the former converted to 10-bit is not. Which one looks better on a 10-bit monitor?

Just because you have that many extra bits it doesn't mean those bits contain anything useful. To understand this, one needs to understand how sampling works, and how exactly analog signals from the sensor are converted to luma and chroma information, sampled (or sub-sampled as the case may be), converted to digital data and then conformed to the 292M specification.

If a manufacturer claims 12-bit/14-bit processing, what do they mean? Nothing, really. Instead of worrying about bit-depth I recommend one study the file color space, viewing color pipeline, monitor color space, gamma curves, viewing LUTs, chroma sub-sampling and compression first. The bit depth is the least important parameter in the color chain. Only if you have the 'information' do you need the bits.

What do you do for your particular camera+recorder combination? Something like this:

1. Scope the HD-SDI feed off the camera.
2. Generate a test signal and scope the nanoflash/samurai
3. Record/monitor camera+recorder while scoping the second SDI signal on the recorder
4. Analyze the results

Or - you could just look at the monitor and trust your eyes. That's what the engineers do at the end of the day, along with the suits who run things, and the poor videographers who are burdened with these decisions.

That's what I did, and after eliminating any errors from my side, I came to the conclusion that 10-bit over 8-bit in practical terms is not worth the effort - unless I was working uncompressed. At least for me, as far as prosumer cameras are concerned - it's either 8-bit in-camera recording, or completely uncompressed via HDMI/HD-SDI. The middle ground (and the world of intermediate codecs) does not interest me.

The difference is clearly visible in the new 12-bit cameras: Epic, Alexa and F65 (16-bit). This corresponds to tests I have conducted using DSLR RAW files (12-bit) over JPEGS (8-bit).


Well, how about the comparison between using the 50mbps 4:2:2 mpeg-2 vs the 100mbps ProRes LT codecs?

I have only used ProRes once in my entire life - I tried both HQ and 422 but not LT. So I really don't know based on experience. Based on theory, Prores LT with its DCT intraframe codec should leave MPEG-2 in the dust - however in practice I believe they should be just about visually equal.

But I'm secretly rooting for MPEG-2.


I don't know what is going on under the hood, but Edius just added the ability to color correct 10bit footage with 10bit effects. It may process more behind the scenes.

To put things in perspective, After Effects (and I think Premiere CS6) has the ability to work in a 32-bit float environment. If Edius only supports 10-bit, I suggest you do your color grading on another platform.

Tim Polster
July 29th, 2012, 07:02 AM
Thank you for your excellent reply Sareesh. Great information.

Please don't take me as an authority on Edius and its processing. It can now keep a 10bit file in its native space all the way through the workflow. I do not know its processing bit depth.

Bruce Watson
July 29th, 2012, 11:47 AM
Banding CAN be caused by insufficient bitdepth, but can also be caused by too heavy compression, and if you're talking about home off-air viewing and see banding it's pretty certain to be a compression problem.

OTA typically has considerably less compression of any type compared to what you get from providers like TWC or AT&T. Which is why I'm using OTA in the first place. That said, OTA is still compressed, and sometimes severely so depending on how many sub-channels a broadcaster wants to slice out of their available bandwidth. It becomes interesting here in basketball season because a panning camera stresses the heck out of a CODEC, especially looking at a hardwood basketball floor's checking pattern. Often turns to macroblocking mush -- but a number of us locals make enough phone calls that most of the local broadcasters have learned to up their bit-rate during games. The local NBC affiliate did just that before the Olympics started -- noticeable bump up in signal quality. Our training during the last Winter Olympics two years ago apparently paid off. ;-)

I'm familiar with macro-blocking, and what I'm seeing isn't macro-blocking. It's classic standard banding. Just like you get from Photoshop working on an 8 bit scan. That kind of banding.

Banding can indeed be caused by compression, but compression can cause effects very similar to insufficient bit depth. Most people would be hard pressed to tell them apart. And I'll include myself in that assessment.

I'll also second the comment that 4:2:0 chroma subsampling is part of the problem. I suspect that the local commercials I'm talking about were captured to AVCHD, so compressed in a bunch of different ways, including 4:2:0 and eight bits per channel. Then color graded and polished up. I'm not surprised this results in banding. No one should be.

But that's my point. Better capture minimizes these kinds of artifacts. So given a choice, I'd capture in 10 bit over 8 bit. Which is what the OP was asking.

David Heath
July 29th, 2012, 03:12 PM
I'm familiar with macro-blocking, and what I'm seeing isn't macro-blocking. It's classic standard banding. Just like you get from Photoshop working on an 8 bit scan. That kind of banding.
Please, please do the Photoshop test I outlined earlier. I guarantee you'll be surprised. Macro-blocking is one form of artifacting due to compression, but banding (which does indeed look "like you get from Photoshop working on an 8 bit scan") is another, and nothing to do with bit depth.

It's difficult to be specific, since with broadcast encoders all sorts of things can vary - the ratio of data allocated to the luminance and chroma channels being one, and the variation of allocation between I-frames and difference frames being another. What is key is the point you made before - that the banding issues are mainly noticeable on gradients with saturated colours. That's largely due to the bitrate allocated to chroma being low compared with luminance, and also because chroma block sizes are large compared to luminance because of subsampling.
I'll also second the comment that 4:2:0 chroma subsampling is part of the problem. I suspect that the local commercials I'm talking about were captured to AVCHD, so compressed in a bunch of different ways, including 4:2:0 and eight bits per channel. Then color graded and polished up. I'm not surprised this results in banding. No one should be.
Bear in mind that AFAIK *ALL* transmission to home, Blu-Ray etc systems are 8 bit and 4:2:0. And those factors in themselves do not cause any problems - the problems come through not giving them enough bits to do their job well!

In the acquisition world, systems tend to have defined bandwidths, and if they are restricted (as with AVC-HD) designers have to decide how to balance compromises. And if they went for 10bit/4:2:2 it means more data to be compressed and hence far higher compression - and likely worse overall than 8bit/4:2:0!

If you see problems, likelihood is the prime cause is too low bitrate, too high compression. Moving to 10 bit may just make matters worse unless the bitrate is increased proportionately - it would just mean even higher compression.

Alister Chapman
July 29th, 2012, 03:12 PM
Sareesh:

You won't see a difference between 10 bit and 8 bit sampling on a vector scope or waveform monitor because waveform monitors and vector scopes measure amplitude and phase and there is no difference between amplitude and phase between 8 bit and 10 bit. A waveform monitor rarely has the resolution to show the 235 grey shades in an 8 bit signal, let alone the 956 shades in a 10 bit signal, and if you're looking at a camera's output, noise will diffuse any steps that might be possible to see. A normal waveform monitor/vectorscope is completely the wrong tool for trying to find any difference between 8 bit and 10 bit. It's like using a VU meter or audio level meter to determine the audio frequency.

Some histograms will tell you whether a signal is 8 bit or 10 bit by the number of steps there are from left to right across the histogram. Some NLE's and Grading tools can return the data value for points within recorded images, this may also tell you whether the signal is 8 bit or 10 bit.
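
That level-counting idea is easy to script; here is a hedged sketch (a simplification - real pipelines may scale or dither differently) of how you might spot 8-bit material hiding in a 10-bit file:

```python
import numpy as np

def looks_like_8bit(code_values):
    """code_values: integer 10-bit sample values (0-1023) from a decoded frame."""
    used = np.unique(code_values)
    # 8-bit material padded up to 10 bits typically lands only on multiples of 4.
    return bool(np.all(used % 4 == 0))

# Synthetic demo: a genuinely 10-bit signal vs. one that passed through an 8-bit stage.
rng = np.random.default_rng(2)
true_10bit = rng.integers(0, 1024, 10_000)
was_8bit = (true_10bit >> 2) << 2

print(looks_like_8bit(true_10bit))   # False
print(looks_like_8bit(was_8bit))     # True
```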

The Alan Roberts reference is a bit of a red herring. The reason the RED's output was not deemed suitable for broadcast has nothing to do with the bit depth. It is because the real time de-bayering employed by RED introduces significant artefacts into the image. RED is designed around its raw workflow; the HD-SDI output is for on set monitoring only and not really meant to be used for off board recording.

Engineers don't just look at a monitor and trust their eyes, if it was that simple there would be no need for engineers.

One test you can do with almost any NLE to assess the practical, real world difference between acquisition in 8 bit and 10 bit for your camera is to record the same scene at both 8 bit and 10 bit. You can try different scenes to see how different subjects are handled. Blue sky and flat walls can be very revealing. Then bring the clips in to the NLE or grading package and use a gain/brightness effect or filter to reduce the image brightness by 50%. Then render out that now dark clip as an uncompressed 10 bit file. Then apply a gain/brightness filter on the new uncompressed file to return the video levels to those of the original. By layering the original over the now corrected uncompressed clip and using a difference matte you can see the differences between the 8 bit and 10 bit performance. How much or little of a difference there will be depends on many factors including subject, noise, compression artefacts etc. It is best to view the pictures on a large monitor. For this test to be meaningful it is vital that you ensure the NLE is not truncating the clips to 8 bit.
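
A numeric analogue of that last caveat (a simulation only, not the actual camera test): halving the levels, rendering the intermediate at some working bit depth, gaining back up and differencing against the original shows why a pipeline that truncates to 8 bit hides whatever the 10 bit recording bought you.

```python
import numpy as np

clip = np.round(np.linspace(0, 1, 100_000) * 255) / 255    # an 8-bit source clip

def round_trip(working_levels):
    # -50% gain, rendered out at the working bit depth, then gained back up.
    dark = np.round(clip * 0.5 * working_levels) / working_levels
    restored = dark * 2.0
    return np.abs(restored - clip).max()                    # peak of the "difference matte"

print(round_trip(255))     # 8-bit intermediate: residual of ~1/255 (visible on a gradient)
print(round_trip(1023))    # 10-bit intermediate: residual shrinks to ~1/1023
```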

While Edius may be 10 bit, I think you still need to check whether QuickTime on a PC is 10 bit. If QuickTime on a PC still truncates to 8 bit then having a 10 bit edit package won't help.

Bruce. Excessive compression will absolutely cause banding in an image. Most banding artefacts that people see are not down to bit depth but to quantisation noise caused by insufficient data to record subtle image changes. Perhaps there isn't enough data to record 10 shades in a gradient, so the 10 shades get averaged together into 4 and the end result is steps. Another issue can be that the OTA signal is at best 8 bit; this is then passed to the TV's processing circuits, which will be doing all kinds of image manipulation in an attempt to make the pictures look good on the screen. This processing is commonly done at 8 bits, and 8 bit processing of an 8 bit signal can lead to further issues.

David Heath
July 29th, 2012, 03:44 PM
The Red One has 12-bit processing. The C300 has 8-bit processing. Yet the latter is fine for broadcast, the former converted to 10-bit is not. Which one looks better on a 10-bit monitor?
Not true - for starters, the C300 has an 8 bit output signal. That is not the same as 8 bit processing. The reference to Red, I suspect, is really referring to 12 bit RAW recording.

Alister earlier said:
S-Log and other Log formats designed for 10 bit, now that's another story all together.
Quite correct, and it's crucial to understand the difference between processed video, s-log and RAW. For the former, 8 bits are generally enough. (Which is why the C300 gets full approval.) For s-log and certainly for RAW, they are not. The latter two record a much wider range of values than is normally the case, and on a normal monitor they will likely look like a very flat, low contrast image.

The extra bits are needed for the processing - after which 8 bits will then normally be adequate.

S-log and 10 bit will certainly give more scope for post processing than 8 bit - but it's the combination that makes the difference, not just the 10 bit factor. Processed 10 bit video is not the same as 10 bit s-log.
Just because you have that many extra bits it doesn't mean those bits contain anything useful.
Quite true. And this is where the comments about noise come in. Except for the best cameras, the extra bits are likely to be just filled with noise!
Based on theory, Prores LT with its DCT intraframe codec should leave MPEG-2 in the dust - however in practice I believe they should be just about visually equal.
Sorry, but MPEG-2 is DCT based as well: Discrete cosine transform - Wikipedia, the free encyclopedia (http://en.wikipedia.org/wiki/Discrete_cosine_transform). If you really want to plough through the theory, that explains why excessive compression leads to banding, as said earlier; otherwise the key phrase is:
The DCT is used in JPEG image compression, MJPEG, MPEG, DV, and ......

Eric Olson
July 29th, 2012, 08:18 PM
I have a Varicam "H" model camera that has 10bit HDSDI output. For a year now I have been using a Nanoflash to record my footage which is an 8bit recorder. For the same money I could be using a 10bit Samurai recorder.

While I expect the 1280x720 resolution 2/3" chips in the H model produce relatively less noise than most of the cameras discussed in this forum, the main issue appears to be whether or not you are recording a flat image profile such as cinegamma. If you record a flat image with the intention of pushing the colors in post, then 10-bit 4:2:2 can make a noticeable difference. If you are using standard video gamma with no color-correction in post, then 8-bit recording is enough.
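
A hedged sketch of why the flat-profile case is different (the 0.25-0.75 "flat" curve here is made up for illustration): stretching a flat recording back to full contrast in post also stretches the quantization steps, so the extra 10-bit levels start to matter.

```python
import numpy as np

scene = np.linspace(0.0, 1.0, 4096)
flat = 0.25 + scene * 0.5                        # a toy flat/cinegamma-style capture curve

def tones_after_grade(bits):
    levels = 2**bits - 1
    recorded = np.round(flat * levels) / levels  # what the recorder keeps
    graded = (recorded - 0.25) / 0.5             # stretch back to full contrast in post
    return len(np.unique(graded))

print(tones_after_grade(8))    # ~128 tones spread over the full range - steps show on gradients
print(tones_after_grade(10))   # ~512 tones - four times finer
```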

Note that computer displays are typically 8-bit and distribution is 8-bit, while many flat screen TVs display only 6 bits of color. Acquisition and post processing can benefit from higher color depths, but your final product never needs more than 8-bit color.

Sareesh Sudhakaran
July 29th, 2012, 10:07 PM
Not true - for starters, the C300 has an 8 bit output signal. That is not the same as 8 bit processing. The reference to Red, I suspect, is really referring to 12 bit RAW recording.

David, what is the real signal processing bit depth in the C300? I've shot with it and I have no clue. Red One claims 12-bit DSP and in my experience with it I've seen it's better than 8 bit and slightly worse than 12-bit linear. Red has a sampling system quite unlike any other camera, due to their compressed RAW codec. So I wouldn't believe their claims either.


Quite correct, and it's crucial to understand the difference between processed video, s-log and RAW.

Before S-log, RAW and everything else - there's sampling. That's where everything is decided. Please feel free to ask any camera manufacturer for details of their sampling process.


The extra bits are needed for the processing - after which 8 bits will then normally be adequate.

S-log and 10 bit will certainly give more scope for post processing than 8 bit - but it's the combination that makes the difference, not just the 10 bit factor. Processed 10 bit video is not the same as 10 bit s-log.

Thanks for clarifying - I was only strictly referring to linear data.


Sorry, but MPEG-2 is DCT based as well: Discrete cosine transform - Wikipedia, the free encyclopedia (http://en.wikipedia.org/wiki/Discrete_cosine_transform). If you really want to plough through the theory, that explains why excessive compression leads to banding, as said earlier; otherwise the key phrase is:

DCT is an algorithm, not a compression scheme. Under the hood, many standards mix and match algorithms. A codec is a framework, a protocol - that is why it can be manipulated, as in the case of H.264 or AVCHD over the MPEG-4 protocol. DCT over intraframe is a different thing altogether.

I learnt this lesson while programming an image processing engine (similar to photoshop) 12 years ago for my college final year project - I used BMP, TIFF and JPEG specifically.

To be honest I don't care about the numbers anymore - what I really learnt was that my eyes were the best judge. Manufacturers hide too many things, and marketing is very powerful, and who has the time to sit and analyze each camera system - especially when it will be obsolete by the next trade show?

Sareesh Sudhakaran
July 30th, 2012, 12:17 AM
Sareesh:

You won't see a difference between 10 bit and 8 bit sampling on a vector scope or waveform monitor because waveform monitors and vector scopes measure amplitude and phase and there is no difference between amplitude and phase between 8 bit and 10 bit.

Never said I was looking for differences between 10-bit and 8-bit. I am looking at the signals and the sampling.

By the way, a waveform monitor displays voltage over time. An oscilloscope (or a digital waveform monitor will do as well) can freeze a wave for further study.

A vectorscope picks the frequency of two simultaneous waves - even within a complex wave such as a video signal - and compares them side by side (or one on top of the other). In video, I could display Cb and Cr on a vectorscope.
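
For reference, the Cb and Cr a vectorscope plots are just scaled colour differences; a minimal Rec. 709 conversion sketch (standard coefficients, made-up input values):

```python
def rgb_to_ycbcr_709(r, g, b):
    """Gamma-encoded R'G'B' in 0..1 -> (Y', Cb, Cr), with Cb/Cr in -0.5..0.5."""
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    cb = (b - y) / 1.8556
    cr = (r - y) / 1.5748
    return y, cb, cr

print(rgb_to_ycbcr_709(1.0, 0.0, 0.0))   # pure red: Cr at +0.5, Cb negative
print(rgb_to_ycbcr_709(0.5, 0.5, 0.5))   # neutral grey: Cb = Cr = 0
```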

For 292M, I will need a scope that has been designed to test this particular signal. It will show me the wave pattern in relative amplitude and relative phase - from which I can derive the wave function of that particular wave. The wave function tells me everything I need to know.


A waveform monitor rarely has the resolution to show the 235 grey shades in an 8 bit signal, let alone the 956 shades in a 10 bit signal, and if you're looking at a camera's output, noise will diffuse any steps that might be possible to see. A normal waveform monitor/vectorscope is completely the wrong tool for trying to find any difference between 8 bit and 10 bit. It's like using a VU meter or audio level meter to determine the audio frequency.

Forgive me, but I feel we are talking about two different things. I am to blame for it. To clarify, I'm talking about a digital waveform analyzer, capable of generating test signals under the 292M specification, like the ones Leader makes.

From such a device, by studying the signals, Y'CbCr values, and cross referencing them against test patterns I can reverse engineer the sampling process.

By comparing this data with other signals (test, random and actual) I can tell very easily the 'quality' of the color information present in a signal. If I felt particularly loony, I could also reverse engineer the tristimulus values from the chrominance information and derive the sensor and Rec.709 color spaces, just to show off. This is how the scope knows whether you are within the bounds of a particular gamut or not - except I might do the calculations manually just because I don't trust my scope either! It all depends on how paranoid I am on any given day.

Once I know which color space I'm in, I know how many colors I need - from that information I will know whether the data needs 8-bit or 10-bit word lengths to be represented accurately. I don't care what the data already is - you can put a scooter engine, an elephant or a ferrari engine in a Ferrari body. What I really want to know is how it was sampled.

Guess what I learnt? No matter what the color space, I always need 32-bit (or the maximum possible) words - every gamut has infinite potential combinations - it's like Zeno's paradox.

But since 292M can only output 10-bit files, I have to use my eyes and judge for myself whether I can live with it. 8-bit is minimum wage. 10-bit is a pat on the back with your minimum wage. The difference between 8-bit and 10-bit in practice is negligible - both in its signal characteristics and visually.

But this is my opinion, for my own workflow, based on my training and experience. I would like to believe I am right, but I might be totally wrong, and I might be the weakest link in my workflow.


Some histograms will tell you whether a signal is 8 bit or 10 bit by the number of steps there are from left to right across the histogram. Some NLE's and Grading tools can return the data value for points within recorded images, this may also tell you whether the signal is 8 bit or 10 bit.

Beware of histograms - are they showing the RAW data or a processed signal that has already been sampled? How is the histogram color information derived? Is it debayered information? Is it 8-bit or 10-bit or 16-bit or what? What are the 'clipping' parameters of any particular histogram - you might be surprised to learn it was a 'subjective' thing. You can see this in practice because no two camera manufacturers design histograms the same way. And no two RAW processing engines read histograms the same way either.

E.g., in digital photography, many high end cameras only show JPEG histograms with clipping warnings. When one pulls these files into a processing engine, one is surprised to see their histogram was not really accurate. Whom should I believe - the sensor manufacturer, the signal processing engineer, the compression engineer or the software developer who coded the RAW engine?

I'm totally for simple tools to understand data - histograms, waveforms, vectorscopes, etc - these are tools that tell me what ball park I'm in. On the field they are a great help. But I still prefer a good monitor as the easiest way to get where I want to go. The eye is just another tool - one of my favorites!

As a side note, I love the fact that BM has decided to ship the Ultrascope free with their camera, using thunderbolt.


The Alan Roberts reference is a bit of a red herring. The reason the RED's output was not deemed suitable for broadcast has nothing to do with the bit depth. It is because the real time de-bayering employed by RED introduces significant artefacts into the image. RED is designed around its raw workflow; the HD-SDI output is for on set monitoring only and not really meant to be used for off board recording.

Never meant the reference to be an example of bit depth. My words: "Red's mistake? Either poor sampling engineering or a sensor that cannot conform to the traditional HDSDI format."

It's a sampling problem, and is caused by the compressed RAW scheme employed by Red. They probably had to resample an already sampled image for HD-SDI. I'm not sure how many have wondered why Red can't give out an uncompressed 4K/5K redcode stream.

The sampling of the sensor signals, combined with the sensor's gamut, bayering mechanism and filtering process, determines everything.


Engineers don't just look at a monitor and trust their eyes, if it was that simple there would be no need for engineers.

Sorry to disappoint you, but engineers are human too. :) There are no compulsions or laws of the universe that force engineers to choose between two legal voltage ranges in a single semiconductor transistor, let alone a circuit or microprocessor or sensor. When it comes to software coding, it's all subjective. In the end, the exact parameters of camera systems are arrived at on subjective estimates - even if it's a committee's.

E.g., a RAW file is just data - if you open a RAW file on different RAW processing engines you will get different results. If you apply different algorithms you'll get different results. Two issues qualify as suspects to explain this: 1. Patents. 2. Subjectivity.

If I'm looking at a signal and doing my math based on what I know - I'll arrive at a certain conclusion. Another engineer will see it a totally different way. The variety of electronic devices and software programs in the world show that clearly. You can interpret results differently, and change the world based on those interpretations.

The only way I can know if I'm still sane at the end of the day is by looking at the result like a lay person. Does red look red? Does the music note sound the way I want it to sound? Only then is the math worth it. Don't you think?

Anyway, I don't speak for all engineers, only myself!


One test you can do with almost any NLE to assess the practical, real world difference between acquisition in 8 bit and 10 bit for your camera is to record the same scene at both 8 bit and 10 bit. You can try different scenes to see how different subjects are handled. Blue sky and flat walls can be very revealing. Then bring the clips in to the NLE or grading package and use a gain/brightness effect or filter to reduce the image brightness by 50%. Then render out that now dark clip as an uncompressed 10 bit file. Then apply a gain/brightness filter on the new uncompressed file to return the video levels to those of the original. By layering the original over the now corrected uncompressed clip and using a difference matte you can see the differences between the 8 bit and 10 bit performance. How much or little of a difference there will be depends on many factors including subject, noise, compression artefacts etc. It is best to view the pictures on a large monitor. For this test to be meaningful it is vital that you ensure the NLE is not truncating the clips to 8 bit.

Excellent tip, Alister - this is EXACTLY what I can do with a signal analyzer, except there's no romance to the process when an engineer does it!

Tim Polster
July 30th, 2012, 10:42 AM
Thanks everybody for your input on this thread. As usual I have learned a lot. I called Dan Keaton today and he highlighted to me that HD-SDI signals are always 10bit, but the Varicam is still an 8bit camera.

This was what I had always thought, but I mis-read some information recently which caused me to think the camera was actually 10bit. So my decision is easy: stay with the Nanoflash.