8bit vs 10bit Acquisition at DVinfo.net
July 27th, 2012, 08:34 AM   #1
Inner Circle
 
Join Date: Dec 2004
Location: Arlington, TX
Posts: 2,231
8bit vs 10bit Acquisition

Hello,

I wanted to ask the community about a decision I am looking to make regarding external recorders. I have a Varicam "H" model camera that has 10bit HDSDI output. For a year now I have been using a Nanoflash, which is an 8bit recorder, to record my footage. For the same money I could be using a 10bit Samurai recorder.

In your real world usage (not tech specs please), how much of a difference does one notice between 8bit and 10bit acquisition?

The Nanoflash is usually set at 50Mbps 4:2:2, and I would probably use the 100Mbps 4:2:2 ProRes setting.

Another twist - I use Edius 6.5, which can now color correct in 10bit. How much does this affect my decision?

a) the 10bit processing in Edius will polish up 8bit footage so well that 8bit vs 10bit does not matter?

b) the 10bit processing makes 10bit acquisition really come into its own?

Thanks for your input!
Tim Polster
July 27th, 2012, 11:14 PM   #2
Trustee
 
Join Date: Jan 2008
Location: Mumbai, India
Posts: 1,385
Re: 8bit vs 10bit Acquisition

Quote:
Originally Posted by Tim Polster
In your real world usage (not tech specs please), how much of a difference does one notice between 8bit and 10bit acquisition?
Hardly worth the trouble for normal broadcast work. Not even for keying. Unless you have a strict acquisition and delivery requirement for 10-bit, it's not worth it, in my experience.

To truly start seeing a difference, you'll have to compare 8-bit to 12-bit. Or at least record the bit stream to uncompressed video instead of transcoding it.

And after that, you'll need to grade in a 32-bit float environment, regardless of the bit depth of your footage, hardware, driver or monitor. When you are working in 32-bit mode the hardware is always playing catch-up; what is actually being done is done by the software. I have no experience with Edius, so I can't help you with that.
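
To make the float point concrete, here is a minimal numpy sketch (illustrative values only, not any particular NLE's pipeline) - grade an 8-bit ramp down and back up in integer math versus 32-bit float:

Code:
import numpy as np

# An 8-bit ramp: the kind of shallow gradient that bands easily.
frame8 = np.linspace(40, 90, 1920).astype(np.uint8)

# Grade twice in 8-bit integer math: halve, then double.
# Each step rounds to whole code values, destroying levels.
darkened = frame8 // 2
restored_int = np.clip(darkened.astype(np.int32) * 2, 0, 255).astype(np.uint8)

# The same two operations in 32-bit float: no intermediate rounding.
restored_float = np.clip((frame8.astype(np.float32) / 2.0) * 2.0, 0.0, 255.0)

print(np.unique(frame8).size)          # 51 distinct levels in the source
print(np.unique(restored_int).size)    # ~26: about half the levels died in the round trip
print(np.unique(restored_float).size)  # 51: every level survives in float

The integer path throws away levels at every intermediate rounding; the float path only quantizes once, at output.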

But doesn't Edius have a 16-bit environment at least?
__________________
Get the Free Comprehensive Guide to Rigging ANY Camera - one guide to rig them all - DSLRs to the Arri Alexa.
Sareesh Sudhakaran
July 28th, 2012, 07:51 AM   #3
Inner Circle
 
Join Date: Dec 2004
Location: Arlington, TX
Posts: 2,231
Re: 8bit vs 10bit Acquisition

Thanks for your reply.

I am surprised there is not much of a benefit. Could I ask for more information regarding your experiences? Why was it not worth it?

Since my camera is outputting 10bit and the Nanoflash and Samurai are pretty close in price, I could go either way without impact.

Well, how about the comparison between the 50Mbps 4:2:2 MPEG-2 and the 100Mbps ProRes LT codecs?

I don't know what is going on under the hood, but Edius just added the ability to color correct 10bit footage with 10bit effects. It may process more behind the scenes.
Tim Polster
July 28th, 2012, 08:41 AM   #4
Major Player
 
Join Date: Mar 2010
Location: Raleigh, NC, USA
Posts: 710
Re: 8bit vs 10bit Acquisition

Quote:
Originally Posted by Tim Polster
In your real world usage (not tech specs please), how much of a difference does one notice between 8bit and 10bit acquisition?
Like almost everything related to photography or video, it depends. In this case, it depends on what you are trying to capture, and on what you want to do with it after you capture it.

More bits means more shades of any given color. Given an individual hue and a range of tones of this color, from black to white, 8 bit can theoretically capture as many as 256 separate tones, while 10 bit can capture four times as many, at 1024.

In the real world, 8 bit capture won't really give you 256, partly because broadcast "legal" range reserves code values at both ends. More like 240 or so. And 10 bit more like 960.

So what's it mean? It means the difference between a beautiful smooth blue sky, and a beautiful banded blue sky.
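
The arithmetic is easy to check. A smooth sky only occupies a slice of the tonal range, so what matters is how many code values land inside that slice - a rough numpy sketch (normalized values, ignoring broadcast legal-range offsets):

Code:
import numpy as np

# A sky-like gradient spanning ~15% of the tonal range, normalized 0.0-1.0.
sky = np.linspace(0.20, 0.35, 1920)

levels_8 = np.unique(np.round(sky * 255)).size
levels_10 = np.unique(np.round(sky * 1023)).size

print(levels_8)   # ~39 steps across the whole sky -> each band ~50 pixels wide
print(levels_10)  # ~154 steps -> the same sky, four times finer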

I actually see this all the time. I use a Panasonic plasma screen for TV, with an over the air antenna. I get excellent reception, which means I can see everything, every bit, that they broadcast, for good or ill. And some of it is pretty ugly. In commercials I often see banding in the backgrounds. Seems like the current vogue is to have a very saturated background that darkens toward the corners. This typically results in a nice set of oval banding (from the oval vignette mask's gradient). Because the ads are often made locally, and the local companies are using less expensive equipment, they are making 8 bit captures.

Everything else being equal (never happens, but still), 10 bit captures have the potential to look better than 8 bit captures.

And I should note that applying that gradient to an 8 bit capture in a 10 bit editing space will help. A little. It means you have more headroom to push the color around while grading without crushing the dark end or clipping the light end. But it won't magically pull tonal information out of the air -- an 8 bit capture is still an 8 bit capture.
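
Extending the same sketch shows why: quantize at 8 bits first, then promote into a 10-bit space, and the capture stays the ceiling.

Code:
import numpy as np

sky = np.linspace(0.20, 0.35, 1920)

# Captured at 8 bits, then promoted into a 10-bit working space.
captured_8 = np.round(sky * 255) / 255.0
promoted_10 = np.round(captured_8 * 1023)

# Captured natively at 10 bits.
native_10 = np.round(sky * 1023)

print(np.unique(promoted_10).size)  # still ~39 levels: the capture sets the ceiling
print(np.unique(native_10).size)    # ~154 levels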
Bruce Watson
July 28th, 2012, 10:49 AM   #5
Inner Circle
 
Join Date: Dec 2004
Location: Arlington, TX
Posts: 2,231
Re: 8bit vs 10bit Acquisition

Thanks Bruce. So you would lean towards the 10bit capture? This would be an (almost) equal situation...plug in a Nano or a Samurai to the back of the camera. Resulting images would reflect the recorder's potential.

I enjoy proper color and use secondary color correction often as Edius makes it real-time.
Tim Polster
July 28th, 2012, 11:40 AM   #6
Major Player
 
Join Date: Oct 2009
Location: Reno, NV
Posts: 553
Re: 8bit vs 10bit Acquisition

Quote:
Originally Posted by Tim Polster
I enjoy proper color and use secondary color correction often as Edius makes it real-time.
From your original post it sounded like you already owned a Nanoflash and wanted to replace it with a Samurai. I've never used either, but given your workflow, 10-bit seems the obvious choice for a new purchase.

I have a little question concerning 10-bit video source. I know with audio there are encoders that work with 24-bit audio directly so there is no need to dither it down to 16-bits before encoding. Are there video encoders that work directly with 10-bit source, or does one have to downsample it to 8-bit before encoding?
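
For what it's worth, the downsample step itself works the same way as in audio - round, optionally with a little dither so the quantization error decorrelates from the signal. A numpy sketch (made-up code values, not any particular encoder):

Code:
import numpy as np

rng = np.random.default_rng(0)

# A shallow 10-bit ramp (code values 200-260 out of 0-1023).
src_10 = np.linspace(200, 260, 1920)

# Plain conversion: divide by 4 and round. Band edges line up in hard steps.
plain_8 = np.round(src_10 / 4).astype(np.uint8)

# Dithered conversion: ~1 LSB of noise before rounding decorrelates
# the quantization error from the signal, as in 24->16 bit audio mastering.
dithered_8 = np.round(src_10 / 4 + rng.uniform(-0.5, 0.5, src_10.size)).astype(np.uint8)

# Count value changes along the ramp: hard band edges vs scattered ones.
print(np.count_nonzero(np.diff(plain_8.astype(int))))     # ~15 clean edges -> banding
print(np.count_nonzero(np.diff(dithered_8.astype(int))))  # hundreds -> the eye averages them out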
Eric Olson
July 28th, 2012, 12:43 PM   #7
Inner Circle
 
Join Date: Dec 2004
Location: Arlington, TX
Posts: 2,231
Re: 8bit vs 10bit Acquisition

I would sell the Nano if 10bit is the better option, as I recently learned the Varicam has 10bit output.

Good question about encoding. I am assuming the encoder would take a 10bit source and create the 8bit final. It would be good to hear from those more in the know.
Tim Polster
July 28th, 2012, 01:23 PM   #8
Inner Circle
 
Join Date: Jan 2006
Posts: 2,699
Re: 8bit vs 10bit Acquisition

Quote:
Originally Posted by Bruce Watson
So what's it mean? It means the difference between a beautiful smooth blue sky, and a beautiful banded blue sky.

I actually see this all the time. I use a Panasonic plasma screen for TV, with an over the air antenna. I get excellent reception, which means I can see everything, every bit, that they broadcast, for good or ill. And some of it is pretty ugly. In commercials I often see banding in the backgrounds.
I very much doubt that what you are seeing has anything to do with bit depth. If you went to the studio and saw it in the gallery (also 8 bit) you wouldn't see that banding.

Banding CAN be caused by insufficient bitdepth, but can also be caused by too heavy compression, and if you're talking about home off-air viewing and see banding it's pretty certain to be a compression problem.

It's not at first obvious why it should be so, but you can easily demonstrate it in Photoshop. Make a blank canvas, then apply the gradient tool left to right - you should see a nice smooth gradient with no obvious banding, despite being 8 bit. Now save the image as a JPEG at varying compression settings, but including max compression. Lo and behold, severe banding, but as the bitdepth has stayed the same it is solely down to compression.
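
If you'd rather script that experiment than open Photoshop, here's the same demonstration in Python (a sketch assuming numpy and Pillow are installed; filenames arbitrary):

Code:
import numpy as np
from PIL import Image  # Pillow, assumed installed

# A smooth 8-bit horizontal gradient, 1920x1080 greyscale.
ramp = np.tile(np.linspace(0, 255, 1920).astype(np.uint8), (1080, 1))
img = Image.fromarray(ramp, mode='L')

img.save('gradient_q95.jpg', quality=95)  # still looks smooth
img.save('gradient_q05.jpg', quality=5)   # severe banding - same 8 bits throughout

# Reload the heavily compressed file and inspect what survived.
degraded = np.asarray(Image.open('gradient_q05.jpg'))
print(np.unique(ramp).size)      # 256 levels going in
print(np.unique(degraded).size)  # typically far fewer effective levels coming out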

"Why?" is quite difficult to answer without getting too technical, but as simply as possible is down to most compression systems using a block basis. Individual pixel values are given by having an average value for the block, then giving the values of the pixels as differences relative to the average. Harder you compress, the more mangled these differences get until in an extreme case they all become zero - all pixels then just have the average value. Hence banding - what you are seeing as bands are the coarse differences between block average values.

Secondly, it's common for codecs to compress the chroma signals more than luminance. The fact that you mention seeing the problem more often on coloured gradients reflects that.

Now if we're talking about a defined bitrate (say 10 Mb/s), then 10 bit compression quality will be the same as 8 bit at 8 Mb/s. Arguably, it's better (less likely to generate banding) if that extra 2 Mb/s is used to give lower overall compression - not 10 bit.

But Tim's question was about acquisition, and here bitrates should be far higher. Fundamentally, 10 bit will only give an advantage for very low noise camera front ends. If the noise is above a certain level, you're better off using the bitrate to more accurately encode 8 bit.

The other point is what is being encoded? 10 bit can have real advantages for signals like s-log, far less so for "ready to view" video. Move into the RAW world and better than 8 bit is essential, probably 12 bit.
David Heath
July 28th, 2012, 02:26 PM   #9
Inner Circle
 
Join Date: Dec 2004
Location: Arlington, TX
Posts: 2,231
Re: 8bit vs 10bit Acquisition

This is where the 8bit vs 10bit gets confusing for me. 10bit seems to represent better color definition. But if the signal is noisy, the extra information is compromised at lower bitrates.

So what to do? I shoot a lot of events, so I need the recording time. 100Mbps is stretching it, but with a hard drive in the Samurai it would be easy to deal with.

On the outside it seems a waste to have a 10bit signal and knowingly record 8bit.

The extra help would be welcome if I need to fix exposure in post, and I admit to enjoying pushing files around in post to enhance them.
Tim Polster
July 28th, 2012, 02:44 PM   #10
Inner Circle
 
Join Date: Jan 2006
Posts: 2,699
Re: 8bit vs 10bit Acquisition

Quote:
Originally Posted by Tim Polster
This is where the 8bit vs 10bit gets confusing for me. 10bit seems to represent better color definition.
Nothing to do with colour or definition in its normal meaning. It's about the difference between adjacent shades that the eye can distinguish. (Think of a greyscale with the step differences getting smaller and smaller.) And most of the time even 7 bit offers finer steps than the eye can resolve.
Quote:
But if the signal is noisy, the extra information is compromised at lower bitrates.
If the signal is noisy, all that 10 bit will do is more accurately define the noise! With unlimited bitrate that doesn't matter, but at a fixed bitrate you're better off using the bits to lower the compression at 8 bit - not to record the noise more accurately!
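
You can sketch that interaction too: give a gradient about one 8-bit LSB of sensor noise (an assumed, prosumer-ish figure) and look at where the bits go.

Code:
import numpy as np

rng = np.random.default_rng(1)
sky = np.linspace(0.20, 0.35, 1920)

# Sensor noise of about one 8-bit code value.
noisy = np.clip(sky + rng.normal(0.0, 1.0 / 255.0, sky.size), 0.0, 1.0)

err_8 = np.round(noisy * 255) - sky * 255     # total error in 8-bit code values
err_10 = np.round(noisy * 1023) - sky * 1023  # total error in 10-bit code values

print(err_8.std())   # ~1 LSB: quantization is already buried in the noise
print(err_10.std())  # ~4 LSB (in 10-bit units): the extra two bits mostly record noise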
David Heath
July 28th, 2012, 02:59 PM   #11
Inner Circle
 
Join Date: Dec 2005
Location: Bracknell, Berkshire, UK
Posts: 4,957
Re: 8bit vs 10bit Acquisition

I think you might be disappointed by the Samurai. Not saying the Samurai isn't a good device - I have one and I do like it - but I also like the NanoFlash.

It may seem trivial, but the lack of tally lights on the Samurai is really frustrating. It means you must mount it somewhere you can see the LCD screen to know whether it's recording, and the LCD is not good in sunshine. HDDs are risky; SSDs are fine, but CF cards are so much more convenient. 50Mb/s material also needs less long-term storage space than 100Mb/s ProRes, and the noise reduction inherent in the XDCAM MPEG encoding results in a cleaner looking image than you'll get with the Samurai. There's no cache record or overcranking with the Samurai either. I would not want to use a hard drive: I've experienced recording corruption caused by vibration from loud music, bumping the camera, and rapidly turning or panning the camera. SSDs are fine but more expensive.

As for 10 bit over 8 bit? For regular gammas you probably won't ever see a visual difference. Heavy grading or post work might start to reveal some tiny differences between 50Mb/s MPEG-2 and ProRes 422. Move up to ProRes HQ and there may be a more noticeable difference, but it's still only small. Unless the camera is very low noise, the noise in the image will tend to limit how far you can push it before the bit depth does. Also do your homework on your workflow. Many edit applications truncate 10 bit to 8 bit, especially QuickTime on a PC (last time I checked this includes Edius and Avid). Most of FCP's effects will truncate the clip to 8 bit. So it is a bit of a nightmare. If you already have the NanoFlash, I would suggest there is very little to be gained. S-Log and other log formats designed for 10 bit - now that's another story altogether.
__________________
Alister Chapman, Film-Maker/Stormchaser http://www.xdcam-user.com/alisters-blog/ My XDCAM site and blog. http://www.hurricane-rig.com
Alister Chapman
July 28th, 2012, 03:54 PM   #12
Inner Circle
 
Join Date: Dec 2004
Location: Arlington, TX
Posts: 2,231
Re: 8bit vs 10bit Acquisition

Thanks for your opinions. The Nano is a great unit. I also like that it has an HDMI output which opens up monitor choices.

What started this was the change in Edius to a true 10bit program in v6.5 with most of its filters. That change alone helps 8bit footage as well so it looks like the Nano is getting the nod. The Varicam H model is not the cleanest camera around either.

I am still curious though so I will try to seek out a 10bit capture device and test them side by side.

Thanks again!
Tim Polster
July 29th, 2012, 01:42 AM   #13
Trustee
 
Join Date: Jan 2008
Location: Mumbai, India
Posts: 1,385
Re: 8bit vs 10bit Acquisition

Quote:
Originally Posted by Tim Polster
Thanks for your reply.
I am surprised there is not much of a benefit. Could I ask for more information regarding your experiences? Why was it not worth it?
I couldn't see any substantial differences between the internal and external recordings, either visually or through a scope - both on a waveform monitor and a vectorscope. Honestly, I have no idea why that was. And I'm an electrical engineer, so I like to believe I know how to use the damn things.

The rest of my answer is my rationalization of why I failed to see any differences (Warning: It's technical and long-winded):

1. The HD-SDI base specification - the lowest it can go - is SMPTE 292M. A single SDI link allows (maximum):
  • 10-bit word size
  • 1.5 Gbps throughput (190 MB/s)
  • Single SDI link
  • 4:2:2 chroma subsampling, Rec. 709, encoding gamma applied
  • 720p60
  • 1080i59.94 (50p and 59.94p not supported in 292M)

Every professional broadcast system built on HDSDI must adhere to 292M. To increase bandwidth (for 4:4:4 and 1080p59.94), two SDI cables are used under the specification 372M, or one 3G-SDI link is used under 424M. For your purposes, we don't need to worry about the latter two. But I mentioned them because there is one important consideration - both dual-link HD-SDI and 3G-SDI have to follow the 292M protocol when required.

Point being: 292M (10-bit) is the minimum standard for broadcast systems. You could throw 8-bit or 1-bit in there, but your system is already ready for 10-bit - it's a one size fits all shoe.

To keep up with this, using the MPEG-4 compression standard, Sony developed HDCAM SR - the gold standard for delivery. The base data rate is 440 Mbps (55 MB/s) (SQ). 64 minutes @59.94i 4:4:4 10-bit is about 206 GB. 4:2:2 is about 138 GB.

1080i59.94 RGB (4:4:4) uncompressed calculations:
  • 8-bit is about 6 MB/frame. @59.94i, data rate is 178 MB/s. 60 minutes cost me 625 GB.
  • 10-bit is 7.4 MB/frame. @59.94i, data rate is 222 MB/s. 60 minutes cost me 780 GB.

1080i59.94 RGB (4:2:2) uncompressed calculations:
  • 8-bit is about 4 MB/frame. @59.94i, data rate is 120 MB/s. 60 minutes cost me 417 GB.
  • 10-bit is 5 MB/frame. @59.94i, data rate is 150 MB/s. 60 minutes cost me 520 GB. (Note: Along with audio this falls within the 190 MB/s limit of 292M)
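
Those figures are easy to re-derive - a quick script of the arithmetic above (binary megabytes, 29.97 frames/s), for anyone who wants to plug in other frame sizes:

Code:
def rates(bytes_per_sample, samples_per_pixel, width=1920, height=1080, fps=30000/1001):
    """Uncompressed video data rates, in binary units."""
    frame_mb = width * height * samples_per_pixel * bytes_per_sample / 2**20
    return frame_mb, frame_mb * fps, frame_mb * fps * 3600 / 1024  # MB/frame, MB/s, GB/hr

print(rates(1.0, 3))   # 4:4:4  8-bit: ~5.9 MB/frame, ~178 MB/s, ~625 GB/hr
print(rates(1.25, 3))  # 4:4:4 10-bit: ~7.4 MB/frame, ~222 MB/s, ~781 GB/hr
print(rates(1.0, 2))   # 4:2:2  8-bit: ~4.0 MB/frame, ~119 MB/s, ~417 GB/hr
print(rates(1.25, 2))  # 4:2:2 10-bit: ~4.9 MB/frame, ~148 MB/s, ~521 GB/hr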

The difference? A mere 20% in actual data rate. But is it? Look at it this way, if I'm shooting at 15:1 ratio for a 2 hour project, how many HDCAM SR tapes can I save if I stick to 8-bit over 10-bit? Drum roll...none. A 64 minute HDCAM SR tape in SQ mode can give 64 minutes of both 10-bit 4:4:4 and 4:2:2 - there is no financial penalty for going 10-bit over 8-bit.

In your case, you want to compress to ProRes LT - 100 Mbps - 12.5 MB/s. For 1,800 minutes of footage the difference in data between 8-bit and 10-bit is about 250 GB - that's it. Total project data is 1.3 TB, so if you have a 2 TB drive for each backup you won't notice the extra 250 GB.

Long story short - if the minimum HD-SDI standard is 10-bit, and a professional broadcast HDSDI backbone already supports 10-bit by default, and HDCAM SR forces no preference on the videographer - then why the hell isn't everyone shooting 10-bit?

Now why would any global camera manufacturer waste time and effort in providing 8-bit on camera when they could keep everything in 10-bit mode without any penalty? It causes no pain to the workflow, so it must be the acquisition:

2. Live broadcast is on a 10-bit backbone (that can stream 8-bit DTH). So who shoots 8-bit in camera?

Answer: Only those who are not connected to the live HDSDI backbone. Who are these people? The ones in the field - documentary and news cameramen - but their 8-bit footage has to match (meaning it has to be visually indistinguishable from) their channel's 10-bit studio/live output.

Guess what? The difference is visually lossless. When was the last time anybody on earth looked at a video in a studio environment and guessed what bit depth it was? And even if somebody thought something was 'off', how do they know the guilty party is not the chroma sampling, the luma sampling, the file color space, the signal processing, the sampling, the software, the LUT, the gamma, the monitor, card, calibration, viewing environment, etc?

There is one final hurdle:

3. Many people use the 256 vs 1024 tonal range comparison to explain why 10-bit is better than 8-bit by a factor of four. In theory. Why isn't this difference seen in practice?

E.g., The BBC white paper on the Red One (by Alan Roberts) - http://thebrownings.name/WHP034/pdf/...32_RED_ONE.pdf - concludes that even though the 4K image is exceptional and acceptable for broadcast, the 10-bit 1080i from the HD-SDI feed is unacceptable for broadcast. Red's mistake? Either poor sampling engineering or a sensor that cannot conform to the traditional HDSDI format. To know how sampling can hide a lot of sins, you might find my blog about it interesting: Driving Miss Digital | wolfcrow

The Red One has 12-bit processing. The C300 has 8-bit processing. Yet the latter is fine for broadcast, the former converted to 10-bit is not. Which one looks better on a 10-bit monitor?

Just because you have that many extra bits doesn't mean those bits contain anything useful. To understand this, one needs to understand how sampling works, and how exactly analog signals from the sensor are converted to luma and chroma information, sampled (or sub-sampled as the case may be), converted to digital data and then conformed to the 292M specification.

If a manufacturer claims 12-bit/14-bit processing, what do they mean? Nothing, really. Instead of worrying about bit depth, I recommend studying the file color space, viewing color pipeline, monitor color space, gamma curves, viewing LUTs, chroma sub-sampling and compression first. The bit depth is the least important parameter in the color chain. Only if you have the 'information' do you need the bits.

What do you do for your particular camera+recorder combination? You have one of two choices. The first is to test methodically:

1. Scope the HD-SDI feed off the camera.
2. Generate a test signal and scope the Nanoflash/Samurai
3. Record/monitor camera+recorder while scoping the second SDI signal on the recorder
4. Analyze the results

Or - you could just look at the monitor and trust your eyes. That's what the engineers do at the end of the day, along with the suits who run things, and the poor videographers who are burdened with these decisions.

That's what I did, and after eliminating any errors from my side, I came to the conclusion that 10-bit over 8-bit in practical terms is not worth the effort - unless I was working uncompressed. At least for me, as far as prosumer cameras are concerned, it's either 8-bit in-camera recording or completely uncompressed via HDMI/HD-SDI. The middle ground (and the world of intermediate codecs) does not interest me.

The difference is clearly visible in the new 12-bit cameras: Epic, Alexa and the F65 (16-bit). This corresponds to tests I have conducted using DSLR RAW files (12-bit) against JPEGs (8-bit).

Quote:
Well, how about the comparison between using the 50mbps 4:2:2 mpeg-2 vs the 100mbps ProRes LT codecs?
I have only used ProRes once in my entire life - I tried both HQ and 422, but not LT. So I really can't say from experience. On paper, ProRes LT with its intraframe DCT codec should leave MPEG-2 in the dust - but in practice I believe they would be just about visually equal.

But I'm secretly rooting for MPEG-2.

Quote:
I don't know what is going on under the hood, but Edius just added the ability to color correct 10bit footage with 10bit effects. It may process more behind the scenes.
To put things in perspective, After Effects (and I think Premiere CS6) has the ability to work in a 32-bit float environment. If Edius only supports 10-bit, I suggest you do your color grading on another platform.
__________________
Get the Free Comprehensive Guide to Rigging ANY Camera - one guide to rig them all - DSLRs to the Arri Alexa.
Sareesh Sudhakaran
July 29th, 2012, 07:02 AM   #14
Inner Circle
 
Join Date: Dec 2004
Location: Arlington, TX
Posts: 2,231
Re: 8bit vs 10bit Acquisition

Thank you for your excellent reply Sareesh. Great information.

Please don't take me as an authority on Edius and its processing. It can now keep a 10bit file in its native space all the way through the workflow; I do not know its processing bit depth.
Tim Polster
July 29th, 2012, 11:47 AM   #15
Major Player
 
Join Date: Mar 2010
Location: Raleigh, NC, USA
Posts: 710
Re: 8bit vs 10bit Acquisition

Quote:
Originally Posted by David Heath
Banding CAN be caused by insufficient bitdepth, but can also be caused by too heavy compression, and if you're talking about home off-air viewing and see banding it's pretty certain to be a compression problem.
OTA typically has considerably less compression of any type compared to what you get from providers like TWC or AT&T. Which is why I'm using OTA in the first place. That said, OTA is still compressed, and sometimes severely so depending on how many sub-channels a broadcaster wants to slice out of their available bandwidth. It becomes interesting here in basketball season because a panning camera stresses the heck out of a codec, especially looking at a hardwood basketball floor's checking pattern. Often turns to macroblocking mush -- but a number of us locals make enough phone calls that most of the local broadcasters have learned to up their bit-rate during games. The local NBC affiliate did just that before the Olympics started -- noticeable bump up in signal quality. Our training during the last Winter Olympics two years ago apparently paid off. ;-)

I'm familiar with macro-blocking, and what I'm seeing isn't macro-blocking. It's classic standard banding. Just like you get from Photoshop working on an 8 bit scan. That kind of banding.

Banding can indeed be caused by compression, but compression can cause effects very similar to insufficient bit depth. Most people would be hard pressed to tell them apart. And I'll include myself in that assessment.

I'll also second the comment that 4:2:0 chroma subsampling is part of the problem. I suspect that the local commercials I'm talking about were captured to AVCHD, so compressed in a bunch of different ways, including 4:2:0 and eight bits per channel. Then color graded and polished up. I'm not surprised this results in banding. No one should be.

But that's my point. Better capture minimizes these kinds of artifacts. So given a choice, I'd capture in 10 bit over 8 bit. Which is what the OP was asking.
Bruce Watson