View Full Version : Do you really need an external recorder?
Mark Rosenzweig November 4th, 2016, 09:55 AM An external recorder like the Atomos Shogun gives you the ability to shoot at high bitrates and 4:2:2 color that most cameras cannot do, sometimes even real 10bit. But does that really make the video any better compared to what the camera can do on its own using 4:2:0 and high compression?
In this video, shots were recorded simultaneously (in camera) at 100 Mbps XAVC S and externally via HDMI by the Shogun Inferno using ProRes HQ 4K (754 Mbps, intraframe). Then the Slog2/SGamut clips from each device were color graded with exactly the same settings and combined sequentially. Rendered in XAVC Intra, which is a 4:2:2 codec.
Atomos (the maker of the Shogun) claims that even at 8-bit, shooting in 4:2:2 rather than 4:2:0 makes a difference, and arguably the extremely high bitrate of ProRes HQ avoids the macroblocking that some claim to see from XAVC S at even 100 Mbps. This video has plenty of colors and details, including moving leaves that are hard on long GOP codecs. See any differences?
XAVC S 4:2:0 vs. ProRes HQ 4:2:2 Graded from Slog2/SGamut: Fall Colors in 4K on Vimeo
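To put rough numbers on the two recordings described above, here is a back-of-the-envelope sketch (assumptions: UHD resolution, 25 fps, and an 8-bit 4:2:2 HDMI feed into the Shogun; only the 100 Mbps and 754 Mbps figures come from the post itself):

```python
# Rough uncompressed data-rate math for the two recordings (assumed UHD at 25 fps).
width, height, fps = 3840, 2160, 25

def uncompressed_mbps(bit_depth, chroma):
    """Raw video data rate in Mbps for a given bit depth and chroma subsampling."""
    luma = width * height                       # one luma sample per pixel
    chroma_samples = {"4:2:0": luma / 2,        # two chroma planes at 1/4 resolution
                      "4:2:2": luma}[chroma]    # two chroma planes at 1/2 resolution
    return (luma + chroma_samples) * bit_depth * fps / 1e6

internal = uncompressed_mbps(8, "4:2:0")   # what the 100 Mbps XAVC S encode starts from
external = uncompressed_mbps(8, "4:2:2")   # what the Shogun sees over HDMI (ProRes HQ stores it as 10-bit)

print(f"8-bit 4:2:0 uncompressed: {internal:.0f} Mbps -> roughly {internal/100:.0f}:1 at 100 Mbps XAVC S")
print(f"8-bit 4:2:2 uncompressed: {external:.0f} Mbps -> roughly {external/754:.1f}:1 at 754 Mbps ProRes HQ")
```

So even before bit depth enters the picture, the external recording is compressed roughly five times more gently (about 4:1) than the internal one (about 25:1).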
Bruce Watson November 4th, 2016, 01:19 PM An external recorder like the Atomos Shogun gives you the ability to shoot at high bitrates and 4:2:2 color that most cameras cannot do, sometimes even real 10bit. But does that really make the video any better compared to what the camera can do on its own using 4:2:0 and high compression?
Where the extra bits help is in color correcting and grading. You can't push video that's 4:2:0 and 8 bit very hard at all before generating visual artifacts like banding.
But if you're just going to take the video out of the camera (4:2:0 8 bit) without any correction or grading, and compare the same capture at 4:2:2 10 bit, you aren't likely to see a lot of difference except in certain specific ranges of colors and certain gradients (clear skies, etc.).
As to compression, mostly this shows in motion, especially in pans and tilts, where every pixel in each successive frame is different. Under these conditions, the more bits the better. Indeed, under these conditions you want an intra-frame CODEC (compression of each frame individually) as opposed to an inter-frame CODEC (compression using long group-of-pictures (GOP) with p- and b-frames). The "big test" is often running water like a river splashing over rocks -- long GOP CODECs like AVCHD tend to turn to mush with such a scene (macroblocking), and the human visual system really picks up on it.
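To make the banding mechanism concrete, here is a tiny illustrative sketch (not derived from either recording): quantize a smooth ramp to 8 or 10 bits, apply the kind of aggressive contrast push a heavy grade involves, and count how many distinct levels survive.

```python
import numpy as np

# A smooth 0..1 ramp standing in for a clear sky or other gentle gradient.
ramp = np.linspace(0.0, 1.0, 4096)

def push_after_quantizing(bit_depth, gain=4.0):
    """Quantize to the given bit depth, then apply a strong contrast 'grade'."""
    levels = 2 ** bit_depth - 1
    quantized = np.round(ramp * levels) / levels          # what the codec stored
    graded = np.clip((quantized - 0.4) * gain, 0.0, 1.0)  # aggressive push in post
    return len(np.unique(np.round(graded * 255)))         # distinct 8-bit output values

print("8-bit source  -> distinct output levels:", push_after_quantizing(8))
print("10-bit source -> distinct output levels:", push_after_quantizing(10))
```

Across the stretched range the 8-bit source ends up with roughly a quarter of the output levels of the 10-bit one, and those missing in-between values are exactly the stair-stepping that reads as banding in skies and other gradients.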
Mark Rosenzweig November 4th, 2016, 01:57 PM Where the extra bits help is in color correcting and grading. You can't push video that's 4:2:0 and 8 bit very hard at all before generating visual artifacts like banding.
But if you're just going to take the video out of the camera (4:2:0 8 bit) without any correction or grading, and compare the same capture at 4:2:2 10 bit, you aren't likely to see a lot of difference except in certain specific ranges of colors and certain gradients (clear skies, etc.).
As to compression, mostly this shows in motion, especially in pans and tilts, where every pixel in each successive frame is different. Under these conditions, the more bits the better. Indeed, under these conditions you want an intra-frame CODEC (compression of each frame individually) as opposed to an inter-frame CODEC (compression using long group-of-pictures (GOP) with p- and b-frames). The "big test" is often running water like a river splashing over rocks -- long GOP CODECs like AVCHD tend to turn to mush with such a scene (macroblocking), and the human visual system really picks up on it.
The video I posted underwent extensive color grading and correction, as the original files were SLOG. That is the point. And the question is do you see any difference given the heavy grading? We all know the theory. This is an empirical test. Maybe the test is invalid, but it is not because there was not extensive manipulation of the image.
Noa Put November 4th, 2016, 02:34 PM This recent video goes a bit deeper into the subject
https://youtu.be/AekKwgvS5K0
Cliff Totten November 5th, 2016, 09:05 PM In my experience, I would argue that 10bit is "generally" not necessary if you are shooting in a rec709-ish profile that is already 95% of the color you want for delivery. That bit depth is certainly a nice thing to have, but can you do perfectly fine without it? Certainly! Can you shoot GORGEOUS imagery in just 8 bit? Of course you can.
What about SLOG? I have even heard some people say that you would have to be a "fool" to shoot SLOG on 8 bit. That's complete and total B.S. If you expose it properly, you absolutely CAN shoot fantastic video in 8bit with SLOG-2. Is 10bit optimal? Absolutely, there is no denying that. However, that's not to say that you can't benefit from SLOG-2's dynamic range and gamma compression using an 8bit color depth. You "might" possibly pay a color banding price in "some" circumstances, but the benefits can many times outweigh the penalty. I myself have graded scenes in 8bit SLOG-2 with great success in gradients. However, I have at times been hit with a little banding here and there (not dramatic and not often).
There are two things that cause color banding:
#1 - Color bit depth: 8bit gives only 256 shades per channel.
#2 - Very high compression ratio artifacts. (this is less often talked about today on forums and YouTube)
I used to do testing between 8bit AVCHD and 8bit ProRes and found that the exact same scene in AVCHD revealed WAY more (or more pronounced) banding than its ProRes clone when stretched hard in post. Why is this? They both had the exact same 8bit, 256 colors per channel. Easy... because of reason #2. ProRes' intra-frame structure is just far more "durable" than a high-ratio long GOP CODEC. What does MPEG do when a scene is too complex for its capped bit rate? It resorts to averaging itself into larger blocks to stay under the bitrate limit, and it does it in the darker areas of the scene first.
XAVC-S/L's h.264 compression quality in the highlights and midtones is very good, but it's the shadows that can reveal problems. Lifting up the lows is where you could find quantization and macroblocking. These areas are where ProRes will stand out better. Lifted shadows won't exactly be "cleaner", but they could be less "blocky" depending on the complexity of your scene.
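A toy illustration of that block-averaging point (this is not a real MPEG encoder, just the visible effect it produces in the darks): average a dim gradient in 16x16 blocks, lift the shadows, and the jumps between blocks become an order of magnitude larger than in the unaveraged version.

```python
import numpy as np

# Toy model of what a starved long-GOP encoder does to dark areas: it averages
# detail away inside coding blocks. Not a real encoder, just the effect.
h, w, block = 256, 256, 16
dark_gradient = np.tile(np.linspace(0.02, 0.10, w), (h, 1))   # a dim, smooth ramp

blocked = dark_gradient.copy()
for y in range(0, h, block):
    for x in range(0, w, block):
        blocked[y:y+block, x:x+block] = blocked[y:y+block, x:x+block].mean()

def lift_shadows(img, gain=8.0):
    """Crude shadow lift: multiply the lows up the way a grade might."""
    return np.clip(img * gain, 0.0, 1.0)

# The step between neighbouring blocks after the lift is what reads as macroblocking.
print("max step between adjacent columns, clean  :", np.abs(np.diff(lift_shadows(dark_gradient), axis=1)).max())
print("max step between adjacent columns, blocked:", np.abs(np.diff(lift_shadows(blocked), axis=1)).max())
```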
I have even heard people say that 10bit is "cleaner" than 8bit! Seriously? Very strictly speaking, a 10bit CODEC could have a better signal to noise ratio than an 8bit one... but in the real world, both CODECs are clean, and 10bit will not "cover up" image sensor noise any more than 8bit will. The majority of noise you will see in an image is sensor noise, not CODEC noise that can be blamed on anything having to do with 8 bit. (Garbage, noise and junk from a very low MPEG bitrate... yes.)
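For reference, the "strictly speaking" caveat corresponds to the textbook ideal quantization SNR of roughly 6.02·n + 1.76 dB for n bits; the sensor figure in the comment below is an assumed ballpark, not a measurement.

```python
def quantization_snr_db(bits):
    """Ideal quantization SNR for an n-bit full-scale signal."""
    return 6.02 * bits + 1.76

print("8-bit  ideal quantization SNR:", quantization_snr_db(8), "dB")   # ~49.9 dB
print("10-bit ideal quantization SNR:", quantization_snr_db(10), "dB")  # ~62.0 dB
# A consumer sensor at moderate gain might deliver on the order of 40 dB SNR
# (assumed ballpark), so sensor noise swamps either codec's quantization noise.
```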
An external recorder also gives you a nicer and larger screen for 4k focusing as well as very robust exposure tools that are often much easier to see and use.
CT
John McCully November 5th, 2016, 09:29 PM See any differences?
No, should I have? Do you?
(viewed on a 4k monitor)
Mark Rosenzweig November 5th, 2016, 11:01 PM See any differences?
No, should I have? Do you?
(viewed on a 4k monitor)
No, I also see no difference, also viewed on a 4K monitor. But all that theorizing said we should: color grading Slog produces banding, 4:2:2 is better than 4:2:0, 100 Mbps is too low a bitrate for 4K, and so on.
Jack Zhang November 7th, 2016, 07:43 AM A slight tangent, have you tested the Inferno with HFR content? Can it monitor HFR in HFR? That's something the Odyssey can't do. Easy test would be connecting an HDMI cable from a PC to the Inferno, setting the refresh rate to 60hz, and going to Blur Busters UFO Motion Tests (http://testufo.com/#test=framerates) on a browser that supports Vsync and seeing if the frame rate is smooth.
Mark Rosenzweig November 7th, 2016, 02:47 PM A slight tangent, have you tested the Inferno with HFR content? Can it monitor HFR in HFR? That's something the Odyssey can't do. Easy test would be connecting an HDMI cable from a PC to the Inferno, setting the refresh rate to 60hz, and going to Blur Busters UFO Motion Tests (http://testufo.com/#test=framerates) on a browser that supports Vsync and seeing if the frame rate is smooth.
I just updated my graphics card to output 60 Hz at 4K. I know Edge does not support Vsync; which browsers do?
The Inferno certainly ingests 60 fps at 4K and even 120 fps (1080). And it advertises that it can be used as a computer monitor.
I am more interested in 4K HDR than HFR, though, for which the new Shogun is unique.
Jack Zhang November 7th, 2016, 06:18 PM Chrome and the newest Firefox support Vsync. It would be much appreciated if you could get some video of that. Maybe post a 720p60 video, from one of your cameras capable of 720p60, capturing the testing process for HFR monitoring on the Inferno. Vimeo supports 720p60.
While you're also at it, can you see if 1080i monitors in HFR (60 fields) on the Inferno too?
Jack Zhang November 14th, 2016, 11:58 PM For the love of god... How hard is it to get someone to test the Inferno with true HFR inputs and record it in HFR for a review?!? Found a 1080p50 review video but it only demonstrates up to 25p...
Noa Put November 15th, 2016, 02:43 AM With that attitude I doubt if anyone would be willing to test this for you.
Jack Zhang November 15th, 2016, 06:20 PM Sorry, been battling depression for the last while and was severely disappointed when a recorder I recommended to a friend from that other company had a surprising lack of HFR support. I guarantee you this unnamed company is going to do nothing when it comes to increasing HFR support for monitoring and passthrough, cause HFR is the devil to cinema people.
I just need to know how the Inferno handles HFR: whether to skip it because it still monitors at half frame rate, or get it because it always monitors and passes through the native high frame rate.
Mark Rosenzweig November 15th, 2016, 07:23 PM YouTube now takes and displays HDR videos, if your device can view HDR. Otherwise it converts the uploaded HDR video to SDR, so it looks good in REC709.
I created a 4K HDR video (10bit, 12-stop, 4:2:2 REC2020 color) using the Shogun Inferno, color graded it in Resolve to preserve the HDR parameters, and uploaded the video to YouTube after injecting the required metadata. Here it is:
https://youtu.be/9WRW3f8ZGWQ
If you have an HDR viewer you will see it in HDR; otherwise you are watching YouTube's SDR conversion.
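For anyone wanting to try the same upload path, one hedged way to tag the colour metadata is with ffmpeg; this is only a sketch with placeholder filenames and assumed encode settings, not Mark's actual Resolve/metadata workflow, and YouTube may additionally want HDR10 mastering-display metadata.

```python
import subprocess

# Sketch: re-encode a graded HDR master to 10-bit HEVC tagged with BT.2020
# primaries and the PQ (SMPTE ST 2084) transfer so YouTube treats it as HDR.
# Filenames and CRF are placeholders; adjust to your own export.
subprocess.run([
    "ffmpeg", "-i", "graded_hdr_master.mov",
    "-c:v", "libx265", "-pix_fmt", "yuv420p10le", "-crf", "18",
    "-color_primaries", "bt2020",
    "-color_trc", "smpte2084",     # PQ transfer function
    "-colorspace", "bt2020nc",
    "-c:a", "copy",
    "hdr_for_youtube.mp4",
], check=True)
```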