DV Info Net (https://www.dvinfo.net/forum/)
-   Panasonic DVX / DVC Assistant (https://www.dvinfo.net/forum/panasonic-dvx-dvc-assistant/)
-   -   When to use V-Log (https://www.dvinfo.net/forum/panasonic-dvx-dvc-assistant/530024-when-use-v-log.html)

Roland Schulz October 19th, 2015 09:33 AM

Re: When to use V-Log
 
Quote:

Originally Posted by Barry Green (Post 1900873)
The benefit is most obvious in neighboring pixels, but that doesn't mean that's the only place that would benefit. You're grossly oversimplifying the situation by assuming that entire sections of the frame are going to be rendered at only one pixel level.

Assuming a perfectly flat gray, you're saying that it will be rendered by the camera as either 127-127-127-127, or 128-128-128-128. I'm saying that it may very well be rendered as 127-128-127-128-127-128. And if so, that shade will be properly represented by the 8-bit-to-10-bit downconversion.

I would dare say that idealized, perfectly flat illumination such as you're suggesting is not the norm. There will be some degree of variation just from natural distribution, and the 10-bit downconversion will preserve those variations. Will they be identical to what a 10-bit FHD camera would have delivered in the same scenario? Probably not; a 10-bit camera would be able to preserve that absolutely flat tone better. But is that a realistic concern? I would say such absolutely, perfectly neutral flat shades don't generally occur naturally. Even in a blue sky there's variation.

8-bit banding is more a symptom of compression throwing away subtle variation and exaggerating the flatness of an area than of the area itself actually being that flat. Adding just the smallest amount of dithering corrects the problem, as Canon found in their XL1 with the DV codec.

As I also said, this only happens on "noisier" cameras - again, downscaling creates no real 10-bit information! That's the only thing I am talking about.
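
For concreteness, the quoted 127/128 case works out like this in a minimal Python sketch (pure NumPy; the values are the hypothetical ones from the quote, not camera data):

Code:

import numpy as np

# 4x4 patch of 8-bit samples alternating 127/128, i.e. a flat shade
# rendered with natural dither rather than a single uniform code.
dithered = np.tile([[127, 128], [128, 127]], (2, 2)).astype(np.float64)

# Downconvert: average each 2x2 block, then rescale 8-bit -> 10-bit.
blocks = dithered.reshape(2, 2, 2, 2).mean(axis=(1, 3))
ten_bit = np.round(blocks * 4).astype(np.int32)

print(ten_bit)  # every output pixel is 510 - between 508 (flat 127)
                # and 512 (flat 128), a genuine intermediate level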

Barry Green October 19th, 2015 09:44 AM

Re: When to use V-Log
 
Quote:

Originally Posted by Gary Huff (Post 1900871)
It's not going to be any more 10-bit than ... sending 8-bit 422 into a Shogun where it becomes a 10-bit 4:2:2 ProRes.

Of course it is. Sending 8-bit 422 into a Shogun, which records in 10 bit, causes the Shogun to pad the 8-bit data with two zeroes at the end. There's no more information in that Shogun recording; it's just taking up two more bits.

Summing four pixels together retains the differences in shade between them. The difference between 508, 510, and 512 will be retained by the downconversion method, whereas it would be lost in the "bit padding" method used when recording 8-bit data into a 10-bit codec.
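
The two paths can be compared directly in a short sketch (Python/NumPy; the noisy flat field is an invented stand-in for real sensor output):

Code:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical flat field whose true level sits between 8-bit codes.
true_level = 127.4
uhd_8bit = np.round(true_level + rng.normal(0, 0.5, (4, 4))).clip(0, 255)

# "Bit padding": shove each 8-bit value into a 10-bit container.
padded = uhd_8bit.astype(np.int32) * 4        # only multiples of 4 occur

# Summing: four 8-bit samples become one 10-bit output pixel.
summed = uhd_8bit.reshape(2, 2, 2, 2).sum(axis=(1, 3)).astype(np.int32)

print(np.unique(padded))   # steps of 4, e.g. 504, 508, 512
print(np.unique(summed))   # in-between codes like 509 and 510 can occur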

Quote:

So shoot something that stresses 8-bit that 10-bit can handle, and then show it look exactly the same both in 1080p 10-bit and 4K 8-bit downsampled to 1080. And no "adding noise will help smooth it out" because that just proves what I'm saying.

It's easily enough demonstrated with a simple gradient in Photoshop. No need to add the variables of shooting something, codecs, noise levels, or inconsistent lighting; just test the theory directly. Make an 8-bit 3840x2160 gradient. Switch to 16-bit mode and resize it to 1920x1080. Then, in another window, make an 8-bit 1920x1080 gradient. Compare the two and see if they're the same, or if one shows more banding than the other. Or, hey, I'll do it for you...

http://fiftv.com/Gradient/Gradient-8...nconverted.psd

Download that file. It's a Photoshop document with two layers, each a gradient created in Photoshop with identical parameters: one layer is native FHD at 8 bits; the other is FHD created by making an 8-bit UHD gradient and downconverting it to 16 bits using bilinear resampling. Note that I used 16-bit because Photoshop doesn't have a 10-bit option, but it won't matter at all, because if your theory is correct there should be no benefit whatsoever, right? So it shouldn't matter whether it was done in 16-bit or 10-bit. Also, I used bilinear mode; I didn't even use bicubic, which would have done much better. Of course, if you use "nearest neighbor" there will be hardly any improvement, so why do that?

So just view it at 100% size and toggle the layers on and off; you'll see the differences. Stretch it up or yank it down, manipulate it however you want: you'll see that the downconverted UHD holds up much better than the native 8-bit FHD. So there's obviously more data being stored and retained, more shades available, in the downconverted UHD.
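
The same experiment is easy to reproduce outside Photoshop. Here's a minimal Python/NumPy sketch (at an exact 2:1 reduction, bilinear resampling reduces to plain 2x2 averaging, which is what this uses):

Code:

import numpy as np

def gradient(width, height):
    """Full-range horizontal gradient, quantized to 8 bits."""
    row = np.round(np.linspace(0, 255, width))
    return np.tile(row, (height, 1))

uhd = gradient(3840, 2160)          # 8-bit UHD source
fhd_native = gradient(1920, 1080)   # native 8-bit FHD for comparison

# Downconvert UHD -> FHD at higher precision: average each 2x2 block,
# then quantize to a 10-bit scale (0..1020).
means = uhd.reshape(1080, 2, 1920, 2).mean(axis=(1, 3))
fhd_10bit = np.round(means * 4).astype(np.int32)

print("distinct levels, native 8-bit FHD:", np.unique(fhd_native).size)
print("distinct levels, downconverted UHD:", np.unique(fhd_10bit).size)

The downconverted version ends up with noticeably more distinct levels, and the extra ones sit exactly at the band transitions, which is what smooths them out.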

Gary Huff October 19th, 2015 09:47 AM

Re: When to use V-Log
 
The banding is smoothed out somewhat, but not by a whole lot.

And you said nearest neighbor wouldn't make any difference. There are also a lot of comments about using "proper" software, so what is this proper software, explicitly? After Effects? Resolve? EditReady? And which settings?

Barry Green October 19th, 2015 09:48 AM

Re: When to use V-Log
 
Quote:

Originally Posted by Roland Schulz (Post 1900879)
As I also said, this only happens on "noisier" cameras - again, downscaling creates no real 10-bit information! That's the only thing I am talking about.

There is most definitely information gained by downscaling. Your graphs are correct as far as they go, but they represent an unrealistic situation. There is practically no possibility of a video sensor rendering an absolutely flat image like the one you show. As I pointed out, first of all you're discarding half the benefit by using a 2x1 matrix instead of 2x2, and second you're assuming that the only possible downconversion method is nearest neighbor.

Look at the gradient files I just posted. The benefit of downconverting 8-bit UHD to 10+ bit FHD is obvious, and it means much less banding.
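
To put rough numbers on the noise point, here's a hedged Python/NumPy sketch (the true level and noise amount are invented for illustration): even modest sensor noise dithers a flat field across neighboring 8-bit codes, and 2x2 averaging then lands between them.

Code:

import numpy as np

rng = np.random.default_rng(1)

true_level = 127.3                       # sits between two 8-bit codes
noisy = true_level + rng.normal(0, 0.7, (2160, 3840))
uhd_8bit = np.round(noisy).clip(0, 255)  # what an 8-bit camera records

# 2x2 averaging - the downconversion matrix under discussion.
blocks = uhd_8bit.reshape(1080, 2, 1920, 2).mean(axis=(1, 3))

print("best single 8-bit code:", 127, "(off by 0.3)")
print("mean of the 2x2 block averages:", round(blocks.mean(), 3))  # ~127.3
print("fraction of blocks on an in-between level:",
      round((blocks % 1 != 0).mean(), 3))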

Barry Green October 19th, 2015 09:58 AM

Re: When to use V-Log
 
Quote:

Originally Posted by Gary Huff (Post 1900883)
The banding is smoothed out somewhat, but not by a whole lot.

It's definitely better. And that's just using bilinear.

Quote:

And you said nearest neighbor wouldn't make any difference

On a pure mathematical gradient, nearest neighbor will behave exactly the way Roland has been describing, and as such it would yield only the slightest improvement at the transition points. It would probably work out better in a real-world scenario where there's some minor variation in the flat fields. Bilinear is a more comprehensive conversion and takes more than just the single nearest pixel into account, so it produces much better results. Bicubic does better still. If you were to resize the UHD gradient using bicubic, you'd see that the banding is nearly entirely eliminated in the resulting FHD 10-bit image.
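
If you want to try the three modes side by side, here's a sketch (it assumes SciPy is installed; scipy.ndimage.zoom's spline order stands in for Photoshop's nearest/bilinear/bicubic and won't match Photoshop's resampler exactly):

Code:

import numpy as np
from scipy import ndimage

# Shallow 8-bit UHD ramp - the kind of gradient that stresses 8 bits.
row = np.round(np.linspace(100, 140, 3840))
uhd = np.tile(row, (8, 1))   # a few rows are enough for this comparison

for order, name in [(0, "nearest"), (1, "linear"), (3, "cubic")]:
    fhd = ndimage.zoom(uhd, 0.5, order=order)
    ten_bit = np.round(fhd * 4)
    print(name, "-> distinct 10-bit levels:", np.unique(ten_bit).size)

Nearest neighbor can only pick codes that already exist, so it gains nothing; the higher orders fill in intermediate levels.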

Quote:

and there's a lot of comments using "proper" software, so what is this proper software explicitly?

Anything that scales with something more sophisticated than simple decimation or nearest neighbor.

Quote:

After Effects? Resolve? EditReady? And which settings?

I am not an expert on all the post-production software out there. I would say that thirty seconds of experimentation should reveal whether any particular program you're interested in does a satisfactory job. Heck, just import the source UHD 8-bit gradient, render it out as an uncompressed FHD 10-bit still, and see how it looks to you. If it looks like the native Photoshop gradient I supplied, then yeah, that software with those settings isn't going to show you any real benefit. But I'm sure that whatever modern program you're using will have an option that gets much better results, at least as good as the bilinear resize I showed in the Photoshop example.

Here's the source 8-bit UHD gradient if you want to use it to experiment with.
http://fiftv.com/Gradient/Gradient-8-bit-UHD.psd
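
And rather than eyeballing it, a render can be checked objectively with a few lines of Python (a sketch: it assumes Pillow is installed and that you exported the FHD result as a 16-bit grayscale TIFF; the filename is hypothetical):

Code:

import numpy as np
from PIL import Image

# Load whatever your converter rendered (hypothetical filename/format).
img = np.array(Image.open("fhd_render_16bit.tif"), dtype=np.float64)

row = img[img.shape[0] // 2]   # middle scanline of the gradient
levels = np.unique(row)
gaps = np.diff(levels)

print("distinct levels on the scanline:", levels.size)
print("largest jump between levels:", gaps.max() if gaps.size else 0)
# Few levels and big jumps mean the converter kept only 8-bit steps.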

Roland Schulz October 19th, 2015 10:23 AM

Re: When to use V-Log
 
Quote:

Originally Posted by Barry Green (Post 1900884)
There is most definitely information gained by downscaling. Your graphs are correct as far as they go, but they represent an unrealistic situation. There is practically no possibility of a video sensor rendering an absolutely flat image like the one you show. As I pointed out, first of all you're discarding half the benefit by using a 2x1 matrix instead of 2x2, and second you're assuming that the only possible downconversion method is nearest neighbor.

Look at the gradient files I just posted. The benefit of downconverting 8-bit UHD to 10+ bit FHD is obvious, and it means much less banding.

Using an x-LOG gamma, as in this topic, is an absolutely realistic situation in which you easily get visible banding in 8-bit recordings. No 10-bit FHD downscaling really helps you there!!
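
For what it's worth, the pressure a log curve puts on 8 bits is easy to quantify. A hedged sketch in Python (the V-Log constants below are from Panasonic's published curve as I recall it; treat the exact figures as approximate):

Code:

import math

def v_log(L):
    # Panasonic V-Log OETF (assumed constants; verify against the spec).
    if L < 0.01:
        return 5.6 * L + 0.125
    return 0.241514 * math.log10(L + 0.00873) + 0.598206

def rec709(L):
    # Rec.709 OETF for comparison.
    return 4.5 * L if L < 0.018 else 1.099 * L ** 0.45 - 0.099

for name, f in [("V-Log  ", v_log), ("Rec.709", rec709)]:
    codes = (f(0.36) - f(0.18)) * 255   # middle grey up one stop
    print(f"{name}: ~{codes:.0f} 8-bit codes for that stop")

Around middle grey, V-Log spends far fewer 8-bit codes per stop than Rec.709 does, so steps that would be invisible in 709 can band once the log footage is graded.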

