Originally Posted by Bill Ravens
(Post 935411)
Hi Steve...
I really wish Sony would provide more documentation. Relative to 32-bit mode, their help file says ...
* 32-bit floating point is recommended when working with 10-bit YUV input/output or when using xvYCC/x.v.Color media.
* When using 8-bit input/output, the 32-bit floating point setting can prevent banding from compositing that contains fades, feathered edges, or gradients.
I notice, also, that using 32-bit mode allows some file types (mxf, m2t) to display superwhites. Turning 32-bit mode off shrinks everything to broadcast luma ranges. This is most apparent when importing a colorbar pattern from either my EX1 or HD110. Of course, in 8-bit mode, if I want full 0-255 RGB, I have to apply a levels filter. Why is that? The files are certainly not 10-bit, and the excessive render time in 32-bit mode is unwieldy. If I first convert my native mxf file to avi with Cineform Neo HD and import the result, I get full 0-255 RGB regardless of the bit depth. Now, that adds to my confusion.
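For anyone following along, the levels-filter fix being described is just a linear expansion of studio-range RGB (16-235) to full range (0-255). Here's a rough Python sketch of that arithmetic, purely for illustration; the function name and the clipping behavior are my own assumptions, not anything pulled from Vegas:

def studio_to_full(v):
    # Expand studio-range RGB (16-235) to full range (0-255).
    # Superwhites (values above 235) map past 255 and get clipped,
    # which is why they disappear once this kind of pass is applied.
    full = (v - 16) * 255.0 / 219.0
    return max(0, min(255, round(full)))

# 16 -> 0, 128 -> 130, 235 -> 255, 255 (superwhite) -> clipped to 255
print([studio_to_full(v) for v in (16, 128, 235, 255)])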
edit: the inconsistency with the Cineform route turned out to be a bug on Cineform's end, which has since been fixed
Of particular concern is that some render codecs apply an RGB conversion on top of what Vegas already applies, resulting in washed-out images. For example, I start with a colorbar pattern in 8-bit mode that shows RGB 16-235. I render this out with something like DVCPRO50, and the result shows a colorbar pattern at RGB 33-225, which is clearly incorrect.
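If a codec (or its decode path) applies a second studio-range squeeze to material that is already 16-235, you get exactly this kind of lifted-black, dulled-white result. A quick sketch of that double-conversion arithmetic; this is my guess at the mechanism, not a statement of what DVCPRO50 actually does, and the exact 33-225 figures suggest a slightly different transform, but the principle is the same:

def full_to_studio(v):
    # Compress full-range RGB (0-255) into studio range (16-235).
    return 16 + v * 219.0 / 255.0

# Applying that compression to levels that are already studio range:
for v in (16, 235):
    print(v, '->', round(full_to_studio(v)))
# 16 -> 30, 235 -> 218: blacks lifted, whites pulled down, i.e. washed out.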
I continue to be baffled. Sony still provides no explanation, documentation, or advice regarding this behavior. There are plenty of third-party pundits who claim to explain it; however, I've yet to understand their logic.