DV Info Net

DV Info Net (https://www.dvinfo.net/forum/)
-   What Happens in Vegas... (https://www.dvinfo.net/forum/what-happens-vegas/)
-   -   Vegas Pro 8.0c and Vegas Pro 8.1X64 are out !!! (https://www.dvinfo.net/forum/what-happens-vegas/130646-vegas-pro-8-0c-vegas-pro-8-1x64-out.html)

Bill Ravens September 16th, 2008 05:45 AM

Hi Steve...

I really wish Sony would provide more documentation. Regarding 32-bit mode, their help file says...
* 32-bit floating point is recommended when working with 10-bit YUV input/output or when using xvYCC/x.v.Color media.

* When using 8-bit input/output, the 32-bit floating point setting can prevent banding from compositing that contains fades, feathered edges, or gradients.

I also notice that 32-bit mode allows some file types (MXF, M2T) to display superwhites. Turning 32-bit mode off shrinks everything to broadcast luma ranges. This is most apparent when importing a colorbar pattern from either my EX1 or HD110. Of course, in 8-bit mode, if I want full 0-255 RGB, I have to apply a Levels filter. Why is that? The files are certainly not 10-bit, and the excessive render time in 32-bit mode is unwieldy. If I first convert my native MXF file to AVI with Cineform Neo HD and import the result, I get full 0-255 RGB regardless of the bit depth. That only adds to my confusion.
Edit: the inconsistency with Cineform turned out to be a problem on Cineform's side, which has since been fixed.
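The 8-bit Levels fix described above comes down to simple arithmetic. Here is a minimal sketch (my own illustration, not Vegas code) of the studio-to-computer RGB expansion a Levels filter performs, where the broadcast range 16-235 is stretched to full range 0-255:

```python
def studio_to_full(v: int) -> int:
    """Map an 8-bit studio-range value (16-235) to full range (0-255)."""
    out = round((v - 16) * 255 / 219)
    return max(0, min(255, out))  # clip superblacks/superwhites

# Studio black and white land on full-range black and white:
print(studio_to_full(16))   # 0
print(studio_to_full(235))  # 255
# A superwhite value such as 250 clips to 255 after expansion:
print(studio_to_full(250))  # 255
```

This also shows why superwhites are at stake: anything above 235 survives the expansion only as clipped white, which is why whether the decoder hands Vegas studio-range or already-expanded values matters so much.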

Of particular concern is that some render codecs apply an RGB conversion on top of what Vegas already applies, resulting in washed-out images. For example, I start with a colorbar pattern in 8-bit that shows RGB 16-235. I render it out with something like DVCPRO50, and the result shows a colorbar pattern with RGB 33-225, which is clearly incorrect.
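The washed-out result is what you would expect from a range compression applied twice. A hypothetical sketch (my illustration of the failure mode, not the actual codec code) of a full-to-studio conversion hitting frames that are already studio range:

```python
def full_to_studio(v: int) -> int:
    """Map a full-range 8-bit value (0-255) into studio range (16-235),
    the conversion a render codec normally applies on output."""
    return round(16 + v * 219 / 255)

# Correct use: full-range input lands on studio black/white.
print(full_to_studio(0))    # 16
print(full_to_studio(255))  # 235

# Double conversion: input is ALREADY studio range, so the levels
# get squeezed a second time and the image washes out.
print(full_to_studio(16))   # 30  -- black is lifted
print(full_to_studio(235))  # 218 -- white is pulled down
```

The roughly 30-218 result is in the same ballpark as the 33-225 measured above; the exact numbers would depend on the codec's rounding and any gamma handling, but the squeeze itself is the double-conversion signature.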

I continue to be baffled. Sony still provides no explanation, documentation, or advice regarding this behavior. There are plenty of third-party pundits who claim to explain it, but I've yet to follow their logic.

Bob Safay September 16th, 2008 06:43 AM

I tried to download 8.0c, but my icon still says 8.0a. Am I doing something wrong? Do I download it and save it to the Vegas 8.0 folder? Also, same issue with DVD 5.0. Any suggestions? Bob

Edward Troxel September 16th, 2008 06:49 AM

Bob, did you also install it? You can always see what version you're running by starting Vegas. The splash screen will indicate your version. If Vegas is already started, you can go to Help - About and that screen will also tell you the actual version.

Jeff Harper September 16th, 2008 11:46 AM

I agree with Edward; it sounds like you downloaded the .exe file but didn't actually run it. You need to click on the file that you downloaded and install the program.

Bob Safay September 17th, 2008 06:28 AM

Ed/Jeff, thanks for your replies. I re-downloaded this morning and "saved" it to Vegas 8. Last time I think I screwed up and saved it to My Documents. After reinstalling, I restarted the computer and there it was: Vegas 8.0c. Tonight I will download DVD 5.0. Thanks again. Bob

Jeff Harper September 17th, 2008 06:33 AM

You don't have to save it anywhere in particular, and you shouldn't store the .exe file in your Vegas folder... you're just wasting space there.

Just download a file like this to your desktop, or anywhere, click on it, let it run, and then delete it when it's finished. Or, if you're like me, save it on another hard drive in your software folder.

Steven Thomas September 20th, 2008 01:27 PM

Thanks Bill...
Man, I was really hoping Sony would address this madness.

Right when I think I finally know how to deal with certain footage using RGB level corrections (which shouldn't be needed in the first place), things can still go astray by the time it's rendered to the final media.

What the....



Quote:

Originally Posted by Bill Ravens (Post 935411)
Hi Steve...

I really wish Sony would provide more documentation. Regarding 32-bit mode, their help file says...
* 32-bit floating point is recommended when working with 10-bit YUV input/output or when using xvYCC/x.v.Color media.

* When using 8-bit input/output, the 32-bit floating point setting can prevent banding from compositing that contains fades, feathered edges, or gradients.

I also notice that 32-bit mode allows some file types (MXF, M2T) to display superwhites. Turning 32-bit mode off shrinks everything to broadcast luma ranges. This is most apparent when importing a colorbar pattern from either my EX1 or HD110. Of course, in 8-bit mode, if I want full 0-255 RGB, I have to apply a Levels filter. Why is that? The files are certainly not 10-bit, and the excessive render time in 32-bit mode is unwieldy. If I first convert my native MXF file to AVI with Cineform Neo HD and import the result, I get full 0-255 RGB regardless of the bit depth. That only adds to my confusion.
Edit: the inconsistency with Cineform turned out to be a problem on Cineform's side, which has since been fixed.

Of particular concern is that some render codecs apply an RGB conversion on top of what Vegas already applies, resulting in washed-out images. For example, I start with a colorbar pattern in 8-bit that shows RGB 16-235. I render it out with something like DVCPRO50, and the result shows a colorbar pattern with RGB 33-225, which is clearly incorrect.

I continue to be baffled. Sony still provides no explanation, documentation, or advice regarding this behavior. There are plenty of third-party pundits who claim to explain it, but I've yet to follow their logic.




DV Info Net -- Real Names, Real People, Real Info!
1998-2025 The Digital Video Information Network