July 16th, 2007, 05:43 AM | #1 |
Regular Crew
Join Date: Jul 2004
Location: Japan
Posts: 93
HDV Mpeg2 8bit to Cineform 10bit 4:4:4 RGB
I am considering upgrading from NEO HDV to NEO 2K. Is their any definitive quality benefits to be gained by doing so?
My workflow is: capturing via FireWire and converting the HDV MPEG2 from tape on the fly to the 10-bit RGB 4:4:4 setting in HDLink. I am primarily concerned with being able to perform better chroma keying and having an increased color space for CC. Thank you in advance for your responses.
__________________
Intel Core 2 Quad E6600(@3GHz) | Nvidia GeForce 9600 GT/1GB PCI-e x16 DVI | 8GB DDR2-SDRAM | 2 x 1TB S-ATA2/7200rpm/32MB Hard Drives | Vista Ultimate 64 |
July 16th, 2007, 06:45 AM | #2 |
Trustee
Join Date: Aug 2006
Location: Rotterdam, Netherlands
Posts: 1,832
No doubt 10 bit 4:4:4 is better than 8 bit 4:2:0. However, if your source is 8 bit 4:2:0, by what magic trick can the signal be upgraded to 10 bit 4:4:4? In other words, how can a lossy compression to MPEG2 be reverted to the original? IMO the compression to MPEG2 has caused irretrievable loss that can never be regained. About as futile as asking the IRS to refund your paid income tax just because you would rather have your gross income as net (BTW, I agree with that feeling).
July 16th, 2007, 07:57 AM | #4 |
Trustee
Join Date: Aug 2006
Location: Rotterdam, Netherlands
Posts: 1,832
Bill,
On that I agree, but my doubt is where the information is coming from if it has been lost by compression. Let's say I start out with 8 bits: 1 0 1 1 0 1 1 0, and want to turn that into 10 bits, so in theory I end up with 1 0 1 1 0 1 1 0 x x. The question is what the two x's will turn out to be, 0 or 1? Isn't that rather arbitrary? The same applies to the color space: how do you get from 4:2:0 back to 4:4:4 if you are missing the data? Compare it to an HDV signal at 25 Mbps, which is far less than a 1485 Mbps HD-SDI signal. How can you recover the missing 1460 Mbps?
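To make the "two x's" question concrete, here is a minimal sketch of what a typical bit-depth expansion actually does. This is a generic illustration (not CineForm's actual method): the common bit-replication trick fills the two new low bits deterministically from the existing high bits, so no new picture information is created — the extra precision only matters for later processing.

```python
import numpy as np

# Hypothetical illustration: widening an 8-bit sample to 10 bits.
# A plain rescale fills the two new low bits deterministically --
# no new picture information is invented.
def expand_8_to_10(samples_8bit):
    s = np.asarray(samples_8bit, dtype=np.uint16)
    # Standard bit-replication: shift left by 2 and copy the top 2 bits
    # into the new LSBs, so 0 maps to 0 and 255 maps to 1023.
    return (s << 2) | (s >> 6)

print(expand_8_to_10([0, 128, 255]))  # 0 -> 0, 128 -> 514, 255 -> 1023
```

The point of the sketch: the x's are not arbitrary, but neither do they recover lost data; they simply give later filtering and grading finer steps to work with.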
July 16th, 2007, 08:56 AM | #5 |
Regular Crew
Join Date: Jul 2004
Location: Japan
Posts: 93
NEO HDV resamples your ingested file from MPEG2 4:2:0 into the CineForm 4:2:2 codec, and up to 12-bit 4:4:4 when ingesting via NEO 2K (from the CineForm site). I want to know if there is a noticeable benefit for keying and CC.
July 17th, 2007, 11:24 AM | #6 |
Trustee
Join Date: Mar 2004
Location: Milwaukee, WI
Posts: 1,719
July 17th, 2007, 01:58 PM | #7 |
Major Player
Join Date: Sep 2003
Location: Solana Beach, CA
Posts: 853
Mark,
When capturing from an HDV source we do a nice job re-interpolating 4:2:0 chroma back to 4:2:2. We have also posted a comparative example of banding results when color correcting an 8-bit source versus a 10-bit source: http://www.cineform.com/products/Asp...pect.htm#10bit. Even if your camera source is 8 bits, you benefit by converting it to 10 bits and carrying 10 bits in post. I see this all the time in Photoshop when dealing with 8-bit sources. But if you're not "pushing" the image too far in post, then you're probably fine with 8 bits. So it's a subjective line that you cross when you really need 10 bits. I think for 4:2:0 sources you may not need to go all the way up to 12-bit CineForm 444, but again, your workflow may require it. That's why we make the Trial versions available.
July 17th, 2007, 08:54 PM | #8 |
Regular Crew
Join Date: Jul 2004
Location: Japan
Posts: 93
Basically what I want to do is have 3 clips of the same scene captured via FireWire from the camera, using the highest quality available from each trial:
1a - CineForm 8-bit 1440x1080 4:2:2 YUV
1b - CineForm 10-bit 1920x1080 4:2:2 YUV
1c - CineForm 12-bit 1920x1080 4:4:4 YUV
And then put the clips through a chroma keying process in Fusion and a CC in Combustion, independently swapping the clips using the same settings to view the results. How would I go about doing this, and has anyone done a test like this? Thank you for your responses.
July 17th, 2007, 09:00 PM | #9 |
It's actually fairly simple to visualize the effects of doing CC in a variety of different color spaces, via a histogram display of a frame before and after correcting. You can experience this yourself by performing some CC on a still frame in Photoshop in 8-bit color space. After any correction is performed, the histogram develops a "comb" appearance, where the gaps between the peaks represent areas that are missing color information. Performing the same operation in 16-bit space results in far fewer comb-tooth effects, because the color information is divided into much smaller pieces. Converting 8-bit to 10- or 16-bit information doesn't really "invent" info; rather, it interpolates and dithers up to the higher order, allowing a much finer quantum size when it is edited.
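The comb effect described above is easy to reproduce numerically. This sketch (a generic demonstration, not tied to Photoshop) pushes a gradient through the same gamma curve twice — once quantized to 8 bits at every step, once carried at high precision and quantized only at the end — and counts how many of the 256 output histogram bins end up empty:

```python
import numpy as np

# Demonstrating histogram "comb teeth": apply a gamma curve to an
# 8-bit ramp, then count empty output bins.
grad = np.arange(256, dtype=np.uint8)

# 8-bit path: quantize immediately after the curve -- values collide
# in some ranges and leave gaps (comb teeth) in others.
curve_8bit = np.clip((grad / 255.0) ** 0.6 * 255.0, 0, 255).astype(np.uint8)

# High-precision path: same curve, quantized only once at the end.
# (A finer-than-8-bit ramp stands in for true 10/16-bit source data.)
fine = np.linspace(0.0, 1.0, 4096)
curve_fine = np.clip(fine ** 0.6 * 255.0, 0, 255).astype(np.uint8)

gaps_8bit = 256 - len(np.unique(curve_8bit))
gaps_fine = 256 - len(np.unique(curve_fine))
print(gaps_8bit, gaps_fine)  # the 8-bit path leaves many more empty bins
```

The gaps correspond exactly to the missing comb teeth you see in the histogram: output codes that no input value can reach once the data has been quantized too early.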
July 17th, 2007, 10:35 PM | #10 |
Regular Crew
Join Date: Jul 2004
Location: Japan
Posts: 93
|
Thank you for your answers. However, my question is related to the CineForm codec and its performance and quality. I understand the theory and practice behind CC and the increased benefits from 8 to 10 to 12 bit. My original question is about how well CineForm performs its conversion, and whether there is a real benefit from using its premier NEO 2K, or if staying with NEO HDV or just upgrading to NEO HD would be a better choice. For example, on a scale from 1 to 10 (using the highest quality setting for each), if going from NEO HDV to NEO HD is a jump from a 5 to an 8, but going from NEO HDV to NEO 2K is only a jump from 5 to 8.5 or 9, is the difference worth the money?
July 18th, 2007, 09:22 AM | #12 |
CTO, CineForm Inc.
Join Date: Jul 2003
Location: Cardiff-by-the-Sea, California
Posts: 8,095
|
There are factors beyond just the source. NEO 2K is aimed at those with a lot of graphic elements, which are not limited by the video source's chroma resolution. NEO 2K is also getting alpha channel encoding very soon, which is another incentive to upgrade to this premium product.
__________________
David Newman -- web: www.gopro.com blog: cineform.blogspot.com -- twitter: twitter.com/David_Newman |
July 18th, 2007, 11:02 AM | #13 |
Trustee
Join Date: Sep 2005
Location: Gilbert, AZ
Posts: 1,896
|
More questions regarding 8-bit versus 10-bit
moved to new subject..
August 11th, 2007, 03:54 AM | #14 |
Regular Crew
Join Date: Jun 2007
Location: Belo Horizonte BRAZIL
Posts: 154
|
better blacks
I think I found a way to minimize blocking artifacts in HDV blacks. I would like to explain, and if someone tries it, let me know.
1 - Put the same m2t footage aligned on two video tracks in Adobe Premiere.
2 - On the upper track apply Gaussian blur and adjust between 4 and 6 (adjust while watching the blacks).
3 - On the upper track apply arbitrary map and adjust between -5 and -1 (adjust while watching the blacks).
4 - On the upper track apply a luminance key (adjust the key to keep only the blacks of the upper track over the lower track; be careful not to get color fringing on the key edges).
5 - Render to uncompressed.
The goal is to minimize the blocking artifacts in the blacks, where HDV compression is most visible. If someone tries this, let me know. Thanks.
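The core of this recipe — blend a blurred copy back into the frame only where it is dark — can be sketched in numpy. This is a hypothetical single-channel illustration of the idea, not a reproduction of Premiere's actual Gaussian blur, arbitrary map, or luminance key filters:

```python
import numpy as np

# Sketch of the layered trick above (luminance assumed in 0-255):
# blend a pre-blurred copy into the frame only where it is dark, so
# blocky shadows are smoothed while the rest stays sharp.
def smooth_blacks(frame, blurred, threshold=32.0):
    """frame, blurred: float arrays of the same shape (grayscale here)."""
    f = np.asarray(frame, dtype=np.float64)
    b = np.asarray(blurred, dtype=np.float64)
    # Soft luminance key: weight 1 at pure black, fading to 0 at the
    # threshold -- the numeric analogue of keying only the blacks.
    weight = np.clip(1.0 - f / threshold, 0.0, 1.0)
    return weight * b + (1.0 - weight) * f
```

The soft ramp in the key weight plays the same role as adjusting the luminance key to avoid fringing at the edges: a hard cutoff would create a visible seam where the blurred and sharp regions meet.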