View Full Version : HDV problems gone?
Spike Spiegel November 26th, 2006, 12:16 AM I just came across this article and wanted to get some feedback:
http://www.studiodaily.com/studiomonthly/currentissue/7292.html
Supposedly, with this piece of gear (1000 USD), a decent computer (2500 USD), an HDV deck (2500 USD) and an HD-SDI capture card (1200 USD), it simplifies the HDV workflow and makes it better... supposedly.
It says that you can import HDV material into an HD-SDI uncompressed project. What if all you had was HDV material? Through this you get 4:2:2 color space and eliminate the long GOP; the only problem is the massive amount of hard drive space needed.
Am I right in thinking this is the solution for HDV: you can dub out to HDCAM or DVCPRO HD, eliminate the long GOPs, keep a high color sampling space, etc.? Seems perfect!
Spike Spiegel November 26th, 2006, 12:24 AM Also, just to add: the whole point of this post is to figure out a workflow in which we skip the compression of FireWire, and how that affects the quality of the footage...
That being said, the AJA Xena LH/e capture card mentions "analog component connections from HDV cameras that allow direct ingest of HDV-acquired material into uncompressed projects". So if we connect directly to the capture card (not thru FireWire), will we avoid FireWire's compression during capture?
Shane Ross November 26th, 2006, 01:22 AM It doesn't solve the compression issue. If you shoot HDV to tape, it gets compressed as long-GOP MPEG-2, and running it thru this device doesn't get rid of that. What you can do with this workflow is capture the HDV as a different codec, like DVCPRO HD or uncompressed 8-bit HD. The advantage is that you get a better colorspace and compression for color correction and effects; renders are faster, and outputting is easier.
The only way to avoid the compression is to capture directly from the camera thru that device and bypass the tape recording.
Spike Spiegel November 26th, 2006, 03:10 AM Ah, I see what you're saying. I realize that as soon as you shoot to tape you are compressing it; HDV is the codec!
Thanks for clearing that up! Is the room you have to tweak formerly-HDV footage in a 4:2:2 workspace significantly greater than with, say, native HDV color correction/tweaking?
Ben Gurvich November 26th, 2006, 05:33 AM I'm not sure I understand why you would need the hardware. Couldn't one just capture in HDV and then export out to the desired codec? What can the hardware do that the software cannot?
Cheers,
Ben Gurvich
Spike Spiegel November 26th, 2006, 12:12 PM The hardware is for bypassing the FireWire transfer from the HDV deck/camera to the computer. As soon as you hit FireWire there is 5:1 compression, I believe.
The hardware allows you to capture via an HD-SDI pipeline into a capture card (in your computer via PCI/PCIe), and it lets you work with HDV footage (now captured to a different format) in a 4:2:2 environment. This means you get to tweak the image more than you could in a traditional HDV sequence/project; you have access to real HD monitoring (thru HD-SDI to a broadcast HD monitor); you get rid of the long GOP; you can go uncompressed HD if you wish; and you have the option of dubbing out to HDCAM, etc.
Note, this is probably not realistic for anyone unless they are trying to do network projects. HDV is beautiful when it is shot right; I'm trying to get the most out of the color space via this method. BTW, anyone feel free to correct me if I'm wrong about any of the above info. :)
Andrew Ott November 26th, 2006, 12:20 PM Can somebody confirm or disprove (hopefully) the statement that firewire adds 5:1 compression?
I've never heard that before.
Mark Donnell November 26th, 2006, 12:31 PM Use of a FireWire connection does not add any compression to anything. The HVX-200 outputs 100 Mbps DVCPRO HD via a FireWire port. The only limit is the data rate of the sending and receiving units, up to the 400 Mbps maximum that the FireWire port can handle (800 Mbps on FireWire 800 ports).
Chris Hurd November 26th, 2006, 12:33 PM FireWire does not "add" any compression. It's simply a pipe through which data moves, from one device to another. DV compression has *always* been 5:1, that's where that ratio comes from.
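For a back-of-the-envelope check of that 5:1 figure, here is a minimal Python sketch (an editor's illustration, assuming 8-bit 4:1:1 NTSC DV at 720x480 and ~29.97 fps, against DV's ~25 Mbps video payload):

```python
# Sanity check of the 5:1 DV compression ratio (illustrative assumptions).
width, height, fps = 720, 480, 30000 / 1001

# 4:1:1 keeps full-resolution luma plus two chroma planes at 1/4
# horizontal resolution: 1 + 0.25 + 0.25 = 1.5 samples per pixel.
samples_per_pixel = 1.5
raw_mbps = width * height * samples_per_pixel * 8 * fps / 1e6

dv_mbps = 25  # DV's video payload data rate
print(f"raw: {raw_mbps:.0f} Mbps, ratio: {raw_mbps / dv_mbps:.1f}:1")
# -> raw: 124 Mbps, ratio: 5.0:1
```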
Spike Spiegel November 26th, 2006, 12:40 PM Oops, thanks for the correction. I got that tangled up!
Anyway, I've color corrected HDV footage before, and there is not a lot of leeway. I've never touched 4:2:2 uncompressed material before. This route gives you that color space, but just how far is it possible to push the image?
Scott Sullivan November 26th, 2006, 12:46 PM Even if you convert HDV to a 4:2:2 codec, you're not going to get 4:2:2 values.
Isn't that sort of like scanning a black-and-white faxed piece of paper at 600 dpi? You're not going to gain any info that wasn't there to begin with. Just because it's a bigger hose doesn't mean there's any more water.
If you want 4:2:2, you need to capture 4:2:2, like the HVX does. You'd have 100 Mbps (DVCPRO HD) instead of 25 (HDV) and more color info.
With the device linked to, all you'd be doing is wasting hard drive space. Someone please correct me if I am incorrect.
Scott
Spike Spiegel November 26th, 2006, 12:50 PM Hmm, I see what you're saying. So basically, that setup lets you bypass the long GOP of HDV and capture to a codec such as DVCPRO HD...
Also, couldn't you capture HDV thru analog HD component to a capture card like the AJA Kona? Why would this be better (or not) than FireWire?
Nate Weaver November 26th, 2006, 12:52 PM Anyway, I've color corrected HDV footage before, and there is not a lot of leeway.
There can be, under certain conditions.
If the footage is noisy low-light material (even at 0 dB gain), then the codec introduces more blocking because it can't handle the extra high-frequency info. Try to bend the image around in CC and the blocking comes out. Pretty bad looking. I shot a concert with 9 Sony Z1s a long while back where the base light level left a lot to be desired, and in the end the show looked like it could have been shot with regular 30p DV.
If you have a real clean image shot with plenty of light, then things get a lot better.
If you downconvert the HDV to uncompressed 10bit SD (for an SD deliverable), and THEN color correct, it gets even better, because a lot of the blocking gets averaged out in the downconvert.
The cleaner the camera head, the better the codec performs. Stuff from my F350 at 35 Mbps is a much different animal to color correct, for all the above reasons plus the extra 10 Mbps.
Spike Spiegel November 26th, 2006, 01:07 PM I see what you are saying, Nate. The fact that you can now convert the HDV to DVCPRO HD (or uncompressed) using the Convergent Design box over HD-SDI is a bit confusing... Does this give you that extra color space for CC, or is it simply a workaround for eliminating the long GOP?
Nate Weaver November 26th, 2006, 01:54 PM The way I see it, it's a workaround for NLEs that don't have very good HDV/MPEG support quite yet.
A lot of people that work in Avid and have existing workflows in place for episodic TV can use a solution for HDV that doesn't involve native HDV use, nor dubbing all their HDV reels to HDCAM (like a lot of shows have done in the past).
In my experience, transcoding HDV to another codec always brings out anywhere from a few recompression artifacts to a lot of them, so I never do it unless I absolutely have to. My experience does not apply to the CineForm products, though; most users say that transcoding to those looks great.
Ken Hodson November 26th, 2006, 08:11 PM That product will save render time if you want to use a different codec. That's it. A software product like AspectHD that does intelligent up-sampling to 4:2:2 seems like a better option to me, rather than down-sampling to DVCPRO HD or the massive size of uncompressed.
Spike Spiegel November 26th, 2006, 08:16 PM Whoa, AspectHD does that? It upsamples the HDV footage to 4:2:2 when you create a CineForm 1080i project?
Ben Winter November 26th, 2006, 09:14 PM Whoa, AspectHD does that? It upsamples the HDV footage to 4:2:2 when you create a CineForm 1080i project?
Not when you create a project; when you convert your footage to the CineForm .avi intermediate. The intermediate incorporates 4:2:2 colorspace for less artifacting when CC'ing and such.
Spike Spiegel November 27th, 2006, 01:05 AM What do you guys think of capturing analog HD to a capture card? Any benefits to trying that method rather than going thru FireWire?
Chris Hurd November 27th, 2006, 06:11 AM Analog HD capture has been discussed here many times... do a search on component video capture and respond directly to those threads, please. Thanks in advance,
David Kennett December 1st, 2006, 12:19 PM Maybe a little clarification on some previous posts.
DV (standard definition) is 5:1 compression and uses INTRA-frame compression similar to JPEG.
HDV uses MPEG-2, a more efficient encoder than DV: complete frames are sent only occasionally, and the frames in between are derived from them. Each complete frame begins a Group Of Pictures (GOP). This complicates editing because only a small percentage of frames are entities unto themselves. HDV is typically compressed more than 15:1.
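As a rough check of that figure, here is a minimal Python sketch (an editor's illustration, assuming 8-bit 4:2:0 1080i HDV at 1440x1080, ~29.97 fps, and the 25 Mbps HDV stream):

```python
# Sanity check of the ">15:1" HDV compression claim (illustrative assumptions).
width, height, fps = 1440, 1080, 30000 / 1001

# 4:2:0 halves chroma resolution both ways: 1 + 0.25 + 0.25 samples per pixel.
samples_per_pixel = 1.5
raw_mbps = width * height * samples_per_pixel * 8 * fps / 1e6

hdv_mbps = 25  # HDV 1080i stream rate
print(f"raw: {raw_mbps:.0f} Mbps, ratio: {raw_mbps / hdv_mbps:.0f}:1")
# -> raw: 559 Mbps, ratio: 22:1
```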
With DV or HDV, the compression is done BEFORE recording data to tape (or whatever).
You can NEVER recover something which has been lost to compression, reduction of resolution, or reduction of color resolution.
ANY decompression and subsequent re-compression with a lossy codec will ALWAYS result in further losses. Even a "lossless" codec can cost you something if the conversion into it changes the color space or bit depth.
I feel there is too much concern about color resolution (4:2:2 vs 4:1:1). There are bigger fish to fry. Maybe 4:2:2 is better for chroma key. In any case, you cannot make a 4:2:2 out of a 4:1:1.
This has already been said by others, but various comments lead me to believe there is still some confusion. Hope this helps.
Tim Brown December 1st, 2006, 01:31 PM In any case, you cannot make a 4:2:2 out of a 4:1:1.
With all due respect, David, of course you can. Transcoding DV through a capture card into one of the less compressed formats on another machine will "technically" change 4:1:1 to 4:2:2, greatly increasing the data available for color correction and graphics. But technically you are also correct, in that you are unable to add anything to the original; you can only increase the levels of gray in what was captured.
Ken Hodson December 1st, 2006, 11:51 PM Now this seems to have been made confusing again. Yes, you can convert 4:1:1 (or 4:2:0) into a codec that is 4:2:2. That alone will not change your original colour space one bit: essentially you will have a 4:1:1 (or 4:2:0) frame held within a 4:2:2 frame. It will look the same and has no advantage at all until it is processed further, such as for colour correction. Then there is the case where the 4:1:1 (or 4:2:0) is processed with software such as AspectHD or Magic Bullet that will intelligently upsample the chroma info to 4:2:2. This type of process does in fact change the image by filling in chroma info. So yes, it is technically adding to the original.
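To make Ken's distinction concrete, here is a minimal Python sketch; this is not CineForm's actual algorithm, just nearest-neighbour duplication versus plain linear interpolation standing in for "intelligent" upsampling of one row of chroma samples:

```python
import numpy as np

chroma = np.array([100.0, 120.0, 80.0, 90.0])  # one subsampled chroma row

# "4:2:0 held within a 4:2:2 frame": duplicate each sample.
# Twice the samples, zero new information.
duplicated = np.repeat(chroma, 2)

# Upsampling that synthesizes in-between values: new (estimated) info.
interpolated = np.interp(np.arange(8) / 2.0, np.arange(4), chroma)

print(duplicated)    # -> 100 100 120 120 80 80 90 90
print(interpolated)  # -> 100 110 120 100 80 85 90 90
```

The duplicated row looks identical to the original until you process it further; the interpolated row genuinely contains values that were never recorded.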
David Kennett December 2nd, 2006, 04:16 PM Ken,
Kinda sounds like an upscaling DVD player to me. There are tricks to improve the appearance of an image as one converts to something better (resolution or whatever), but you simply cannot create something you never had in the first place. This isn't even hi-tech; it's just logic.
Ken Hodson December 3rd, 2006, 03:51 PM Yes, you can create something that wasn't there; how well it works depends on the software. It isn't creating luma info: it is using the original chroma info (4:2:0) and the luma values to fill in the chroma info in an intelligent manner. Please check out Cineform.com for further explanation.
It is a little hard to find on their site so here is a link
http://www.cineform.com/technology/HDVQualityAnalysis051011/HDVQualityAnalysis051011.htm
Click on Original M2T file and then Cineform avi to see the difference.
David Kennett December 4th, 2006, 03:17 PM Don't know how you can say that, Ken. If you "make something up", then you "make something up", no matter how slick you are. They state "For many production needs the quality of CineForm intermediate is actually higher in it's first generation than the original". They did not state that it IS actually higher! I see some color streaks pointed out by the arrows. I don't know where they came from, but they do not come from 4:2:0 chroma sampling. It's possible that their technique for filling in the missing data could inadvertently fill in these small color streaks. I have difficulty finding any artifacts due to 4:2:0 sampling, as many analog errors are there before digitizing, and these can easily mask the 4:2:0. I think the jaggies on the purple hat at the bottom of the cropped frame are far more objectionable; they are caused by interlaced scanning.
They show the deterioration over TEN generations of HDV decoding and recoding, which I would expect to be somewhat degraded. It DOES make sense to upscale to a less compressed format if you are THEN going to degrade the image by ten generations. My editing is done with NO recoding, or at most two generations.
They make some statement about 4:2:0 errors being more apparent in deeply saturated colors. If you have ANY color errors, they're going to be more apparent in saturated colors! Keep in mind that any errors caused by reducing the color sampling will ONLY show up at edges, and I can't see anywhere that they have improved the chroma detail.
The original idea of reducing chroma resolution came from tests showing that the human eye cannot perceive fine chroma detail, only luminance. Since 4:2:0 reduces chroma detail more than 4:2:2 does, it would seem to me that some pretty scientific tests would have to be done to show that we can actually perceive the difference.
There are worse "picture demons" than 4:2:0, and while the CineForm codec has its advantages, it cannot "improve" 4:2:0.
A. J. deLange December 4th, 2006, 04:17 PM Once high-frequency picture (or sound, for that matter) information is lost (removed by an antialiasing filter) prior to sampling or resampling, it is gone forever and cannot be restored. It can, however, be replaced by information synthesized by an up-rezzing algorithm, and if that is done properly we may well perceive the new picture as better than the original. An outstanding example of this is sharpening by algorithms such as USM. The sharpness we see in a USM-doctored picture is not the sharpness of the original, so the distortion of the original has been increased, but the edges are more distinct and thus the picture looks sharper and more pleasing, up to the point where we start to notice the halo along the edge.
Another example is when SD video is uprezzed to play on an HD set. If, instead of just doubling pixels, we average adjacent ones and display [original1][average][original2], the jaggies are suppressed even though no new picture information has been generated. [average] is not what appeared in the corresponding spot on the CCD (which couldn't resolve it) and thus represents distortion, though in this case, as with USM, what we see is more pleasing.
So it's a perception thing: downsampling and then uprezzing increases mathematically measured distortion, but the picture may look better.
Ken Hodson December 4th, 2006, 04:50 PM A.J.: we are talking chroma, not luma, and we are not talking about uprezzing. Those are two completely different things; this is intelligent upsampling.
David: yes, it is making it up; that is the whole point. But when you have all of the luma info and the 4:2:0 chroma already, it does a very good job. It isn't so much that the human eye can see much difference; it is that when you start to CC or do other FX work such as chroma keying, the difference is far more noticeable, which is the primary reason people were considering working in uncompressed to begin with.
A. J. deLange December 4th, 2006, 05:44 PM When you double the number of chroma samples you are upping the chroma resolution. Intelligent upsampling it is, and my comments still apply. Granted, both of my examples are best understood with respect to luma, but the color difference signals are sharpened and interpolated as well. Anyway, a signal is a signal and the principles are the same. It may look better (and/or work better for some applications), but it isn't real.
But aren't you all de-emphasizing the main reason for converting HDV to a less compressed codec, i.e. interpolation (uprezzing) in time? Instead of the very compressed combination of 1 I-frame per second and motion vectors, you get the P and B frames encoded as I-frames, so that you can step through them one at a time, cut between them, etc. Again, it's not the real data (that got thrown away when the HDV compression was done) but rather something that looks credibly like the original, to the point that you can't tell whether a DVCPRO HD frame came from an I, B, or P frame.
Ken Hodson December 5th, 2006, 04:10 AM When you up-rez luma it is unknown territory, because the algorithm has to completely guess what information would be there. When up-sampling chroma, the outline of where information goes is already mapped out by the luma info. Based on this luma info and the recorded surrounding chroma, the HDV frame gives a very logical path for where chroma info should go, and good software can figure it out quite well. I have been more than impressed. It gives HDV footage, especially progressive, a great head start on overcoming its deficiencies, especially if you move to uncompressed for high-end CC.
As to the second part of your post, I really can't tell what you mean. For one, I-frames come 5 times per second in 720p HDV and twice per second in 1080i HDV, not once per second.
A. J. deLange December 5th, 2006, 07:41 AM When you up-rez luma it is unknown territory, because the algorithm has to completely guess what information would be there. When up-sampling chroma, the outline of where information goes is already mapped out by the luma info.
The statement is true with respect to luma because luma uses the full bandwidth of the sensor in a 3-chip camera. In a Bayer (single-chip) camera it doesn't, and luma and chroma are both interpolated (approximated), though chroma more so. Because the chroma bandwidth of the channel (our eyes) is demonstrably less than the luma bandwidth, we subsample to, say, 4:2:2. We have irrevocably thrown away information when we do this, but it's information that the eye can't use (under normal viewing conditions), so we can get away with it. If we try to throw away more information (by going to 4:1:1 or 4:2:0) we rely on the same principle: that the eye can't use the information we tossed. But in this case, as many can see the difference, the theory falls down and it becomes an engineering trade between quality and bandwidth. There is no way to use luma to accurately reconstruct chroma; if there were, the sampling schemes would employ it. There are ways to use luma to approximate what the missing chroma might be, and these are used all the time. It is the latter that schemes like sharpening, uprezzing, and even the processing of data from Bayer-masked (single) chips employ.
It isn't imagining how things will look. Based on the luma pattern and surrounding chroma, good software can figure it out quite well.
There are no algorithms which can restore information that has been thrown away. At the risk of being repetitive, there are many algorithms which can guess what >might< have been in the missing slot (called generically "interpolators", though a variety of techniques beyond simple linear interpolation have been used) and reconstruct a pleasing picture, perhaps even more pleasing than the original, to the point where...
I have been more than impressed.
...one is quite impressed. But if one looks closely at such images (i.e. subtracts them from the originals) one finds that distortion has increased. But if it looks better, who cares, up to the point where the distortions become plainly visible.
Combining the luma info with the half-sparse 2:0 of the HDV frame gives a very logical path of where chroma info should go, given good software.
If by this you mean combining Y with R-Y and B-Y to get R and B (and eventually G), that's true, but there is no information in Y about where B and R are likely to go, except that the values were band-limited to the Nyquist frequency of the original sampling. Since we have no energy above the Nyquist frequency at the current sampling rate, we can only make something up above that frequency if we want to increase detail. In nonlinear processing this can be done, but it is very risky, so the more usual technique is to stuff 0's into the data stream and low-pass filter it, thus producing intermediate samples which are below the >lower< Nyquist frequency. This is actually using the chroma signal itself to tell you "where the info should go". After upsampling, other techniques like peaking can be used to, for example, increase >apparent< chroma sharpness.
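A minimal Python sketch of the zero-stuff-and-filter upsampler described here (an editor's illustration; the short triangular kernel stands in for the longer windowed-sinc low-pass filter a real system would use):

```python
import numpy as np

chroma = np.array([100.0, 120.0, 80.0, 90.0])

# 2x zero-stuffing: [100, 0, 120, 0, 80, 0, 90, 0]
stuffed = np.zeros(2 * len(chroma))
stuffed[0::2] = chroma

# Low-pass filtering turns the zeros into interpolated samples below
# the lower Nyquist frequency; the kernel's gain of 2 offsets the zeros.
kernel = np.array([0.5, 1.0, 0.5])
upsampled = np.convolve(stuffed, kernel, mode="same")
print(upsampled)  # -> 100 110 120 100 80 85 90 ... (edge effects aside,
                  #    originals are preserved and zeros become averages)
```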
I-frames come 5 times per second in 720p HDV and twice per second in 1080i HDV.
True; my mistake.
The theory that much is lost from one HDV frame to the next is much misunderstood. Each frame isn't a degraded form of the frame before it under most circumstances. The differences between frames are what is recorded, not the whole frame in a degrading fashion, which saves a lot of redundant data. It is not a case of every frame degrading horribly between I-frames as you suggest.
In MPEG-2 each macroblock is examined to see how far it moved from one frame to the next (if at all); in later codecs, rotations and distortions are also estimated. The motion is recorded (the motion vector) and the frame reconstructed. The difference between the reconstructed frame and the actual frame is then quantized and recorded along with the motion vectors. The quantization level is set according to the fixed available bandwidth. When the camera is focused on a still scene the system works very well: the data load is 2 I-frames per second, all motion vectors are theoretically 0, and all differential data is 0. When things move, the story is different. No estimation scheme gives error-free estimates, so motion vectors are not perfect, and the differential data load goes up. When it reaches a certain point, the system has to quantize more coarsely and data is lost. As with lost temporal samples, data lost to quantizing can never be restored, though again a good reconstruction algorithm can produce pleasing pictures. Reconstructed (i.e. non-I) frames may contain a lot of distortion, which is sometimes quite visible depending on the codec. The codec used by my local PBS station, for example, is terrible; if anything moves, it's a mess. OTOH, the codec in my XL H1 is excellent. It hasn't fallen down on me to date.
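For anyone who wants to see the mechanics, here is a toy Python version of the motion-estimation step just described: an exhaustive sum-of-absolute-differences (SAD) search for a single block. Real MPEG-2 encoders are far more sophisticated; every name and parameter here is illustrative.

```python
import numpy as np

def best_motion_vector(prev, cur, bx, by, size=8, search=4):
    """Exhaustive SAD search around (bx, by) for one size x size block."""
    block = cur[by:by + size, bx:bx + size]
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if 0 <= y <= prev.shape[0] - size and 0 <= x <= prev.shape[1] - size:
                sad = np.abs(prev[y:y + size, x:x + size] - block).sum()
                if best_sad is None or sad < best_sad:
                    best_sad, best_mv = sad, (dx, dy)
    return best_mv, best_sad

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (32, 32)).astype(float)
cur = np.roll(prev, shift=(1, 2), axis=(0, 1))  # frame moves down 1, right 2

print(best_motion_vector(prev, cur, bx=8, by=8))  # -> ((-2, -1), 0.0)
```

The residual (the block minus its motion-compensated prediction) is what gets quantized; the bigger the residual at a fixed bitrate, the coarser the quantization, and that is where the loss happens.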
My point about the use of the intermediate codecs is that editing HDV natively is difficult, because if you want to cut on a B or P frame you have to reconstruct that frame, which often involves frames that come after it. This is awkward in the timeline and more so when rendering, but I guess it can be done. It seems easier to convert IBBBP... to IIIII... for editing, and that is, AFAIK, the approach most people use. The larger point I was trying to make is that the sampling theorem is the sampling theorem. It doesn't matter whether the signal is one-dimensional (audio), two-dimensional (a picture), or three-dimensional (a movie, with x, y, and time being the three dimensions); Mr. Shannon's wisdom still applies.
["chroma" added in first sentence for clarity]
David Kennett December 5th, 2006, 04:30 PM A. J., you've described things very well.
To clarify something: 4:2:2, 4:2:0, and 4:1:1 only reduce color RESOLUTION from the full 4:4:4. The only effect is resolution loss; it does not affect anything related to color depth or color space. Even assuming that we cannot see the color resolution loss, it certainly is reasonable that a chroma key could benefit from higher color resolution.
All the issues about MPEG encoding don't really have anything to do with color sampling. It is worth mentioning that for any given data rate, intERframe encoding will beat intRAframe encoding EVERY time; it just makes editing tougher. Converting from MPEG's intERframe encoding to an intRAframe encoder certainly is an advantage for complex editing. I think some CineForm marketing types have made too big an issue of color sampling.
Here are things I would do BEFORE improving color sampling.
1. Do away with interlaced scanning.
2. Use the best intERframe encoder (AVC or VC1), and use the highest data rate I could.
3. Make camera improvements (sensitivity, resolution, noise).
There are more, but I think you get the idea!
Ken Hodson December 5th, 2006, 10:21 PM A.J., I appreciate your thorough reply. I don't know if we disagree as much as I thought we did. We have gotten very technical, but to what end? We both know HDV doesn't produce junk on its non-I frames unless pushed to extremes, and even that varies between different cams. Editing native HDV isn't really advantageous, and native HDV at 4:2:0 isn't the best in many post situations, especially ones that rely on chroma info. Products like AspectHD help very much in this regard, yet do not diminish the image one bit. I don't know what else to say. For me, intelligent upsampling to 4:2:2 has been a godsend in post (not to mention the workflow simplification), and I would not be such a fan of HDV without it. Period. I think HDV capture is a great compression all around, but for me that is where it ends; I don't want to work with it from that point on.
David, I agree with your points 1-3. Colour sampling isn't the most important part. In fact, as I have noted, I am very happy with software enhancements that help overcome some of the few deficiencies in post when using a lower chroma-sampled format. It is all still such a leap over SD DV that it is all good to me.
Nick Hiltgen December 6th, 2006, 02:51 AM To get back to the original question: we used a variant of this box to drop HDV footage into an Avid before there was any support for 24F. It works rock solid (I'm told) with the Z1U, but we had a few issues with it. When you color correct footage imported through this box, you have the same leeway you would have if you were able to import the HDV directly into the computer. It's mostly for capturing timecode and giving computers that need it deck control over an HDV device. To be completely honest I wasn't too impressed, but if they've finally got the bugs out, it might be something for those with this very specific need.
Ronald Lee December 22nd, 2006, 01:26 AM If I may divert the discussion away from color space for a bit: suppose we shot in a progressive HDV format (say 24p) rather than interlaced. That should remove the interlace aliasing, sure, but once the footage is decompressed, would it show less loss than if we had used an interlaced original source?
Common sense says probably, but in practice?
Also, I don't suppose the artifacting in the blacks is an HDV problem that's "gone" with this device, is it?
The device still shows some promise with the better color space range, though. I've taken 4:1:1 SD DV and put it into a 4:2:2 environment, and there IS more leeway in what you can do with the color correction.
Ronald
David Kennett December 22nd, 2006, 09:00 AM Ronald,
My guess is that the additional leeway in color correction comes from your CC software's ability. Do you use the exact same software in both cases? Can you describe the limitations you see with 4:2:0?
Keep in mind, too, that the chroma information being reduced is only RESOLUTION. We have done nothing to reduce the number of available colors, or to limit the richness or saturation of colors. The only possible observable effect would be the smearing of color around a clearly visible luminance edge.
4:2:2 makes 2 color samples each of Pb and Pr for every 4 luminance samples (Y) ON EACH SCAN LINE, reducing the horizontal color resolution to half that of luminance. 4:2:0 ALSO reduces the vertical chroma resolution by half. So, in a sense, 4:2:0 just reduces the vertical resolution by the same amount as 4:2:2 reduces the horizontal, a difference that would be very difficult to observe. Interlaced scanning does make the process more difficult, since adjacent lines occur in alternate fields, and there may be motion during that time.
Incidentally, the analog bandwidth reduction of chroma (in NTSC and PAL) was to about 1/3 that of the luminance (about 3:1:1).
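A small Python sketch of the sampling geometry described above (the 1440x1080 frame size is assumed for illustration, and 4:1:1 is included for comparison with DV):

```python
# Chroma plane sizes for a 1440x1080 frame; luma is 1440x1080 in every case.
w, h = 1440, 1080
schemes = {
    "4:4:4": (w, h),            # full-resolution chroma
    "4:2:2": (w // 2, h),       # half horizontal chroma resolution
    "4:2:0": (w // 2, h // 2),  # half horizontal AND half vertical
    "4:1:1": (w // 4, h),       # quarter horizontal, full vertical
}
for name, (cw, ch) in schemes.items():
    print(f"{name}: each chroma plane is {cw} x {ch} "
          f"({cw * ch / (w * h):.0%} of luma samples)")
# Note that 4:2:0 and 4:1:1 carry the same number of chroma samples per
# frame (25% of luma); they just distribute them differently.
```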