Edit HD at lower res / zoomed etc - how best?


David Esp
March 19th, 2011, 12:37 PM
I'm not talking about simply making a lower-resolution product from the footage; for that, I know to edit at footage resolution and only down-convert at the final render. I'm talking about deliberately editing at a lower resolution than the footage, for reasons explained below.

Using another NLE, it has become my practice to shoot HD (1080i50) then edit at target product (e.g. SD DVD or half this for Web) resolution, employing the excess resolution to allow digital zooming / reframing / stabilization in post. Whenever possible, I first use motion-compensated de-interlacing to convert the 1080i50 source to 1080p25 so digital zooms can operate on full frames (not just interlace fields), or sometimes to 1080p50 so as to maintain maximum information for other in-post options such as slow-motion. This workflow works well apart from sometimes slow preview frame-rates.

What's the best way of doing this kind of thing in Final Cut? Does anyone else use this (atypical) workflow? Any hardware acceleration options? Or is FCS the wrong kind of tool for this sort of job?

I guess in FCP-land, the 1080p conversion is best accomplished by Compressor with motion-compensated retiming selected in Frame Controls, generating 1080p25 or 1080p50 in ProRes. Is it that simple, or are there issues / better alternatives? I wonder, for example, whether ProRes has enough bandwidth to cope with 1080 at 50 frames/sec.
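(Back-of-envelope on the bandwidth question, using the target data rates I've seen quoted for ProRes, so treat the numbers as approximate: standard ProRes 422 is around 122 Mbit/s for 1080 at 25 frames/sec, so 50 full frames/sec should land around 245 Mbit/s. That doubles the file sizes, but it's in the same territory as the rates quoted for ProRes 422 HQ at normal HD frame rates, so bandwidth as such looks unlikely to be the blocker.)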

Craig Parkes
March 19th, 2011, 07:23 PM
There are actually a lot of different workflows, within Final Cut itself or using a mixture of Compressor and Final Cut, to achieve differing results here. Some are faster, some are more complicated, and I wouldn't categorically say any produce the 'best' result with the tools available, because it really depends on what you are trying to achieve with what sort of footage.

So maybe you can explain what you are trying to achieve with this workflow? Obviously resizing interlaced footage can be problematic in any workflow, but it's often less so when you are doing a down-res rather than an up-res. For the web you are obviously going to want a deinterlaced output at the end either way, but you might get better DVD results with an interlaced output, deinterlacing only the footage you need to do zooms on. Figuring out what is going to work for you really depends on your footage and the end result you are going for.

David Esp
March 20th, 2011, 03:19 AM
Hi Craig,

What I'm trying to achieve:

Well, I began shooting at excess resolution in the early 2000s (SD for a half- or quarter-resolution web video) for live rock music events, so I could shoot the whole stage and still be able to zoom in on and follow the lead guitarist etc. strutting unpredictably around the stage. Doing this in post I have 20/20 hindsight, so no overshoots or missed out-of-frame action.

Nowadays it's more about corporate work and professional lectures, but it's still useful to "get one camera to act as several", e.g. shooting a locked-off MS of a live presenter while being able to cut to close-ups of hand gestures, props, facial expressions etc. (sometimes via another camera's audience shot or long shot). Some presenters strut around like lead guitarists, and the audience for that kind of video tends not to like tight close-up tracking. By zooming and following in post, one can minimize the tracking, e.g. smooth it out, and adapt the framing as needed; the zooming and following can be keyframed, maintaining a smooth "feel" to everything. Then there's motion stabilization in post, which requires some zooming-in to avoid black frame edges. Normally that would lose resolution, but by shooting at higher resolution than the target, that loss is reduced.

The only thing I have heard of along these lines is Peter Jackson doing something broadly similar with an early RED camera on a WW1 short film, and I'd bet others are doing something like this somehow.

The need for de-interlacing (even when source and target are both interlaced):

The following is not just theory; I have proven it by experiment and in practice. It is, after all, a "pain", so I had to be certain I was actually gaining something.

Even shooting interlaced HD, doing no zooming etc., and simply rendering to interlaced SD (e.g. DVD) would not sidestep the re-interlacing issue. That's because SD interlaced field lines do not correspond 1:1 with HD interlaced field lines (1080 is not an integer multiple of 576). Considering for a moment only the even lines of the target: even-line 1 of the target might correspond to even-line 1 of the source, but as we move down the frame, at some point line n of the target will correspond to line n-1 of the source. In effect, the source and target lines are two different spatial frequencies as we move down the frame.

The only way to extract all the information (sharpness) of the source into the target is to de-interlace it first, and if there is any motion then the de-interlacing should be motion-compensated (mo-comp): notionally move the odd and even fields in time so that, e.g., "even + mocomp(odd)" lines combine to make a single consistent frame of all lines. Then it can be resized or zoomed, whatever.

If the target is to be interlaced, then either the time-shift can be estimated by further motion compensation, or else at the original de-interlacing stage each field can be made into a complete frame, giving (in the case of PAL) 50 complete frames per second. In that case the interlaced target can be made simply by picking alternate even and odd target lines from correspondingly alternate complete frames.
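To put numbers on the mismatch (PAL case, counting lines from 0): scaling 1080 lines down to 576 is a factor of 1080/576 = 1.875, so target line n samples source position 1.875 × n. The even target lines 0, 2, 4, 6 therefore land at source positions 0, 3.75, 7.5, 11.25, i.e. nearest to source lines 0, 4, 7 (or 8), 11 - a drifting mix of even and odd source lines. A single target field thus needs information from both source fields, which is exactly why a full de-interlaced frame has to exist before the resize.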

All this effort does achieve greater sharpness/resolution in practice; I have applied it to many real projects and compared the results against those from simpler approaches, and the image is definitely sharper. But over the years I've gained the distinct impression that I'm in a minority doing it this way, so advice has been hard to come by, and I suspect most tools (e.g. NLEs) and workflows don't specifically allow for it. Since it works, though, most likely I'm not the only one doing this, in which case it would be great to "swap notes".

I wonder, for instance, if some kind of GPU can be used to do the CPU-intensive stuff of resizing, or even deinterlacing (and maybe frame-doubling), on the fly at rates sufficient for cutting. Proxies are possible but not ideal, since then, while cutting, it's not certain what the final product quality will be like.

Any experiences / insights welcome!

Craig Parkes
March 21st, 2011, 03:54 PM
OK, well my next question would simply be: have you got a camera that can shoot 720p50? While it allows less reframing, at least deinterlacing up front doesn't become an issue. In the Peter Jackson example, you are looking at a 4K image shot progressively, so pan-and-scan for even an HD delivery has resolution to spare and no interlacing to worry about.

Beyond that, the approach you are using does make the most sense for what you are trying to achieve, and unfortunately the current version of Final Cut doesn't offer much in the way of a compelling solution internally, as it's designed mostly around broadcast and film workflows. High-end finishing tools such as Smoke may do a better job, and After Effects would be another obvious choice for such heavy manipulation, though it may not work very quickly because of its design ethos. Premiere Pro, design-wise, is even more format-agnostic in some respects than Final Cut, and is built on a much more recent engine, so it will probably do this better.

Within Final Cut, the workflows and limitations you face in terms of hardware acceleration are these:

If you cut on a progressive SD timeline at 25fps but use the native HD format, you won't get any field interpolation; in my experience Final Cut will simply throw away the second field, which leads to a loss in vertical and temporal resolution. As 1080 is less than double 576, you will be taking a vertical resolution hit compared to 1080 playback even if you simply cut the footage so the framing matches the original. You can, however, zoom up to 266% before you get to a 1:1 pixel count on the horizontal, but at this point you are at about a quarter of the original vertical resolution, so you may get aliasing issues.
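To show where those figures come from: 1920/720 = 2.67, i.e. about 266% zoom before you pass 1:1 pixels horizontally. Vertically, discarding a field leaves 540 real lines; at 266% the visible window covers only about 540/2.67 ≈ 200 of them, stretched over 576 timeline lines - somewhere around a fifth to a quarter of the original 1080.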

You can use filters on the timeline to do a better job of deinterlacing the source footage, but the render times are pretty heinous.

Or you could deinterlace first, outside of Final Cut, and treat the new footage as your master.

However, going to 1080p50 might be a problem.

1080p50 isn't a recognised broadcast format at the moment, so video hardware accelerators won't generally help you there in Final Cut, because it's not a workflow the industry uses these tools for en masse; for the same reason, the 1080p50 route may not be supported by the current version of FCP's RT Extreme engine. 1080psf25 would work (it's 1080p in a 50i wrapper), and you'd get there through the same workflow. It's going to have to grunt things out on the CPU, I think.

If you are doing it up front, I would find a third-party piece of software for the deinterlacing.

Compressor is, unfortunately, very inefficient when it comes to deinterlacing. It can do an OK job, but in my experience it takes a totally unreasonable amount of time to do so.

If you are considering another NLE, then Premiere Pro's CUDA engine is newer, may be more efficient for this sort of workflow, and is more likely to take advantage of hardware acceleration in this instance.

So yeah, long story short, for the most efficient workflow in FCP at the moment: deinterlace in a third-party piece of software to an RT-accelerated broadcast format (1080psf25), or shoot in a progressive format (1080psf25 or 720p50), and then bring it into Final Cut on an SD progressive timeline. People recommend JES Deinterlacer a lot, but I haven't gone down that road because it doesn't really apply to broadcast workflows.

David Esp
March 22nd, 2011, 05:19 PM
Thanks enormously for your thoughtful response Craig.

I typically shoot on an EX3 by the way, and yes I have been known to use 720p50 mode, but in my earliest tests, I noticed that mo-comp deinterlaced 1080 had a definite edge over progressive 720.

Now that you mention it, I do remember testing Compressor and finding it very slow at mo-comp. Instead, I normally use the free third-party AviSynth with the TDeint plugin on Windows (running on my Mac), compressing to Cineform (which is available for both Windows and Mac).
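In case it helps anyone, the script side of that is only a few lines. A minimal sketch, with an illustrative path and source filter (how you load the footage depends on what the EX3 files have been rewrapped to, and TDeint has many parameters worth reading up on):

# deint.avs - sketch of the deinterlacing step (AviSynth, with the
# TDeint plugin in AviSynth's plugins folder so it autoloads).
DirectShowSource("D:\shoot\clip01.avi")  # illustrative path; any working source filter will do
AssumeTFF()     # declare the field order (1080i50 is top-field-first)
TDeint(mode=1)  # mode=1 = double-rate: each field becomes a full frame, so 50i -> 50 full fps
# For a 25p master use TDeint(mode=0) (same-rate) instead.
# I then feed the result to an encoder (e.g. via VirtualDub) to make the Cineform file.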

I'll try to find someone, or a shop, that will let me run some suitably varied benchmark tests on an Adobe+CUDA setup. Certainly sounds worth a go.

And it's useful to hear about the lack of support for non-standard broadcast formats; it confirms what I suspected. I will bear 1080psf25 in mind.

Craig Parkes
March 24th, 2011, 04:17 AM
On an EX, because of its native 1080 sensor and its 35 Mbit/s codec, natively captured 720 may be a bit of a compromise anyway. I would think it would be better if you were shooting particularly fast-moving subject matter, but most of the time it doesn't sound like you are.