View Full Version : What codec are you editing?


Daniel Alexander
February 2nd, 2008, 09:38 PM
Just curious to know some of the workflows you're all using with the footage from this camera. For those shooting 1080p HQ, are you editing in the native XDCAM codec, or are you transcoding to something else for any particular reason?

Justin Carlson
February 2nd, 2008, 09:44 PM
For just about everything I convert to ProRes HQ after I import the raw footage. It makes things a little sharper (and faster) when I edit in Final Cut Studio's 'Color'.

Eric Pascarelli
February 2nd, 2008, 10:12 PM
XDCAM. Seems to work great and I love how little space it uses on the drive. Also, it's nice to just have one copy of everything to keep track of.

Rainer Mann
February 3rd, 2008, 04:50 AM
I also use the original XDCAM Files. Importing to FC is so easy and I can start editing immediately. That's one of the reasons I switched from Avid/Windows to Final Cut/Apple.

Matt Davis
February 3rd, 2008, 05:17 AM
For just about everything I convert to ProRes HQ.

Have you experimented with the 'render in ProRes' option? With that, for cuts-only, unprocessed footage you stick with the native format, but as soon as transitions or other twiddles are introduced, they're all done in ProRes at the selected quality. The final edit is exported as ProRes regardless.

Therefore one's editing quickly, no footage gets transcoded unless it needs to be, and you've got an intraframe codec just when you need it.

Paul Cronin
February 3rd, 2008, 09:13 AM
Use XDCAM EX settings in FCP.

Justin Carlson
February 3rd, 2008, 09:54 AM
Matt, yes I have, and it's a great option to use. It's just that when I grade my footage I tend to add a very soft, gradual vignette to my shots, so I want to make sure there's as little banding as possible. That's why I convert all my footage to ProRes. And then once in Color I work in 32-bit floating point.

Eric Pascarelli
February 3rd, 2008, 10:03 AM
Justin,

Wouldn't you get the same benefit from just going into Color in XDCAM and coming out in ProRes? Less conversion required?

I haven't really done much with the EX1 in Color yet but have found this basic concept to work for other codecs.

Alexander Ibrahim
February 6th, 2008, 06:24 PM
Justin,

Wouldn't you get the same benefit from just going into Color in XDCAM and coming out in ProRes? Less conversion required?

I haven't really done much with the EX1 in Color yet but have found this basic concept to work for other codecs.

I don't think Color supports XDCAM. Color is a bit pickier about what it supports than the rest of Final Cut Studio.

In any case, while what you are suggesting works well enough when supported, you are imposing something of a processing burden on your system. ProRes decodes more easily, and there is hardware to accelerate it.

It's a give and take, though. You have to evaluate your circumstances and decide what makes sense for your workflow. I like ProRes all the way through the pipeline for everything where that is practical, starting at acquisition if I can.

Of course, in some cases, say covering a school play or the like, XDCAM HQ is overkill, so there isn't any point in playing around with "better" codecs. Then there are situations where data storage is an issue.

I could go on in either direction.

Eric Pascarelli
February 6th, 2008, 07:01 PM
Color does indeed do XDCAM on the input side.

Michael H. Stevens
February 7th, 2008, 01:26 AM
Just curious to know some of the workflows you're all using with the footage from this camera. For those shooting 1080p HQ, are you editing in the native XDCAM codec, or are you transcoding to something else for any particular reason?

A lot of my friends use Cineform - you get full frame rate editing - but for now I am editing the original .mxf files on the timeline with no problem on a laptop. I get about 12fps.

I notice you are from the West Midlands - many, many years ago I got pissed in Sutton Coldfield. Just thought you might like to know.

Alexander Ibrahim
February 7th, 2008, 01:30 AM
Color does indeed do XDCAM on the input side.

Hrm. Good to know. That eliminates that consideration from my workflows.

So, that being the case... you are saving processing power and that is all. On new fast machines it may not be an issue at all.

One thing I noticed: an ambiguity that may conceal a hidden assumption in your earlier statement about codecs. You are striving to minimize "conversion."

There are three reasons to concern yourself with that. One is time. I can't speak to that; it depends entirely on your system and deadlines. The second is storage capacity. Again, your system, etc. If either of these is your concern, then you'll find I am being pedantic. Sorry if that is the case.

The third is "generational loss." (More properly it's accumulated rounding error due to color representation conversions... but people never say that.)

If you are using a "quality" modern codec like Cineform, DNxHD, or ProRes, it shouldn't cause visible generational deterioration until the tenth or later generation. I mean it: even "golden-eyed" viewers watching the material come off a D-Cinema projector shouldn't see differences until at least that many generations.

So bringing your data into these codecs as early in the process as practical helps preserve data. This is especially true for the typical acquisition codec, like XDCAM, which is 8-bit, has a paucity of color data, and is highly compressed.
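A quick way to see this accumulated-rounding-error effect is to simulate it. The sketch below is a toy, not a measurement of any real codec: synthetic pixel values, a made-up 3% gain standing in for one grade or transcode pass, and quantisation to either 256 (8-bit) or 1024 (10-bit) code values after every step.

```python
import random

def quantize(values, levels):
    """Snap float pixel values (roughly 0..1) to a fixed number of code values."""
    return [round(v * (levels - 1)) / (levels - 1) for v in values]

def one_generation(frame, levels):
    """A mild gain applied and then undone, quantised at each step --
    standing in for one render/transcode generation."""
    frame = quantize([v * 0.97 for v in frame], levels)
    return quantize([v / 0.97 for v in frame], levels)

random.seed(1)
original = [random.random() for _ in range(10000)]

frame8, frame10 = original[:], original[:]
for _ in range(10):
    frame8 = one_generation(frame8, 256)     # 8-bit pipeline
    frame10 = one_generation(frame10, 1024)  # 10-bit pipeline

err8 = sum(abs(a - b) for a, b in zip(frame8, original)) / len(original)
err10 = sum(abs(a - b) for a, b in zip(frame10, original)) / len(original)
print(err8, err10)  # the 8-bit pipeline drifts further from the original
```

The coarser grid accumulates visibly more drift over ten generations, which is exactly why a higher-precision intermediate codec buys you headroom in a long post chain.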

Again, sorry if I am being pedantic. Hope it gives you something to consider if you are in the third case.

Steve Mullen
February 7th, 2008, 01:53 AM
Color does indeed do XDCAM on the input side.

Which means a huge savings in disk space and disk bandwidth compared to transcoding to ProRes 422 during import. The much lower XDCAM bandwidth should translate into far more real-time streams -- even IF, and it is an if, the long-GOP MPEG-2 is slower to decode than ProRes is to decompress.

And, since any effect one adds to XDCAM EX is -- in reality -- being added to 4:4:4 uncompressed video, there is no need for any other codec in a workflow.

The choice of how many bits are used for FX rendering is determined by the setting you choose in FCP. It is not determined by the source codec. Whatever bits are in the source are simply used when 32-bit rendering is selected. Only when the 32 bits are down-converted to the OUTPUT codec does the format play a role. Since ProRes is used for OUTPUT, you automatically get 10 bits.

Moreover, by waiting until an effect is actually applied to a clip, the needless transcoding from MPEG-2 to ProRes is avoided. You save the compress and decompress to and from ProRes. There is, no matter how you work, a decode of MPEG-2.

Alexander Ibrahim
February 7th, 2008, 02:57 AM
Which means a huge savings in disk space and disk bandwidth compared to transcoding to ProRes 422 during import. The much lower XDCAM bandwidth should translate into far more real-time streams -- even IF, and it is a if, the long GOP MPEG-2 is slower to decode than ProRes is to decompress.

You are assuming that real time playback is entirely bound by disk bandwidth.

That is not a good assumption. Even very high end computers can only handle a fixed number of video streams.

There are two other ways in which a computer is limited. The first, which I refer to, is CPU limiting. That still happens for a lot of systems with HD video.

The second is decoded bandwidth limits. You may only be storing 35Mbps on disk, but once the computer decodes it, you essentially have to sling uncompressed video around the various system busses and out the video ports.

Finally, it isn't really an "if" that ProRes decodes faster than just about any long-GOP format on the same hardware. And that leaves aside hardware acceleration.


And, since any effect one adds to XDCAM EX is -- in reality -- being added 4:4:4 uncompressed video, there is no need for any other codec in a workflow.

Not strictly true. In fact it's only true if you have a 4:4:4 codec for the timeline.

Any effect you add to a timeline is calculated on uncompressed video... but at the bit depth and chroma sampling of the source sequence.

Motion, Shake, After Effects, and other graphics/compositing software behave differently. They do everything at 4:4:4 uncompressed, then throw away data right before output, exactly as you suggest.
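For a sense of how much chroma data each sampling scheme actually carries, you can count stored samples per frame. This is a toy calculation; it only handles the common J:a:b cases where b equals a (no vertical subsampling) or b is 0 (vertical resolution halved):

```python
def stored_samples(width, height, scheme):
    """Stored samples per frame for a J:a:b chroma scheme.
    Assumes even dimensions; handles only b == a or b == 0."""
    j, a, b = scheme
    luma = width * height
    chroma_w = width * a // j                 # horizontal chroma resolution
    chroma_h = height if b else height // 2   # b == 0 also halves the rows
    return luma + 2 * chroma_w * chroma_h     # two chroma planes (Cb, Cr)

frame_444 = stored_samples(1920, 1080, (4, 4, 4))
print(stored_samples(1920, 1080, (4, 2, 2)) / frame_444)  # 4:2:2 keeps 2/3
print(stored_samples(1920, 1080, (4, 2, 0)) / frame_444)  # 4:2:0 keeps 1/2
```

So a 4:2:0 source like XDCAM EX starts with half the samples of the 4:4:4 working space a compositor uses; upsampling restores the sample count but not the discarded detail.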

The choice of how many bits are used for FX rendering is determined by the setting you choose for FCP. It is not determined by the source codec.

Actually, in all but one case the FCP setting bases its calculation precision on the codec of the source material. Only the last radio button gets you high-precision YUV calculations all the time.

On most machines this exacts a huge performance penalty, which is why Apple included it as an option and set the default as it did, only using high-precision YUV on high-precision YUV source materials.

Examining this high-precision YUV, what actually happens is that FCP (it's the same in any editor) will calculate at the higher bit depth, but with the same chroma subsampling as the source codec.

So, for XDCAM HQ materials, which are sampled at 8 bit 4:2:0 you merely gain the advantages of using 10 bit 4:2:0, but with 8 bit data.

If you transcode to ProRes, you still have 8 bit 4:2:0 data, but the editor will operate on it as 10 bit 4:2:2 data.

The question is how useful the extra headroom will be. In a lot of cases it won't be useful at all.
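The headroom question can be sketched numerically. Promoting 8-bit data into a 10-bit working space creates no new levels, but it lets intermediate results survive processing without collapsing into bands. A toy Python illustration (halve the signal, then double it back, quantising every intermediate at the working depth; `work_scale` 1 mimics 8-bit intermediates, 4 mimics 10-bit):

```python
def levels_after(src, work_scale):
    """Count distinct 8-bit output levels after a halve-then-double round trip,
    quantising each intermediate at the working depth."""
    out = set()
    for v in src:
        w = v * work_scale                        # promote 8-bit code to working depth
        w = round(w * 0.5)                        # first operation, quantised
        w = round(w * 2.0)                        # second operation, quantised
        out.add(min(255, round(w / work_scale)))  # back to 8 bits
    return len(out)

print(levels_after(range(256), 1))  # 8-bit intermediates: levels collapse (banding)
print(levels_after(range(256), 4))  # 10-bit intermediates: all 256 levels survive
```

With 8-bit intermediates the gradient loses roughly half its distinct levels, which is the banding a soft vignette makes visible; the 10-bit working space hands back all 256.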

If you are using XDCAM EX HQ footage, then you should dial back the settings in FCP most of the time. The same reasoning means that you shouldn't bother transcoding into a better codec most of the time.

When is it useful?

When your footage is going to be handled time and again in your post workflow, i.e. your typical film workflow. Of course, in these workflows you will usually acquire in a better format. Indie films are often acquired with ENG-quality codecs, because that's what folks can get. In these cases transcode immediately to a better codec.

A lot of users here don't have anything near that complex, so it isn't useful: a lot of disk space and CPU power gets burned for no reason. It literally won't make any visible difference for, say, a school play.

Steve Mullen
February 7th, 2008, 05:38 AM
You are assuming that real time playback is entirely bound by disk bandwidth.

I made no assumptions at all. I merely stated a fact. The disk bandwidth for XDCAM is much lower. I said long-GOP might be slower, but might not be. The Intel chips have instructions to speed MPEG-2 decoding. Someone should test.


Not strictly true. In fact its only true if you have a 4:4:4 codec for the timeline. Any effect you add to a time line is calculated based on uncompressed video... but at the bit depth and chroma sampling of the source sequence.

With the new Open Timeline, I think the Sequence codec has nothing to do with FX rendering. No matter the source codec, when a clip is decoded or decompressed, it becomes uncompressed video. If necessary, as for XDCAM EX, FCP performs the necessary computations to up-sample chroma to 4:4:4.

Actually in all but one case the FCP setting bases its calculation precision on the codec for the source material. Only the last radio button gets you high precision YUV calculations all the time.

According to Apple docs:

1) 32-bit processes both 8-bit and 10-bit source files.

2) Because of the longer computation time for 32-bit, Apple recommends turning it on only for the final render(s).

How can multiple 8-bit values be computed in only 8 bits? The intermediate calculations must be performed in 16 bits. Likewise, 10-bit values can't be computed with only 10-bit intermediates. Something seems wrong in Apple's description.

PS: I wonder what happens when 8- and 10-bit are mixed?
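Steve's point about intermediate width is easy to demonstrate with the classic alpha blend behind a cross-dissolve. The products need 16 bits even when both inputs and the output are 8-bit; forcing every intermediate back into 8 bits wrecks the result. A hypothetical sketch, using Python ints masked with `& 0xFF` to mimic fixed-width 8-bit math:

```python
a, b, alpha = 200, 50, 128   # two 8-bit pixel values, 8-bit blend factor

# widened intermediates: each product fits in 16 bits (max 255 * 255 = 65025)
wide = (a * alpha + b * (255 - alpha) + 127) // 255

# every intermediate forced back into 8 bits, as pure 8-bit math would do
narrow = ((((a * alpha) & 0xFF) + ((b * (255 - alpha)) & 0xFF)) & 0xFF) // 255

print(wide, narrow)  # 125 vs 0 -- the 8-bit intermediates overflowed
```

So "8-bit processing" in any real pipeline already implies wider intermediate registers; the preference setting is about how much wider, not whether.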

When your footage is going to be handled time and again in your post workflow. i.e. your typical film workflow. Of course in these workflows you will usually acquire in a better format. Indie films are often acquired with ENG quality codecs - because thats what folk can get. In these cases transcode immediately to a better codec.

If the vast majority of the footage will be moved from application to application you are totally correct. I was thinking of workflows where only short segments are moved to other apps. In this case, it seems space-efficient to render only these segments in ProRes HQ. In fact, if one moves material that has already been color corrected in Color -- it's already ProRes.

Evan Donn
February 7th, 2008, 03:23 PM
Not strictly true. In fact its only true if you have a 4:4:4 codec for the timeline.

Any effect you add to a time line is calculated based on uncompressed video... but at the bit depth and chroma sampling of the source sequence.

Motion, Shake, After effects and other graphics/compositing software behave differently. They do everything at 4:4:4 uncompressed, then throw away data right before output, exactly as you suggest.

Do you have a reference for this? I could swear I've read in FCP's docs that all internal processing is done at 4:4:4 (at whatever bit depth/precision you have selected in preferences) and then downsampled to the timeline's format.

David Schmerin
February 7th, 2008, 04:05 PM
First please let me say that compared to guys like Steve Mullen and Alexander Ibrahim, I consider myself to be a relative thickie. 90% passes so far over my head that I look at you guys in wonder...

When I edit EX1 HQ footage with FCP, I myself stay with the native codec. I ran a small test whereby I brought a 5 second video of a flag into FCP. Then I rendered the file out as H.264, ProRes, and Photo JPEG.

All render settings were set to best, or as unrestricted as possible, to allow each codec to produce the highest-quality files. In every case I found the rendered files' quality pale in comparison to the original.

If any FCP users would like to take a look at the results of my flag test, the raw, H.264, ProRes, and Photo-JPEG are posted on my FTP site at

Host: download.gotfootage.com
User: vegas
Pass: demo
Port: 21

Or you can paste the following link into Safari

ftp://vegas:demo@download.gotfootage.com

I would very much like to know if anyone else feels the same as I do about the rendered video files.

Test File Names:
flag.raw.mov
flag.H.264.mov
flag.ProRes.mov
flag.pjpg.mov

David Schmerin
www.GotFootageHD.com

Alexander Ibrahim
February 8th, 2008, 02:36 AM
Do you have a reference for this? I could swear I've read in FCP's docs that all internal processing is done at 4:4:4 (at whatever bit depth/precision you have selected in preferences) and then downsampled to the timeline's format.

The Sequence Settings panel, which I've attached a screen grab of for you to peruse, supports part of what I am blabbering on about.

You can set rendering precision. 10-bit materials are rendered in 32-bit floating point (FP). You can force 8-bit materials to render in 32-bit FP as well.

Now, I think some confusion sets in because FCP creates all GENERATED imagery in RGB at 10 bits per channel (32-bit RGB).

Another point that might confuse a reader is that in Volume 3 page 650 the manual says, "By default, render files are created at full quality, but you can speed up rendering by choosing lower-quality options in the Render Control tab and the Video Processing tab of the Sequence Settings window."

This has nothing to do with bit depth or chroma subsampling.

Finally if you are stupid just like me, as opposed to in your own peculiar way, then you may have confused material you read in the Shake manual with stuff you read in the FCP manuals. Man that still drives me crazy.

If you do a crossfade, that's done at the color space of the sequence. If you want that to happen in 4:4:4, then you have to have a 4:4:4 sequence.

This is standard among NLEs: Quantel, Avid, FCP, you name it.

This is a smart optimization by the software engineers. The majority of "rounding errors" will result in color levels that are unsupported, not colors out of gamut, even in 4:1:1 or 4:2:0 color spaces. Spending CPU cycles on extra bits makes a difference; spending them on an increased color space is actually a waste. You are limited by the output codec, so never bother exceeding its capabilities.

The relevant section is in the Final Cut Pro Manual Volume 3 page 659.

If you are going to use 32-bit processing, though, you may want to read Volume 3, pages 241-252, which contain notes on which effects are disabled when you are working in high-precision YUV modes. HINT: Most of them.

Compositing software like Shake, Nuke, or Flame works in 32-bit FP all the time. Of course they work in RGB space, causing a whole different set of headaches... but hey, that's the game we are in, right?

If you need an NLE that works this way then you really need a "finishing system." I think Avid DS, Autodesk Smoke, Autodesk Fire, and anything from Quantel are the only NLEs right now that can work in 10-bit with all features all the time.

Avid Symphony Nitris is mostly 8-bit, but it offers 10-bit titling -- sorta like FCP, just wicked fast. Of course an 8-core Mac is also wicked fast, but Nitris can be wicked faster.

Enjoy

Alexander Ibrahim
February 8th, 2008, 03:59 AM
First please let me say that compared to guys like Steve Mullen and Alexander Ibrahim, I consider myself to be a relative thickie. 90% passes so far over my head that I look at you guys in wonder...

Awww... shucks.

Well just keep on working- this stuff just seeps into your head as you slog through project after project.

In every case I found the rendered files quality to be pale in comparison to the original.

Well... H.264 looks like sh*%e, of course. The ProRes and Photo JPEG versions look fine on my system when I play them through QuickTime. They are a bit more deeply saturated than the XDCAM file.

Check your monitor calibration. Also make sure that high quality is set in the QuickTime movie properties panel. (I think it's that way by default in QT7, but check it.)

Sebastien Thomas
February 8th, 2008, 08:20 AM
Hi,

I would like to ask, then: what is the right setup?

Stay in XDCAM-EX (4:2:0) or go to ProRes (4:2:2)?
Or maybe some other format?

Then how? Is it just a matter of changing the sequence settings?

Is this workflow good:

import XDCAM-EX into FCP
use the Media Manager to re-compress as ProRes (4:2:2, 1920x1080, HQ)
create a new sequence and set it to ProRes.

Will this improve the plugins/color correction you may do?
What goes to Color then? Will you get the same quality when coming back to FCP?

I asked these kinds of questions, to myself first with the manual, then on some forums. Nobody has been able to answer yet. I may not have the time nor the experience to try all this myself.

Thanks.

Alexander Ibrahim
February 8th, 2008, 03:09 PM
I would like to ask, then: what is the right setup?


For 90% (WAG) of the work people will do:
Import your footage as XDCAM.
Drop it in a sequence that is set to ProRes, DNxHD, Cineform, or some other high-quality codec.

That's it. You will get VERY good results that way. Chances are you can stop reading now.

IF you plan on extensive adjustment or alteration in post production, then you should get your material into a high-quality codec as early in the production chain as possible.

Before I go into that- I need you to realize that I am talking about an edge case. I do that a LOT on this forum. 90% of videographers should never use any of the information that follows.

Every time you do any effect, from a crossfade, to adding titles, to heavy compositing and DI, you reduce the quality of your image. Especially if you are working across different facilities with lots of generations of processing. These things are bad for the technical quality of the image, but are essential to achieving our practical results. Balance is the key.

The very best thing you can do is skip XDCAM altogether and just use a superior codec from acquisition. If you can hook the camera up to a higher-end capture solution, like the Convergent Design Flash XDR, the AJA ioHD, or the proposed Cineform solid-state recorder, those are all excellent options. I guess (wildly) that about 10% of the EX1's users will need to do this.

If you know your footage is going to be "abused" then uncompressed capture over SDI is the right thing. I estimate that 99% of users of the EX1 will never EVER need uncompressed. Of the remaining 1%, less than 2% of our footage will need to be captured uncompressed. The rest falls into the "capture with a better codec" category above.

Failing that you will get MARGINALLY better results by transcoding to a high quality codec. Encoding in XDCAM already threw away a LOT of sensor data- and there is nothing you can do to get it back. You are trying to stop the bleeding.

Often we end up back at square one- the best way to stop the bleeding is to leave the footage alone.

If you follow this path you have to consider what is going to happen to every bit of your footage. The goal is to transcode it as few times as practical, and then to the codecs most suitable for continued work on that bit of footage.

It is hard to draw this line, and only lots of experience will help you draw it intelligently. In the meantime, try what you need to do with the footage in its source codec; when that fails, try improving the codec. Notice when this helps and try to figure out why.

HINT: It doesn't help that often.

If your footage will go through ten or more generations of processing then you need to consider converting to uncompressed.