Exporting CGI/Affected footage? at DVinfo.net
DV Info Net

Non-Linear Editing on the PC
Discussing the editing of all formats with Matrox, Pinnacle and more.

Old April 18th, 2006, 03:50 PM   #1
Major Player
 
Join Date: Mar 2004
Location: Cape Town, South Africa
Posts: 276

When working with CGI, composited footage, or both, is it recommended to always export the clip as a sequence of high-quality image files, like TGA or TIF, which can then be imported into your NLE as a single clip? (i.e. one image file per frame, so thousands of images making up a single clip.) I've often seen people do this rather than export their finished footage as an uncompressed AVI (or, when working with DV-AVI, as another DV-AVI file).

What is the reasoning behind this? Can AVI files not contain an alpha channel, while image sequences can? (If so, I can understand why CGI would be imported into a compositor as an image sequence to be composited onto live footage, but should the final render that will go back into the NLE also be an image sequence?)

Is there generation loss when rendering out DV-AVI (render a clip, work on it, render it again, work on it, render it again, and so on), or is DV-AVI totally lossless?

What advantages does the image sequence method have?
Aviv Hallale
Old April 25th, 2006, 07:19 AM   #2
Major Player
 
Join Date: Feb 2004
Location: Philadelphia, PA, USA
Posts: 548
AVI files can contain alpha channels, but there are a number of advantages to using image sequences...

- Crash recovery
When rendering image sequences, you can resume rendering from the first incomplete frame if your software or computer fails for any reason during the rendering process. If rendering to a video file, the file is typically left corrupt and rendering must be restarted from the beginning.
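To make the resume step concrete, a render manager (or a simple script) can scan the output directory for the first missing or zero-byte frame and restart there. A minimal sketch in Python, assuming a hypothetical naming scheme like `shot_0001.tga`:

```python
import os

def first_incomplete_frame(folder, prefix="shot_", ext=".tga", start=1, end=100):
    """Return the first frame number whose file is missing or empty,
    i.e. the frame a crashed render should resume from."""
    for n in range(start, end + 1):
        path = os.path.join(folder, f"{prefix}{n:04d}{ext}")
        if not os.path.exists(path) or os.path.getsize(path) == 0:
            return n
    return None  # sequence is complete
```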

- Frame rate independence
Many visual effects apps were originally written for use in the film world and have never really bothered to accommodate fractional frame rates like NTSC's 29.97fps. By handling data as individual frames, mapping to any frame rate is possible, and multi-pass renders are more easily synchronized.
Frame 897 is ALWAYS frame 897.
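The fractional-rate headache is worth seeing in numbers: NTSC drop-frame timecode skips two frame *numbers* every minute (except every tenth minute) so the running count tracks 29.97fps. A sketch of the standard frame-count-to-timecode conversion:

```python
def df_timecode(frame):
    """Convert a frame count to NTSC drop-frame timecode (HH:MM:SS;FF)."""
    drop = 2                          # frame numbers dropped per minute
    per_min = 30 * 60 - drop          # 1798 frames in a drop-frame minute
    per_10min = per_min * 10 + drop   # 17982 frames per ten minutes
    d, m = divmod(frame, per_10min)
    if m > drop:
        frame += drop * 9 * d + drop * ((m - drop) // per_min)
    else:
        frame += drop * 9 * d
    ff = frame % 30
    ss = (frame // 30) % 60
    mm = (frame // 1800) % 60
    hh = frame // 108000
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"
```

Frame 1800 lands on 00:01:00;02 because frame numbers ;00 and ;01 are skipped at the minute mark, which is exactly the bookkeeping a frame-based pipeline sidesteps.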

- Memory management/frame access speed
Most of the better visual effects applications are optimized for image sequence management and can actually work faster with image sequences than video files.
Consider opening a clip and jumping to frame 631.
For an AVI, the very large file must be opened, then a seek operation has to advance the pointer to frame 631 before the frame can be read.
For an image sequence, the application simply looks up the file name of the small single frame file and opens it.
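That per-frame lookup is just string formatting, which is why it is so cheap. A hypothetical helper (names and padding are illustrative):

```python
def frame_path(shot_dir, frame, prefix="comp_", pad=4, ext=".tif"):
    """Map a frame number straight to its file name; no seeking
    through a multi-gigabyte container is needed."""
    return f"{shot_dir}/{prefix}{frame:0{pad}d}{ext}"
```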

- Transportability
If you have a LARGE shot or set of shots to transfer to others, it's very easy (even without compression software) to break the frame files into batches that fit on media like a DVD, then re-assemble the shots later.
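Splitting a sequence into disc-sized batches is a one-pass greedy grouping over the frame files. A sketch (sizes in bytes; the capacity value would be roughly 4.7 GB for a single-layer DVD):

```python
def batch_frames(files, capacity):
    """Group (name, size) pairs, in frame order, into batches whose
    total size stays within `capacity` bytes."""
    batches, current, used = [], [], 0
    for name, size in files:
        if current and used + size > capacity:
            batches.append(current)  # close the full batch
            current, used = [], 0
        current.append(name)
        used += size
    if current:
        batches.append(current)
    return batches
```

Keeping frames in order within each batch makes re-assembly trivial: copy the batches back into one directory and the numbering restores the sequence.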

- DV-AVI is NOT uncompressed
DV compresses image data with chroma subsampling (4:1:1 for NTSC, 4:2:0 for PAL), literally throwing away 75% of the color data, and then DCT-compresses what remains.
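The 75% figure follows directly from the subsampling: with 4:1:1, for every four pixels, four luma samples are kept but only one Cb/Cr pair survives. The arithmetic:

```python
w, h = 720, 480                  # NTSC DV frame
luma = w * h                     # luma is sampled at every pixel
chroma_full = 2 * w * h          # 4:4:4 would keep Cb and Cr per pixel
chroma_411 = 2 * (w // 4) * h    # 4:1:1 keeps one Cb/Cr pair per 4 pixels
print(chroma_411 / chroma_full)  # 0.25, i.e. 75% of colour data discarded
```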

- Workflow scalability
If you develop your SD visual effects workflow to be based on image sequences, you are better prepared for HD and film work. Virtually ALL film work is done with image sequences. Film frames are scanned to large Cineon and increasingly OpenEXR files (usually at 2k or 4k) for digital work.
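The scale jump is worth quantifying. A full-aperture 2K Cineon frame is 2048x1556 pixels with three 10-bit channels packed into 32-bit words, so each frame is roughly 12 MB, and a single video file for a long shot quickly becomes unmanageable. Back-of-envelope figures:

```python
w, h = 2048, 1556          # full-aperture 2K film scan
bytes_per_px = 4           # three 10-bit channels packed in one 32-bit word
frame = w * h * bytes_per_px
print(frame)               # 12746752 bytes per frame (about 12.2 MiB)
print(frame * 24)          # about 292 MiB of image data per second at 24fps
```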

Hope this helps.
__________________
Nick Jushchyshyn Matchmoving, Compositing, TD
imdb
Nick Jushchyshyn
Old April 25th, 2006, 11:36 AM   #3
Thanks a lot Nick! Seriously...Thanks a lot...

I don't have the luxury of Adobe Dynamic Link, so I'm assuming that ALL renders between software should be image sequences? This means:

- The render of video between Premiere and After Effects that I'll be working on.

- The render of effect footage from something like 3DsMax, Particle Illusion, EffectsLab etc

-The final render of composited footage in AE that will be put back into Premiere

Should all be image sequences?

If I'm going to be working with audio, should I render video and audio separately and sync them up in AE?

Is there any way to work around not having Dynamic Link and not always having to render?

Also, if I were compositing green screen footage, would I export my backdrop from Premiere as an image sequence into AE and then also render the green screen stuff as an image sequence which I'll key and composite?

All in all should every render I make when working with composites and effects, between initial capture and total final render, be in image sequences?


Also, a less important question: if you're working in software like Particle Illusion or EffectsLab (AlamDV), is there any reason why you'd need to composite?

Should you:

- Open your source footage that you've exported from Premiere as a background/bottom layer in the aforementioned software, add your effects (particle effects, muzzle flashes, etc.) and then render the entire thing out, including the background, to place back into your NLE?

OR

- Do all of the above, except when rendering, get rid of the background of live action footage so you just export an effect against an alpha channel background, open that and the original live action video in AE and composite it there?

Someone told me that the latter option is better because, when using something more powerful like AE, you can colour correct the effect separately from the live action clip to get them matching better, while if you export both the effect and the original video as one file straight from the effects app, you won't be able to readjust each one's colour separately... Which is the better method?

Thanks again!

UPDATE: Right, I tried this...

- Did my effect in particle illusion and exported as a TIFF sequence

- Opened After Effects and Premiere. The shot I was going to be compositing on was quite long, and the beginning and end of the preceding clip had a dissolve (should I only add transitions after all the composited and affected clips are lined up and done?), so I razored it in half and selected just the part that would be affected. I copied this and pasted it into AE (yay, good integration).

- I comped the PI shot onto that clip, looked perfect... Exported as a TIFF sequence...

- Brought the sequence into Premiere and replaced the second half of the razored clip with this sequence.

Should be fine... BUT, there is a small but noticeable drop in quality between the two halves (the one untouched and the one from AE). The picture softens just a bit, just enough to be distracting. I rendered out as a DV clip from AE too, and the same problem occurred. I'm wondering if it has to do with Particle Illusion's composite causing problems. I use the Premultiplied (luma) transfer mode in AE, though, or else the effect won't come through, as it's rendered out from Particle Illusion with a "Non-Intense Particles Create Alpha Channel" setting so that the lighting isn't ruined by a slight black halo... Could that be the problem?

Premiere Pro Preview Monitor is set to "Highest Quality"

After Effects is set to Best Quality under render settings

I really don't know what to do...Is this an After Effects problem, a Particle Illusion problem or what? The problem obviously wouldn't be noticeable if I didn't affect one half of a clip and leave the other untouched, but still, I don't expect this to happen.

Would it be easier to just use the original video from PP in Particle Illusion as a background, do the effect straight on it, and render out the effect and the background as one? The only con I can find is that the effect and the background won't be separate, so I won't be able to colour correct the effect to match the background. This method works fine, although I have to raise the clip by one pixel in PP for some reason, and of course I won't have the power to colour correct things separately.

Any ideas? This is starting to bug me.

In this clip (11mb), after one second (at 0:02) you can see a slight shift in tone and sharpness (look at the top corners of the tree branches). Annoying.

http://download.yousendit.com/9375795B42155AA4

Last edited by Aviv Hallale; April 25th, 2006 at 05:01 PM.
Aviv Hallale
Old April 26th, 2006, 12:28 PM   #4
Update 2:

I sorted out the problem: what I was doing was copying a clip from PP, pasting it into AE, applying the effect and rendering out. When I went the long route (rendered out of PP, imported into AE, applied the effect there, and rendered out from AE), it was perfect and the join between the clips was seamless.

However, for this clip what I did was import the image sequence background into PI, apply the effect directly on it, and render the background and effect out of PI as one clip. That also works, but for more advanced effects, where I need to colour correct the effect to match the live action video, I would need to work on the effect and the background video separately.

In a 3D app, can that method be used, where the live action video you're compositing onto is used as a background, your 3D animation is built directly on it, and the whole thing is rendered out as one clip, eliminating the need for a middleman compositing program like AE? Or when working with 3D, do you need to colour correct the animation separately from the live video, so you'd need the two clips separate in a compositor to work on them individually?

Also, just a side question, but what is the difference between TIFF and TGA? Which is better?
Aviv Hallale
Old April 27th, 2006, 02:59 PM   #5
New Boot
 
Join Date: Apr 2006
Location: Michigan
Posts: 19
Quote:
Originally Posted by Aviv Hallale
Update 2:

...In a 3D App, can that method be used? Where the live action video you're compositing on can be used as a background, your 3D animation built directly on it and then rendered out as one clip, eliminating the need for a middle-man compositing program like AE? Or when working with 3D stuff, do you need to colour correct the animation separately from the live video so you would need the two clips in a compositor separate from each other to work on them separately?

Also, just a side question, but what is the difference between TIFF and TGA? Which is better?
Aviv...

In my experience with 3DS Max, you can certainly use the live action footage as a background image (or as a texture map if you want), then composite on top of it. I would recommend using it as a background image (as opposed to mapping it to a plane), because if you map it to a plane (as I've seen done), lights and shadows affect the live action footage. By using the footage as a background, it will render out as intended.

I've used TIFs in Premiere Pro, and for some reason I've always had problems with them, so I switched to 32-bit TGAs. The first 24 bits contain the image information and the remaining 8 bits contain the alpha channel.

Let's say, for instance, you want to make a title for a clip: create the image file in Photoshop or PhotoPaint or whatever you use, make the background of the image black or blue (just a different colour from the foreground elements), apply a mask to the background, and export as an enhanced TGA (32-bit). Import it into the editor and it will come in with the masked sections already keyed out.
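The 24+8 bit layout described above maps directly onto the TGA file format's 18-byte header. A minimal uncompressed 32-bit TGA writer in pure Python (layout per the Truevision TGA spec; pixels are stored as BGRA on disk):

```python
import struct

def write_tga32(path, width, height, pixels):
    """Write an uncompressed 32-bit TGA: 24 bits of colour + 8-bit alpha.
    `pixels` is a flat list of (r, g, b, a) tuples, top-left origin."""
    header = struct.pack(
        "<BBBHHBHHHHBB",
        0,          # no image ID field
        0,          # no colour map
        2,          # image type 2: uncompressed true-colour
        0, 0, 0,    # colour-map spec (unused)
        0, 0,       # x, y origin
        width, height,
        32,         # bits per pixel: 24-bit colour + 8-bit alpha
        0x28,       # descriptor: 8 alpha bits, top-left origin
    )
    with open(path, "wb") as f:
        f.write(header)
        for r, g, b, a in pixels:
            f.write(struct.pack("<BBBB", b, g, r, a))  # TGA stores BGRA
```

An alpha of 0 in the fourth byte is what lets the editor key out the masked background automatically on import.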
__________________
Current Documentary, "The Frisbee Effect" the true story of a man and his flying saucer.
Keith Allison
Old April 27th, 2006, 04:27 PM   #6
Thanks a lot for your reply! Would I be right in saying that the con of rendering out your animation together with your background video from 3DS Max, rather than compositing the two separately, is that you won't be able to adjust the appearance (colour correction etc.) of your animation separately from your background?
Aviv Hallale
DV Info Net -- Real Names, Real People, Real Info!
1998-2024 The Digital Video Information Network