February 5th, 2007, 10:34 AM | #1 |
Trustee
Join Date: Mar 2003
Location: Virginia Beach, VA
Posts: 1,095
Toronto SI/CineForm Workflow and Hands-On Seminar
The Silicon Imaging - Cineform Workflow.
DATE: Wednesday, February 14, 2007

LOCATION:
PS Production Services, Toronto Head Office
80 Commissioners St.
Toronto, ON M5A 1A8
tel 416 466-0037 / fax 416 466-9612 / toll-free 1-800-461-0037
info@psps.com

Refreshments and snacks will be served between sessions. Admission is free but space is limited.

RSVP: Steve Nordhauser or Alison Russel, seminars@siliconimaging.com

Steve Nordhauser of Silicon Imaging and David Taylor of CineForm will cover the following topics:

FOOTAGE (10)
- Segment of Spoon footage
- Dave Faires' work
- Logistics of the shooting

INTRODUCTION TO RAW (20)
- Concepts: what is RAW? ("digital negative")
- Benefits of shooting RAW for both camera and post-production
- Integrated workflow overview: camera to post

INTRODUCTION TO SI-2K (20)
- Models (Mini, DVR)
- Differences from standard cameras
- 3D LUT discussion (on-set and post ramifications)
- Pricing and accessories

POST WORKFLOW AND EXAMPLES (20)
- RAW workflow concepts
- Premiere / After Effects
- Final Cut Pro
- Keying

DISCUSSION (20)

This material will be covered in two sessions; you may attend whichever you prefer. The first session runs 9:30am - 11:00am, the second 1:00pm - 2:30pm. Please indicate your preference when RSVPing, and please RSVP early, as space will be limited to ensure that everyone has the best experience possible.

The Silicon Imaging camera takes an entirely new approach to camera design: the workflow from beginning to end has been the foremost design consideration in this new line of cameras. CineForm RAW® native to the camera and 3D LUTs will change the way you work. Come see how this makes your artistic creation easier and more intuitive.

For more information, visit:
http://www.siliconimaging.com/DigitalCinema/
http://cineform.com
and of course our hosts at:
http://psps.com/
_________________
Silicon Imaging Product Development
Troy, NY
February 13th, 2007, 01:34 PM | #2 |
New Boot
Join Date: Jan 2006
Location: Canada
Posts: 22
|
still a go?
Is the Toronto session still a go for Feb 14? With all the snow we're getting, just want to be sure before I take the Go Transit into Toronto.
February 14th, 2007, 12:36 AM | #3 |
Inner Circle
Join Date: Jun 2003
Location: Toronto, Canada
Posts: 4,750
|
There's a post on reduser.net about this... it seems like the SI guys have landed in Toronto so I don't see any reason why not.
http://www.reduser.net/forum/showthread.php?t=548
February 14th, 2007, 08:43 AM | #4 |
Major Player
Join Date: Sep 2003
Location: Solana Beach, CA
Posts: 853
|
The Toronto session is a "GO" but unfortunately without direct CineForm participation. I was unable to make it to Toronto, much to my chagrin, but I was in NYC on Tuesday and am in Orlando on Wednesday....
February 15th, 2007, 12:58 AM | #5 |
Inner Circle
Join Date: Jun 2003
Location: Toronto, Canada
Posts: 4,750
|
I attended the Toronto SI seminar today and saw some neat stuff. It's interesting to see SI's 'down-to-earth' marketing compared to Red's (Red's website is a little over the top IMO). Although I did notice some subtle Red digs in the presentation (hehe). Anyways, moving on to relevant stuff...
1- The workflow is nice in that you can just drop your footage in and start editing away.

2- I guess I should have asked this, but is there support for workflows in finishing systems like Discreet FFI, Final Touch, Lustre, Da Vinci 2K, etc.? From the seminar, I gathered the answer is no. What if CineForm were to somehow integrate with a DDR system like RaveHD (which would allow tape-to-tape workflows), or to make a CineForm VTR? A CineForm VTR would be along the lines of the XDCAM VTRs: XDCAM is also a data-centric format, but the VTR allows SDI output, and SDI is a common language for many high-end systems. Another possibility is custom software that takes an EDL and 'checkerboards' the edit for you: each edit gets put on its own track (without overlapping or cross dissolves); where there are cross dissolves, the two video streams go on different tracks. Print each track to tape (or to a DDR), then output a new EDL that corresponds to the two new tapes. Basically, it would be nice if you could finish the project in a decent color-grading system so the footage can look really sweet. For SI's target market, probably the only reasonable solutions would be Final Touch (not sold anymore), maybe Final Cut (weak), maybe Color Finesse, or PPro/After Effects/Magic Bullet (maybe).

3- To my eyes, the Spoon footage shot on 16mm Fuji film looked better than the SI-2K footage. I prefer the aesthetic of the film material they shot.

4- Also, the flesh tones in the SI footage have a weird look to them. I'm going to wildly guess (perhaps wrongly) that what's happening is this: the camera is mapping ~11 stops of dynamic range into Rec. 601 / 709 video range, and the mapping / transfer function is Rec. 601 or 709, except with the transfer function multiplied by some constant. I'm guessing this is similar to Steve Shaw's low-contrast linear curves. If you simply do that and bump the saturation, then the colors are just going to look wacky.
Film, on the other hand, has 'photochemical algorithms'. Its transfer functions are best represented by a 3D LUT, not a 1D LUT, which is what MBE seems to use (Nattress Film Effects does this too), although a 1D LUT (or curves in Photoshop) is a reasonable performance/quality tradeoff. Potentially, a way of avoiding wacky colors is playing around with the algorithms. You can see some of my results at http://glennchan.info/Proofs/filter/..._preview2.html (scroll down). That shows a per-pixel approach; there are also spatial/area-based approaches. There's lots of academic literature on HDR tone mapping with information on those algorithms.
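To make that wacky-colors guess concrete, here's a toy numerical sketch (my own illustration, not SI's or CineForm's actual pipeline): it compares squeezing an ~11-stop scene into video range with a peak-scaled Rec. 709 curve against a Reinhard-style global tone-map, one of the simpler per-pixel operators from the HDR tone-mapping literature.

```python
def rec709_oetf(x):
    """Standard Rec. 709 transfer function for scene-linear x in [0, 1]."""
    return 4.5 * x if x < 0.018 else 1.099 * x ** 0.45 - 0.099

def scaled_709(linear, peak):
    """Naive mapping: scale the whole ~11-stop scene so its peak hits 1.0,
    then apply the 709 curve. Midtones land very low and contrast dies."""
    return rec709_oetf(min(linear / peak, 1.0))

def reinhard(linear):
    """Reinhard global tone-map, L / (1 + L): rolls off highlights
    smoothly while leaving shadows and midtones nearly untouched."""
    return linear / (1.0 + linear)

# An ~11-stop scene, from deep shadow to specular highlight
# (0.18 is middle grey in scene-linear terms)
scene = [0.01, 0.18, 1.0, 4.0, 16.0, 64.0]
peak = max(scene)
for L in scene:
    print(f"{L:6.2f}  scaled709={scaled_709(L, peak):.3f}  reinhard={reinhard(L):.3f}")
```

Middle grey comes out around 0.013 from the naive scaled curve versus roughly 0.15 from Reinhard, which matches the dark, low-contrast look that goes wacky once you push the saturation back up.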
February 15th, 2007, 08:51 AM | #6 |
Trustee
Join Date: Mar 2003
Location: Virginia Beach, VA
Posts: 1,095
|
Hi Glenn,
Thanks for making it to the seminar! In response to your questions:

#2) There actually is a workflow planned for all the software platforms you mentioned; the only issue is that the "ideal" workflow will be based on the QuickTime format when it becomes available (soon). For finishing programs like Final Touch and Speedgrade HD, which are QuickTime native, the end user will be able to export the timeline info from Final Cut via XML or EDL to those applications and then import the CineForm RAW QuickTimes natively without any transcoding. CineForm is planning an external metadata management program that will let the end user modify the metadata headers in the QuickTimes, should one need to make tweaks that the codec itself cannot present to the end user in the host software platform. Basically, you're looking at native import and color correction of CineForm RAW files at full 16-bit linear or 32-bit float precision, depending on the capabilities of the program.

For the Discreet FFI systems, we're looking at exporting an XML from Final Cut Pro and then using an import tool like Xstoner to take the files from the Final Cut Pro station or a hard drive and import them into the Stone framestore. I think this would be the ideal workflow for 2K and HD. For HD resolutions there is also the possibility of loading up the Final Cut Pro timeline and playing out of an AJA card into the Discreet box over baseband HD-SDI, but then you lose the ability to trim/re-edit with handles.

As far as a "CineForm VTR" goes, I believe this is possible with the Wafian system and RS-422 control. You can load CineForm HD files into the Wafian (not RAW, so you will need to render out to CineForm HD via a media-manage operation from Final Cut Pro, but you only need to render the final timeline selects, which is fairly straightforward) and then control that device from the Discreet box.
Quantel would be similar to the Discreet method: export an XML from Final Cut Pro and then import the CineForm RAW QTs into the Quantel box. I'm not too familiar with the IQ workflow, so there might need to be a transcode to DPX somewhere in the pipeline.

#3) Spoon shot only the Rutger Hauer scenes on 35mm, not 16mm, and that was because the SI cameras were not available at that time. Those are the only shots in the movie not done with the SI cameras.

#4) The reason flesh tones might look "off" to you in the Spoon footage is that it came from one of our early prototype camera systems, and they turned the color matrix *off* in the RAW files for artistic reasons (they said they liked the look, and I'm not going to argue with that, since it does create the environment they want to portray). So yes, saturating the colors might look a little wacky, because you're looking at the native RGB color space of the camera, independent of any transformation into XYZ or Rec. 709 or any other standard color space (so it will not look correct on devices calibrated to those standard color spaces).

We understand the "correct" pathway that a color transform needs to take from the native camera color space. In our current color pipeline we record a 10-bit log file from the 12-bit linear RAW source, but the color pipeline through our 3D LUT .look files from Speedgrade OnSet relinearizes the log file, applies the appropriate matrix transformation from the camera's RGB color space (passing through XYZ) to the standard color gamut your display is using, and then adds a re-gamma correction stage. The nice thing about this is that it's all metadata and is controllable inside the Speedgrade interface, so you can completely remap the color pipeline of the camera data if you wish to get the effect you need.
Speedgrade OnSet now includes the interface for a pre-matrix relinearization LUT, a post-matrix gamma-correction LUT, and then output LUTs on top of that to add contrast curves or whatever else you creatively require for your project. All the math in Speedgrade is performed in a non-destructive 32-bit float environment, with the resulting 64x64x64 3D LUTs giving very accurate color rendition and render quality on the fly in the GPU. It's pretty nifty to get WYSIWYG right out of Speedgrade on the camera's screen. With the controls we've given the end user inside Speedgrade OnSet, if you don't like the color the camera is delivering, you can remap it to whatever you want . . . the camera is a true "blank slate" when it comes to the capabilities of the color pipeline. After the 12-to-10-bit conversion, absolutely everything else in the pipeline, including white balance, is non-destructive metadata.
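The three stages Jason describes (relinearize the log file, apply the camera matrix, re-gamma for the display) can be sketched in code. To be clear, the log curve and the 3x3 matrix below are made-up placeholders for illustration; the real values come from characterizing the sensor, and the actual SI/CineForm log encoding is not published here.

```python
def log_to_linear(code10):
    """Invert a hypothetical pure-log 10-bit encoding covering ~11 stops:
    code 0..1023 maps to scene-linear 2^-8 .. 2^3 (placeholder curve)."""
    return 2.0 ** (code10 / 1023.0 * 11.0 - 8.0)

# Placeholder camera-RGB -> Rec. 709 matrix (illustrative values only;
# rows sum to 1.0 so neutral colors stay neutral).
CAM_TO_709 = [
    [ 1.50, -0.30, -0.20],
    [-0.20,  1.40, -0.20],
    [ 0.05, -0.35,  1.30],
]

def apply_matrix(rgb, m):
    return [sum(m[i][j] * rgb[j] for j in range(3)) for i in range(3)]

def encode_709(x):
    """Rec. 709 re-gamma stage, clamped to [0, 1]."""
    x = max(0.0, min(1.0, x))
    return 4.5 * x if x < 0.018 else 1.099 * x ** 0.45 - 0.099

def pipeline(code10_rgb):
    linear = [log_to_linear(c) for c in code10_rgb]   # 1. relinearize the log file
    matrixed = apply_matrix(linear, CAM_TO_709)       # 2. camera RGB -> display gamut
    return [encode_709(c) for c in matrixed]          # 3. re-gamma for the display
```

Since every stage here is just a curve or a matrix, it can all live in metadata and be re-mapped after the fact, which is the point Jason is making about the "blank slate" pipeline.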
February 15th, 2007, 08:58 AM | #7 |
Trustee
Join Date: Mar 2003
Location: Virginia Beach, VA
Posts: 1,095
|
BTW, Glenn, Speedgrade OnSet supports 3D LUT as well as 1D LUT import, so, for instance, those pre-matrix relinearization LUTs and post-matrix gamma-correction LUTs can also be 3D LUTs . . . we are planning some very nifty capabilities with this sort of functionality. Again, it shows just how flexible the color pipeline is for this software: with the relinearization stages properly implemented in the 3D LUT pipeline, if you can think it, you can do it.
One thing you can do right now, for instance, is import into the .look file the 3D LUT transformation for a film print (via CineSpace . . . we've already been doing this internally). So although you may not be going out to film, you can get the look of the data printed to Vision Premier, etc. right on your computer screen and visualize it while you're shooting . . . even if you never go out to a film print, your footage still looks as though it's passed through a photochemical process.
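For anyone wondering what "applying a 3D LUT" means mechanically: a 3D LUT is a lattice of output colors indexed by input RGB, and in-between colors are blended from the eight surrounding lattice nodes by trilinear interpolation. A minimal generic sketch (just the core math, not Speedgrade's .look or CineSpace formats):

```python
def apply_3d_lut(rgb, lut, n):
    """Trilinear lookup in an n x n x n LUT.
    lut[r][g][b] is an (R, G, B) output triple; rgb components in [0, 1]."""
    def locate(v):
        x = min(max(v, 0.0), 1.0) * (n - 1)
        i = min(int(x), n - 2)
        return i, x - i                       # lattice index and fraction
    (ri, rf), (gi, gf), (bi, bf) = (locate(v) for v in rgb)
    out = [0.0, 0.0, 0.0]
    # Blend the 8 surrounding lattice nodes
    for dr, wr in ((0, 1 - rf), (1, rf)):
        for dg, wg in ((0, 1 - gf), (1, gf)):
            for db, wb in ((0, 1 - bf), (1, bf)):
                w = wr * wg * wb
                node = lut[ri + dr][gi + dg][bi + db]
                for c in range(3):
                    out[c] += w * node[c]
    return out

# Identity LUT: each lattice node maps to its own coordinates,
# so this LUT passes colors through unchanged.
N = 4
identity = [[[(r / (N - 1), g / (N - 1), b / (N - 1))
              for b in range(N)] for g in range(N)] for r in range(N)]
```

Because each node's output is an arbitrary RGB triple, a 3D LUT can express cross-channel effects (like film's dye couplings) that a 1D per-channel LUT cannot, which is why a film-print emulation needs the 3D form.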
February 18th, 2007, 08:12 PM | #8 |
Inner Circle
Join Date: Jun 2003
Location: Toronto, Canada
Posts: 4,750
|
A follow up question. :D
Why are you looking at XML? From my experience with Final Touch, Final Cut Pro's XML does not handle speed changes; you need to spend time conforming your project for FT, making sure to bake everything in. EDLs, on the other hand, actually do support speed changes (speed ramps still have to be re-created or baked in). A lot of the time the editor will do things like freeze frames or slowing cutaways to 50% speed, so EDLs can come in handy there.
February 18th, 2007, 08:41 PM | #9 |
Trustee
Join Date: Mar 2003
Location: Virginia Beach, VA
Posts: 1,095
|
Ah, okay . . . for Final Touch I was going with the "idealized" route, which would be an XML, but yes, you're right: FT's XML import is very buggy and doesn't quite work as advertised.
Hopefully being owned by Apple will fix those issues. BTW, how many speed changes are done in the edit program itself? Wouldn't it be better to just "mark" the clips that need a speed change and then pass them through an optical-flow algorithm like Shake's? That would look a lot better than Final Cut's frame-blending approach. I've seen people do speed changes in the editing program, but they always look very bad.
February 18th, 2007, 09:09 PM | #10 | ||
Inner Circle
Join Date: Jun 2003
Location: Toronto, Canada
Posts: 4,750
|
1- OK, I've seen some programs that don't do that good a job, but it passes QC. Workflow-wise, unless you're doing slow-mo shots where the frame blending is really apparent, there isn't much point in going to a lengthy optical-flow render.

2- With the EDL route, the slow-mo shot will be re-created in the online system. EDLs define source in, source out, and where the clip fits in the project (project in, project out, essentially). The online system ingests off tape and re-creates the slow-mo shot.
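The arithmetic the online system performs when re-creating a motion effect from an EDL event is simple; here's a sketch (my illustration, not any particular system's code) of how much source material a slowed shot actually consumes:

```python
def source_frames_needed(record_frames, speed_percent):
    """Frames of source a motion-effect event consumes.
    A 50% slow-mo advances through source at half rate, so a span of
    96 record frames (4 s @ 24 fps) only uses 48 frames of source;
    the online system captures that range off tape and re-creates
    the speed change itself."""
    return round(record_frames * speed_percent / 100.0)

print(source_frames_needed(96, 50.0))
```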
February 19th, 2007, 09:39 AM | #11 |
Trustee
Join Date: Mar 2003
Location: Virginia Beach, VA
Posts: 1,095
|
Okay, well, EDLs are handled by all the above-mentioned programs as well (Speedgrade HD, Discreet Fire/Smoke, Scratch, etc.), so I guess you can move an EDL over to reconform the timeline there too.
February 19th, 2007, 09:40 PM | #12 |
Regular Crew
Join Date: Apr 2005
Location: New Zealand
Posts: 136
|
Jason,
I hope you have a Silicon Imaging for Dummies white paper for those of us who don't know our 'pre-matrix relinearization LUTs' from our 'post-matrix gamma-correction LUTs'! You'll know me at NAB: I'll be the one asking you to speak slowly and use smaller words!
February 20th, 2007, 09:45 PM | #13 |
Trustee
Join Date: Mar 2003
Location: Virginia Beach, VA
Posts: 1,095
|
Hi Scott, don't worry, it can be as easy or complex as you want it to be :)