August 7th, 2010, 06:12 PM | #1
Trustee
Join Date: Oct 2009
Location: Rhinelander, WI
Posts: 1,258
Three-camera 3D
I have been meaning to discuss this for a while and never got to it, but people have brought it up in the toe-in thread, and I would much rather we keep the toe-in discussion in that thread, so here goes.
As we have discussed before (and as one person has vehemently accused us of not knowing what we are talking about), 3D depends on the size of the screen, or rather on the width of the screen. If we shoot for the big screen, the vergence is too small on a TV or computer monitor; if we shoot for the TV or computer monitor, the vergence is too big on the big screen. Since it is easier on the eyes to exaggerate the convergence than to exaggerate the divergence, we tend to shoot for the big screen and accept the error on the small screen. Alas, this is an error which haunts us more and more, as more and more people watch big-screen movies on a small screen in the comfort of their homes, on DVD and BD.

I have shown some trigonometry in the opening post of To toe in or not to toe in, that's the question, and you can read it there. The same math applies when discussing divergence, that is, spacing the two images of an object too far apart on the big screen: the farther away from the screen we sit, the smaller the angle of divergence. What we need to remember is that both the convergence ("toe in") and the divergence ("toe out") approach a zero angle (relative to parallel lines) as the distance approaches infinity. And, as I pointed out, infinity is closer than we may think, especially with regard to convergence. But no matter how far from the screen we are, convergence will never become divergence, nor will divergence ever become convergence.

It is mathematically (and physically) absurd to suggest that we can space the two images of the object we are gazing at something like one meter (1000 mm) apart, when our eyes are 55-65 mm apart, and expect the divergence to magically turn into convergence because of the distance. It will not. Never, ever.

The same trigonometry applies here as in the toe-in thread (a short script at the end of this post reproduces the numbers). If you space the images 1 m (1000 mm) apart, each image is 500 mm from the point directly in front of the center between the eyes. With an average eye separation of 60 mm, each eye is 30 mm from that center line, which leaves a lateral offset of 500 - 30 = 470 mm per eye. Viewed from a distance of 100 m (100,000 mm), the angle of divergence is atan(470/100000) * 180/pi, roughly 0.3°. Quite small, mind you, but it is still a divergence. And if a person sits only 5 meters from the screen, the angle of divergence is atan(470/5000) * 180/pi, or 5.37°, which is guaranteed to give him a headache. And of course, that is for each eye; considering both eyes, the angle doubles to 10.74° (and to about 0.54° at 100 m). So, unless we build new theaters and seat everyone 100 m or farther from the screen, we have a problem. That is not a realistic solution, at least not at this time.

One solution to the problem of the big screen vs. the small screen is shooting with three cameras, or perhaps one camera with three lenses. (When creating pure CGI animated features, we can of course just render different versions, but not with real cameras.) The idea I have been toying with, but only in my head because I am not equipped for it, an idea I would like to throw out and get feedback on, especially from people who might actually be able to test it, is this: space the three cameras A, B, C in such a way that if A is farthest to the left and C is farthest to the right, then B is not exactly in the middle; the B-C distance might be, say, twice the A-B distance. That way, we get three pairs of images with three different baselines: A-B, B-C, and A-C.
Perhaps one pair for the computer/TV, one for a standard cinema, and one for IMAX. Once again, this is just an idea. I welcome a discussion, even a somewhat heated one, as long as no one calls anyone names, please. ;)
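Here is a minimal Python sketch of the trigonometry above, so anyone can check or vary the numbers; the 30 mm A-B spacing in the second half is purely a hypothetical figure for illustration.

```python
import math

def divergence_deg(image_sep_mm, eye_sep_mm, distance_mm):
    """Per-eye divergence angle when homologous image points sit farther
    apart than the eyes. Each image is image_sep_mm/2 from the screen
    center and each eye is eye_sep_mm/2 from the center line, so each
    eye must rotate outward through the difference."""
    offset_mm = (image_sep_mm - eye_sep_mm) / 2   # 500 - 30 = 470 in the example
    return math.degrees(math.atan(offset_mm / distance_mm))

for d_mm in (5_000, 100_000):   # 5 m and 100 m viewing distances
    a = divergence_deg(1000, 60, d_mm)
    print(f"{d_mm / 1000:>4.0f} m: {a:.2f} deg per eye, {2 * a:.2f} deg both eyes")
#    5 m: 5.37 deg per eye, 10.74 deg both eyes
#  100 m: 0.27 deg per eye, 0.54 deg both eyes

# The unequal spacing gives three baselines from a single pass
# (the 30 mm A-B figure is hypothetical):
a_b = 30
b_c = 2 * a_b
print({"A-B": a_b, "B-C": b_c, "A-C": a_b + b_c})   # 30, 60 and 90 mm pairs
```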
August 7th, 2010, 06:39 PM | #2
Major Player
Join Date: Mar 2005
Location: Neenah, WI
Posts: 547
Well...convergence can be altered in post. It isn't always pretty, but it has to be done to cover mistakes from convergence pulls made on the set (when you're shooting converged).
If the shots are framed wide enough, couldn't the convergence be mastered twice? I have a college buddy from my UW Oshkosh days who did the 3D post on the Brendan Fraser "Center of the Earth" picture... I think I'll see if I can ping him on what normal practice is these days (if there is such a thing)...
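For reference, the crudest form of that post fix, sliding the two eyes horizontally against each other and cropping, is only a few lines. A rough numpy sketch, assuming 8-bit frames already loaded as arrays (the 12 px shift is purely illustrative):

```python
import numpy as np

def reconverge(left, right, shift_px):
    """Shift the eyes apart or together by shift_px and crop to a common width.

    Positive shift_px increases parallax (pushes the scene back); negative
    decreases it (pulls the scene toward the viewer). Both frames lose
    shift_px columns, so frame a little wide if you plan on doing this.
    """
    s = abs(shift_px)
    if s == 0:
        return left, right
    if shift_px > 0:
        return left[:, s:], right[:, :-s]
    return left[:, :-s], right[:, s:]

# Example: pull the scene 12 px toward the viewer on an HD pair.
L = np.zeros((1080, 1920, 3), dtype=np.uint8)
R = np.zeros((1080, 1920, 3), dtype=np.uint8)
L2, R2 = reconverge(L, R, -12)
print(L2.shape, R2.shape)   # (1080, 1908, 3) (1080, 1908, 3)
```

This only slides the whole depth budget forward or back; it cannot fix anything that varies across the frame, which is where the dedicated tools come in.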
__________________
TimK Kolb Productions
August 7th, 2010, 07:13 PM | #3
Wrangler
Join Date: Aug 2005
Location: Toronto, ON, Canada
Posts: 3,637
There is one post solution by The Foundry called "Ocula." It can create a smaller interaxial from a wide one. It doesn't work for every shot but many of them can be saved.
__________________
Tim Dashwood
August 7th, 2010, 08:03 PM | #4
Major Player
Join Date: Mar 2005
Location: Neenah, WI
Posts: 547
To adjust interaxial, is there some keystoning involved?
I know I can adjust a number of properties in First Light, but they're presented in terms of the movement you're causing: horizontal, keystone, depth tilt, etc. Does Nuke (or is Ocula a separate app?) actually do something 'elastic' with the frame to affect convergence disproportionately across the screen?
__________________
TimK Kolb Productions
August 7th, 2010, 10:31 PM | #5
Wrangler
Join Date: Aug 2005
Location: Toronto, ON, Canada
Posts: 3,637
I think it basically just morphs a new in-between view using the existing left and right views. I'm not a Nuke user, so I've never actually worked with it; I've just seen it demonstrated at NAB.
__________________
Tim Dashwood
August 8th, 2010, 11:36 AM | #6
Inner Circle
Join Date: Dec 2005
Location: Bracknell, Berkshire, UK
Posts: 4,957
That's an interesting idea that shouldn't be too hard to do in practice. A simple morph between the two images of a stereo pair should create a very good third-eye view. If it works, it would change the way I shoot, because it would mean shooting for the smaller end of your screen-size range rather than the larger. It also opens up more possibilities for using side-by-side rigs and then calculating different I-As in post.
Hmm, it all sounds too simple and too obvious. With a modern workstation it wouldn't take long to generate those in-between views. Even movement shouldn't be an issue, as the movement would be the same in all views. My initial concern is that the computer-generated in-between camera is the one you're going to use on the biggest screens, where any issues will be at their most obvious.

I've got some clips where the I-A was too large for big-screen use; all I need to do is find some software to do the calculations and I can test this out. Perhaps this would be a good task for an ffmpeg or VLC script. Tim: it would be a fantastic add-on for Stereo Toolbox.

The issue with using three physical cameras is getting them all close enough together. For most of my productions I'm using an I-A of 30 to 50 mm, and even for small-screen use such as a PC you're still not going to go over 100 mm for the majority of shots. You could perhaps get close with narrow cameras on a beam splitter: left straight through, right mirrored, plus a further right camera even further right, effectively side by side with the left camera. But you're still looking at sub-100 mm separation for most shoots.
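For anyone who wants to try a quick-and-dirty version of that morph, here is a sketch using OpenCV's Farneback optical flow to fake the in-between view. It is a crude backward-warp approximation, not what Ocula does, and the file names are placeholders:

```python
import cv2
import numpy as np

def synth_view(left_bgr, right_bgr, alpha=0.5):
    """Fake a virtual camera a fraction alpha of the way from left to right.

    Dense left-to-right flow approximates per-pixel disparity; sampling the
    left frame alpha of the way back along that flow approximates the
    in-between viewpoint. Occlusions and flow errors will smear, which is
    why the commercial tools do far more than this.
    """
    gl = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    gr = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(gl, gr, None,
                                        0.5, 4, 21, 3, 5, 1.2, 0)
    h, w = gl.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    # Backward warp; assumes the flow field is locally smooth.
    map_x = xs - alpha * flow[..., 0]
    map_y = ys - alpha * flow[..., 1]
    return cv2.remap(left_bgr, map_x, map_y, cv2.INTER_LINEAR)

left = cv2.imread("left.png")    # placeholder file names
right = cv2.imread("right.png")
cv2.imwrite("middle.png", synth_view(left, right, 0.5))
```

On a rectified pair, alpha = 0.5 halves the effective I-A, and rendering two different alphas would give two masters from one rig.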
__________________
Alister Chapman, Film-Maker/Stormchaser http://www.xdcam-user.com/alisters-blog/ My XDCAM site and blog. http://www.hurricane-rig.com
August 8th, 2010, 11:58 AM | #7
Major Player
Join Date: Mar 2005
Location: Neenah, WI
Posts: 547
Would the "middle camera" created by the interpolation be too "center"?
__________________
TimK Kolb Productions
August 8th, 2010, 02:39 PM | #8
Inner Circle
Join Date: Dec 2005
Location: Bracknell, Berkshire, UK
Posts: 4,957
You could, with the right algorithms, adjust the interpolated camera left or right. With careful planning, a simple halfway morph could probably be made to work, giving you two masters for different screen sizes.
__________________
Alister Chapman, Film-Maker/Stormchaser http://www.xdcam-user.com/alisters-blog/ My XDCAM site and blog. http://www.hurricane-rig.com
August 8th, 2010, 03:16 PM | #9
Major Player
Join Date: Apr 2008
Location: Malibu, CA
Posts: 480
Tim and Alister seem to have this correct. Ocula creates a pair of "virtual cameras" in between the two originals and re-sets the I-A from them. There's a pretty good YouTube video, done by the Foundry's chief scientist at a trade show, demonstrating this here:
YouTube - Ocula - plug-ins for Stereoscopic post production
August 8th, 2010, 08:26 PM | #10
Wrangler
Join Date: Aug 2005
Location: Toronto, ON, Canada
Posts: 3,637
I have been actively working on a technique for many months that doesn't use morphing.
__________________
Tim Dashwood
August 31st, 2010, 08:42 AM | #11
Regular Crew
Join Date: Dec 2009
Location: Brooklyn, New York
Posts: 101
This is an interesting and informative thread.
It would be interesting if Ocula or some other program could, in the near future, create a pair of "living," continuously variable "virtual cameras" in between the two original cameras.
September 1st, 2010, 01:00 AM | #12
Trustee
Join Date: Jan 2008
Location: Mumbai, India
Posts: 1,385
Not much experience in 3D, but still:
1. Wouldn't A-B and B-C be two totally different perspectives from an aesthetic point of view?
2. Accepting your argument that the size of the screen is a critical variable, the biggest problem is defining an average (standard) home/office viewing screen. I have three screens at home: 18", 22" and 42". Boy, do I have a problem when it comes to 3D! (Some rough numbers on this after the list.)
3. Why not use FOUR cameras (two 3D rigs), yielding two results: cinema and home (42" as the standard?). This keeps the perspective roughly the same and simplifies the workflow technically, if not quantitatively. Wouldn't that make the most logical sense?
4. If one includes scripts or metadata that run specialized calculations for each particular device, that means an almost revolutionary change in viewing technology: in the way data is written, read, calculated and displayed (not to mention feedback systems like games, virtual worlds, walkthroughs, etc.). Somehow my instinct tells me the solution is to keep it really simple, but heck if I know how.
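On point 2, it is easy to put rough numbers on the screen-size problem: the 60 mm separation that places the background at infinity is a very different fraction of the image width on every screen. A small Python sketch (monitor widths are approximate 16:9 figures; the cinema and IMAX widths are ballpark assumptions):

```python
EYE_SEP_MM = 60          # average human eye separation, as used earlier

screens_mm = {           # approximate physical image widths (assumed figures)
    '18" monitor': 400,
    '22" monitor': 490,
    '42" TV': 930,
    'cinema': 12_000,
    'IMAX': 22_000,
}
for name, width_mm in screens_mm.items():
    pct = 100 * EYE_SEP_MM / width_mm
    print(f'{name:>12}: 60 mm is {pct:5.2f}% of the image width')
# A master whose background parallax is 0.5% of the width (fine on a 12 m
# cinema screen) gives only about 2 mm of parallax on the 18" monitor.
```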
__________________
Get the Free Comprehensive Guide to Rigging ANY Camera - one guide to rig them all - DSLRs to the Arri Alexa.