Anyone bought the RealD Stereo Calculator App? - Page 2 at DVinfo.net
July 27th, 2010, 06:15 PM   #16
Quote:
Originally Posted by Bruce Schultz
We'll have to see how the calculation designers handle that.
By rearranging the equations, which is very easy to do.

From the web site I quoted:

Quote:
Code:
B =  P/(L-N)  ( LN/F - (L+N)/2 )
B = Stereo Base (distance between the camera optical axes)
P = Parallax aimed for, in mm on the film
L = Largest distance from the camera lens
N = Nearest distance from the camera lens
F = Focal length of the lens

(L − N = D, the front-to-back depth of the subject. With a bit of luck it is also the depth of field of the image, but it does not have to be, especially at macro distances where a large depth of field is hard to achieve.)
So, if we know the desired front-to-back depth, we replace every L with D+N, like this:

Code:
B = P/D ( (D+N)N/F - (D+N+N)/2 )
And if we want to calculate N (the nearest distance from the camera lens) while knowing B, P, D and F, we just rearrange it and end up with a quadratic equation:

Code:
N² + (D - F)N - FD(1/2 + B/P) = 0
The equation has two solutions for N, and since the constant term is negative (B, P, D and F are all positive), they always have opposite signs: we discard the negative one and keep the positive one. Once we have the nearest distance, we can calculate the largest distance by adding the subject depth D to it.
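To sanity-check the algebra, here is a minimal Python sketch of both directions of the calculation. The example values (1.2 mm of film parallax, 1.5 m near point, 3 m subject depth, 35 mm lens) are my own assumptions, not numbers from this thread, and everything is in millimetres.

Code:
import math

def stereo_base(P, L, N, F):
    """The quoted formula: B = P/(L-N) * (L*N/F - (L+N)/2)."""
    return P / (L - N) * (L * N / F - (L + N) / 2)

def nearest_distance(B, P, D, F):
    """Positive root of N^2 + (D-F)*N - F*D*(1/2 + B/P) = 0, and L = N + D."""
    a, b, c = 1.0, D - F, -F * D * (0.5 + B / P)
    N = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)  # keep the positive solution
    return N, N + D

B = stereo_base(P=1.2, L=4500, N=1500, F=35)        # -> about 75.9 mm
N, L = nearest_distance(B=B, P=1.2, D=3000, F=35)   # recovers N = 1500, L = 4500
print(round(B, 1), round(N, 1), round(L, 1))
The same rearrangement works for any of the other variables, which is all an app would need in order to let the user choose which value to solve for.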

Quite frankly, I am surprised the app mentioned in this topic does not have this option (at least not judging by what I have read here).
Adam Stanislav

July 28th, 2010, 10:32 AM   #17
Interaxial variation is fundamental to good 3D production. Changing convergence via toe-in (angulation) also changes the on-screen depth, sometimes by dramatic amounts, while adjusting the interaxial has a far smaller effect on the total scene depth. When cutting between shots within a scene, constantly changing the convergence via angulation can make the material very hard to view comfortably as the scene stretches and compresses; using interaxial instead gives a more natural, easier-to-watch film. For me this is one of the big downsides of any of the single-camera solutions currently available.
__________________
Alister Chapman, Film-Maker/Stormchaser http://www.xdcam-user.com/alisters-blog/ My XDCAM site and blog. http://www.hurricane-rig.com
July 28th, 2010, 12:41 PM   #18
Quote:
Originally Posted by Mathew Orman
For the moment I have one for PC
Mr. Orman, I'd be very interested in purchasing a copy of your calculator for PC. How should I proceed?
Petri Teittinen

July 31st, 2010, 09:28 PM   #19
Thanks Bruce and everyone else here as well,

Please be patient if my questions seem dumb - I am picking up speed fast, though.

Having finally gotten my feet wet the other day, I can see why you would be interested in calculating the interaxial to fit the shot, instead of fitting the near and far distances to some pre-chosen interaxial. Also, I was able to use the 3D StereoTool kind of backwards, making incremental changes until I got to the shot I was working with. It seems pretty helpful, though it will be more valuable if/when they change it to give you more choice over which values to treat as variables. My partner downloaded the IOD calculator for $50, and that didn't seem to let you set your interaxial either, at least at first glance. I only looked at it for a moment and will try it again next week.

I'm wondering, as a general rule, how much you guys are doing calculations on set, or do you just use the 1/30 or 1/60 rule and wing the rest? The shoot I'm on expects to have at least a few theatrical corporate-meeting screenings, but the rest would be for TVs. Do you guys sometimes split the difference and go for 1/45?

Are you really able to limit the far distance to twice the near distance on many shots, and is that a common rule of thumb? It sounds very limiting on set. Does that really mean that if a person is, say, 9' away and your near is 7', you have nothing in the background past 14'? Do you find yourself battling directors who want to set up shots you don't feel comfortable with?

I sat in front of a 3D TV at Best Buy today and looked at every short they had in their demos. The good, the bad and the ugly, for sure. It did look to me that my favorite stuff sometimes had limited depth and, above all, good roundness, i.e. something closer to the twice-the-near-distance rule. Almost every exterior that had much visual interest at infinity looked very cardboard - or at least the background did.

Alister, that sounds like a good argument for shooting parallel also.

Here's a broad question. So much of what I read online concerns avoiding mistakes, i.e. with calculations you can determine the limits of acceptable 3D by setting near, far and interaxial - but that doesn't make for good-looking 3D yet, does it? What turns you guys on, and what are you trying to achieve aesthetically on a 3D shoot?
Leonard Levy

August 1st, 2010, 02:14 PM   #20
That's a lot of questions, Lenny. I'll try to get to a few of them and let others speak to the rest.

It almost sounds like you are using a fixed I-A camera rig and not an adjustable mirror/beam type of rig. Is that correct? If so, it would explain why you seem fixed on back-calculating the proper settings, and this I understand, having suffered through a few early shoots with those types of systems and eventually abandoning them entirely for mirror/beam setups. It's not that you can't get some decent stuff shooting parallel cameras, but it's extremely limiting. You really need to be able to adjust the I-A down to a very short distance at times to eliminate background disparities like ghosting, and the mirror/beam rig is the best way to achieve that.

I was on the set of Transformers 3 a few weeks ago and had a long discussion with the Pace I-O convergence puller (why "inter-ocular" has not been fully replaced by "inter-axial" I don't understand) as well as the on-set stereographer. They both said that they were rarely going over 25mm (1 inch) in any shots except the most compact ones - that is, where the BG was close to the FG. Using a narrow I-A does minimize roundness to some degree, but it also minimizes the background disparities, which can be difficult if not impossible to correct except with the most expensive post software. So it is incumbent on the stereographer to keep this in mind. I rarely shoot anything over, say, 30mm I-A and am usually in the 15-25mm range for almost all shots.

One of the big problems that the early 3D makers (1950s) had was their inability to adjust I-A at all, so they would compensate by moving the set walls inwards - quite the opposite of 2D photography's technique of expanding sets to increase perceived depth. Take a look at Hitchcock's "Dial M for Murder" to see what I'm referring to. Jim Cameron was asked repeatedly by the brass at Fox why the 3D shots weren't "big", and this was why they weren't - he understood through trial and error that the problems with large I-A shots could be insurmountable. You can fix bad convergence, but not bad I-A settings.
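To put rough numbers on that, here is a hedged Python sketch using a common small-angle approximation for converged rigs (an assumption on my part, not Pace's math): the horizontal disparity on the sensor is roughly d = f * B * (1/C - 1/Z), with focal length f, interaxial B, convergence distance C and object distance Z, all in millimetres. The lens and distances below are made-up example values.

Code:
def sensor_disparity(f_mm, ia_mm, converge_mm, object_mm):
    """Approximate horizontal disparity on the sensor (mm), small-angle model."""
    return f_mm * ia_mm * (1.0 / converge_mm - 1.0 / object_mm)

# 35 mm lens, converged at 3 m, background at 30 m:
print(round(sensor_disparity(35, 25, 3000, 30000), 2))   # 25 mm I-A -> ~0.26 mm
print(round(sensor_disparity(35, 65, 3000, 30000), 2))   # 65 mm I-A -> ~0.68 mm
Because the disparity scales linearly with the interaxial, dropping from 65mm to 25mm cuts the background disparity by the same ratio.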

I usually converge on the focus point, except for green/blue screen shots, which I will try to shoot parallel because the post convergence won't cause any blow-up problems.

As for on-set calculations, I find the IOD iPhone/iTouch app works great, but it is not tuned to calculate distances based on fixed I-A settings. You might contact Leonard, the creator, to see if he can modify it, but like I stated earlier, that would primarily be to accommodate fixed I-A rail rigs. And finally, I don't find the 30:1 etc. formulas to be very helpful in the field.
Bruce Schultz

August 1st, 2010, 04:10 PM   #21
Thanks Bruce and thanks for your patience,

Actually, I am using an adjustable I-A rig, and on our first shoot I did change the I-A all the time while shooting parallel. I think my question was formed before actual experience. I'm not sure I still feel it is a major problem with the software, but it seems like it would be good to have more freedom in how to use it. I will compare the 3D StereoTool Kit to the IOD as soon as I get a chance. Hope they suggest the same settings.

I was reassured by the general range of I-A settings you used, since we also stayed reasonably conservative, probably varying between 12mm for close shots and maybe 37mm for people around 9 - 10 feet away. I based that on a 1/45 compromise and then just kind of winged it. Sounds like you and the Transformers people are going even narrower. Is there any danger in going too narrow, other than dull 3D? I did read something about avoiding "gigantism".

Max is 85mm for our rig, and I hope that will be enough for exterior location shots. No vistas, thank god.

At the risk of igniting the continuing controversy - it sounds like you generally shoot converged rather than parallel? Was this true for Transformers also? Most of the people I've read or talked to strongly suggest parallel. My PM would prefer converged because he seems to think it will make post faster, though I'm dubious. I'm wary of it because, with our level of experience, I prefer to keep more choices available in post.

How do you decide how deep to allow your shot to be?

Do you also simply look at the overlapped images on a monitor and make an educated guess about how far the parallax is?

I feel pressure from above not to be a conservative curmudgeon about depth and, since it's a doc, to just "shoot - we're running out of time". (Some things never change.)
Leonard Levy

August 2nd, 2010, 01:43 AM   #22
I completely agree with Bruce and his comments above. The only thing I would add is that Transformers is, of course, being shot for large-screen presentation, so it is no surprise to find them using sub-40mm I-O. If your production is only ever going to be seen on smaller screens such as 50" TVs, then larger I-Os may be desirable, but even then 60mm is likely to be plenty. For web and PC use you may even exceed 65mm. All my latest projects have been done sticking to Sky TV's disparity limits of +4% and -2% max, with an average of +2% and only occasional use of negative disparity. This has normally meant I-O ranges between 30mm and 50mm, depending on the scene, for interiors and narrative. At the same time I'm doing a very exciting hyperstereo project where we are using an array of cameras with an I-O of 150m!
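For what it's worth, those percentage limits are easy to turn into pixel parallax. A minimal sketch, assuming the percentages are of total screen width and the delivery frame is 1920 pixels wide (both assumptions on my part, not Sky's published spec):

Code:
FRAME_WIDTH_PX = 1920

def disparity_px(percent_of_width, frame_width=FRAME_WIDTH_PX):
    """Convert a disparity quoted as % of screen width into pixels."""
    return frame_width * percent_of_width / 100.0

print(disparity_px(4))    # +4% positive (background) limit -> 76.8 px
print(disparity_px(-2))   # -2% negative (foreground) limit -> -38.4 px
print(disparity_px(2))    # the +2% average -> 38.4 px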
__________________
Alister Chapman, Film-Maker/Stormchaser http://www.xdcam-user.com/alisters-blog/ My XDCAM site and blog. http://www.hurricane-rig.com
August 2nd, 2010, 05:03 AM   #23
Quote:
Originally Posted by Leonard Levy
[Leonard's post #21 above, quoted in full]
If you want perfect geometry without distortion in scale and perspective, then simply set your rig at 63 mm I-O and lock it for good. Never use zoom, and set your view angle fixed at 35 to 45 degrees, depending on where the sweet spot is in the presentation theater's geometry.
If your rig is parallel, then the stereo-window adjustment can be done in post, or at presentation time if a real stereoscopic player is used. If you can toe in, then simply set the stereo window to the same size as the target screen.
Just remember that all rigs need barrel correction first; after that, a parallel rig needs just a stereo-window adjustment, and a toe-in rig needs just a keystone correction. In both cases you will have some loss of resolution. Only an off-axis stereoscopic camera can shoot perfect content that requires only barrel correction in post. If your rig has cameras with interchangeable lenses, it can be converted to true off-axis geometry using special custom-made adapters.
If you follow my advice your content will be real for life, and you can always distort it for gimmick-type projection. But if you follow gimmick geometry techniques, you will never be able to experience full realistic immersion in the scenes you have captured.

Mathew Orman
August 2nd, 2010, 11:17 AM   #24
Well, our first showing will be a semi-theatrical screening at a large corporate meeting, so I need to keep the I-A conservative.
What I'm doing at this point is shooting parallel, but using a Davio box to slide the images horizontally together to approximate convergence in post. That allows us to see anaglyph on a 17" monitor. Because the monitor is small, I'm aiming for "tame 3D" on it.

We're using a 1/45 to 1/60 rule to make sure the I-A is approximately right for the near distance, then checking with my calculator to look at the near and far, again just to feel like I'm safe, and keeping the I-A somewhere between 12mm for very close shots (one shot that was 2' away and wide) and 30 - 40mm for most normal shots. I'm constantly telling my producers that I don't want the backgrounds too deep, but making the far only 2x the near distance, as suggested earlier in this thread, sounds pretty tight. I might try for 3x if I can, but even that will sometimes be hard. We have a day to experiment tomorrow, I hope, and I want to try shooting a bunch of B-roll that stretches some parameters just to see what it will look like. Can't wait to start thinking about aesthetics instead of just damage control.
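For anyone following along, the divisor rules mentioned here boil down to a one-liner. A minimal sketch with made-up numbers (the 2' close-up mentioned above), assuming distances in millimetres and the plain near-distance/divisor rule of thumb rather than whatever the calculator apps actually compute:

Code:
def rule_of_thumb_ia(nearest_mm, divisor=45):
    """Interaxial (mm) from the 1/divisor rule of thumb (30, 45 or 60)."""
    return nearest_mm / divisor

nearest = 2 * 12 * 25.4                            # a subject about 2 feet away
print(round(rule_of_thumb_ia(nearest, 45), 1))     # 1/45 -> ~13.5 mm
print(round(rule_of_thumb_ia(nearest, 60), 1))     # 1/60 -> ~10.2 mm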

Alister, what in the world (or out of this world) are you using 150m of I-A for? Sounds like the 3D equivalent of SETI.

Thanks again for your help guys.
Leonard Levy

August 2nd, 2010, 03:11 PM   #25
The 150m is for lightning bolts 4 to 5 miles distant, using 50mm lenses. We are building an array of 20 cameras for this, and then later in the year we will be doing the Northern Lights with a 500m I-A; the Northern Lights are about 300 miles up.
__________________
Alister Chapman, Film-Maker/Stormchaser http://www.xdcam-user.com/alisters-blog/ My XDCAM site and blog. http://www.hurricane-rig.com
August 2nd, 2010, 03:28 PM   #26
That sounds pretty amazing Alister. Can't wait to see that someday.
Petri Teittinen

August 2nd, 2010, 05:05 PM   #27
Quote:
Originally Posted by Leonard Levy
...sounds like you shoot converged rather than parallel in general? Was this true for Transformers also? Most of the people I've read or talked to strongly suggest Parallel.
Yes, I usually shoot converged, for a couple of reasons. One is that some of the production companies I have been shooting 3D for don't have adequate 3D post software, so I have to help them get there with convergence. It's nerve-wracking because you have to be pretty close to dead-on on set, which is very difficult at times with the time constraints, etc. Basically, I have a 90/10 rule with post production: I'll get you 90% of the way there and post has to do the last 10% of fine tuning. Secondly, when shooting 1920x1080, not converging will always cost you image size on the sides, and therefore resolution, as shifting the data streams in post forces a scale-up.
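A rough sketch of that resolution cost, assuming convergence is set in post by a plain horizontal image translation and the result is cropped back to full width (my assumed workflow, not any particular package's):

Code:
FRAME_WIDTH = 1920

def post_convergence_blowup(shift_px, width=FRAME_WIDTH):
    """Scale-up factor after shifting the eyes together by shift_px pixels total."""
    common = width - shift_px      # picture width still shared by both eyes
    return width / common

print(round(post_convergence_blowup(40), 3))   # 40 px of shift -> ~1.021x blow-up
Even a modest shift costs a couple of percent of blow-up on every shot, which is why getting convergence close on set can be worth the nerves.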

Transformers 3, like Avatar, is being converged on set. They are using a Pace 3D rig with 2 Sony F35 cameras. It's my understanding that Tron was shot parallel, and although I know that Pirates of the Caribbean is shooting 3D with 2 new Epic Red cameras, I don't know whether Dariusz is shooting parallel or converged.

Quote:
Do you also simply look at the overlapped images on a monitor and make an educated guess about how far the parallax is?
For camera rig adjustments I use an 8" linear-polarized 3D monitor which shows me a superimposed image of both eyes at 1080i resolution, and with passive glasses on I can see the 3D effect quite well - and in full color, not anaglyph. I have just taken delivery of a new Blackmagic Design 3D DisplayPort unit which I am going to use to drive a 40" 3D Samsung on set for the Village Idiots. Finally, I'm a partial investor and consultant with a company that is planning to bring to market in a few months a 12" LCD field monitor using linear polarization to preserve full color, with the ability to zoom in 4x to check background disparities (that's 15" x 4, or the equivalent of a 60" 3D HDTV monitor), so as not to have to drag a giant HDTV onto the set.

Alister is very correct in pointing out that if you are shooting for a HDTV final output, then you can crank 'em open a little bit more than for the big screen.

Lastly, I have found that it is vital to find out exactly, and I mean exactly, what 3D post-production software will be used on your footage. I have visited the edit bays of all of the companies I've shot 3D for to do just this and ask the 3D editors about their editing capabilities in advance of any shoot. This will save you over and over, because it will tell you how to shoot the footage most efficiently. Unfortunately, Avid has convinced its installed base that it "does" 3D now with the newest version, but I'm finding out that about all it really does is take two data streams and mux them for viewing. That's not even rudimentary 3D processing, which you can do with CineForm Neo3D or Tim Dashwood's Stereo Toolbox plugin for FCP.
Bruce Schultz

August 3rd, 2010, 11:54 AM   #28
Bruce wrote, "I know that Pirates of the Caribbean is shooting 3D with 2 new Epic Red cameras"

As far as I know they're not actually using EPICs on that show, since those cameras exist only in prototype form at the moment. I believe they're shooting with RED Ones that have the new M-X sensor installed. That's the same sensor that will be in the EPIC, but the rest of the original RED hardware limits bitrate and framerate to non-EPIC levels.

Off-topic, I know, sorry.
Petri Teittinen

August 3rd, 2010, 07:10 PM   #29
Petri, you are right about that; I was misinformed by a crew person. They are using M-X'd RED One cameras on Pace 3D rigs.
Bruce Schultz