View Full Version : Anyone bought the RealD Stereo Calculator App?
Nick Hiltgen July 22nd, 2010, 04:14 AM So I got this email yesterday offering a discount on the stereo 3D course talked about in an earlier thread, FrameForge 3D, and a new application for iPhone, iPad and iPod touch from RealD.
The catch is that the application is 230 bucks. As much as I'd want to buy it so I could legitimately(ish) write off the purchase of an iPad, I just can't help but wonder if 230 bucks is anywhere close to a reasonable price for what appears to be such a simple piece of software. Anyone bought this yet?
RealD Professional Stereo3D Calculator for iPhone, iPod touch, and iPad on the iTunes App Store (http://itunes.apple.com/us/app/reald-professional-stereo3d/id362539528?mt=8#)
This link opens up both a web page and the iTunes info page.
Mathew Orman July 22nd, 2010, 12:06 PM If you wait a week, I will sell you one for US$ 29.00.
For the moment I have one for PC, but I am working on an iPhone version and should have it ready by next week.
With the package you would also get a stereoscopic camera simulator in OpenGL supporting toe-in, parallel and off-axis camera geometries.
Mathew Orman
Nick Hiltgen July 22nd, 2010, 09:49 PM If it has the same functionality, I think you'll have a great app at that price.
Steve Shovlar July 23rd, 2010, 03:11 AM There's already one out there for $30.
http://itunes.apple.com/us/app/3d-movie-calculator/id377269776?mt=8
Mathew Orman July 23rd, 2010, 10:30 AM If it has the same functionality, I think you'll have a great app at that price.
Yes, it has the gimmick stereo mode like all of them, but the unique feature is the real undistorted stereoscopic geometry mode for creating stereoscopic content that is distortion-free and matches the target screen size.
There is also support for toe-in, parallel and off-axis stereoscopic camera geometries.
Mathew Orman
Alister Chapman July 24th, 2010, 01:45 AM There is also the very good Speedwedge app.
Tim Dashwood July 25th, 2010, 06:49 PM I concur on Speedwedge IOD Calc. It has the most "bang for the buck" at $50 and is easy to use and understand.
Leonard Levy July 26th, 2010, 12:42 AM I just got the 3D StereoTool app for my iPhone and it seemed very cool, but something has me confused, and I'm wondering if I misunderstand something or if the app has a problem.
I also tried going on the web and looking for other stereo calculators, and they all seemed to want you to set a near and far point, from which the app would calculate the correct inter-ocular. That seems backwards to me. I would prefer to start with a set inter-ocular - like 65mm - and then see what the acceptable near and far points are for any given shot.
OK, specifically: the 3D StereoTool lets you set the lens in mm, the inter-ocular distance and the convergence point (for toe-in). Now, I would expect that with those 3 parameters it would then show you the acceptable near and far points for a stereo window. However, it also allows you to set a "far" distance on the stereo window and then calculates the near. So you can change the far point from infinity to anything while the convergence point remains set. That doesn't make sense to me. Shouldn't the convergence point determine both the near and far?
How about the Speedwedge IOD? Will it let you set up a shot with a given inter-ocular and then show you the near and far points based on a given convergence distance?
Lenny Levy
Adam Stanislav July 26th, 2010, 09:50 AM Why not just use a hand-held calculator and calculate it yourself? Why do we need a dedicated application for everything these days?
Just use the Bercovitz and Di Marzio Formulae (http://nzphoto.tripod.com/stereo/3dtake/newversionberk.html) that all these applications use anyway.
And if you absolutely must use a computer, why pay for something you can get for free, such as the BaseCalc (http://pmeindre.free.fr/BaseCalc.html) application for Windows (including Pocket PC)?
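For anyone who wants to try the do-it-yourself route, here is a minimal Python sketch of the Bercovitz formula from that first link (the sample numbers are illustrative assumptions, not recommendations):

def stereo_base(P, L, N, F):
    # Bercovitz formula: stereo base B (mm) from the desired on-film
    # parallax P (mm), largest distance L, nearest distance N and
    # focal length F (all in mm).
    return P / (L - N) * (L * N / F - (L + N) / 2.0)

# Example: 1.2 mm parallax budget, subject from 1.5 m to 3 m, 50 mm lens:
print(round(stereo_base(P=1.2, L=3000.0, N=1500.0, F=50.0), 1))  # ~70.2 mm

Plug in your own parallax budget for the target screen and the formula answers the "what base for this shot?" question directly.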
Pavel Houda July 26th, 2010, 09:58 AM Great link, thank you Adam. Given that I know my camcorders, I can probably pre-calculate for a few situations and remember the values. This doesn't need to be rocket surgery.
Adam Stanislav July 26th, 2010, 10:35 AM Indeed it does not. Plus, doing it yourself allows you to test the formulae for your own work and not rely on some hardcoded math that someone else decided should work for you.
For those who want to get into the guts of the math, here are two PDF links:
The Di Marzio Equation (http://nzphoto.tripod.com/stereo/3dtake/Di_Marzio_v_3.pdf)
The Di Marzio Equation for Stereography (http://nzphoto.tripod.com/stereo/3dtake/Di_Marzio_Equation_Technical_Web.pdf)
The first is an overview (10 pages); the second is quite technical (75 pages).
Pavel Houda July 26th, 2010, 12:45 PM Thanks again Adam. I like to understand the basic theory and rules for these types of things, and how to be in the ballpark, but improvise as I go. I don't do theater movies. For me it was more useful to build a stereoscopic/dual monoscopic previewing system for the cameras - one that lets me see camera alignment, luma and chroma states, whether I am still in stereoscopic range, and other easily visible issues - than to run a computer for the shots. Even though the resolution is poor, there is still a wealth of information one can receive.
I can see how that is useful for bigger productions though, especially since one cannot preview on an IMAX screen very easily.
I do enjoy understanding the theory behind it, so I can make sensible guesses. So far I have mostly used the 1/30 - 1/60 rule, which I originally received from Tim Dashwood. Thank you for the links. It is fascinating to learn from them.
Leonard Levy July 26th, 2010, 11:48 PM I'm sorry guys, I used to be quite competent in math when I was younger, but it is not practical to expect the average working DP or stereographer to have to deal with calculations or even articles like these. If it works for you, great, but if I can get a nice calculator on my iPhone that I only have to plug values into, it's a far better solution for me and, I would guess, for the vast majority of other professionals.
I would be quite pleased if anyone would respond to my question about the StereoTool. Should I not be able to see the near and far points of a stereo window by inputting my preferred interaxial distance, convergence point and lens?
It doesn't seem to work on the 3D StereoTool and I don't get why not.
Can you do this with the RealD calculator?
Am I missing something?
Lenny Levy
Adam Stanislav July 27th, 2010, 08:30 AM Am I missing something?
Actually, yes. The very first link I posted shows clearly that the near distance changes if any other parameter changes. That includes the far distance.
It also suggests you limit your far distance to twice the near distance. But from what you are describing, your fancy app does not seem to let you specify that. Hence the suggestion to calculate it yourself. It is not that hard. We were taking pictures long before the portable computer (or even the personal computer) and we got it right doing the calculations with pencil and paper, though mostly straight in the head. Doing it with a handheld calculator is much simpler, and you do not have the limitations your app is imposing on you.
The most complex of those formulae requires addition, subtraction, multiplication and division. No trigonometry, no calculus, just things any 10-year-old can do.
Anyway, if your app is asking you to give it the far distance, give it a far distance. Play with it. Try different far distances and see what it comes up with. Maybe give it infinity first and then keep bringing it closer until you get what you need. At first you'll be guessing. Very soon you will see a pattern in how it works and will take the guessing out of it.
Bruce Schultz July 27th, 2010, 05:24 PM Lenny, you tried to help me some time back with an EX1 color rendering problem I was having so perhaps I can try and reciprocate here.
Why would you want a "preferred inter-axial"?
The basic measurements needed to avoid background (and foreground) parallax or divergence are the distances to your foreground object(s) and to the background. These can usually be obtained with the help of a tape measure (foreground) and general guesswork about the background if it is very deep. Once you have these two numbers, the calculators will tell you the minimum and maximum lens inter-axial distance to set in order to minimize or eliminate divergence. I can't think of a reason why you would want to do this backwards with a mirror rig, because you would just be guessing at the proper inter-axial for every shot, since the FG and BG distances will almost always differ from shot to shot.
The 65mm adult human inter-ocular is fixed, but lens inter-axial can be adjusted in a mirror rig down to 1mm and out to as much as 200-300mm for deep distance shots - even further apart with side-by-side rail rigs. The key is understanding the relationship between the FG and BG distances as I've outlined above and then setting the correct inter-axial to maximize roundness and minimize divergence.
Now with the new Panasonic 3D camera all of this changes, and you actually do have a fixed 62mm inter-axial distance. Having shot tests with it recently, I can say that it is going to change the measuring metrics used for mirror rigs, and the IOD calculators will have to start making the kinds of "backwards" calculations you have requested. We'll have to see how the calculation designers handle that. The Panasonic 3D camera will be very limited in its depth availability - Panasonic says 10 feet to 100 feet, but my calculations using it indicate a much more compact depth of 10 feet to about 50 feet. It's going to be a real challenge to use this camera as effectively as a mirror rig.
Adam Stanislav July 27th, 2010, 06:15 PM We'll have to see how the calculation designers handle that.
By rearranging the equations, which is very easy to do.
From the web site I quoted:
B = P/(L-N) ( LN/F - (L+N)/2 )
B = Stereo Base (distance between the camera optical axes)
P = Parallax aimed for, in mm on the film
L = Largest distance from the camera lens
N = Nearest distance from the camera lens
F = Focal length of the lens
(L-N = D = the front to back depth of the subject. With a bit of luck it is also the depth of field of the image, but does not have to be, especially at macro distances where a big depth of field is hard to achieve.)
So, if we know the desired front to back depth, we change every L to D+N, like this:
B = P/D ( (D+N)N/F - (D+N+N)/2 )
And if we want to calculate the N (nearest distance from the camera lens) while knowing B, P, D and F, we just rearrange it and end up with a quadratic equation:
N² + (D - F)N - FD(1/2 + B/P) = 0
The equation will have two solutions for N, probably one negative (which we can discard) and one positive. Once we have the nearest distance, we can calculate the largest distance by adding the subject depth D to the nearest distance.
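To make that concrete, here is a minimal Python sketch of the rearranged equation, using the variable names defined above (the sample values are illustrative assumptions):

import math

def nearest_distance(B, P, D, F):
    # Solve N^2 + (D - F)N - FD(1/2 + B/P) = 0 for the nearest
    # distance N, given stereo base B, on-film parallax P, subject
    # depth D and focal length F (all in mm). Returns the positive root.
    b = D - F
    c = -F * D * (0.5 + B / P)
    disc = b * b - 4.0 * c
    if disc < 0:
        raise ValueError("no real solution for these parameters")
    roots = ((-b + math.sqrt(disc)) / 2.0, (-b - math.sqrt(disc)) / 2.0)
    positive = [r for r in roots if r > 0]
    if not positive:
        raise ValueError("no positive root; check the inputs")
    return positive[0]

# Example: 65 mm base, 1.2 mm parallax budget, 2 m subject depth, 50 mm lens
N = nearest_distance(B=65.0, P=1.2, D=2000.0, F=50.0)
print(round(N), round(N + 2000.0))  # near ~1558 mm, far ~3558 mm

Feeding the resulting near and far back into the original Bercovitz formula returns the 65 mm base, which is a handy check that the rearrangement is right.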
Quite frankly, I am surprised the app mentioned in this topic does not have this option (at least not judging by what I have read here).
Alister Chapman July 28th, 2010, 10:32 AM Interaxial variation is fundamental to good 3D production. Changing convergence via toe-in or angulation also changes the on-screen depth, sometimes by dramatic amounts, while changing convergence by changing interaxial has a far smaller effect on the total scene depth. When cutting between shots within a scene, constantly changing the convergence via angulation can make the result very hard to view comfortably, as the scene stretches and compresses, while using interaxial instead gives a more natural and easier-to-view film. This, for me, is one of the big downsides to any of the single-camera solutions currently available.
Petri Teittinen July 28th, 2010, 12:41 PM For the moment I have one for PC
Mr. Orman, I'd be very interested in purchasing a copy of your calculator for PC. How should I proceed?
Leonard Levy July 31st, 2010, 09:28 PM Thanks Bruce, and everyone else here as well.
Please be patient if my questions seem dumb - I am picking up speed fast though.
Having finally gotten my feet wet the other day, I can see why you would be interested in calculating the inter-axial for the shot, instead of the near and far for some pre-chosen inter-axial. I was also able to use the 3D StereoTool kind of backwards by making incremental changes until I got to the shot I was working with. It seems pretty helpful, though it will be more valuable if/when they change it to allow you more choice over which values to make variable. My partner downloaded the IOD calculator for $50 and that didn't seem to let you set your interaxial either, at least at first glance. I only looked at it for a moment and will try it again next week.
I'm wondering, as a general rule, how much you guys are making calculations on the set, or do you just use the 1/30 or 1/60 rule and wing the rest? The shoot I'm on expects to have at least a few theatrical corporate meeting screenings, but the rest would be for TVs. Do you guys sometimes split the difference and go for 1/45?
Are you really able to limit far distance to twice the near distance on many shots, and is that a common rule of thumb? It sounds very limiting on the set. Does that really mean that if a person is, say, 9' away and your near is 7', you can have nothing in the B/G past 14'? Do you find yourself battling directors who want to set up shots you don't feel comfortable with?
I sat in front of a 3DTV at Best Buy today and looked at every short they had in their demos. The good, the bad and the ugly, for sure. It did look to me that my favorite stuff sometimes had limited depth and, above all, good roundness, i.e. something closer to the twice-the-near-distance rule. Almost every exterior that had much visual interest at infinity looked very cardboard - or at least the background did.
Alister, that sounds like a good argument for shooting parallel also.
Here's a broad question. So much of what I read online concerns avoiding mistakes, i.e. with calculations you can determine the limits of acceptable 3D by setting near, far and interaxial - but that doesn't make for good-looking 3D yet, does it? What turns you guys on, and what are you trying to achieve aesthetically on a 3D shoot?
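As an aside, the 1/30 and 1/60 rules mentioned above boil down to a one-line calculation: interaxial roughly equals the nearest object distance divided by n. A quick Python sketch (the distances are illustrative assumptions):

def rule_of_thumb_ia(nearest_mm, divisor=30):
    # Classic 1/n rule of thumb: interaxial ~= nearest distance / n.
    # Larger divisors are more conservative (often preferred for big screens).
    return nearest_mm / divisor

# Nearest object at about 9 ft (~2743 mm):
for n in (30, 45, 60):
    print(f"1/{n} rule: {rule_of_thumb_ia(2743, n):.0f} mm")
# -> 91 mm, 61 mm, 46 mm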
Bruce Schultz August 1st, 2010, 02:14 PM That's a lot of questions, Lenny. I'll try to get to a few of them and let others speak to the rest.
It almost sounds like you are using a fixed I-A camera rig and not an adjustable mirror/beam type of rig. Is that correct? If so, it would explain why you seem fixed on calculating the proper settings backwards - and this I understand, having suffered through a few early shoots with these types of systems before eventually abandoning them entirely for mirror/beam setups. It's not that you can't get some decent stuff shooting parallel cameras, but it's extremely limiting. You really need to be able to adjust the I-A down to a very short distance at times to eliminate background disparities like ghosting, and the mirror/beam rig is the best way to achieve that.
I was on the set of Transformers 3 a few weeks ago and had a long discussion with the Pace I-O (why "inter-ocular" has not been fully replaced by "inter-axial" I don't understand) convergence puller as well as the on-set stereographer. They both said that they were rarely going over 25mm (1 inch) in any shots except the most compact ones - that is, where the BG was close to the FG. Using a narrow I-A does minimize roundness to some degree, but it also minimizes the background disparities, which can be difficult if not impossible to correct except with the most expensive post software. So it is incumbent on the stereographer to keep this in mind. I rarely shoot anything over, say, 30mm I-A and am usually in the 15-25mm range for almost all shots. One of the big problems that the early 3D makers (1950s) had was their inability to adjust I-A at all, so they would compensate by moving the set walls inwards - quite the opposite of 2D photography's technique of expanding sets to increase perceived depth. Take a look at Hitchcock's "Dial M for Murder" to see what I'm referring to. Jim Cameron was asked repeatedly by the brass at Fox why the 3D shots weren't "big", and this was why they weren't - he understood through trial and error that the problems with large I-A shots could be insurmountable. You can fix bad convergence, but not bad I-A settings.
I usually converge on the focus point, except for green/blue screen shots, which I will try to shoot parallel because the post convergence won't cause any blowup problems.
As for on-set calculations, I find the IOD iPhone/iTouch app works great, but it is not tuned to calculate distances based on fixed I-A settings. You might contact Leonard, the creator, to see if he can modify it, but like I stated earlier, that would be to accommodate fixed I-A rail rigs primarily. And finally, I don't find the 30:1 etc. formulas to be very helpful in the field.
Leonard Levy August 1st, 2010, 04:10 PM Thanks Bruce, and thanks for your patience.
Actually, I am using an adjustable I-A rig, and on our first shoot I did change the I-A all the time while shooting parallel. I think my question was formed before actual experience. I'm not sure I still feel it is a major problem with the software, but it seems like it would be good to have more freedom in how to use it. I will compare the 3D StereoTool to the IOD as soon as I get a chance. I hope they suggest the same settings.
I was reassured by the general range of I-A settings you used, since we also stayed reasonably conservative, probably varying between 12mm for close shots and maybe 37mm for people around 9-10 feet away. I based that on a 1/45 compromise and then just kind of winged it. Sounds like you and the Transformers people are going even narrower. Any danger in going too narrow other than dull 3D? I did read something about avoiding "gigantism".
Max is 85mm for our rig and I hope that will be enough for exterior shots of locations. No vistas, thank god.
At the risk of igniting the continuing controversy - it sounds like you shoot converged rather than parallel in general? Was this true for Transformers also? Most of the people I've read or talked to strongly suggest parallel. My PM would prefer converged because he seems to think it will make post faster, though I'm dubious. I'm wary of it because, with our level of experience, I prefer to keep more choices available in post.
How do you decide how deep to allow your shot to be?
Do you also simply look at the overlapped images on a monitor and make an educated guess about how far off the parallax is?
I feel pressure from above not to be a conservative curmudgeon about depth, and since it's a doc, in general to just "shoot - we're running out of time". (Some things never change.)
Alister Chapman August 2nd, 2010, 01:43 AM I completely agree with Bruce and his comments above. The only thing I would add is that Transformers is, of course, being shot for large-screen presentation, so there is no surprise in finding them using sub-40mm I-O. If your production is only ever going to be seen on smaller screens such as 50" TVs, then larger I-Os may be desirable, but even then 60mm is likely to be plenty. For web and PC use you may even exceed 65mm. All my latest projects have been done sticking to Sky TV's disparity limits of +4% and -2% max, with an average of +2% and only occasional use of negative disparity. This has normally meant I-O ranges between 30mm and 50mm, depending on the scene, for interiors and narrative. At the same time I'm doing a very exciting hyperstereo project where we are using an array of cameras with an I-O of 150m!
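For anyone translating broadcast limits like those into on-set numbers: percentage disparity is just screen parallax as a fraction of image width, so the conversion to pixels is one line. A small sketch (the image width is an illustrative assumption, not part of Sky's spec):

def disparity_px(percent, image_width_px=1920):
    # Convert a disparity limit given as % of screen width into pixels.
    return image_width_px * percent / 100.0

print(disparity_px(4))   # +4% positive (into the screen)  -> 76.8 px
print(disparity_px(-2))  # -2% negative (out of the screen) -> -38.4 px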
Mathew Orman August 2nd, 2010, 05:03 AM Sounds like you shoot converged rather than parallel in general? Most of the people I've read or talked to strongly suggest parallel.
If you want perfect geometry without distortion in scale and perspective, then simply set your rig at 63mm I-O and lock it for good. Never use zoom, and fix your view angle at 35 to 45 degrees, depending on where the sweet spot is in the presentation theater geometry.
If your rig is parallel, then the stereo-window adjustment can be done in post, or at presentation time if a real stereoscopic player is used. If you can toe in, then simply set the stereo-window size the same as the target screen.
Just remember that all rigs need barrel correction first; after that, a parallel rig needs just a stereo-window adjustment, while a toe-in rig needs keystone correction. In both cases you will have some loss of resolution. Only an off-axis stereoscopic camera can shoot perfect content that requires nothing but barrel correction in post. If your rig has cameras with interchangeable lenses, then it can be converted to true off-axis geometry using special custom-made adapters.
If you follow my advice, your content will be true to life, and you can always distort it for gimmick-type projection. But if you follow gimmick geometry techniques, then you will never be able to experience full realistic immersion in the scenes you have captured.
Mathew Orman
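For what it's worth, the stereo-window adjustment described above for parallel footage is plain horizontal image translation: crop opposite edges of the two eyes so the images shift relative to each other. A minimal Python sketch, assuming HxWx3 numpy image arrays of equal size (the function name and sign convention are my own, not from any particular package):

import numpy as np

def horizontal_image_translation(left, right, shift_px):
    # Stereo-window adjustment for parallel-shot footage by cropping
    # opposite edges of the two eyes. Positive shift_px pushes the
    # scene back (adds positive parallax); negative pulls it forward.
    # No scaling or keystone is introduced, but shift_px columns of
    # width are lost.
    s = abs(int(shift_px))
    if s == 0:
        return left, right
    if shift_px > 0:
        return left[:, s:], right[:, :-s]
    return left[:, :-s], right[:, s:]

# Example: push the window back by 20 px on a 1920x1080 pair
left = np.zeros((1080, 1920, 3), dtype=np.uint8)
right = np.zeros((1080, 1920, 3), dtype=np.uint8)
l2, r2 = horizontal_image_translation(left, right, 20)
print(l2.shape, r2.shape)  # (1080, 1900, 3) (1080, 1900, 3)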
Leonard Levy August 2nd, 2010, 11:17 AM Well, our first showing will be a semi-theatrical large corporate meeting, so I need to keep the I-A conservative.
What I'm doing at this point is shooting parallel, but using a Davio box to slide the horizontals together to approximate convergence in post. That allows us to see anaglyph on a 17" monitor. Because the monitor is small, I'm aiming for "tame 3D" on it.
We're using a 1/45 to 1/60 rule to make sure the I-A is approximately right for the near distance, then checking with my calculator to look at the near and far, again just to feel like I'm safe, and keeping the I-A somewhere between 12mm for very close (one shot that was 2' away and wide) and 30-40mm for most normal shots. I'm constantly telling my producers I don't want the backgrounds too deep, but making it only 2x the near distance, as suggested earlier in this thread, sounds pretty tight. I might try for 3x if I can, but even that will sometimes be hard. We have a day to experiment tomorrow, I hope, and I want to try shooting a bunch of B-roll that stretches some parameters, just to see what it will look like. Can't wait to start thinking about aesthetics instead of just damage control.
Alister, what in the world (or out of this world) are you using 150m I-A for? Sounds like the 3D equivalent of SETI.
Thanks again for your help guys.
Alister Chapman August 2nd, 2010, 03:11 PM The 150m is for lightning bolts 4 to 5 miles distant, using 50mm lenses. We are building an array of 20 cameras for this, and later in the year we will be doing the Northern Lights with a 500m I-A; the Northern Lights are about 300 miles up.
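(As a rough sanity check against the 1/30 rule discussed earlier: 4 miles is about 6400 m, and 6400 / 30 ≈ 213 m, so a 150 m base for subjects that distant is, if anything, on the conservative side.)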
Petri Teittinen August 2nd, 2010, 03:28 PM That sounds pretty amazing Alister. Can't wait to see that someday.
Bruce Schultz August 2nd, 2010, 05:05 PM ...sounds like you shoot converged rather than parallel in general? Was this true for Transformers also? Most of the people I've read or talked to strongly suggest parallel.
Yes, I usually shoot converged, for a couple of reasons. One is that some of the production companies I have been shooting 3D for don't have adequate 3D post software, so I have to help them get there with convergence. It's nerve-wracking, because you have to be pretty close to dead-on on set, which is very difficult at times with the time constraints, etc. Basically, I have a 90-10 rule with post production: I'll get you 90% of the way there and post has to do the last 10% of fine tuning. Secondly, when shooting 1920x1080, not converging will always cost you image size on the sides, and therefore resolution, as moving the data streams in post will force a scale-up.
Transformers 3, like Avatar, is being converged on set. They are using a Pace 3D rig with 2 Sony F35 cameras. It's my understanding that Tron was shot parallel, and although I know that Pirates of the Caribbean is shooting 3D with 2 new Epic Red cameras, I don't know whether Dariusz is shooting parallel or converged.
Do you also simply look at the overlapped images on a monitor and make an educated guess about how far off the parallax is?
For camera rig adjustments I use an 8" linear-polarized 3D monitor which shows me an overlapped, superimposed image of both eyes at 1080i resolution, and with passive glasses on I can see the 3D effect quite well - and in full color, not anaglyph. I have just taken delivery of a new Blackmagic Design 3D displayport which I am going to use to drive a 40" 3D Samsung on set for the Village Idiots. Finally, I'm a partial investor and consultant with a company that is planning to bring to market in a few months a 12" LCD field monitor using linear polarization to preserve full color, with the ability to zoom in 4X to check background disparities (that's 15" x 4, or the equivalent of a 60" 3D HDTV monitor) so as not to have to drag a giant HDTV onto the set.
Alister is very correct in pointing out that if you are shooting for HDTV final output, you can crank 'em open a little bit more than for the big screen.
Lastly, I have found that it is vital to find out exactly, and I mean exactly, what 3D post-production software will be used on your footage. I have visited the edit bays of all the companies I've shot 3D for to do just this and ask the 3D editors about their editing capabilities in advance of any shoot. This will save you over and over, because it will tell you how to shoot the footage most efficiently. Unfortunately, Avid has convinced its installed base that it "does" 3D now with the newest version, but I'm finding out that about all it really does is take two data streams and mux them for viewing. That's not even rudimentary 3D processing, which you can do with Cineform Neo3D or Tim Dashwood's Stereo Toolbox plugin for FCP.
Petri Teittinen August 3rd, 2010, 11:54 AM Bruce wrote, "I know that Pirates of the Caribbean is shooting 3D with 2 new Epic Red cameras"
As far as I know, they're not actually using EPICs on that show, since those cameras exist only as prototypes at the moment. I believe they're shooting with RED Ones that have the new M-X sensor installed. That's the same sensor as will be in the EPIC, but the rest of the original RED hardware limits bitrate and framerate to non-EPIC levels.
Off-topic, I know, sorry.
Bruce Schultz August 3rd, 2010, 07:10 PM Petri, you are right about that and I was misinformed by a crew person. They are using MX'd Red One cameras on Pace 3D rigs.