To toe in or not to toe in, that's the question - Page 4 at DVinfo.net
DV Info Net

Go Back   DV Info Net > Special Interest Areas > 3D Stereoscopic Production & Delivery

3D Stereoscopic Production & Delivery
Discuss 3D (stereoscopic video) acquisition, post and delivery.

Old August 7th, 2010, 07:30 AM   #46
Regular Crew
 
Join Date: Feb 2009
Location: USA
Posts: 31
Exactly,
I have already provided an example of a realistic stereoscopic image,
and if you read the previous replies you will find one with a link to a PC Magazine article that includes my example, with links to download it.

Mathew Orman
Mathew Orman is offline   Reply With Quote
Old August 7th, 2010, 07:59 AM   #47
Regular Crew
 
Join Date: Jul 2010
Location: Central NJ
Posts: 42
Can't fuse it

Mathew,

The single anaglyph image you link to in the comments following Mr. Dvorak's article, when viewed on a 22" 16:9 screen at a distance of 56 cm as you instruct, is impossible for me to fuse into a 3D image. After attempting this near-acrobatic feat for 60 seconds, the pain in my eyes was too great. I asked my wife and child (both lacking any stereoscopic expertise) to give the image a go. Neither had any success viewing it.

For the benefit of other forum readers, here is a direct link to the image you refer to:

http://www.tyrell-innovations-usa.co...2inch16by9.JPG
__________________
VRtifacts
Tony Asch is offline   Reply With Quote
Old August 7th, 2010, 08:56 AM   #48
Regular Crew
 
Join Date: Feb 2009
Location: USA
Posts: 31
That is correct:
you need to relax your eyes, as the car is about 18 feet away and you are expecting it to be at the screen distance.
Try to forget that there is a screen surface and think of it as just a window, or a frame onto the real world.
The cube, on the other hand, requires you to focus on an object that is in front of your screen.
Simply superimpose your finger on the cube, focus on the finger, and slowly move it away from the screen until you can touch an edge of the cube.
Once you have it, align a ruler with one of the edges and it should indicate exactly one inch. Just make sure your distance to the monitor remains constant while you are doing the measurement.
Finally, when you focus on the car the cube will split into two images, and that is natural. Real-life example: look at or focus on an object on the horizon, and the finger in front of your nose will split into two fingers.

The difficulty is due to some ghosting in anaglyph mode, and to the fact that you and your family have never experienced realistic immersion with depth extending from 25 cm to infinity in an example like this.
Also, if you are over 50 you need +1 diopter glasses to accommodate the out-of-focus screen image while in a relaxed state.
I suspect that all images you have seen on your 22-inch screen represented a miniaturized volume of the scene, a volume limited to a small distance from the screen surface.
Such volumes are great for kids who like a miniature version of the world.
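The ortho-stereo condition being described ties viewing distance to the capture FOV: the screen must subtend the same horizontal angle at the viewer's eye as the lens did at the camera. A minimal sketch of that geometry (the function name and the numeric values are illustrative assumptions, not taken from Mathew's files):

```python
import math

def ortho_viewing_distance_cm(diagonal_in, aspect_w, aspect_h, hfov_deg):
    """Distance at which the screen subtends the same horizontal
    angle as the camera's horizontal field of view."""
    diag_cm = diagonal_in * 2.54
    # horizontal screen width from the diagonal and the aspect ratio
    width_cm = diag_cm * aspect_w / math.hypot(aspect_w, aspect_h)
    return (width_cm / 2) / math.tan(math.radians(hfov_deg) / 2)

# 22" 16:9 monitor with a 40 degree horizontal FOV (illustrative)
print(f"{ortho_viewing_distance_cm(22, 16, 9, 40):.1f} cm")  # 66.9 cm
```

Sit nearer or farther than this and the reproduced angles no longer match the captured ones; the exact figure also depends on which FOV convention (horizontal, vertical, diagonal) was used when the stereo window was set.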

For those who have nVidia's 3D Vision there is also page flip version of this image.

Mathew Orman
Mathew Orman is offline   Reply With Quote
Old August 7th, 2010, 09:55 AM   #49
Inner Circle
 
Join Date: Dec 2005
Location: Bracknell, Berkshire, UK
Posts: 4,957
I stand by my view that someone in a totally dark room (a planetarium, say) presented with a single dot of light 30 ft from them would struggle to tell how far away that dot is. Tell them it is a star and they could be convinced that it is at infinity. We are not simply mathematical computers; emotion and imagination also play their part.

Looking at infinity, the two pieces of string would be parallel, even if they don't look parallel when viewed end-on, and each eye would be looking straight down its string. The image created or perceived by our brain may well be cyclopean, but each eye does look straight down its string, delivering two very slightly different images to the brain; you only have to blink to see this. So if I take my strings and stretch them out to the 30 ft distant screen, where they are separated by 34", the strings would appear to have a constant separation along their length. This is an illusion (whether your brain is tricked by it or not), because in fact we know the strings to be diverging. If we were watching a stereoscopic image on the screen at the ends of those strings, our eyes would be looking down the lengths of the strings, so they too would be diverging. Our eyes are not designed to diverge, so even if you do calculate a 34" disparity to give the "illusion" of strings being parallel and thus disappearing off into infinity, this is dangerous, as we do not know the true implications of doing this for long periods (other than short-term headaches etc.). So we come back to using 65mm, as this is the closest we can come to representing infinity without causing possible damage. Many productions will permit some disparity beyond this, as some people believe that a degree or so of divergence is tolerated by most people.
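The divergence in the string example can be put into numbers: for a screen disparity wider than the interocular, each eye must rotate outward by roughly the arctangent of half the excess over the viewing distance. A small sketch (the helper function is mine; the 34-inch, 30 ft and 65 mm figures are from the post):

```python
import math

def divergence_deg_per_eye(disparity_cm, ipd_cm, distance_cm):
    """Outward rotation (degrees) each eye needs to fuse a positive
    screen disparity wider than the interpupillary distance.
    Zero or negative means no divergence is required."""
    excess = disparity_cm - ipd_cm
    return math.degrees(math.atan((excess / 2) / distance_cm))

# 34-inch disparity on a screen 30 ft away, 6.5 cm interocular
print(f"{divergence_deg_per_eye(34 * 2.54, 6.5, 30 * 30.48):.2f} deg")
# -> 2.50 deg per eye, well beyond the degree or so some productions tolerate
```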

I can't fuse your image either, Mathew. Perhaps others can. If that is an example of how to do it, it's not doing your argument any favours. EDIT: OK, I did fuse it, but it took some work to get the scale correct and I had to very much relax into the image. I found it harder to fuse than most incorrectly done S3D images, and any head movement caused the image to break down in a way I found unpleasant. If we have to get screen-to-viewer distances as consistent as this image seems to require, it will mean the complete redesign of cinemas as we know them and very specific screen/seating arrangements in the home. This isn't going to happen, so we need to learn how to deal with the distortions that are going to be a fact of life for S3D for the foreseeable future.

At least we all agree that staying close to the human FoV is desirable, even if it isn't always practical. Sometimes when storytelling you want to exaggerate things for increased emotional impact. In addition, taking sport as an example, I don't think anyone is pretending that we are trying to fool the viewer into thinking they are at the game. But ball sports in particular benefit greatly from the third dimension, as it allows you to assess which way the ball is travelling in the Z axis, not just X and Y. In the recent PGA golf examples, S3D allowed viewers to actually judge the subtle slopes of the greens, so you could see why the putt was not aimed directly at the hole. However, I don't think the long shots down the fairway worked; too much foreshortening. We are still learning what the viewer finds acceptable or desirable.
__________________
Alister Chapman, Film-Maker/Stormchaser http://www.xdcam-user.com/alisters-blog/ My XDCAM site and blog. http://www.hurricane-rig.com
Alister Chapman is offline   Reply With Quote
Old August 7th, 2010, 09:57 AM   #50
Regular Crew
 
Join Date: Feb 2009
Location: USA
Posts: 31
Here is a real-life example made with a camera that had its stereo window set to a 22-inch screen and a 40-degree FOV.

http://www.tyrell-innovations-usa.co...ereo22inch.JPG

You should have no problem viewing it if you relax your eyes.

Mathew Orman
Mathew Orman is offline   Reply With Quote
Old August 7th, 2010, 10:06 AM   #51
Regular Crew
 
Join Date: Feb 2009
Location: USA
Posts: 31
Quote:
Originally Posted by Alister Chapman View Post
I stand by my view that someone in a totally dark room (a planetarium, say) presented with a single dot of light 30 ft from them would struggle to tell how far away that dot is. Tell them it is a star and they could be convinced that it is at infinity. We are not simply mathematical computers; emotion and imagination also play their part.

// Wrong. All that is needed is to make a hole in the screen so the viewer can compare the real sky with the one on the screen.

Looking at infinity, the two pieces of string would be parallel, even if they don't look parallel when viewed end-on, and each eye would be looking straight down its string. The image created or perceived by our brain may well be cyclopean, but each eye does look straight down its string, delivering two very slightly different images to the brain; you only have to blink to see this. So if I take my strings and stretch them out to the 30 ft distant screen, where they are separated by 34", the strings would appear to have a constant separation along their length. This is an illusion (whether your brain is tricked by it or not), because in fact we know the strings to be diverging. If we were watching a stereoscopic image on the screen at the ends of those strings, our eyes would be looking down the lengths of the strings, so they too would be diverging. Our eyes are not designed to diverge, so even if you do calculate a 34" disparity to give the "illusion" of strings being parallel and thus disappearing off into infinity, this is dangerous, as we do not know the true implications of doing this for long periods (other than short-term headaches etc.). So we come back to using 65mm, as this is the closest we can come to representing infinity without causing possible damage. Many productions will permit some disparity beyond this, as some people believe that a degree or so of divergence is tolerated by most people.

// Wrong.
// I will create the string scene using C4D and prove my point in just a short while.

I can't fuse your image either, Mathew. Perhaps others can. If that is an example of how to do it, it's not doing your argument any favours.

// I guess you have never seen real immersion either.

// See if you can fuse the image made with Sony's stereoscopic camera with the stereo window set to a 22-inch diagonal and a 40-degree FOV.

At least we all agree that staying close to the human FoV is desirable, even if it isn't always practical. Sometimes when storytelling you want to exaggerate things for increased emotional impact. In addition, taking sport as an example, I don't think anyone is pretending that we are trying to fool the viewer into thinking they are at the game. But ball sports in particular benefit greatly from the third dimension, as it allows you to assess which way the ball is travelling in the Z axis, not just X and Y. In the recent PGA golf examples, S3D allowed viewers to actually judge the subtle slopes of the greens, so you could see why the putt was not aimed directly at the hole. However, I don't think the long shots down the fairway worked; too much foreshortening. We are still learning what the viewer finds acceptable or desirable.

No problem;
sooner or later people will experience undistorted S3D content
and we shall see who likes what.

Mathew Orman

Last edited by Mathew Orman; August 7th, 2010 at 10:41 AM.
Mathew Orman is offline   Reply With Quote
Old August 7th, 2010, 10:24 AM   #52
Regular Crew
 
Join Date: Feb 2009
Location: USA
Posts: 31
Quote:
Originally Posted by Mathew Orman View Post
Here is the real life example made with camera that had stereo window set to 22 inch and 40 deg FOV.

http://www.tyrell-innovations-usa.co...ereo22inch.JPG

You should have no problem viewing it if you relax your eyes.

Mathew Orman
Here is the same scene but distorted, miniaturized by a misplaced 3D volume.
You can just grab the rider with your hand, as she is no taller than your 22-inch screen. Notice the 3D volume is only a few inches deep.

http://www.tyrell-innovations-usa.co...ereo22inch.JPG

Mathew Orman
Mathew Orman is offline   Reply With Quote
Old August 7th, 2010, 11:12 AM   #53
Regular Crew
 
Join Date: Jul 2010
Location: Central NJ
Posts: 42
Experience with immersion

Quote:
Originally Posted by Mathew Orman View Post
the fact that you and your family have never experienced realistic immersion with depth extending from 25 cm to infinity

In our household we have the good fortune to experience such realistic immersion for at least 99% of our waking hours every day as we go about our normal lives.

In any event, my (limited) range of test subjects have expressed their preference. Perhaps others will indicate theirs otherwise.

Unlike the rest of my family, I have more than 20 years of experience with stereoscopic imagery and am aware of the technique of "relaxing" one's eyes. Indeed, for a fleeting, uncomfortable moment the scene fuses, until one starts to scan the image as curiosity draws attention to different parts of the scene. From that moment, all fusion evaporates. My normal +1 computer glasses do not seem to have any significant effect on fusion of this image.

Perhaps with extensive training and practice those of us with no experience in realistic immersion can learn to see things Mathew's way. I am pessimistic that the general public will follow such a regimen.

___________________________________________

Humans perceive 3D in numerous ways, while stereoscopy mimics only one of them: binocular disparity (and the vergence it drives). 2D cinema takes liberties with color, light, FOV, editing, SFX, and POV, not only because of the technical limitations of film/video in reproducing the real world but, more importantly, because the director is conveying a mood, a story, a visual metaphor, a pacing, or some other creative instinct. Stereoscopic cinema is the same way: we take liberties with stereoscopic techniques for both technical and creative reasons. For instance, a director might purposely use 3D cardboarding as a metaphor for a character's inflexible and dogmatic nature, much as Edwin Abbott uses a two-dimensional world to reflect certain aspects of Victorian society in his satire "Flatland." Storytelling, whether by novel, 2D film, or 3D film, achieves immersion through a willing suspension of disbelief, not just in Coleridge's sense of moving beyond implausibilities in the narrative, but also by moving beyond the limitations of the medium.

With that said, no doubt some director will employ Mathew's approach to good use, not because it is the only way to represent 3D, but because it evokes some emotion or feeling related to the storytelling at hand.
__________________
VRtifacts
Tony Asch is offline   Reply With Quote
Old August 7th, 2010, 12:21 PM   #54
Trustee
 
Join Date: Oct 2009
Location: Rhinelander, WI
Posts: 1,258
Quote:
Originally Posted by Mathew Orman View Post
Here is the same scene but distorted, miniaturized by misplaced 3D volume.
And unlike the other two, it is actually possible to view without having to "relax" your eyes and without losing the stereo effect the moment you move your eyes. Indeed, in this one you can move your eyes and look at the rider or at the trees in the background, always seeing the scene in 3D. Not once do you have to diverge your eyes. You can also move your head, or move closer to the screen or further away from it.

The only problem: it was shot wrong from the start, so it is cardboarded. Its natural interaxial distance is too large, so you had to reduce it in post, sacrificing realism in the process. Had you shot it with a correct interaxial distance, little or no correction in post would have been necessary, and the image would have been comfortable to view and not distorted.

It is completely unrealistic to tell your audience they need to view a movie with completely relaxed eyes and hold them that way for two hours without blinking or moving their heads. That is not possible even if your movie is a Zen meditation, let alone an action movie or a thriller.

It is just as unrealistic to expect anyone over fifty to wear special diopter lenses, especially considering the number of people who are wearing corrective lenses already. We had a discussion about that before: the only way for 3D to become widely accepted is through completely passive viewing devices, such as circular polarizers. These come not only as glasses but also as clip-ons, so those of us already wearing glasses can use them too. And, as Alister pointed out, if a kid sits on them, you won't have to mortgage your house to replace them.

The only "perfect" 3D images I have ever seen, images with no distortion but with full 3D depth, were static slides shot with the World 3D camera on 6 cm ("120") film and viewed with the handheld viewer that comes with the camera. You hold the viewer directly against your eyes, so each eye sees exactly one view, fully lit, in full color, with no strobe effect, and all the angles of view are exactly the same as they were in the camera.

Unfortunately, something like that is impractical for motion pictures. Luckily, it does not matter. We are used to film presenting distorted images. In 2D, too. We expect it, we don't mind it. Film is art and thrives on distortion of reality. Reality is boring, film is exciting. When we want reality, we can take a walk in a park and just observe. It is very relaxing and we do not have to pay an admission fee. And of course we often do just that (or something similar). But we are still willing to pay to go to the movies and watch the unreal.

No, our vision is not cyclopean. We use two eyes; well, most of us do. If you want to see the difference between our vision and cyclopean vision, all you have to do is wear an eye patch for a month or so. I was actually forced to do that for about a month as a kid, after someone threw a rock in my left eye just because I was of a different nationality than he was.

Just now I took a break from typing and went outside for a few minutes. I was standing on the garage roof with one eye closed (I do that a lot to observe the difference between 3D and 2D vision) when a small white butterfly flew up from below the roof and started flying towards me. I was unable to tell whether it was going to land on me. At the last moment I opened my closed eye and could now see it was not headed towards me. Before that, I could not judge its size (it turned out to be smaller than I thought) or its distance from me. We do have certain depth cues in cyclopean vision, but they fail us if we cannot tell the exact size of an object from experience.

Yes, our brain merges the two images into one stereoscopic image, but the result is not cyclopean by any stretch of the imagination. It is stereoscopic.
Adam Stanislav is offline   Reply With Quote
Old August 7th, 2010, 02:30 PM   #55
Inner Circle
 
Join Date: Dec 2005
Location: Bracknell, Berkshire, UK
Posts: 4,957
Well, yes, I can now fuse Mathew's images, but my vision doesn't like them at all. I will say that the ortho images are more natural looking than the non-ortho version, but I don't find the scale totally convincing; I still think I'm looking at a small image on a small screen, possibly because my vision is tuned to looking at a computer screen, and when it sees the screen edge the scale cue overrides the depth cues. I'm also having serious focus issues, as my eyes are somewhat confused as to where to focus. Perhaps in a cinema with a more distant screen this will be less of an issue, or corrective glasses are needed. But like Tony, the biggest issue I have is when my eyes start to explore the scene and I concentrate on any one small aspect as opposed to the whole; the 3D then breaks down. I also find it very hard work to view the images, and when I return to normal vision I find my eyesight has been affected (focus, I think) and this takes a few moments to settle down. I would not want to watch a film like this; I actually found viewing the ortho image nauseating, which I have never experienced with S3D before.

The non-ortho image does miniaturise, but it's so much easier to fuse and I find it an easier viewing experience. I enjoyed looking at the non-ortho image; I could explore it visually without issue. The miniaturisation doesn't really bother me: I know it's a horse, so I know how big it really is, and I also know it's on a screen, so my brain, imagination etc. are happy to accept that it is not a 1:1 representation of the real world. Let's face it, people have been watching 2D movies on screens with gross distortions of scale and distance for over 100 years, and we accept them willingly.

I watched Toy Story today in S3D and I thoroughly enjoyed it. There were big distortions of reality throughout the film; most of them added to the viewing experience, such as the use of miniaturisation to make the characters appear vulnerable, or increased depth volume to make the foreground character more threatening.
At no point during the movie did I feel nauseous or have to relax my vision to see the stereoscopic effect. I was free to visually explore the scenes. I found this an enjoyable experience, and that is what watching a film is all about.
__________________
Alister Chapman, Film-Maker/Stormchaser http://www.xdcam-user.com/alisters-blog/ My XDCAM site and blog. http://www.hurricane-rig.com
Alister Chapman is offline   Reply With Quote
Old August 7th, 2010, 02:46 PM   #56
Wrangler
 
Join Date: Aug 2005
Location: Toronto, ON, Canada
Posts: 3,637
I have remained absent from this thread mostly because I haven't had the time this week to make any worthy contributions to the discussion, but I was also interested in seeing how other stereographers feel about one of the most controversial & important subjects in S3D.

The discussion has become heated at times and I ask that the participants please remain as objective as possible. There are two completely different goals being presented, and they require different approaches. 99% of us in this forum are producing content to be viewed for entertainment purposes, not for true-to-life orthostereoscopic reproduction. A scientific approach is not wrong, just impractical for the 99% of us.

Except for stills I've shot over the years on my Kodak Stereo camera, I have never attempted to achieve the true orthostereo effect, and I probably never will. I get paid to make sure the audience enjoys the content being presented and that the S3D effect is never a hindrance to a positive S3D experience. There are three industries (camera/TV manufacturers, film and television production/broadcasters, gaming) relying on content producers to provide a positive, easy and enjoyable 3D experience for consumers. Most of the time that means true-to-life volume needs to be sacrificed for a more conservative approach, and ultimately I don't think audiences are looking for true-to-life 3D anymore. They just don't want cardboard!

When I set up a shot, I prioritize like this:
  1. Setup the ideal composition and focal length of the shot to serve the story (usually as storyboarded)
  2. Choose the plane of convergence based on the action (or closest plane of convergence if it will change mid-shot) and flag any potential window violations
  3. Measure and calculate maximum I.O. to stay within the limits of positive parallax on the target display size after HIT convergence (I typically shoot parallel and use Rule's calculation for this.)
  4. The last thing I worry about is the roundness factor or the effects of hyper or hypo interaxials. Typically I will be asked to maximize the depth budget for each shot but may argue against it based on the other setups in the same scene.
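Step 3 of the list above can be sketched with the common parallel-rig small-angle approximation, in which the on-sensor deviation between near and far points is d = IA × f × (1/near − 1/far). This is a generic textbook formula, not necessarily the specific calculation Tim refers to, and all numeric values below are illustrative:

```python
def max_interaxial_mm(sensor_width_mm, focal_mm, near_mm, far_mm,
                      parallax_budget=1/30):
    """Largest parallel-rig interaxial (mm) that keeps the near-to-far
    deviation within a given fraction of the image width, using the
    d = IA * f * (1/near - 1/far) small-angle approximation."""
    depth_term = focal_mm * (1.0 / near_mm - 1.0 / far_mm)
    return parallax_budget * sensor_width_mm / depth_term

# 36 mm sensor width, 35 mm lens, subject from 2 m to (near) infinity
print(f"{max_interaxial_mm(36.0, 35.0, 2000.0, 1e9):.1f} mm")  # 68.6 mm
```

After HIT convergence the whole budget lands as positive parallax on screen, which is why the fraction has to be chosen per target display size, exactly as step 3 says.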

So I guess the point I'm trying to make is that orthostereo realism is the last concern of mine when shooting for entertainment. I had an interesting conversation a few months ago with Allan Silliphant on this very topic. I'll ask him to elaborate in this thread if he has the time but here's the broad strokes of what he said:
In the early '50s, when Hollywood started producing 3D films in full force, there was an assumption that the goal should be orthostereo reproduction. The problem was that audiences, expecting their "bigger than life" Hollywood stars, didn't respond well to seeing them suddenly appear "normal" sized in the window. The use of hypo-stereo solved the issue through the gigantism effect (which makes us as audience members feel that the actors are once again bigger than life) and a more conservative depth budget (which is good for the large screen). It also allowed for good S3D close-ups.

The next topic I would really like to discuss is the potential for multiple interaxials (3 cameras) for different screen size targets. One issue that my clients have brought up in the past is the fact that when we shoot for the big screen the 3D effects scale down for the small screen. Of course we can't shoot for the small screen when we know that there will be at least one screening in a theatre. The choice is to be more conservative for the largest screen but clients hate to hear that that is the only option available to them.
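The screen-size scaling problem raised above is easy to quantify: parallax stored in the image is a fraction of image width, so the physical disparity the viewer's eyes must handle grows linearly with the screen. A sketch with illustrative numbers (the 1% budget and screen sizes are assumptions, not figures from the thread):

```python
def screen_parallax_mm(parallax_fraction, screen_width_mm):
    """Physical on-screen parallax for a given image-relative parallax."""
    return parallax_fraction * screen_width_mm

budget = 0.01  # 1% of image width, an illustrative depth budget
for name, width_mm in [("10 m cinema screen", 10_000),
                       ("50-inch 16:9 TV", 1_107)]:
    print(f"{name}: {screen_parallax_mm(budget, width_mm):.0f} mm")
# 10 m cinema screen: 100 mm  (far beyond a 65 mm interocular -> divergence)
# 50-inch 16:9 TV: 11 mm      (depth compresses on the small screen)
```

One interaxial cannot satisfy both ends of that range, which is the motivation for the multiple-interaxial (three-camera) idea.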
__________________
Tim Dashwood
Tim Dashwood is offline   Reply With Quote
Old August 7th, 2010, 03:05 PM   #57
Inner Circle
 
Join Date: Dec 2005
Location: Bracknell, Berkshire, UK
Posts: 4,957
Hi Tim.

Thanks for the input.

Multiple camera rigs are certainly something that I have been pondering too, as I'm often producing for both big and small screens; it's a discussion worthy of its own thread really. It would mean bigger rigs, and that's the main stumbling block for me.
__________________
Alister Chapman, Film-Maker/Stormchaser http://www.xdcam-user.com/alisters-blog/ My XDCAM site and blog. http://www.hurricane-rig.com
Alister Chapman is offline   Reply With Quote
Old August 7th, 2010, 11:52 PM   #58
Trustee
 
Join Date: Feb 2006
Location: USA
Posts: 1,684
At the risk of being old fashioned I'd like to bring this back to the original conversation.

Alister, your method of setting convergence is very attractive for the workflow on my project, because we need to move fast (doesn't everyone, though?) but want to see what the picture will look like on set without the extremes of shooting fully converged.

I have a couple of questions though.

1 - One of the advantages of parallel is the ability to set convergence in post, though that requires pushing in on the picture a bit, as you lose the sides. Does your method throw that away completely? Would I be locked into my on-set choice in post, or would I still have some freedom to adjust later?

2 - You said you initially set your camera at 0 I-A and converge so that the background is at maximum disparity, then you push out the I-A to set convergence. Doesn't this mean that your background is closer than maximum disparity? Is that done intentionally to limit how much divergence you'll have, and does that tend to make your depth conservative?
Leonard Levy is offline   Reply With Quote
Old August 8th, 2010, 12:35 AM   #59
Inner Circle
 
Join Date: Dec 2005
Location: Bracknell, Berkshire, UK
Posts: 4,957
Yes, Leonard. The background will always be closer than the maximum disparity, but normally only by a very small amount. You could set your zero-IA disparity just over your chosen limits if you feel you need to compensate for this. Unless you are working converged on a subject very close to the camera, the overall scene depth only changes by a very small amount over a very wide range of IA; certainly a lot less than if you work with a fixed IA and then adjust via toe-in. You can still tweak the stereo window in post, but bringing the window forward will lead to increased background disparity, which could put you over your limits; if you find you need to increase the disparity, a slight zoom will do this.
__________________
Alister Chapman, Film-Maker/Stormchaser http://www.xdcam-user.com/alisters-blog/ My XDCAM site and blog. http://www.hurricane-rig.com
Alister Chapman is offline   Reply With Quote
Old August 8th, 2010, 01:08 AM   #60
Trustee
 
Join Date: Feb 2006
Location: USA
Posts: 1,684
Thanks, Alister, that's what I figured. I guess if in post they want to move convergence back, it will be no different from what they have to do with every shot in parallel anyway. Thus an advantage over parallel in general is that you don't need to do the push-in and thus lose quality. I go into more serious production Monday and I think I'm going to adopt this method.

One last question: am I correct that with very wide shots and with longer lenses, rules of thumb get trumped even more than usual by the character of the individual shot, which may sometimes look better with more or less I-A? Though generally longer lenses may suggest a wider I-A, and the opposite for wide lenses and close subjects.
Leonard Levy is offline   Reply



 





DV Info Net -- Real Names, Real People, Real Info!
1998-2024 The Digital Video Information Network