January 15th, 2007, 10:44 AM | #16 |
Major Player
Join Date: Mar 2006
Location: Los Angeles
Posts: 439
|
True, Carlos, I misphrased that. This seems like it would eventually be a FAR better solution than all the parts of a Mini35. My point was simply that the image is formed by the primary lens on the front surface of the fiber optic. The individual fibers themselves act like lenses to redirect the image to the chip. So instead of a single ground glass with a single lens to focus it on the chip, you have thousands of ground glasses and lenses. Either way, you're photographing an image formed in front of the chip.
|
January 15th, 2007, 03:53 PM | #17 | |
Trustee
|
Quote:
__________________
BenWinter.com |
|
January 20th, 2007, 12:47 PM | #18 | |
Regular Crew
Join Date: Dec 2005
Posts: 46
|
Actually, it is possible...
Quote:
Essentially, an all-optical adapter would just relay the aerial image directly to the CCD/CMOS chip. To put it another way: imagine the focal plane of your prime lens -- a lens that exactly relays that image to your imaging device (film or CCD/CMOS) will give you precisely the lens properties of that original lens at the original focal plane.

Well, not quite -- the relay system will necessarily add its own aberrations, which in practice is why this hasn't been done yet: in order to relay the entire image with any appreciable amount of light, you've got to use some pretty extreme optics that will necessarily add lots of aberrations. (At one point, I decided that going from IMAX to a single 1/3" chip would require a relay system of about F/1.4, which I just can't build at home.) In fact, that's why the Frazier lens (I suspect) only operates at T/8 or slower -- it's the only reasonable way to have a lens system that complicated that doesn't make the image completely fuzzy due to aberrations. (Keep in mind the Frazier lens is manufactured by Panavision, who know a thing or two about lenses, and they still keep it at T/8.)

This gets considerably more complicated when you're using 3-chip cameras, too. Because the beam-splitting prism introduces aberrations (lots of astigmatism, for one) in any non-perpendicular ray bundle, you've got to make your relay lens design telecentric in image space (and actually entocentric in object space -- check out Wikipedia and the Edmund Optics website). Not trivial, and certainly beyond me.

Ultimately, a truly optical solution would be so gnarly that you're probably better off going with a RED camera or similar, even if the DOF adapter had large economies of scale (which most of these solutions do not).

Next post: fiber optic tapers.
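For a rough feel of why the relay optics get so extreme, here is a minimal back-of-the-envelope sketch. It only uses the first-order (paraxial) scaling from the Lagrange invariant; the format widths and the function are just for illustration, not taken from any actual adapter design.

```python
# Back-of-the-envelope only: first-order (paraxial) scaling from the Lagrange
# invariant, i.e. NA_image = NA_object / m and f/# ~ 1/(2*NA).  Format widths
# are approximate, and none of this comes from an actual adapter design.

def relay_image_side_fnumber(taking_fnum: float,
                             source_width_mm: float,
                             target_width_mm: float) -> float:
    """Estimate how fast the relay must be on its image side to keep the whole cone."""
    m = target_width_mm / source_width_mm      # demagnification, < 1
    return m * taking_fnum                     # f/#_image ~ m * f/#_object

if __name__ == "__main__":
    # e.g. a 35mm-format frame (~24.9 mm wide) relayed onto a 1/3" chip (~4.8 mm wide)
    for f in (2.8, 4.0, 8.0):
        n = relay_image_side_fnumber(f, 24.9, 4.8)
        print(f"taking lens at f/{f}: relay needs roughly f/{n:.2f} at the chip")
```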
|
January 20th, 2007, 01:31 PM | #19 | |
Regular Crew
Join Date: Dec 2005
Posts: 46
|
Fiber optic tapers in DOF adapters
Quote:
And that's the other good news: the reason it's used in night-vision is that a fiber optic taper actually gives you a light *gain*. Yes, the cladding material in the taper absorbs about 30% of the incident light. However, the total flux at the entrance face (less those losses) is transmitted to the exit face -- which is usually glued to a sensor. When you squeeze a given amount of light into a smaller area, the intensity increases as the reciprocal of the ratio of the areas -- so a 3x taper will actually give you a 530% light increase (1:9 area ratio, x 70% transmittance). You could gain a few stops, and have the DOF of a larger sensor to boot.

(Incidentally, this is why larger chips tend to have better low-light performance: a single pixel measures total incident light; with a larger area receiving the same intensity -- and therefore greater flux -- the net effect is more total photons and a higher effective ASA. A similar analogy holds for fast, large-grain films versus slower, finer-grained films.)

Okay, so why isn't it being done in D-cinema? There are a few problems with this pie-in-the-sky calculation. For one, the taper itself only accepts a certain amount of light. Not intensity, but angle: each taper has an inherent 'acceptance cone' outside which light will simply be absorbed by the cladding (this number is independent of the 70% figure). Effectively, each taper has a working f/number, and faster lenses will not see any increase in light (nor apparent DOF: you'll be locked into the maximum acceptance f/# of the taper -- it's almost as if the individual fibers in the taper are functioning as their own apertures). Furthermore, the steeper the taper (larger image plane), the smaller that acceptance cone. There's really no free lunch: any light gain you think you might get comes with a smaller angle through which to pour light onto your sensor.

As for image-inverters, forget it. They have a so-called 'Mae West' configuration (big at both ends, tiny in the middle) that accentuates this problem. Besides, if you're gluing this thing to a sensor, you don't *need* to flip the image -- it will be flipped once by the taking lens, and once more by your camera's circuitry.

It gets worse: the acceptance cones are all (necessarily) perpendicular to the taper surface (on a flat face, that is). So while fibers in the center of the image may get a fair amount of light (if the working f/# of the taper is lower than the f/# of the lens), the fibers on the outside of the image probably won't -- their acceptance cones may miss the lens completely. You can (sort of) get around this with a field lens, but now you're introducing aberrations. Note that this isn't actually a problem if your taking lens is telecentric in image space -- as any lens designed for 3-chip cameras must be (so you could conceivably use a taper on a small-chip video camera, mount a lens intended for a larger-chip video camera, and enjoy a modest light gain). But it's very problematic for anyone trying to use cinema or still lenses on their video camera. (In fact, I believe that making still lenses telecentric is exactly what the Canon EF adapter for the XL series does -- though it does nothing to change effective sensor size.)

Add to this the mechanical problems of working with a taper: in order to get the benefits of the taper, you have to physically glue it directly to the sensor. Any dust or bubbles, and you're stuck with a permanent image defect.
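To make the light-gain arithmetic above concrete, here is a tiny sketch. The 3:1 ratio and 70% transmittance are the figures quoted in the post; real tapers differ, and this ignores the acceptance-cone limit entirely.

```python
import math

# The 3:1 ratio and 70% transmittance are the figures quoted above; real tapers
# differ, and this ignores the acceptance-cone limit entirely.

def taper_intensity_gain(taper_ratio: float, transmittance: float = 0.70) -> float:
    """Intensity multiplier at the small end of a fiber optic taper."""
    area_ratio = taper_ratio ** 2       # same flux squeezed into 1/area_ratio the area
    return area_ratio * transmittance

gain = taper_intensity_gain(3.0)        # 9 * 0.70 = 6.3x
print(f"3:1 taper: {gain:.1f}x intensity, i.e. a {(gain - 1) * 100:.0f}% increase")
print(f"that's about {math.log2(gain):.1f} stops, before the acceptance cone takes its cut")
```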
Gluing the taper directly to the sensor also means you can't use tapers with 3-chip cameras, since the CCDs are all hidden behind the beam-splitting prism. Also, most commercially available tapers have as much as 2-3% distortion, which may or may not be acceptable to you.

That's not to say using a fiber optic taper isn't possible, or even that it's a bad idea -- it's just not a magic bullet. I am tempted to get a single-chip HDV camera, hack off the lens, glue a taper onto it and build a lens mount (along with a field lens) for using still camera lenses. It would look neat, I think, and be a handy tool to have. But it won't replace other solutions, and besides, that money is probably better spent on the RED camera (I do have a deposit down, and I don't think there are going to be better solutions for less $).

Sorry for the long posts, but that's my two cents. If you want clarification on anything I posted, just drop me a line at first underscore last at yahoo dot com, though it may take me a while to reply as that's not my regular e-mail address (it's a handy pre-filter for spam). Or, just reply to this post.

Cheers,
Ryan

PS - I'm not a lens designer, optical engineer, or anything of the like. Please take these posts with a grain of salt; I'm just hoping to spare people a little time. If this stuff interests you, I highly recommend starting with Eugene Hecht's Optics, which I believe is in its 4th edition now. It's not cheap (about $100 on Amazon), but a lot of libraries have a copy (I cut my teeth on my public library's copy, a 3rd edition).
|
January 22nd, 2007, 01:22 PM | #20 |
New Boot
Join Date: Oct 2006
Location: Ottawa, Canada
Posts: 9
|
Aerial imaging
Does anyone here remember optical printing in the film title houses? I saw one such machine, and it captured a projected image in a large condenser. The condenser, I believe, was from an enlarger. This in turn was filmed frame by frame by another camera. They used selsyn motors that kept everything in frame sync.
|
January 23rd, 2007, 04:56 AM | #21 |
Major Player
Join Date: Apr 2005
Posts: 285
|
Ryan, this info is really interesting. Thanks so much. No need to apologize for the long posts; I really appreciate all the great info.
It makes me wonder how the JVC system is going to work if it indeed functions wide open, however. I guess we'll see. I didn't realize the Hylen system was purely optical (I should have assumed as much, but the "cartridges" always reminded me of ground glass), but f/8 seems like a tough restriction, since you already get relatively deep depth of field at that f-stop, and f/8 requires a lot of light.

The fiber optic tapers seem like a cool experiment, but I have no idea how it's possible for their minimum effective f-stop to limit not only the transmission of light from the taking lens, but also the effective depth of field. Insane. Still, 2.8 is as fast as the DVX at full telephoto, so I bet there's something to be done there... not by me, though. Oh well. Makes me wish I'd listened to my mom and taken physics classes in college....
January 23rd, 2007, 02:04 PM | #22 |
Regular Crew
Join Date: Dec 2005
Posts: 46
|
Hm....
So I probably should've looked at the JVC adapter more closely before posting.
Jaron Berman suggests that it's just a mount adapter -- letting you use pro lenses, but cropping the 35mm- or 16mm-sized image circle to the 1/3" of the JVC's chips. Pictures of the adapter suggest it's actually a purely optical adapter, like you were wondering. The HZ-CA13U looks far bigger than a mount adapter should (or could) be; instead, it's probably a system akin to the Frazier lens I mentioned earlier.

If it's like the Frazier system, it would work like this: the taking lens (35mm or 16mm PL) projects an image onto an image plane. Unlike ground glass adapters, that image plane is just air -- the ray bundles converge at the normal back focal distance, and then diverge beyond it. That divergent light is then collected by the HZ-CA13U and refocused onto the JVC's chips. Or, to think of it in reverse, the adapter images the JVC's chips at the image plane of the PL mount lens. Same effect; the image ends up on tape with the optical properties of the original taking lens (plus any aberrations of the adapter, which are probably pretty minimal).

This may sound impossible -- in fact, if this works, why would you need an intermediate scattering plate (the ground glass) in 35mm adapters at all? The rough answer is, you don't. If you could photograph the spot where the ground glass normally is without the ground glass, you'd see the precise image, without any grain or haziness. This image is called the 'aerial image.' However, this would only work on-axis; as you move off-axis, the ray bundles diverge and -- close enough to the edge -- actually miss the second lens entirely, resulting in extreme vignetting. The ground glass scatters the light, so that some of it ends up going towards the second lens. (In fact, for all those people with condenser lenses who have minimal light loss, it would be fun to try focusing the system and then *completely removing* the ground glass -- if the system is set up properly, it would actually approximate what I imagine the JVC adapter is doing, though homemade rigs might still show some vignetting.)

Adapters like the Frazier lens use a field lens to redirect these ray bundles towards your second lens (exactly the function of the condenser lens 'sandwich' that some 35mm adapters call for). Because the field lens is very close to the image plane of the taking lens (sometimes the image plane is actually inside the field lens itself, though this puts any defects in the field lens in focus in the final image), the field lens doesn't actually add much convergence to the ray bundles -- it acts more like a variable prism that redirects edge rays into the lens of the second camera (commonly called a 'relay lens,' since it relays the original image onto your final image plane).

By the way, some lens tests actually use a microscope to examine the aerial image as though it were a real object -- that way you can directly see the effects of the lens (basically you focus the microscope as though it were looking at the image formed by the taking lens). Same principle as above -- and you definitely see the image of the taking lens, that's the whole point.

So... I guess the short answer is yes, it's absolutely possible. Since JVC has real optical engineers, I'd believe that it actually works as advertised. The relay adapter may add its own aberrations (as I said, it's not an easy process, particularly for use with 3-chip cameras), but since they're professionals, the result is probably, well, professional looking.
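As a hedged illustration of the field-lens idea (not JVC's or Frazier's actual design): a common rule of thumb is to pick the field lens so that it images the taking lens's exit pupil onto the relay lens's entrance pupil, which is what sends the edge-of-field bundles back into the relay. A thin-lens sketch with made-up distances:

```python
# Thin-lens sketch of the field-lens job described above: image the taking
# lens's exit pupil onto the relay lens's entrance pupil so edge-of-field ray
# bundles get bent back into the relay.  The distances are made-up placeholders.

def field_lens_focal_length(dist_to_exit_pupil_mm: float,
                            dist_to_relay_pupil_mm: float) -> float:
    """Focal length from 1/f = 1/s_o + 1/s_i, both distances taken as positive."""
    return 1.0 / (1.0 / dist_to_exit_pupil_mm + 1.0 / dist_to_relay_pupil_mm)

# e.g. taking-lens exit pupil ~60 mm on one side of the aerial image,
# relay entrance pupil ~90 mm on the other (illustrative numbers only):
print(f"field lens focal length ~ {field_lens_focal_length(60.0, 90.0):.0f} mm")  # ~36 mm
```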
And if it works as advertised, I'd personally see no reason to buy another GG adapter again (sorry, P+S Technik). |
January 23rd, 2007, 02:14 PM | #23 |
Regular Crew
Join Date: Dec 2005
Posts: 46
|
Talking to the Panavision rep.
Note: it wouldn't work for both 35mm and 16mm -- if it is indeed a purely optical relay lens, it probably crops 35mm lenses to a 16mm size (then relays them to the 1/3" chips).
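Purely for illustration (and not a claim about JVC's actual optics), here is the crop-factor arithmetic that note implies, using approximate frame widths:

```python
# Approximate frame widths only (Super 35 ~24.9 mm, Super 16 ~12.5 mm,
# 1/3" chip ~4.8 mm); this is the implied cropping, not JVC's actual design.

FORMAT_WIDTH_MM = {"super35": 24.9, "super16": 12.5, "third_inch": 4.8}

def crop_factor(from_fmt: str, to_fmt: str) -> float:
    """How much the field of view narrows when the image circle is cropped."""
    return FORMAT_WIDTH_MM[from_fmt] / FORMAT_WIDTH_MM[to_fmt]

cf = crop_factor("super35", "super16")
print(f"35mm glass cropped to a 16mm-sized circle: {cf:.1f}x crop")
print(f"so a 50mm lens would frame roughly like a {50 * cf:.0f}mm")
```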
As a personal note, I was at CineQuest 2005 in San Jose, and I heard reps from Panavision, JVC, Lockheed Martin (they make super-high-end optics for the military) and others talk about new developments. Now, I didn't know that much about optics back then, but I asked the Panavision guy some questions about both the P+S Technik adapter and the Foveon (stacked CMOS) chip. Even at the time, I felt he dodged the questions.

Basically, I asked him if there was any interest in optically enlarging the chips to get the image characteristics of a large-chip camera with the manufacturing economies of small-chip cameras (the Dalsa rep had *boasted* that they throw out 92% of the chips they manufacture). The Panavision rep rather curtly replied that solutions like the P+S Technik would never have acceptable quality. At the time, I assumed you needed a scattering plate, but of course that rep knew otherwise: Panavision's Frazier system does exactly what we're describing here, relaying an aerial image onto a physically distinct film plane.

Now, I'm not a big fan of the indie-inferiority complex (I've got it bad sometimes, I'll admit), but I can't help but think there's a marked disrespect at some companies for the low-ticket end of the market, particularly for D-cinema. I think that's a mistake, but hey -- maybe JVC has scooped Panavision on this one by getting to the lower-cost, higher-volume segment of the market.

Okay... I think I've gone way beyond my two cents.
January 23rd, 2007, 02:42 PM | #24 |
Regular Crew
Join Date: Dec 2005
Posts: 46
|
Hylen system thoughts
Matthew - thanks for the mention of the Hylen system, it was completely off my radar.
By the by, that Hylen system isn't just an optical defocus -- it looks like they do the equivalent of selective scattering for the defocus (almost like a ground glass with a small segment that's not ground). I'd guess (based on the still images at http://www.panavision.com.au/News/Hy...enScreen15.htm) that the softness isn't defocus alone -- the bokeh suggests that the blurriness is actually the product of two convolutions: a first (true defocus) convolution with the taking lens's bokeh, and a second convolution caused by scattering at an intermediate surface. In fact, based on the chromatic aberration of the secondary bokeh (the reddish tinge to the blur circles), I'd say it's actually done with diffractive scattering.

The relatively sharp-edged delineation between in-focus areas and 'blurred' areas suggests this intermediate is either close to the final image plane (which is difficult) or at/near an aerial image. (In which case, it's exactly analogous to the Frazier lens and possibly the JVC adapter.) That's also how they could superimpose a graphic: by placing it at the aerial image plane.

There's a whole category of (somewhat arcane) lenses that allow you to insert objects at the aerial image -- called, unsurprisingly, aerial lenses. I think these would have been handy for process shots or extreme close-ups with a deep depth of field; you don't hear much about them these days. (On a side note, it was suggested that Frazier used shots taken with an aerial lens in his application for a patent on his eponymous lens -- one of the considerations the court cited in overturning the patent, which, by the way, is an interesting read despite being invalidated.)

Just another view from the outside. Oh, and thanks for the thread, Matthew -- I'd lurked around these forums for a while before finding a topic that got me going.

Cheers,
Ryan
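The "two convolutions" guess can be played with numerically. This is just a toy model, nothing to do with Panavision's actual optics: blur a point source once with a disc (standing in for the taking lens's defocus bokeh) and again with a soft kernel (standing in for scattering at the intermediate). It assumes numpy and scipy are available.

```python
import numpy as np
from scipy.signal import fftconvolve

# Toy model of the 'two convolutions' idea, nothing to do with Panavision's
# actual optics: blur a point source with a disc (stand-in for defocus bokeh),
# then again with a soft kernel (stand-in for scattering at the intermediate).

def disc_kernel(radius_px: int) -> np.ndarray:
    y, x = np.mgrid[-radius_px:radius_px + 1, -radius_px:radius_px + 1]
    k = (x * x + y * y <= radius_px * radius_px).astype(float)
    return k / k.sum()

def soft_kernel(sigma_px: float, size: int = 15) -> np.ndarray:
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2.0 * sigma_px ** 2))
    k = np.outer(g, g)
    return k / k.sum()

image = np.zeros((128, 128))
image[64, 64] = 1.0                                  # a single point light

defocused = fftconvolve(image, disc_kernel(8), mode="same")
scattered = fftconvolve(defocused, soft_kernel(2.0), mode="same")
# 'scattered' is the combined point spread: a defocus disc with softened,
# smeared edges rather than a clean-edged disc.
```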
January 23rd, 2007, 04:02 PM | #25 | ||
Regular Crew
Join Date: Dec 2005
Posts: 46
|
Quote:
Quote:
And they actually have a limited acceptance cone, usually expressed as a numerical aperture but mathematically equivalent to an f/# (used in this way, they're two ways of talking about the same thing). Basically, you can imagine each individual fiber projecting a cone directly out of the face of the taper; this cone is the sum of all possible directions that would lead to the CCD on the other side. Light originating outside of this cone isn't totally internally reflected by the side walls of the individual fiber; instead, it veers off into the cladding.

There's more information in this .pdf (I'll try to attach it -- no promises). Not sure where I got it originally, probably from a fiber manufacturer.

And don't worry about the physics. This stuff is so specific, I doubt it's covered in most general physics courses. (I was an English major, anyway.)
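A small helper for the NA-to-f-number equivalence (paraxial, in air); the NA value plugged in is hypothetical, not from any particular taper's datasheet.

```python
import math

# Paraxial, in air; the NA plugged in is a hypothetical figure, not from any
# particular taper's datasheet.

def na_to_fnumber(na: float) -> float:
    return 1.0 / (2.0 * na)

def acceptance_half_angle_deg(na: float, n_medium: float = 1.0) -> float:
    """Half-angle of the acceptance cone for the given numerical aperture."""
    return math.degrees(math.asin(na / n_medium))

na = 0.35
print(f"NA {na} ~ f/{na_to_fnumber(na):.1f}, acceptance half-angle ~ {acceptance_half_angle_deg(na):.0f} degrees")
```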
January 23rd, 2007, 05:55 PM | #26 |
Regular Crew
Join Date: Dec 2005
Posts: 46
|
Hylen patent
It would appear Panavision's Hylen lens is covered by U.S. Patent 5,649,259:
(http://patft1.uspto.gov/netacgi/nph-...&RS=PN/5649259) Unfortunately, I can't get the images to load in my browser. Maybe I'll order a physical copy of the patent, if I'm feeling ambitious. And not that they're required to follow the patent to the letter, but based on the abstract, it would seem I was wrong about the diffractive scattering. I dunno - I still trust my eyes. Cheers, Ryan
January 24th, 2007, 08:56 AM | #27 | |
Major Player
Join Date: Jun 2006
Location: St. Pete, FL
Posts: 223
|
Quote:
|
|
January 25th, 2007, 07:18 AM | #28 | |
Major Player
Join Date: Apr 2005
Posts: 285
|
Quote:
Ryan, I thought you might find this interesting: http://www.smsprod.com/products/lenses/angenieux4.html Apparently, JVC based their adapter at least somewhat on it. T1.5, pretty optically complex, and it converts 35mm to 2/3". Looks sweet! (I think it's also 30 grand.) Can you make anything out from the small diagram below as to how it works?
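Out of curiosity, the same first-order scaling from earlier in the thread can be applied to this converter's formats. The frame widths are approximate, and whether this has anything to do with how Angenieux actually reached T1.5 is speculation on my part:

```python
# Approximate frame widths (Super 35 ~24.9 mm, 2/3" chip ~9.6 mm); paraxial
# first-order scaling only, and whether this relates to how Angenieux actually
# hit T1.5 is pure speculation.

def image_side_fnumber(taking_fnum: float, source_width_mm: float, target_width_mm: float) -> float:
    return (target_width_mm / source_width_mm) * taking_fnum

print(f'f/4 on the 35mm side ~ f/{image_side_fnumber(4.0, 24.9, 9.6):.1f} at the 2/3" chip')
```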
|
January 25th, 2007, 07:21 AM | #29 |
Major Player
Join Date: Apr 2005
Posts: 285
|
|
January 25th, 2007, 09:08 AM | #30 |
Major Player
Join Date: Apr 2005
Posts: 285
|
If you look at the PDF, there's a relatively high-resolution image of the thing. It has 12 lens elements in 10 groups, which would seem to imply that it's complicated as hell, but the average prime lens (relay lens) has what... 6 groups or something, and the DVX's zoom has about twice that... so maybe not.
The rear looks like some sort of relatively complex relay lens, and the front looks like any other static adapter... there's clearly a condenser behind the imaging plane, and there seems to be a piece of glass there on which the image is focused (but it's not ground glass, apparently, as Angenieux has stated). Trying this type of design on my own, however (adapter minus ground glass), results in an "aerial image" with the same depth of field as the 1/3" camera itself...
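For anyone wondering how big the gap actually is, here is the standard approximate depth-of-field comparison at matched framing. The circle-of-confusion values and "equivalent" focal lengths are assumptions; it's only meant to show the order of magnitude an adapter (ground glass or relay) is trying to recover.

```python
# The usual 2*N*c*s^2/f^2 approximation (valid well inside the hyperfocal
# distance).  Circle-of-confusion values and 'equivalent' focal lengths are
# assumptions chosen so both formats frame the same shot.

def total_dof_m(focal_mm: float, fnum: float, coc_mm: float, subject_m: float) -> float:
    s_mm = subject_m * 1000.0
    return 2.0 * fnum * coc_mm * s_mm ** 2 / focal_mm ** 2 / 1000.0

subject = 3.0  # metres
print(f"35mm frame,  50mm f/2.8 (CoC 0.025 mm): ~{total_dof_m(50.0, 2.8, 0.025, subject):.2f} m of DOF")
print(f'1/3" chip,   10mm f/2.8 (CoC 0.005 mm): ~{total_dof_m(10.0, 2.8, 0.005, subject):.2f} m of DOF')
```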