February 1st, 2007, 10:32 AM | #46 |
Major Player
Join Date: Jun 2006
Location: St. Pete, FL
Posts: 223
|
I'm just not impressed with the JVC unit. It's not going to be cheaper than the M2, SGpro, or Brevis. It doesn't work with inexpensive 35mm SLR lenses. The DOF isn't very shallow. And the CA is worse than in the latest adapter footage I've seen.
On the plus side, it's sharp. It doesn't seem to lose much light (how many stops?). It's probably a breeze to set up. It looks solid and professional. |
February 1st, 2007, 11:08 AM | #47 |
Regular Crew
Join Date: Dec 2005
Posts: 46
|
No scattering = very little loss
I haven't used the adapter, but I think I understand how it works. And it appears that the only light losses would be scattering and absorption in the intermediate glass, so I'd expect very, very little light loss - less than a third of a stop (roughly similar to the difference between the T-stop and the F-stop on a given lens).
Now, the real 'loss' is the fact that it only operates at f/3.5ish on the object side - so opening up your taking lens beyond that won't give you more light (at least not in your image; it'll probably give you a little more flare and image defects). And I think it's a neat idea, but I agree it's not that revolutionary. HD200 + adapter isn't that much cheaper than RED. And as fond as I am of elegant solutions (like an optical-only adapter), there's no real substitute for a large imaging chip. |
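To put that 'third of a stop' figure in numbers: light loss in stops is log2(1/transmission). A minimal Python sketch, with illustrative transmission values rather than measured figures for this adapter:

```python
import math

def stops_lost(transmission: float) -> float:
    """Light loss in stops for a given overall transmission (0-1)."""
    return -math.log2(transmission)

# Hypothetical transmission values, just to show the scale of the effect.
for t in (0.95, 0.90, 0.80):
    print(f"transmission {t:.2f} -> {stops_lost(t):.2f} stops lost")
# transmission 0.95 -> 0.07 stops lost
# transmission 0.90 -> 0.15 stops lost
# transmission 0.80 -> 0.32 stops lost
```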
February 7th, 2007, 12:26 AM | #48 | |
Major Player
Join Date: Jun 2006
Location: St. Pete, FL
Posts: 223
|
That said, would this purely optical solution be MORE feasible on a single 1/3" CMOS sensor? |
|
February 7th, 2007, 12:58 PM | #49 |
Regular Crew
Join Date: Dec 2005
Posts: 46
|
More than just chromatic aberration
Beam splitting prisms introduce undercorrected spherical and chromatic aberration, and can produce astigmatism at large angles. Correcting this is really the job of the lens, not the CCD. The normal solution is to make a lens system telecentric in image space, which means the aperture appears to be directly above each pixel (from that pixel's perspective), and the light ray bundles arrive perpendicular to the CCD/prism block.
Telecentric lenses are several orders of magnitude more difficult to design, and have some subjective compromises. They typically require many more elements, are difficult to fabricate and cost more money. And the image they produce looks worse: next time you're watching sports, check out the image blur in the background. The out-of-focus highlights should blur into a roughly polygonal shape with (usually) a weird, banded radial pattern inside. This is probably the result of residual zonal aberrations (exacerbated by the video camera's sharpening algorithms). But regardless of the cause, it's ugly.

A single-chip CMOS (or any single chip, for that matter) wouldn't require telecentric optics, so you've got a big improvement there. But, as I mentioned earlier, the relay optics in a non-scattering system would have to be awfully, awfully fast to give you the same DOF as in the 35mm regime (to get the DOF of f/2.0 in the 35mm regime, a 1/3" chip camera would need relay optics at around f/0.57, I think).

And while his 'multiplying MTFs' garbage sounds like hand-waving designed to cow Panavision's customers, I'll have to cede John Galt his main point in that post: in terms of sharpness and DOF, you might be 'best' off with a custom-designed lens for that particular camera. In addition to shallow DOF, the f/1.4 optics at Panavision would gather a lot of light. Of course, if you're Panavision, sharpness and DOF are probably your only concerns. If you're a filmmaker, you might have more artistic concerns (what are the subjective characteristics of this lens?) and practical ones (gee, I already own all these 35mm lenses that I like, and I don't have $10,000/day to spend on a camera package). |
February 7th, 2007, 01:05 PM | #50 |
Regular Crew
Join Date: Dec 2005
Posts: 46
|
Actual CCD sizes
Oops... I found this great link:
http://www.dpreview.com/news/0210/02...ensorsizes.asp that explains CCD sizes a little better. Turns out that the relay optics would have to be more like f/0.40, which is similarly ridiculous. |
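For reference, the arithmetic behind those equivalent-aperture figures is just the taking f-number scaled by the ratio of format diagonals. A rough Python sketch, using approximate diagonals (about 31mm for Super 35 and 6mm for a true 1/3" chip - assumed values, not figures from this thread):

```python
def equivalent_aperture(f_number: float, from_diag_mm: float, to_diag_mm: float) -> float:
    """f-number needed on the smaller format to match DOF on the larger format."""
    return f_number * (to_diag_mm / from_diag_mm)

SUPER35_DIAG_MM = 31.1    # approximate Super 35 image diagonal (assumption)
THIRD_INCH_DIAG_MM = 6.0  # approximate true 1/3" sensor diagonal (assumption)

print(equivalent_aperture(2.0, SUPER35_DIAG_MM, THIRD_INCH_DIAG_MM))
# ~0.39, i.e. roughly the f/0.40 mentioned above
```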
February 7th, 2007, 01:16 PM | #51 | |
Regular Crew
Join Date: Jan 2007
Location: Tampa, FL
Posts: 195
|
Of course that kind of sucks, because you'd just have a fairly expensive B&W HD camera. Might as well shoot 16mm B&W. |
|
February 8th, 2007, 02:06 AM | #52 |
Regular Crew
Join Date: Dec 2005
Posts: 46
|
Chromatic aberration, color shift
CA is most noticeable as a colored ring around bright objects next to a dark background, particularly if out of focus.
If you desaturated to black and white, you wouldn't see that distracting color ring, except as a slightly dimmer fringe around bright objects. However, CA also reduces the sharpness of in-focus areas; for most lenses, this is far less significant than the OOFO (out-of-focus object) effect.

But using non-telecentric lenses (shorter than a certain focal length) with 3-chip cameras is a bad idea, even in black and white (though if it's black and white, why bother with 3 chips?). Because the filters that divide the light into the various colors are very, very angle-dependent, you'll actually see bizarre color effects toward the edge of the frame (wavelengths that would normally be reflected by, say, the red filter are transmitted at steeper angles, so that light ends up registering as blue/green and not red). This color shift would affect tonalities even in black and white: even objects of a constant color would show up brighter or darker towards the edge of the frame, and the shift would be highly wavelength-dependent.

I don't know - maybe it would actually look kind of neat. Anyone with a removable-lens 3-chip camera want to try it out? For best effect, use a well-corrected (super-) wide-angle lens at the proper flange focal length, but one designed for use without a beam-splitting prism - removable lenses for digital SLRs or even 35mm still cameras would be a good choice. It might look very strange if you leave it in color, to maximize the weirdness. (I'd try it myself, but my camera was recently stolen.) |
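As a rough illustration of that angle dependence: the pass edge of a thin-film (dichroic) filter blue-shifts with incidence angle, approximately as lambda(theta) = lambda_0 * sqrt(1 - (sin(theta)/n_eff)^2). The 600nm edge wavelength and effective index of 1.7 below are illustrative assumptions, not specs for any real prism block:

```python
import math

def shifted_edge_nm(edge_nm: float, angle_deg: float, n_eff: float = 1.7) -> float:
    """Approximate dichroic filter edge wavelength at a given incidence angle."""
    s = math.sin(math.radians(angle_deg)) / n_eff
    return edge_nm * math.sqrt(1.0 - s * s)

for angle in (0, 10, 20, 30):
    print(angle, round(shifted_edge_nm(600.0, angle), 1))
# 0 -> 600.0, 10 -> ~596.9, 20 -> ~587.8, 30 -> ~573.5
# i.e. the red/green cut moves by roughly 25nm for rays 30 degrees off-axis,
# which is why steep ray angles can register in the 'wrong' channel.
```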
February 11th, 2007, 02:49 PM | #53 | |
Wrangler
Join Date: Aug 2005
Location: Toronto, ON, Canada
Posts: 3,637
|
How do you know that it only operates at ƒ/3.5 on the object side? I've searched for info on the object-side aperture and can't find any. After reviewing some of my tests I don't see much of an exposure change for any stop opened wider than T2.8. There is a definite difference in exposure between T4 and T2.8 (about as much as would be expected), so I would guess that T2.8 is around the maximum on the object side.
__________________
Tim Dashwood |
|
February 12th, 2007, 05:03 PM | #54 |
Regular Crew
Join Date: Dec 2005
Posts: 46
|
Mistaken Identity
Hi Tim -
Sorry for the confusion - I was thinking of the 35mm to 2/3" chip adapter mentioned in this thread (http://www.cinematography.net/Pages%...20cameras.htm). Helmut Lenhof says that f/1.4 on the image side corresponds to f/3.5 on the taking side for that adapter (a magnification ratio of 2.7, which would actually put the image-side speed at about f/1.3).

For the Panasonic, I should have looked at the magnification ratio: I think a 1/3" chip corresponds to a 6mm sensor diagonal, and a 16mm film frame corresponds to 12mm. By that token, a taking lens at f/2.8 would correspond to relay optics at f/1.4 (the f/# of the taking optics divided by the magnification ratio), which is within the bounds of reason. The point stands, though, that if you want very shallow DOF on a 1/3" chip camera without scattering - that is, with a purely optical solution - the relay optics are going to be ridiculously fast, perhaps to the point of impossible. I presume a 16mm prime at f/2.8 doesn't give that shallow a DOF?

That takes nothing away from the adapter - I think it's a great way to leverage 16mm optics and get cinema-quality footage out of the JVC. As I may have mentioned earlier, there are lots of reasons to use cinema lenses beyond DOF. And your footage was excellent - thanks for posting. |
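The relay-speed arithmetic above, as a small Python sketch using the rough figures quoted in the post (12mm vs. 6mm diagonals for 16mm vs. 1/3", and the 2.7:1 ratio quoted for the CLA35):

```python
def relay_f_number(taking_f: float, magnification_ratio: float) -> float:
    """Image-side f-number the relay must deliver for a given taking-lens stop."""
    return taking_f / magnification_ratio

print(relay_f_number(2.8, 12.0 / 6.0))  # 16mm -> 1/3" chip: f/1.4
print(relay_f_number(3.5, 2.7))         # CLA35, 35mm -> 2/3" chip: ~f/1.3
```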
March 3rd, 2007, 10:38 PM | #55 | |
Wrangler
Join Date: Aug 2005
Location: Toronto, ON, Canada
Posts: 3,637
|
Ryan,
I've been doing the research to explain my test results. Maybe you can tell me if this makes sense to you? The JVC adapter has an exit aperture of f/1.4, just like the CLA35. However, as I mentioned earlier, I wasn't able to see any exposure or DoF differences after I opened past T2.8 on the attached lens. It looks like this is the Lagrange Invariant at work.

Taking the following into consideration for the HZ-CA13U, I calculated that the maximum usable aperture on any PL lens should be f/2.73, which seems close enough to be consistent with my practical tests.

16mm Horizontal field size: 9.35mm
1/3" Horizontal field size: 4.8mm
Max exit aperture: f/1.4
1.4 × (9.35/4.8) = 2.73

If I plug in the numbers for Super35mm and 2/3" in the case of the CLA35, I get:

1.4 × (21/9.6) = 3.06

Referring to the true F-stop table, 3.06 would give the CLA35 a maximum aperture of f/2.8 ¼, or rounded to f/3.5.

Someone mentioned the idea of creating a 35mm version for 1/3". The problem is that you would only be able to open to f/5.6 ¼ on the PL lens:

1.4 × (21/4.8) = 6.125, or f/5.6 ¼
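The same calculation as a small Python sketch, using the field sizes and f/1.4 exit aperture quoted above (the helper name is just for illustration):

```python
def max_usable_aperture(exit_f: float, source_field_mm: float, target_field_mm: float) -> float:
    """Maximum usable aperture on the taking lens; opening wider than this gains nothing."""
    return exit_f * (source_field_mm / target_field_mm)

print(max_usable_aperture(1.4, 9.35, 4.8))  # HZ-CA13U, 16mm -> 1/3":       ~f/2.73
print(max_usable_aperture(1.4, 21.0, 9.6))  # CLA35, Super35 -> 2/3":       ~f/3.06
print(max_usable_aperture(1.4, 21.0, 4.8))  # hypothetical Super35 -> 1/3": f/6.125
```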
BTW, I've been conducting more tests and the CA almost completely disappeared once the Cooke S4 lenses were mounted. The lens they had for the Sundance test was a Zeiss Super Speed. The Cookes looked considerably better for alignment than even the Zeiss Ultra Primes. Both the S4s and the Ultra Primes were in a class of their own above the 'super' speed and standard-speed Zeiss lenses. However, the Cookes do breathe quite a bit, and are big and heavy.
__________________
Tim Dashwood |
|
March 5th, 2007, 07:24 AM | #56 |
Regular Crew
Join Date: Dec 2005
Posts: 46
|
That sounds right to me...
Tim,
Right on, I agree completely with your findings. Sounds like an optical-only adapter from 35mm to a 1/3" chip is pretty much out of the question.

It also sounds like the Cookes were better matched to the adapter than the Zeiss - that's not a direct knock on the Zeiss lens, of course - the adapter probably has some built-in correction that better matches the Cooke (or even an aberration of the opposite sign, which is basically the same thing). Also, I don't mind lens breathing at all. I play around with large format optics all the time, and they're inherently pretty breath-y... I kind of like the look, though it definitely calls attention to any focus pulls.

And just out of curiosity, how did you figure out the exit aperture size? Was it in the documentation, or did you do some crazy measurement?

Also, it seems the optical invariant could be a help, not a hindrance, at least in one regard: did you notice whether the illumination level was fairly high? It seems to me that the exit aperture should be indicative of the exposure level... so you would basically get an f/2.8 image with f/1.4 illumination... or am I missing something?

Cheers, Ryan |
March 5th, 2007, 12:01 PM | #57 | ||
Wrangler
Join Date: Aug 2005
Location: Toronto, ON, Canada
Posts: 3,637
|
I expected there to be a slight difference, but the 1½ stops I measured would fit your theory: T2.8 on the cine lens was the same light level as approximately f/1.7 on the 1/3" zoom lens. I've heard (but it is not documented) that there is a ½ stop loss due to absorption/scattering. So if there is a 2 stop increase in illumination, minus a ½ stop loss for absorption, 1½ stops would make perfect sense.

Can you show me the proper math with the Lagrange invariant applied to illumination? I don't know the algebraic symbol for illumination, so I'll call it "I". I'm guessing I_exit = I_entry × (ƒ-stop)/1.4, so if we use a value of f/2.8 and "1" for the entry illumination, the exit illumination will be 2? I'm not sure if I've got it right, because to me "two times" would mean 1 stop.
__________________
Tim Dashwood |
||
March 6th, 2007, 01:37 AM | #58 |
Regular Crew
Join Date: Dec 2005
Posts: 46
|
Hey...
I agree that it's definitely the Lagrange invariant causing the apparent light gain. I think I can explain in more detail why I think that...

First, the background you probably already know: the optical invariant is, roughly speaking, a quantity that does not vary no matter where you are in an optical system - anywhere from the object itself, through the lens, and on to the image at the CCD/film plane. This quantity ties together lots of different, related variables, including f-stop, illumination, and magnification factor, such that when one value changes, the others change as well but the optical invariant remains the same. It's essentially a mathematical shorthand that makes the relationships between these values easier to compute (it's usually calculated and referenced for individual rays, but the general points are the same for ray bundles and images). The extreme shorthand: when you crunch all that light down into a smaller area, the brightness must go up (by the inverse of the magnification ratio, less transmission losses). This is not so much 'because of' the optical invariant; rather, the things which make the optical invariant true (mostly trigonometric relationships and geometry) also dictate the above relationship.

So when you were shooting side by side with the JVC video lens, it may have been open to f/1.7, but next to it the PL lens at f/2.8 was collecting even more light - that light is just distributed across a much larger image plane (the 16mm-sized aerial image). The adapter then takes (most of) that light and squishes it down to the final size at the JVC's chips. It's important to note that the adapter is actually working at an image-side speed of f/1.4 (or whatever) that more closely corresponds to the f-stop on the video lens; one thing the optical invariant guarantees is that there's no free lunch in optics. When you squeeze the image into the smaller format size, you must do it with proportionally faster optics - so you're not really getting more light than the *entire system* takes in, just more than what is printed on the taking lens. Put another way: the f-stop markings on the taking lens's barrel are strictly for DOF calculation; the entire system actually performs at the back (image-side) f-stop. In other words, because there's no scattering, you have to look at the lens system from front element to back element to determine the actual f-stop. And since f-stop is related to pupil diameter, you sort of have to conclude that the entire system is performing massively faster than the front lens is rated at (the relay magnifies the apparent exit pupil while shrinking the image - the two effects are linked). I think you calculated that ratio to be about 2 (or two stops).

So... now to the math, which is actually pretty straightforward. You mentioned in the article that the frame sizes were 9.35mm and 4.8mm for 16mm and 1/3", respectively. That's a linear ratio of 9.35/4.8 = 1.94, which is why the magnification factor is about 2:1. The way we calculate DOF is basically linear - we compare the diameter of a circle of confusion to determine whether or not something is in focus. In contrast, illumination is a function of area (exit aperture size is the easiest way to think about it). Since area is a squared relationship, the total illumination difference should be 1.94^2, which is about 3.76, or roughly 2 stops.

The other way to think of it is that there's a certain amount of flux (or simply, 'light') that goes through the 16mm 'virtual' frame of the aerial image. Flux is area times illumination, so when you shrink the area by a factor of 4, the illumination goes up by the same factor. (That's just the definition of flux; you can set the two equal to each other due to - drumroll please - the optical invariant.) Getting this into more photographic-looking equations, you could simply propose the squared relationship (with or without associated hand-waving) and state that flux is constant (minus transmission losses), and flux equals brightness times area (F = B*A). This is a shade disingenuous, if only because the units involved are pretty counter-intuitive: flux is in lumens, area is in cm^2, and illumination is in lumens/cm^2.

By the way, I'm working off of Warren J. Smith's "Modern Optical Engineering," specifically the discussions of photometry and the optical invariant (different sections). Anything specific to photography (f-stops, etc.) is my math and hand-waving. Cheers!

Last edited by Ryan Damm; March 6th, 2007 at 07:13 PM. |
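Putting those numbers together in a short Python sketch: the linear squeeze-down ratio governs the DOF-style comparison, its square governs illumination, and subtracting the (anecdotal, undocumented) half-stop absorption loss lands close to the observed ~1½ stops:

```python
import math

mag = 9.35 / 4.8                   # linear magnification, 16mm frame -> 1/3" chip (~1.95)
area_gain = mag ** 2               # illumination gain from shrinking the image (~3.8x)
gain_stops = math.log2(area_gain)  # ~1.9 stops of gain from the squeeze-down
net_stops = gain_stops - 0.5       # minus the assumed half-stop absorption loss

print(round(gain_stops, 2), round(net_stops, 2))  # ~1.92 and ~1.42 stops
```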