February 25th, 2005, 08:01 AM | #2566 |
Regular Crew
Join Date: Feb 2004
Location: Austin, TX
Posts: 182
|
Hi all, congrats on your continued work and best of luck as you continue.
Jason or Steve, do you know if the AltaSens box cams allow the application of a non-linear change to the response curves at the sensor level? For example, if I wanted to use one of the DigitalPraxis curves to simulate a log-like response at the actual voltage-reading point on the sensor, prior to A/D conversion. Can these cams be configured to respond in this way? Thanks, -Jonathon Wilson |
February 25th, 2005, 08:07 AM | #2567 |
Trustee
Join Date: Mar 2003
Location: Virginia Beach, VA
Posts: 1,095
|
BTW Obin,
Yes, your images look filmic for the most part, but your clipped highlights do not. Since I haven't played with the 3300, I have no idea how much headroom you have with the Micron chip, but I want you to take this into consideration.
When Thomson designed the Viper, they set the white point at a level of 637 (in the green channel, which is the brightest channel) out of 4095 (linear 12-bit space). That means that if you put a Macbeth chart in a scene, the white chip of the chart should sit at 90% of that, around 573-600 in the green channel out of 4095 possible values (or a value of about 150 in 10-bit space). Exposing linearly, you'd probably complain that you can't see the image because you're underexposing so far. But that's the way video cameras operate! The RAW image coming off the sensor is going to be very "underexposed" by your standards, and then the white point is set (typically throwing away all that highlight headroom as a trade-off for sensitivity), gamma is applied, etc., and you get your nice "bright" image.
Now, by trying to push that white point up to the maximum of 1023 or 4095, you're giving up the sensitivity of a typical video camera, which is around ISO 400 or 500. From a 12-bit value of ~600 (150 in 10-bit space) you have to jump up around 2.7 stops: 600 to 1200 is a doubling, so that's one stop; 1200 to 2400 is another doubling, so that's another stop; and 2400 to 4095 is about 0.7 stops. So your total sensitivity drops from ISO 400-500 down to around ISO 64 or less! Of course you're going to need a lot of light. So please consider this as you expose your images using a linear device. |
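For what it's worth, that stop arithmetic as a quick sketch in Python (the 573 white-chip level and the ISO figures are the ballpark numbers quoted above, not measurements):

    import math

    FULL_SCALE = 4095    # 12-bit linear full scale
    WHITE_CHIP = 573     # ~90% of the Viper's 637 white point, green channel

    stops_lost = math.log2(FULL_SCALE / WHITE_CHIP)   # ~2.8 stops
    effective_iso = 500 / (2 ** stops_lost)           # ~ISO 70 from a base of ISO 500
    print(f"{stops_lost:.2f} stops lost, effective ISO ~{effective_iso:.0f}")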
February 25th, 2005, 08:13 AM | #2568 |
Regular Crew
Join Date: Feb 2005
Location: .
Posts: 52
|
Thanks, Jason.
> are you saying the fps that you'd be running if there was no vertical blanking,
Essentially, yes. What I meant was just the time to scan the ACTIVE lines. Maybe we should call it RS-FPS to avoid confusion. So, in your example of 48 FPS with skip-one, the RS-FPS could very well be 60; with normal vertical blanking it might be 49.14.
Or Method 2: SWMS is the same (the time, in ms, for the object to cross the screen width), but the scan time (RSST) is also expressed in ms (16.67, for example). Then RSM = SWMS / RSST. In my example this would be 500 / 16.67 = 30, and 30 is then the number of frames it takes the object to move across the screen width. This may be better -- certainly more intuitive. You and Obin can decide. ;-) |
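A minimal sketch of Method 2 (the names follow the post, not any camera API):

    def rsm(swms_ms: float, rsst_ms: float) -> float:
        """Frames it takes the object to cross the screen width."""
        return swms_ms / rsst_ms

    print(rsm(500, 16.67))   # -> ~30, matching the example above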
February 25th, 2005, 08:20 AM | #2569 |
Silicon Imaging, Inc.
Join Date: May 2004
Location: Troy, NY USA
Posts: 325
|
Kyle:
If I understand your metric, it is basically how many pixels of skew you get from the top of the image to the bottom. By using screen width, you are removing speed and distance from the equation, since all that really matters is the relative speed in the image (you can measure it in pixels or as a % of screen width).
Keep in mind that the acceptability of a shot also depends on how many scan lines the object occupies. A car may only occupy 1/4 of the screen, but a building might occupy the full vertical. That is only relevant for using the camera, not for an objective measurement, but you might be able to say an RSM for an *object* of 30000 is OK.
It is a complex topic. If you are trying to show the speed of a car by using a long exposure and adding blur, does that change the acceptability level? Buildings shouldn't lean, but people can when moving fast. Would a dancer look as bad as a city skyline? I guess we need someone with a programmable-frame-rate camera to shoot a sequence with an object at different rates. You can predict the skew: an object that spans the full vertical height, is read out in 1/48th sec, and crosses the screen in 1 sec will lean by 1/48 * 1920 = 40 pixels. What is acceptable is the real question, and the test is a video, not a still. I'm sure there are some dynamics of the mind involved.
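That prediction as a sketch (the function name is mine):

    def skew_pixels(frame_width_px: int, readout_s: float, crossing_s: float) -> float:
        """Horizontal lean, in pixels, from top to bottom of a rolling-shutter frame."""
        return frame_width_px * (readout_s / crossing_s)

    print(skew_pixels(1920, 1 / 48, 1.0))   # -> 40.0 pixels of lean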
__________________
Silicon Imaging, Inc. We see the Light! http://www.siliconimaging.com |
February 25th, 2005, 08:22 AM | #2570 |
Silicon Imaging, Inc.
Join Date: May 2004
Location: Troy, NY USA
Posts: 325
|
Jonathon,
That is not available in the sensor but could be done in the camera. It is easier to do in the digital domain than analog - creating arbitrary changes to the pixel data prior to the A/D would be hard. Digitally, it is just a lookup table.
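To picture what "just a lookup table" means here, a hedged sketch (the curve is an arbitrary log-ish placeholder, not any particular DigitalPraxis curve):

    import numpy as np

    # 4096-entry table mapping 12-bit linear codes to 10-bit shaped codes
    codes = np.arange(4096)
    lut = np.round(1023 * np.log2(1 + codes) / 12.0).astype(np.uint16)

    raw = np.array([[100, 2000], [50, 4095]], dtype=np.uint16)  # fake pixel data
    shaped = lut[raw]   # one table lookup per pixel, applied after the A/D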
__________________
Silicon Imaging, Inc. We see the Light! http://www.siliconimaging.com |
February 25th, 2005, 09:05 AM | #2571 |
Regular Crew
Join Date: Feb 2005
Location: .
Posts: 52
|
Steve, thanks for clarifying the situation.
If RSM = SWMS/RSST, one would get the same result just by measuring the pixel skew, and dividing it into the width. I can try to do some tests over the next few days with the monochrome SI-1300. 640x480, MPEG1, 24fps, 4mbit/sec, clips 2-6 seconds in length? Another format? In any event, the results should (logically) be independent of resolution and camera. |
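That equivalence checks out; a quick worked example (the skew value here is hypothetical, not from the post):

    frame_width_px = 1920
    skew_px = 64                       # hypothetical measured lean
    rsm = frame_width_px / skew_px     # -> 30.0, same as 500 / 16.67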
February 25th, 2005, 09:08 AM | #2572 |
Trustee
Join Date: Mar 2003
Location: Virginia Beach, VA
Posts: 1,095
|
Jonathon,
Actually I'm a big, big advocate of using a 12-bit-to-10-bit linear-to-log LUT, like the Viper Filmstream, which, unlike the Cineon curve, properly adjusts the video assuming a gamma of 1.0 rather than negative film's 0.6 native gamma. In other words, if you just apply the default Cineon curve made for a film negative to these linear images, you're correcting too much. You need to adjust the Cineon curve equation for a gamma of 1.0 (which is what Thomson did with the Viper). I'm not quite sure what Steve did with those CvpFile Editor curves (for the Cinealta), although I know he's a very smart guy when it comes to this stuff, and by the look of the pictures it appears he's also correcting for a gamma of 1.0 and not 0.6.
The nice thing about log encoding is that it automatically gives you the "filmic" soft-clip you're looking for, while also giving you an image you can actually view properly (rather than an image that looks almost black). You need to add an "S"-curve print LUT after shooting (crushing the blacks), but at least you'll never get a harshly clipped highlight that screams "digital capture". Also, 10-bit log can hold up to a 14-bit linear signal, so 12-bit linear to 10-bit log is essentially visually lossless. |
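A sketch of what a Cineon-style lin-to-log encode looks like with the gamma term set to 1.0 instead of 0.6 (the 685 white code and 0.002 density per code value are the usual Cineon constants; the rest is my assumption, not Thomson's actual Filmstream curve):

    import numpy as np

    def lin_to_log(lin, white_lin=4095.0, white_code=685, gamma=1.0):
        # ~150 code values per stop at gamma 1.0, ~90 at film's 0.6
        codes_per_stop = np.log10(2) * gamma / 0.002
        code = white_code + codes_per_stop * np.log2(np.maximum(lin, 1) / white_lin)
        return np.clip(np.round(code), 0, 1023).astype(np.uint16)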
February 25th, 2005, 03:25 PM | #2573 |
Trustee
Join Date: Jan 2003
Location: Wilmington NC
Posts: 1,414
|
.
|
February 25th, 2005, 03:25 PM | #2574 |
Trustee
Join Date: Jan 2003
Location: Wilmington NC
Posts: 1,414
|
Jason, I really want to get my head around this. Can you explain it again in a way that's a bit easier to understand? I am sorry if this is a bother, but I want to make SURE I FULLY understand you, as you seem to be an image expert around here ;)
|
February 25th, 2005, 03:30 PM | #2575 |
Trustee
Join Date: Jan 2003
Location: Wilmington NC
Posts: 1,414
|
Here is what's going on:
.Net looks like it might be more efficient, or at least simpler to use, since multithreading is built into the language as opposed to using the Windows CreateThread API. We had huge issues with the save in cinelink because the data is so large. |
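The usual shape of the fix is decoupling capture from disk writes with a queue and a writer thread; a sketch in Python rather than .NET, just to show the pattern (file names and queue depth are made up):

    import queue, threading

    frames = queue.Queue(maxsize=8)      # bounded: capture blocks if the disk lags

    def writer():
        n = 0
        while True:
            frame = frames.get()
            if frame is None:            # sentinel: capture finished
                break
            with open(f"frame_{n:06d}.raw", "wb") as f:
                f.write(frame)
            n += 1

    t = threading.Thread(target=writer, daemon=True)
    t.start()
    # capture loop: frames.put(raw_bytes); then frames.put(None) and t.join()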
February 25th, 2005, 03:43 PM | #2576 |
Regular Crew
Join Date: Feb 2004
Location: Austin, TX
Posts: 182
|
I agree Jason.
In my simplistic, pipe-dream brain, I had a wish that one could actually get a log-like response into the sensor itself... in other words, an individual cell catching light 'fills up' quickly during the first bit of light, and then its light level increases more slowly along a logarithmic curve as it approaches its maximum level. This seems like it would more closely match the way real film responds to light. Of course, this is not likely given the way CMOS/CCD sensors are produced. They're simple meters and don't have the smarts to 'taper off' as they fill up. The whole point to this would be to prevent clipping while allowing a reasonably normal lower-levels exposure.
I'm not completely convinced that there is anything gained in this, however. You're still stretching light out in a logarithmic way -- whether you do it at capture or in subsequent adjustment shouldn't really matter, as long as you didn't clip or crush in the first place. So, I guess the next best thing would be to compress the initial exposure such that the highlights don't blow out, and work at a high-enough bit depth such that your subsequent LUT transformation (after you've already acquired in a linear fashion) holds up without inter-value degradation. I think the light requirements would be significantly higher as well, as you've mentioned before.
Your points on the gamma are excellent... I hadn't thought of that. I had the OCN concept stuck in my head at its lower 0.6 gamma... |
February 25th, 2005, 04:03 PM | #2577 |
Trustee
Join Date: Mar 2003
Location: Virginia Beach, VA
Posts: 1,095
|
There are log-response sensors; the only problem is they look really bad (go over to www.fillfactory.com to see what I'm talking about -- they call it "dual slope").
|
February 25th, 2005, 04:33 PM | #2578 |
Regular Crew
Join Date: Oct 2002
Location: Netherlands
Posts: 111
|
Obin,
A big part of what Jason suggests is 'underexposing' the image by a certain factor. This way you'll preserve a lot of highlight data that would normally be blown out. It results in a very dark picture that needs to be corrected in post, but it allows for a really gentle roll-off of the highlights. So the benefit is a more 'filmic' behavior of highlights, but also more detail in the highlights that might come in handy during postproduction -- detail that would normally be lost to clipping. The trade-off is lower color resolution for the 'visible' range of the spectrum: you're basically assigning bits to 'overbrights'. Hope this makes sense. Bar3nd |
February 25th, 2005, 04:44 PM | #2579 |
Major Player
Join Date: Jun 2004
Location: Buenos Aires , Argentina
Posts: 444
|
Not exactly, I guess, because you can lower the green gain and move the red/blue gains up a little, if I'm not wrong....
But you are correct anyway |
February 25th, 2005, 04:52 PM | #2580 |
Trustee
Join Date: Mar 2003
Location: Virginia Beach, VA
Posts: 1,095
|
Without bumping the gains, green will typically be the most light-sensitive channel.
I was just quoting numbers for the Viper. To get the maximum dynamic range out of the digital signal, then yes, you want to set the gains in the analog domain so that a white chip reads around the same number in all the color channels; otherwise you're going to waste bits or clip a color channel early, and not get the full benefit of the sensor. For instance, you'll waste bits trying to correct for a green cast if you exactly emulate the Viper, and even the Thomson engineers acknowledge this (you lose a whole stop on the high end). |
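A toy illustration of that balancing act (the white-chip readings are made up, not Viper data):

    # Raw white-chip means per channel; scale R and B up to match G in the
    # analog domain so no channel clips early and no bits are wasted in post.
    white = {"R": 410, "G": 637, "B": 380}
    gains = {ch: white["G"] / v for ch, v in white.items()}   # G stays at 1.0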