View Full Version : 4:4:4 10bit single CMOS HD project
Kyle Granger February 24th, 2005, 05:31 PM Obin, I have been normally using 48 and 50 FPS with the SI-1300 and SI-3000. It is possible to instruct the grabber to drop every other frame (or x frames), so you receive 24 or 25 (or have timelapse). (you probably know this)
It is *also* possible to increase the clock rate, and increase the number of lines in the vertical blanking period, and still maintain the desired framerate. This further reduces artifacts from rolling shutter, since the time per scanline is reduced. Caveat: there is a limit to the number of lines in the VB interval, though I can't say off the top of my head what that is. You might, for instance, have to run the camera at a 58.45 fps rate (with normal VB), add the additional (max) VB lines, set your skip to one, and then receive your 24.0000 or 23.9760 frames per second.
I know I am basically repeating what Steve said.
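A back-of-the-envelope sketch of that trick, for the curious. The line geometry (1920+390 clocks per line, 1080 active lines) is the SI-3300 figure quoted later in this thread; the padded-VB case is an illustrative assumption, not a tested camera setting.

#include <cstdio>

// Frame rate and active-area scan time for a rolling-shutter sensor,
// given the pixel clock and the number of vertical-blanking (VB) lines.
void report(double clockHz, int vBlankLines) {
    const double clocksPerLine = 1920 + 390;   // active width + horizontal blanking
    const double activeLines   = 1080;
    double lineTime = clocksPerLine / clockHz;                        // seconds per line
    double fps      = 1.0 / (lineTime * (activeLines + vBlankLines));
    double scanMs   = 1000.0 * lineTime * activeLines;                // rolling-shutter window
    std::printf("clock %.1f MHz, VB %4d lines -> %.2f fps, %.1f ms scan\n",
                clockHz / 1e6, vBlankLines, fps, scanMs);
}

int main() {
    report(40.0e6, 28);    // stock timing: ~15.6 fps, ~62 ms to scan the active lines
    report(61.0e6, 613);   // faster clock + padded VB: same ~15.6 fps, only ~41 ms scan
}

Same delivered frame rate, about two-thirds the scan time, so about two-thirds the skew.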
Florin Popescu February 24th, 2005, 05:48 PM Kyle and Obin,
If you guys can barely squeeze the data into RAIDs, how am I ever going to make it onto notebook HDDs? Here's how..
As the data is scanned sequentially from the chip, a histogram of pixel values or pixel differentials is compiled for each frame, and when it's flushed from the buffer onto disk, it gets Huffman coded. Variations thereabouts. A simple scheme requiring very few computational resources, time or memory... much less than JPEG.
Not deterministic, sure, but you can't shoot uniform random noise over the whole dynamic range even if you tried. Do you guys really write RAW data?
This type of lossless compression seems to get compression ratios of at least 2 (with everyone's favorite Huffman coder).
Seems easier than getting that extra 10% out of the drive. And if you have to NLE or whatever, what's the use of a 100MB/s stream anyway?
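A minimal sketch of that scheme, assuming 10-bit pixels in a uint16 buffer: histogram the horizontal pixel differentials, then use the zeroth-order entropy as an estimate of what a per-frame Huffman coder would achieve.

#include <cmath>
#include <cstdint>
#include <vector>

// Estimated lossless compression ratio from Huffman-coding horizontal differentials.
double estimateRatio(const std::vector<uint16_t>& frame, int w, int h, int bits) {
    std::vector<uint64_t> hist(size_t(1) << (bits + 1), 0);  // differentials span +/-(2^bits - 1)
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            int left = (x == 0) ? 0 : frame[size_t(y) * w + x - 1];
            int d = frame[size_t(y) * w + x] - left + (1 << bits);  // offset so the index is >= 0
            ++hist[d];
        }
    double entropy = 0.0;
    const double n = double(w) * h;
    for (uint64_t c : hist)
        if (c) entropy -= (c / n) * std::log2(c / n);   // bits per differential
    return double(bits) / entropy;
}

On real footage the differentials cluster near zero, which is where the ratio of 2 or so comes from; pure noise would push the ratio toward 1.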
Juan M. M. Fiebelkorn February 24th, 2005, 07:05 PM LOL hahaha, do you really believe what you are saying?
you are getting something like 94 Mbytes/s.
so you wanna huffman compress it to record it onto a Laptop 2.5 inch drive???
like what? a 2.5 inch 10K RPM one?
So your huffman will compress like 3:1 sustained or even more?
I'm not getting your point, really. Not trying to be offensive...
Edit: Now that I see you are a Dr. and a Fraunhofer guy, why don't you just make some kind of lossy codec for it, keeping at least 10-bit bit depth?
You could even modify XviD or similar to accept 10 bit and compress the Bayer.
Maybe even better, avoid Huffman and use a range encoder!!
Obin Olson February 24th, 2005, 07:55 PM we are using VB indeed, it works well for what it is... as far as the save stuff, I think that is exactly what we are doing ;) and thanks for the info! I did pass it on to our programmer just in case
Obin Olson February 24th, 2005, 07:59 PM www.dv3productions.com/pub/dog.tif
works fine for me....
Wow, how gamma can help with the exposure of the image while shooting! I have been playing around with it in CineLink... looks like we will still need a way to show a histogram? or maybe zebra... also I want to have a button that allows one to 'zoom in' on a live image for focus... this will help lots...
I got a chance to hook up the touch screen monitor today and 'play shoot' with no recording (as it's not working yet)... the touch screen is VERY good and EASY to use! The bad thing is the screen has a weird tint to it with the touch surface... I don't like that... anyone seen a VGA splitter so that we could have a nice clear screen to frame with?
Obin Olson February 24th, 2005, 08:15 PM "I am using a commercial routine written in assembler to write to disk. I have tried both the asynchronous (Overlapped) version of CreateFile with WriteFile, and fopen, etc., and they are an order of magnitude slower than the routine I use. I also tried copying memory to disk and a wide variety of methods that did not work well either.
Right now I am writing a multithreaded test case and I have the basic structure written. I hope to finish it tonight so that I can do extensive testing in the morning... I'll keep you posted."
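For reference, a minimal sketch of the overlapped WriteFile path the quote says was benchmarked (and lost). The file name and buffer are illustrative; note that with FILE_FLAG_NO_BUFFERING the buffer address, byte count, and file offset must all be sector-aligned.

#include <windows.h>

// One asynchronous, unbuffered frame write; blocks until the write lands.
bool writeFrame(HANDLE file, const void* buf, DWORD bytes, ULONGLONG offset) {
    OVERLAPPED ov = {};
    ov.Offset     = (DWORD)(offset & 0xFFFFFFFF);
    ov.OffsetHigh = (DWORD)(offset >> 32);
    ov.hEvent     = CreateEvent(NULL, TRUE, FALSE, NULL);
    BOOL ok = WriteFile(file, buf, bytes, NULL, &ov);
    if (!ok && GetLastError() == ERROR_IO_PENDING) {
        DWORD written = 0;
        ok = GetOverlappedResult(file, &ov, &written, TRUE);  // wait for completion
    }
    CloseHandle(ov.hEvent);
    return ok != 0;
}

// The handle would be opened along these lines:
// HANDLE h = CreateFileW(L"frames.raw", GENERIC_WRITE, 0, NULL, CREATE_ALWAYS,
//     FILE_FLAG_OVERLAPPED | FILE_FLAG_NO_BUFFERING | FILE_FLAG_WRITE_THROUGH, NULL);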
Kyle Edwards February 24th, 2005, 08:26 PM <<<-- Originally posted by Obin Olson : www.dv3productions.com/pub/dog.tif
works fine for me.... -->>>
maybe it wasn't finished uploading when you posted, coming in now.
Mark Nicholson February 24th, 2005, 11:10 PM I imagine that's going to look pretty amazing when it's fully working and the dead pixels have been taken care of. They say that S16 7212 has the same resolution, but I don't know; I've seen a lot of the 7218 printed and it doesn't appear to have nearly the resolution of this camera. I can't imagine how great those Zeiss primes will look on this monster. Actually, now I'm really wondering how this would compare to 2-perf 35mm.
Great job. Can't wait to see more samples
Eric Gorski February 25th, 2005, 01:10 AM that really looks like scanned 35mm slide film. so exciting.
Kyle Granger February 25th, 2005, 05:41 AM >> "I am using a commercial routine written in assembler
Very Cool!
> If you guys can barely squeeze the data into RAIDs,
Florin, my system is 3-4 years old, and relatively underpowered for such an application, so I would not consider my results as a typical data point.
Jason and I have spent many emails trying to figure out the source of my bottleneck, and it is looking like the RAID controller is guilty. At 3ware's website, the docs on the 8006-2 mention neither onboard memory nor a cache, so I must assume there is none. That could explain the behaviour. It could also just be an inherent system limit of the Precision 530. The interaction of all the components makes it difficult to profile and pin down the exact cause. Or maybe I'm just not smart enough. ;-)
BTW, CPU usage stays below 25%, even while the writing is falling behind by about 8MB/sec.
For me right now, a Broadcom BC4452 is a little pricey, but the Promise SX4 has 64MB to 256MB onboard memory (user may add SIMM), and may work at 66MHz in a PCI-X slot (though the board is 32 bits).
Obin: VB is Visual Basic?
Audio Note:
BTW, my solution for sound is an old portable Sony DAT. I can download the sound digitally as a post-process. I have also tried MiniDisk, and it also works, although at 44.1, and with some compression.
I do use a clapper board. ;-)
To mask the computer noise, I just keep the box far away. I have made a master cable with mouse, keyboard, VGA, and two ethernet cables to run the computer "remotely", up to 20 meters away. I use a light 17" LCD screen as a viewfinder.
Wayne Morellini February 25th, 2005, 05:58 AM <<<-- Originally posted by Florin Popescu :
..
> available. tested it at about 25MB/s sustained write (that's DV speed ain't it). But say I get myself an even faster laptop with two HDDs and interleave/save. Back to that 2 number.
Mini DV is 25 megabits, so you're far ahead of DV.
> manage to squeeze 24fps1080p down to 50MBs (usb speed) and not overload cpu. At best, after lots of time spent scratching my facial stubble.
You can get that on 8 bit; 10 bit is going to be more. Most of these USB cameras send a pixel of more than 8 bits in two bytes, and they also burst out each frame, so even 8-bit 1080 24fps will not fit. From testing, the highest format to use with USB 2.0 (that is not pixel packed, compressed, or buffered) is 720p. Because USB 2.0 tends to lack good data support, the processor has to pick up the slack, and Windows is not the most reliable beast for keeping good timing to pick up data (it usually has to be programmed around). The GigE cameras are much better; at least the SI ones have a buffer and do pixel packing to make effective use of the bandwidth.
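The arithmetic behind that, as a quick check (the raw USB 2.0 bus rate is 60MB/s; real-world payload is well under that):

#include <cstdio>

int main() {
    std::printf("1080p24 @ 2 bytes/pixel: %.1f MB/s\n", 1920.0 * 1080 * 24 * 2 / 1e6);  // 99.5 -- no
    std::printf("1080p24 @ 1 byte/pixel:  %.1f MB/s\n", 1920.0 * 1080 * 24 * 1 / 1e6);  // 49.8 -- still no
    std::printf(" 720p24 @ 1 byte/pixel:  %.1f MB/s\n", 1280.0 *  720 * 24 * 1 / 1e6);  // 22.1 -- fits
}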
>In light of 3CCD 50MBs Panasonic coming out... well...
50M Bytes a second is going to be far ahead of 50M bits per second.
..
>Seems easier than getting that extra 10% out of the drive. And if you have to NLE or whatever, what's the use of a 100MB/s stream anyway?
People not getting the last 10% is probably what has caused so many 50%+ performance hits in PCs; the 10%s accumulate. Best to isolate and squeeze really tight, do everything else, and then attach a higher res/fps camera ;)
If you want to save money, the price of the camera head (and software) is going to be the biggest saving factor. Currently I am testing a sensor here; while the results aren't anywhere near what I hoped, I think it could be far better than the Micron 1300, but it is very difficult to test the unit I got.
Theoretically, a Micron 1300 camera head might be had for $450, with buffer and FireWire. But the sensor has a few problems that can be avoided with controlled lighting. Obin will testify that it is not the best to shoot with. But this is old technology, so an even cheaper sensor of the same quality could be had for less. If you manufactured it yourself you could even come to a reasonable $100 price point.
Rob Scott is attempting cheap software, Obin is doing his.
The cheapest is Cinelerra for Linux, for editing and capture (don't know about the capture).
The problem here is that there are multiple camera projects (not including others not mentioned here), and unless you're a programmer (with all that realistic knowledge that they have) you shouldn't attempt one (and if you wanted to manufacture, you should be an engineer too). So the little market here will soon be flooded, and you have to consider going further abroad. But as a hobby, why not.
Good luck.
Wayne.
Wayne Morellini February 25th, 2005, 06:21 AM <<<-- Originally posted by Rai Orz : edit:
Who would buy a camerahead with Altasens ProCamHD 3560 (1920 x 1080p, 12 Bit):
1.) with FireWire 1394b (800) max. 30fps (at 12Bit)
or
2.) Additionally with a built-in double removable HDD frame to record direct full 12-bit RAW data at up to 30fps on 2 x 2.5" HDDs. (In this case FireWire is only for preview and setup)
or
3.) max 60fps. Same as 2, but with 2 x built-in double removable HDD frames to record direct up to 60fps on 4 x 2.5" HDDs.
Who is interested?
And what would be your bid? -->>>
I would buy any of these, at a cheap price. The thing is that cameras here, and in the MV market place, are standardising on GigE (and after, maybe, GigaE), so a dual interface camera with FireWire 800 (and after, 1600/3200) would be good for the standard video market too.
1. I would say less than the price that Sumix is promising for their compressed, buffered, pixel-packed Altasens.
2. $200-$300 extra (not including HDDs of course). That is the base price of a Linux PC nowadays.
3. About $300 extra over the price of option 1 (not including HDDs of course), but you'll probably sell a lot more.
As to whether I will buy, I am still reserving my judgement until after I see the Sumix product.
So I would say, go with 1, but undercut Sumix greatly, drop 2, and put number 3 out just above Sumix price.
By the time your ready I would say that the Altasens price might drop even further.
Rolling shutter VS Global and Superfast lenses.
Now, all this talk of squashing the rolling shutter bug on the Altasens here. If it can be done with no picture performance impact, I would say yes, I prefer it over the present global shutter options. But then we get to the other point of not being able to take super fast lenses. How much restriction is there, and what about the 4/3 format that gets around this problem in the lens design itself, by seemingly artificially narrowing the convergence angle of the light sent to the sensor? Can a simple adaptor segment be put between the Altasens and the lens to do the same?
Thanks Rai.
Wayne.
Kyle Granger February 25th, 2005, 07:16 AM Rolling Shutter Metric
We may need some common language to talk about rolling shutter artifacts. I have one modest proposal...
The artifacts are a result of two factors: the horizontal speed of an object in the frame, and the scanning time for the image. Here is an idea.
SWMS: Screen Width Milliseconds. This is how long it takes an object to move the entire width of the screen.
FPS: Obvious. This is the scantime, expressed as its reciprocal, in frames per second, not including vertical blanking.
RSM: Rolling Shutter Metric. The product of these two figures. This is applied to an image sequence. AFAICT, the artifacts should be independent of the playback rate.
So, if your scantime is 60 fps, and the object moves across the screen in a half second, the RSM is 30000 (500*60). If you double the speed of your object (SWMS = 250), but also double your scanrate (120 fps), the artifacts should be the same. And the RSM is the same (30000).
So, an RSM of 30000 may be acceptable. But 11000 is maybe not acceptable.
Rolling shutter cannot ever be eliminated. Either it is there, or it isn't. But at what point is it visible and at what point is it acceptable?
Obin Olson February 25th, 2005, 07:26 AM I have never done much with film but this really blows me away with the "filmic" look it has with the soft tones and all
Jason Rodriguez February 25th, 2005, 07:51 AM Hey Kyle,
I like the Rolling Shutter Metric guide.
BTW, we do need to test and see just "how bad" rolling shutter really is, especially when we have the vertical blanking set really high, which means the clock on the chip is running at the equivalent of maybe 60fps, but we're only outputting 48fps (and then dropping a frame for 24fps).
So Kyle, when you say FPS, ignoring the vertical blanking, do you mean the fps you'd be running at if there were no vertical blanking, or the fps after vertical blanking? I.e., you'd be at 60fps without vertical blanking, but you're at 48fps after vertical blanking. Do you use the 60fps in the RSM calculation, or the 48fps?
Overall, great idea!
Jonathon Wilson February 25th, 2005, 08:01 AM Hi all, congrats on your continued work and best of luck as you continue.
Jason or Steve, do you know if the AltaSens box cams allow the application of a non-linear change to the response curves at the sensor level? For example, if I wanted to use one of the DigitalPraxis curves to simulate a log-like response at the actual voltage-reading point on the sensor, prior to A/D conversion.
Can these cams be configured to respond in this way?
Thanks,
-Jonathon Wilson
Jason Rodriguez February 25th, 2005, 08:07 AM BTW Obin,
Yes, your images look filmic for the most part, but your clipped highlights do not.
Since I haven't played with the 3300, I have no idea how much headroom you have with the Micron chip, but I want you to take this into consideration.
When Thomson designed the Viper, they set the white point at a level of 637 (in the green channel, which is the brightest channel) out of 4095 (linear 12-bit space). That means that if you put a Macbeth chart in a scene, the white chip of the Macbeth chart should be at 90% of that, or at around 573-600 in the green channel out of 4095 possible values (or a value of 150 in 10-bit space).
Exposing linearly, you'd probably complain that you can't see the image because you're underexposing so far. But that's the way that video cameras operate! The RAW image coming off the sensors is going to be very "underexposed" by your standards, and then the white point is set (typically throwing away all that headroom for highlights as a trade-off for sensitivity), gamma, etc., and you get your nice "bright" image.
Now, by trying to get that white point up to the max of 1023 or 4095, you're taking the sensitivity of a typical video camera, which is around ISO400 or ISO500, and from a value of 150 (600 in 12-bit terms) you have to jump up around 2.7 stops: 600-1200 is one doubling, so that's one stop; 1200-2400 is another double, so that's another stop; and then 2400-4095 is .7 stops. So your total sensitivity, from ISO500 or ISO400, is now down to around ISO64 or less! So of course you're going to need a lot of light!
So please consider this as you expose your images using a linear device.
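To put numbers on that, a quick check of the stops arithmetic (values taken from the post above; the exact Viper figures are Thomson's, so treat this as illustrative):

#include <cmath>
#include <cstdio>

int main() {
    const double whitePoint = 600.0;    // white chip in 12-bit linear, Viper-style
    const double fullScale  = 4095.0;
    double stops = std::log2(fullScale / whitePoint);   // ~2.77 stops of headroom
    double iso   = 500.0 / std::pow(2.0, stops);        // ISO500 drops to ~ISO73
    std::printf("headroom: %.2f stops, ISO500 -> ~ISO%.0f\n", stops, iso);
}

That lands in the same ballpark as the ISO64-or-less figure above.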
Kyle Granger February 25th, 2005, 08:13 AM Thanks, Jason.
> are you saying the fps that you'd be running if there was no veritcal blanking,
Essentially, yes. What I meant was just the time to scan the ACTIVE lines. Maybe we should call it RS-FPS to avoid confusion. So, in your example of 48 FPS, with skip one, the RS-FPS could very well be 60. With normal vertical blanking it might be 49.14..
Or Method 2.
SWMS is the same, but scantime (RSST) is expressed in ms also (16.67, for example). Then RSM = SWMS / RSST. In my example this would be 500 / 16.67 = 30. 30 is then the number of frames it takes the object to move across the screen width. This may be better -- certainly more intuitive. You and Obin can decide. ;-)
Steve Nordhauser February 25th, 2005, 08:20 AM Kyle:
If I understand your metric, it is basically how many pixels of skew you get from the top of the image to the bottom. By using screen width, you are removing the speed and distance from the equation, since all that really matters is the relative speed in the image (you can measure in pixels or % of screen width). Keep in mind that the acceptability of a shot also includes the number of scan lines of the object. A car may only occupy 1/4 of the screen but a building might occupy the full vertical. That is only relevant for using the camera, not an objective measurement, but you might be able to say an RSM for an *object* of 30000 is OK.
It is a complex topic. If you are trying to show the speed of a car by using a long exposure to add blurring, does that change the acceptability level? Buildings shouldn't lean, but people can when moving fast. Would a dancer look as bad as a city skyline?
I guess we need someone with a programmable frame rate camera to do a sequence with an object at different rates. You can predict the skew: an object that spans the full vertical height, is read out in 1/48th sec, and crosses the screen in 1 sec will show 1/48 * 1920 = 40 pixels of skew. What is acceptable is the real question. And the test is a video, not a still. I'm sure that there are some dynamics of the mind involved.
Steve Nordhauser February 25th, 2005, 08:22 AM Jonathon,
That is not available in the sensor but could be done in the camera. It is easier to do in the digital domain than analog - creating arbitrary changes to the pixel data prior to the A/D would be hard. Digitally, it is just a lookup table.
Kyle Granger February 25th, 2005, 09:05 AM Steve, thanks for clarifying the situation.
If RSM = SWMS/RSST, one would get the same result just by measuring the pixel skew, and dividing it into the width.
I can try to do some tests over the next few days with the monochrome SI-1300. 640x480, MPEG1, 24fps, 4mbit/sec, clips 2-6 seconds in length? Another format?
In any event, the results should (logically) be independent of resolution and camera.
Jason Rodriguez February 25th, 2005, 09:08 AM Jonathon,
Actually I'm a big, big advocate of using a 12-bit-to-10-bit linear-to-log LUT, like the Viper FilmStream, which, unlike the Cineon curve, properly adjusts the video assuming a gamma of 1.0 rather than negative film's 0.6 native gamma. In other words, if you just apply the default Cineon curve made for a film negative to these linear images, you're correcting too much. You need to adjust the Cineon curve equation for a gamma of 1.0 (which is what Thomson did with the Viper).
Not quite sure what Steve did with those CvpFile Editor curves (for the CineAlta), although I know he's a very smart guy when it comes to this stuff, and by the look of the pictures, it appears as though he's also correcting for a gamma of 1.0 and not 0.6.
The nice thing about log encoding is that it automatically gives you the "filmic" soft-clip you're looking for, while also giving you an image you can actually view properly (rather than an image that almost looks black). You need to add an "S"-curve print LUT after shooting (crushing the blacks), but at least you'll never get a harshly clipped highlight that screams "digital capture".
Also 10-bit log can hold up to a 14-bit linear signal, so 12-bit linear to 10-bit log is essentially visually lossless.
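A minimal sketch of such a 12-bit-linear to 10-bit-log LUT. The curve below is a generic log encoding with an assumed toe offset; it is NOT the actual Viper FilmStream or Cineon formula, just the shape of the idea.

#include <cmath>
#include <cstdint>
#include <vector>

// Build a 4096-entry table mapping 12-bit linear codes to 10-bit log codes.
std::vector<uint16_t> buildLinToLogLut() {
    const int inMax = 4095, outMax = 1023;
    const double toe = 16.0;   // assumed black offset; sets the curve's knee
    std::vector<uint16_t> lut(inMax + 1);
    const double scale = outMax / std::log2((inMax + toe) / toe);
    for (int v = 0; v <= inMax; ++v)
        lut[v] = (uint16_t)std::lround(scale * std::log2((v + toe) / toe));
    return lut;   // apply per pixel: out = lut[in]
}

Highlights compress smoothly toward code 1023 instead of slamming into a hard boundary, which is the soft-clip effect being described.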
Obin Olson February 25th, 2005, 03:25 PM Jason, I really want to get my head around this. Can you explain it again in a way that's a bit easier to understand? I am sorry if this is a bother, but I want to make SURE I FULLY understand you, as you seem to be an image expert around here ;)
Obin Olson February 25th, 2005, 03:30 PM Here is what's going on:
.Net looks like it might be more efficient, or at least simpler to use, as multithreading is built into the language as opposed to using the Windows CreateThread API.
We had huge issues with the save in CineLink as the data is so large..
Jonathon Wilson February 25th, 2005, 03:43 PM I agree Jason.
In my simplistic, pipe-dream brain, I had a wish that one could actually get a log-like response into the sensor itself... in other words, an individual cell catching light 'fills up' quickly during the first bit of light, and then its light level increases more slowly along a logarithmic curve as it approaches its maximum level. This seems like it would more closely match the way real film responds to light.
Of course, this is not likely given the way CMOS/CCD sensors are produced. They're simple meters and don't have the smarts to 'taper' off as they fill up.
The whole point to this would be to prevent clipping while allowing a reasonably normal lower-levels exposure.
I'm not completely convinced that there is anything gained in this, however. You're still stretching light out in a logarithmic way -- whether you do it at capture or in subsequent adjustment shouldn't really matter, as long as you didn't clip or crush in the first place.
So, I guess the next best thing would be to compress the initial exposure such that the highlights don't blow out, and work at a high-enough bit-depth such that your subsequent LUT transformation (after you've already acquired in a linear fashion) holds up without inter-value degradation. I think the light requirements would be significantly higher as well, as you've mentioned before.
Your points on the gamma are excellent... I hadn't thought of that. I had the OCN concept stuck in my head at its lower 0.6 gamma...
Jason Rodriguez February 25th, 2005, 04:03 PM There are log-response sensors; the only problem is they look really bad (go over to www.fillfactory.com for what I'm talking about; they call it "dual slope").
Barend Onneweer February 25th, 2005, 04:33 PM Obin,
A big part of what Jason suggests is 'underexposing' the image by a certain factor. This way you'll preserve a lot of highlight data that would normally be 'blown-out'.
This results in a very dark picture that needs to be corrected in post - but this allows for a really gentle roll-off of the highlights.
So the benefit is in a more 'filmic' behavior of highlights, but also more detail in the highlights that might come in handy during postproduction. Detail that would normally be lost due to clipping of the highlights.
The trade-off is a lower color resolution for the 'visible' range of the spectrum. You're basically assigning bits to 'overbrights'.
Hope this makes sense.
Bar3nd
Juan M. M. Fiebelkorn February 25th, 2005, 04:44 PM Not exactly I guess, cause you can lower the green gain and move the red/blue gain up a little, if I'm not wrong....
But you are correct anyway
Jason Rodriguez February 25th, 2005, 04:52 PM without bumping the gains, typically green will be the most light-sensitive channel.
I was just quoting numbers for the Viper. To get the maximum dynamic range out of the digital signal, then yes, you want to make sure that you've set the gains in the analog space so that a white chip has around the same numbers for all the color channels, or else you're going to waste bits or clip a color channel early, and not get the full benefit of the sensor.
For instance, you'll waste bits trying to correct for a green cast if you exactly emulate the Viper, and even the Thomson engineers acknowledge this (you lose a whole stop on the high end).
Obin Olson February 25th, 2005, 05:16 PM yes, Steve N. has told me that you must have the RGB gains set before you shoot for the most range from the chip
Wayne Morellini February 25th, 2005, 08:09 PM Kyle: Please list your metric in a post when you are finished; all the discussion of new short forms of terms is a bit confusing.
.net:
Yes, I believe they licensed parts of the core of the real-time embedded OS "Tron" for this, so it should be a lot better.
For all developers:
I don't know the way your development systems are set up, but apart from the GPU programming, there are real-time embedded extensions/development systems for Windows. Also MS has released a cross platform game development system called XNA (I think) that should get very efficient handling of graphics etc.
Jason:
I am testing a dual slope sort of scheme here at the moment. I am not fully happy with it (I am only using a small fully auto camera, with one of the smallest lenses I have seen), but it is impressive on outside shots. The example on the Fillfactory site could do with a bit more adjustment to bring the external colors up.
http://www.fillfactory.com/htm/technology/htm/dual-slope.htm
Barend:
So how many bits should we devote to overbrights in adjustments?
Jason Rodriguez February 25th, 2005, 10:25 PM Wayne, just look at the Cineon spec for the overbright stuff.
Or again, just look at what I said for the linear 12-bit space. You need to place 100% white around the 700 mark out of 4095. You can calculate how many bits that is.
Now white clip is 2.5 stops above middle grey, and we're adding another 2.7 stops in over-exposure dynamic range or "super-white", thus giving you around 5.2 stops of dynamic range above middle grey. Which is quite a bit, considering that there's another 5 stops below middle grey at this point (typically). So that's a little over 10 stops total, which again is a very nice dynamic range to work with, very close to film's 12 stops. And you can preview the dynamic range on the screen, so you can get it spot-on; you don't need to "guess" in any way, or need those extra two stops for movement. A good DP doesn't "guess", but he only has around 10 stops of effective latitude typically because he does need some wiggle room on either side, hence those extra two stops. But in digital you really don't need the wiggle room as much, since you can see exactly what you're doing.
Wayne Morellini February 26th, 2005, 06:15 AM Thanks.
Kyle Granger February 26th, 2005, 07:11 AM We can define the RSM (Rolling Shutter Metric) for a particular shot, or image sequence within a shot, or object within a shot, as the ratio of the image width to the pixel skew.
E.g., if you are panning across a vertical door, and the difference between the horizontal position of the bottom and top of the door is 24 pixels, then the RSM for the door (or entire shot) is 1920/24, or 80. You also need to measure the delta in a still shot, since the vertical may not be exact. As Steve pointed out, this is the image skew.
[ We can also define the horizontal speed of an object as Screen Width Milliseconds (SWMS -- how many milliseconds it takes for an object to move the full screen width), and the Rolling Shutter Scantime (RSST -- the number of active lines divided by total lines per second, times 1000 (scantime in ms)). SWMS/RSST is then equivalent to RSM above. I believe we only need the one metric. We can safely ignore these two. Less is more. ]
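Both forms, plugged in with the numbers already used in this thread (just a sketch, nothing camera-specific):

#include <cstdio>

int main() {
    // Direct form: image width over measured pixel skew.
    // Panning shot, 1920-wide frame, a vertical door leaning 24 px.
    std::printf("RSM (door example) = %.0f\n", 1920.0 / 24.0);   // 80

    // Equivalent form: SWMS / RSST. Object crosses the screen in 500 ms,
    // the active lines scan in 16.67 ms (a 60 fps scan rate).
    std::printf("RSM (SWMS/RSST)    = %.1f\n", 500.0 / 16.67);   // ~30
}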
Wayne: is your camera the global shutter IBIS-5? 40MHz clock? I looked at something like that last summer.
Log color space: I, too, am coming up to speed with the deep knowledge Jason et al. are sharing about non-clipping of highlights, etc. In the meantime I have been writing 16-bit files. I chose the SGI RGB file format, as opposed to TIFF, just because for me it was much simpler. The specification and PS plug-in are easy to find, for those that want them.
GPU efficiency, XNA: I have never found graphics to be a problem. For grabbing and preview (on my system), CPU usage is never above 8% for a 40MHz clock (3300 and 1300); with faster clocks it is a little more. Using very basic OpenGL calls. ( I have written 3D games with DirectX 9, but I prefer OpenGL. )
Wayne Morellini February 26th, 2005, 09:16 AM No, an older camera with a SmaL camera sensor in it (SmaL is also owned by Cypress now) that I found on clearance locally.
Faults:
I have seen blooming only sometimes (and not very severe); it is irregular and might be the auto function messing up the autobrite (their per-pixel dual-slope type mode). I have seen strobing only in shots in the most extreme contrast conditions (again, maybe the auto functions pushing too far).
Low Light:
Low light is a mess because of the extremely small lens requiring too much gain and slow shutter (that lens is only a mm or two across). I think that a normal lens aperture would make low light a non-issue, but this will require an iris for daylight work (i.e. a normal lens).
Improvements:
A normal lens, manual control of gain and autobrite, and 12-bit raw images should get rid of the few blowout situations I have seen (the thing only downloads compressed JPEG versions at the moment, and that stuffs up what I'm seeing; I am not used to reading JPEG codec artifacts). I don't know if it is the extra-small lens, the JPEG, or the sensor producing some things I'm seeing. At the moment I think the blowout is equivalent to the 1300 or 3300 with the camera as is. My guess is this is because of the autobrite and that it is tuned to daylight use.
Performance:
Outside, the range should be able to match the human eye (going on the marketing of the autobrite), but I can tell it is down a bit (I don't know how to read how much); my guess is that with a proper lens it will still be a couple of stops less than human vision, which is good enough. So I would say it can match the range of the Micron cameras (with fewer problems than the 1300). But this is a stretch; I would like to find out more before confirming this. Having said all this, the shots outside appear very close to what I see, when exposed for the lit image. Highlights look natural, and close to reality. Contrasty situations, with subjects in the shade and under trees, look a couple of stops darker in the shade, or brighter on the outside (guessing from memory on that one), and I have seen blowout in such scenes. This sensor is a few years old.
Dualslope/Autobrite:
The situation with the autobrite is closer to the IBIS5a example pictures above (more noise in the shadows). We have not considered its usefulness in outside shooting, just those desaturated window pictures. I am convinced that our eyes see washout in contrasty scenes, but our minds edit it out, and the pupils adjust for the centre of attention. But in camera work everything has to be saturated and not washed out to the viewer (i.e. this is excellent for doco). How have you found this on the Drake, Rai?
Comparison:
Otherwise I would say that the picture is down a bit, but this is probably because of the small lens and maybe Bayer filter quality. The sensor does work like a larger sensor (except, a guess, in noise floor).
Downside:
The bad news: this is a still camera sensor (there are video ones). It has only 27MB/s bandwidth and a rolling shutter, so I don't know if that can be eliminated. But there are newer 3MP versions. This company only wants to deal with larger buyers (hundreds of thousands), so you would have to go through somebody like SI or Sumix to get them to do it. Might be good for a cheap camera (this one costs around $74 retail).
I suspect Logitech might use sensors like these in their web cameras. Does anybody have a Logitech web camera from the last few years? Do they have dual-slope type modes?
Again, I don't know enough to say this is a good sensor for a cheap cinema/doco camera, but there is room for two to three levels of cameras.
Wayne Morellini February 26th, 2005, 09:28 AM <<<-- Originally posted by Kyle Granger :
GPU efficiency, XNA: I have never found graphics to be problem. For grabbing and preview (on my system), CPU usage is never above 8% for 40MHz clock (3300 and 1300); faster clocks it is a little more. Using very basic OpenGL calls. ( I have written 3D games with DirectX 9, but I prefer OpenGL. ) -->>>
That seems very low. What processor and hardware are you running?
-----------------------------------------------------------------
FAQ
To all:
A lot of questions get re-asked a lot; maybe we need a FAQ file, or at least a linked list of answers to people's questions on other sites. There's Rob's Obscura Cam Wiki, but I am reluctant to put everybody else's projects on his Wiki. I have suggested before a start page for these threads, which could contain a FAQ link. But maybe we need a FAQ link at the bottom of the view pages that takes us to a list of FAQs, or the FAQ for that thread/forum.
Rob, what do you think, easier than re-writing the forum code to have an adjustable first page?
Thanks
Wayne.
Kyle Granger February 26th, 2005, 10:05 AM To all: I am still trying to catch up on most of the older posts, so I apologize in advance if I ask a newbie question.
Wayne: Thanks for the information!
> What processor and hardware are you running?
Dell Precision 530, 1.7 GHz Xeon
NVidia Quadro2 MXR/EX with 32MB, AGP 4x
1 GB RIMM
> 40MHz 3300 clock
15.6 fps at 1920x1080. Only grabbing and preview. But without bilinear interp of Bayer (GPU can do that better): a quick and dirty sum of the 2x2 square. C++, no assembly. The GigeLink packet filtering works well for me: no measurable CPU usage.
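That quick-and-dirty decode might look something like this sketch (unpacked 16-bit input and this buffer layout are assumptions): each 2x2 Bayer cell (one R, two G, one B) collapses to a single value, so 1920x1080 comes out as a 960x540 preview raster.

#include <cstdint>
#include <vector>

// Half-resolution preview: sum each 2x2 Bayer cell, average to keep the range.
std::vector<uint16_t> binBayer2x2(const uint16_t* raw, int w, int h) {
    std::vector<uint16_t> out(size_t(w / 2) * (h / 2));
    for (int y = 0; y < h; y += 2)
        for (int x = 0; x < w; x += 2) {
            uint32_t sum = raw[size_t(y) * w + x]     + raw[size_t(y) * w + x + 1]
                         + raw[size_t(y + 1) * w + x] + raw[size_t(y + 1) * w + x + 1];
            out[size_t(y / 2) * (w / 2) + x / 2] = uint16_t(sum >> 2);
        }
    return out;
}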
1300 smearing: I am only using the monochrome version of the camera, and have so far not found that to be a problem. An artefact of Bayer filter chip?
[ I will go through the posts that mention this. Maybe because I'm an art-film buff, I also tend to like the mono 1300 slightly more than the color 3300. But only *my* opinion. Mono CMOS also has a sensitivity advantage vis a vis Bayer. ]
Wayne Morellini February 26th, 2005, 10:18 AM > Dell Precision 530, 1.7 GHz Xeon
That's a single processor board. The performance appears good, but how come only 15.6fps at 8% instead of 30fps at 16%?
>1300 smearing: I am only using the monochrome version of the camera, and have so far not found that to be a problem. An artefact of Bayer filter chip?
I thought it was sensor level? Do you mean you think the Bayer filter is interfering with wide angle light?
Thanks
Wayne.
Kyle Granger February 26th, 2005, 10:28 AM > but how come only 15.6fps
15.6 fps is what you get with 40 MHz clock (for testing).
40 Mpixels/sec = (1920+390) * (1080+28) * 15.6
You'd get, e.g., 24 fps with something like 61 MHz. The VB of 28 lines can also be reduced to 4. HB is 390 clocks on 3300.
> I thought it was sensor level? Do you mean you think the Bayer filter is interfering with wide angle light?
I have no idea. I will read the older posts, and then ask again.
Thanks, sir!
Obin Olson February 26th, 2005, 05:39 PM what type of CPU % are you getting at 65MHz, 1920x1080, 12-bit?
Kyle Granger February 26th, 2005, 06:17 PM Obin,
CPU usage with 65 MHz clock on SI-3300 bounces between 18 and 25%, but seems to average around 21%. I get 24.79 fps. The data from the camera is 12-bit packed.
40MHz was bouncing between 4% and 7%. I can't readily explain the big jump in CPU usage, when the framerate increases just by 60%. (AGP 4x, 400MHz FSB, screen at 1280x1024).
When profiling the graphics, you want to make sure that you are not sending unnecessary PAINT events. I have seen my CPU usage double, just by having the Task Manager window partly overlapping the video.
Hope this helps!
Wayne Morellini February 27th, 2005, 02:39 AM PC's, unreal.
Anyway, sorry, I got the clock speed confused with the 1.3MP clock speed. Performance still looks reasonable, what I expected when we started out last year. Do you think we could eventually use a VIA processor for the same job?
That lost performance sounds suspect; maybe write to Nvidia, maybe they will eventually patch it.
Wayne Morellini February 27th, 2005, 07:57 AM http://www.dvinfo.net/conf/showthread.php?s=&postid=280333#post280333
Obin Olson February 27th, 2005, 12:43 PM Kyle, please tell me: how do you display? what size? what framerate? have you captured frames and displayed video at the same time? if so, what CPU % did that take?
Kyle Granger February 27th, 2005, 01:26 PM Obin,
> how you display?
I am drawing two polygons.
> what size?
The window is resizable. But the raster within the texture is 960x540.
> what framerate?
same as capture rate (although it is possible to "skip" display frames)
> have you captured frames and displayed video at the same time?
Capture and display are in different threads, if that is what you mean. There may be one frame latency, but there is no drift. Same CPU usage as quoted above.
Wayne Morellini February 27th, 2005, 08:32 PM Kyle:
>> how you display?
>I am drawing two polygons.
That brought a good laugh on. I wonder if the clipping of the polygon through the layers is what is causing the slow down.
Is the polygon method the fastest in the industry for 2D display, or is it fastest because of the GPU shader usage?
Kyle Granger February 27th, 2005, 08:52 PM Wayne,
> That brought a good laugh on.
you must explain. :)
> I wonder if the clipping ...is what is causing the slow down.
Yeah, extra drawing plus the window clipping (slow down with Task Manager overlap), probably.
> Is the polygon method the fastest
It's fast for me. What would you recommend?
> is it fastest because of the GPU shader usage?
do you mean pixel shaders?
Wayne Morellini February 27th, 2005, 10:04 PM >> That brought a good laugh on.
>you must explain. :)
Ohh, just that it is innovative, slick etc and I know it's possible (back in the old days one of the consumer 3D chipset manufacturers did a 3D demo that had video panels mapped on a spinning shape). Around here we don't have many classy programmers of this level.
>Yeah, extra drawing plus the window clipping (slow down with Task Manager overlap), probably.
Is there any alternative buffering/windowing technique (hardware windowing) that will get rid of this (MS changed techniques many times)?
>> Is the Polygon method the fastest
> It's fast for me. What would you recommend?
I don't know, that's why I was asking you to learn. I know of these things, and where to look for information, but because I am not doing them myself, I don't know the details.
>> is it fastest because of he GPU shader usage?
> do you mean pixel shaders?
Yes
Thanks
Wayne.
Kyle Granger February 27th, 2005, 10:58 PM Hey Wayne,
> 3D demo that had video panels mapped on a spinning shape).
Heh, heh, I'm not doing anything like that. But one could do that, virtually for free, with any of these cameras. The only real, completely unavoidable work is processing the raw data from the frame grabber, and sending the raster to the GPU. If you don't have to touch the data, so much the better. The number of lines of OpenGL code I use is laughably small compared to what I've used for DirectX. But drawing a bitmap is not rocket science.
You could also use glCopyPixels(), but I just understand texture filtering with polys better. Plus, as you said, you get pixel shaders. They are way cool.
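For anyone following along, the textured-poly path looks roughly like this (legacy fixed-function GL, matching the era; the backing texture is assumed to be created once at startup with glTexImage2D at exactly 960x540 -- on period hardware you'd round up to a power of two and scale the texture coordinates accordingly):

#include <GL/gl.h>

// Upload the decoded raster and draw it on a screen-aligned quad.
void drawFrame(GLuint tex, const void* raster /* 960x540 RGB */) {
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 960, 540,
                    GL_RGB, GL_UNSIGNED_BYTE, raster);   // per-frame upload
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);                                   // the "two polygons"
    glTexCoord2f(0, 1); glVertex2f(-1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1,  1);
    glTexCoord2f(0, 0); glVertex2f(-1,  1);
    glEnd();
}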
> Is there any alternative buffering/windowing technique (hardware windowing) that will get rid of this (MS changed techniques many times)?
Don't know. For me it's DDI: just Don't Do It, i.e., don't have an overlapping window on your video, if you don't want to cause excessive draws.