Take,
Thanks. Have you considered one of the many GigE versions out there? What was the reason for going with the Pike over the other camera heads? Or, in fact, why not the Pike with the 2/3" Sony ICX285 sensor (which claims better dynamic range but lower resolution)? It seems that you've had quite a few calibration problems; is this down to the Pike itself, or to all the work involved in getting your recorder to work? I've seen images from the Pike via StreamPix/CineForm RAW that *seem* to look pretty good. That particular sensor is slightly wider than the usual 1", hence the question. Which lens have you been using? Many thanks, Paul
Paul,
Yes, I first considered GigE, but I could not find the specifications of that protocol, which is why I now use an IIDC camera, which has an open protocol and drivers for OS X. I like the HD resolution, and I would like to get the 2K sensor at some point, which is why I use the 1" version. I am not sure why I have these non-uniformity problems; maybe it is a bad sensor, or it is caused by the micro lenses (which, according to the literature, cause all sorts of problems). The non-uniformity only shows in dark scenes with gamma correction applied; the manufacturer says that I should use gamma correction and that this is normal operation. At one time I was able to fix all the non-uniformity problems, but I am redesigning some things to work better, and it seems my new algorithm has a bug somewhere, which is why I have so many problems now. I use the Fujinon CF16HA-1: http://www.fujinon.com/Security/Prod...ry.aspx?cat=47
There's an open standard, GigE Vision; I wonder if it's worth your while taking a look at that, because FireWire, I would think, is going to seriously constrict your options. Most of the cameras I've looked at also have their own drivers and software, to a greater or lesser extent. I have to say, though, that I've not tried developing with any of them yet, so most of what I'm writing is guesswork based on specs and whatever information I can gather. Please take it with a nice pinch of salt, as the real world is often quite different!
http://www.prosilica.com/gigecameras/index.html is one of the many manufacturers that use the KAI-2093 (others include Basler and JAI, for example). At the moment I've not found any CCD-based sensor at 2K with decent frame rates; one of the limitations of CCD is its relatively slow readout compared with CMOS. But then I thought CCDs were much better with regard to FPN? I understand that because each cell on a CMOS sensor is addressable individually, it's much more likely to show variations in amplification, hence the question about the problems you've been experiencing. I thought Cesar Rubio on his site had just plugged a Pike into StreamPix and CineForm and output some pretty decent-looking images. Cheers!
Hello Paul,
None of the camera drivers are for OS X, and you cannot download this "open standard" anywhere. I may switch to GigE at some point, but for now I have got IIDC working. I am afraid I may need to design my own camera eventually. That is also why I think my camera is simply bad, because Cesar gets nice pictures out of these cameras. But then, his nice pictures are not very dark; I also get nice pictures outside without any processing. Cheers, Take
Camera quality varies within a series, even between close serial numbers, so it's always a serious bet that can cost 5,000 or 9,000 euro for the large sensors. You cannot trust the manufacturers to replace the cameras you don't like. In a production environment you need to test a number of cameras in a scientific way, one by one, and only use the ones that meet your image quality requirements. Digital cinema is an application that is well above anything in machine vision in terms of required quality: you need to be sure the sensor will perform, because it will possibly be used in natural light, with lots of gain, and post-processed in extreme ways.
You can't expect a finished image straight out of a camera; it takes work. Rubio's samples use quite a lot of commercial software (a recording app and CineForm), cannot maintain precise frame rates, and have audio sync problems. The lenses are very soft, which hides the image quality problems of the simple debayer, but you can still see there is a lot of chroma aliasing if you zoom in. Generally, the package is expensive for an unfinished solution: there is no user interface, no focus aids, no real usable control. If you do not want a solution in the camcorder form factor, Take's solution will be cheaper and better than anything that can be put together with off-the-shelf software components, because Take actually writes software :) It took Rubio quite a while to realise that the ISO speeds of the bus (ISO 400, ISO 800, etc.) have nothing to do with sensitivity, even though the frame rate halved when the ISO setting was changed.
A JAI Kodak HD GigE is about 9,000 euro with tax and shipping, by the way, and it's practically a good small webcam that outputs uncompressed. There are many technicalities in designing a camera, and lots of software and hardware engineering issues to solve: thousands of man-hours in user interface design, testing, processing algorithms, troubleshooting, etc. It's not as simple as buying a head and using a computer. It would cost as much as a complete, properly engineered solution, and it would still be completely useless in a video production situation due to user interface, image quality, and basic implementation problems such as frame rate and audio sync.
John,
Whilst I agree about the artifacts in most examples, the Elmo Raw 12 looked much nicer; I wonder if StreamPix had fixed some aspects of the CineForm integration by that stage. Also, the DivX of him with his kids is nice in terms of frame-to-frame consistency. The examples, lighting and environments are not ideal for testing, though! I'm not sure what lens he was using; some photos show an f/1.2 50mm Nikon, which would have an angle of view similar to 125mm (I think!) on this sensor, and some of these examples look wider than that. Some of the companies I've spoken to about various cameras imply that some machine vision applications are beyond digital cinematography; it depends on the camera and the supporting hardware behind the sensor. Can't beat creating your own, though (which you're doing). I've found CineForm to work very well for us (Prospect, though), but I have no experience of the RAW version. SI footage looks very good though. Take, I understand the Mac OS X issue now; I hadn't taken that into account. It's been wonderfully enlightening reading your reports. I have no problem with development, but I'll always try to avoid reinventing the wheel. If I can take someone else's wheel and smooth it off a bit, that'd make me happier! Cheers, Paul
Hi John,
The difference between sensors is what I thought the problem would be as well. Anyway, I am designing my system so that even bad sensors would be good enough for digital systems. 6,000 euro was a pretty expensive bet, so I have no choice but to continue what I have started. In a weird sort of way I am lucky I got such a bad sensor; the work I am doing now will be a benefit later on. Cheers, Take
I think SI are using their own debayer algorithms.
The recording app is just moving data; I believe it has nothing to do with image quality in this case. Machine vision applications are designed for processing images in scientific or industrial settings; StreamPix can do practically nothing in that area. There are better packages which work great, but not in video applications; it's not the intended market. The software is not designed for streaming video, so you have to build everything yourself. Anything will look OK if highly compressed to an MPEG-4 variant: the format cannot preserve texture detail, shadow detail is eliminated, noise is reduced because the format cannot code it, etc. The Elmo sample with a sharper lens would look like this (200% zoom): http://img182.imageshack.us/img182/3...12uncomct2.png Probably a lot worse, since some of the aliasing is already filtered out by the ultra-soft lens.
Take a look at these cameras. They cost about 3,600 euro apiece and are 2/3". The images use the manufacturer-provided video recording apps.
http://img168.imageshack.us/img168/5...arison2gx8.jpg Notice the debayer quality problems of the app and the uniformity problems (vertical lines) of the sensors, as well as the difference in sensitivity. Both are using the same expensive sensor. I don't believe someone used to even a $200 consumer camcorder would find this quality acceptable, but it is acceptable in most machine vision applications, and these cameras are quite popular.
Hi John,
I don't have any fancy lights or anything, so I can only use daylight. That particular ColorChecker chart was shot indoors with natural light from outside, on an overcast day with lots of rain. Exposure was 0.02 seconds, and I think the lens was set to its third stop (f/2.4?). Cheers, Take
The usual incandescent indoor lighting and candles are good for testing. You might have noticed we use them a lot :)
Wow John, those images are pretty bad. I do have that striping as well, but horizontally. This makes me feel a little bit more comfortable.
I wonder what they do in consumer cameras. Do they just make sure the sensors produce an acceptable picture, or do they solve it in software?
I noticed your candle pictures. I am planning to support multiple calibration data sets, so that you can use daylight- or tungsten-balanced calibration. You will have to select which calibration data you would like to use in the recording application beforehand, though (or modify the movie file with a hex editor :-).
They process the consumer camcorder output a lot, but the formats themselves save the day: there is not much in the shadows of most compressed video formats, so the problems sit below the format's limits. With uncompressed video there is nothing to hide the flaws.
The test I posted was quite extreme, though: it was lit by my mobile phone screen from a long distance, it is a 200% zoom, and it uses 24 dB of gain :) What would a professional camcorder look like with 24 dB in this situation? Totally black, perhaps. I have tried XDCAM HD at 12 dB and it was already very noisy. We would never sell these camera heads to anyone, of course; there are better cameras out there, and better samples even for these particular cameras.
I was just thinking of low-light situations; I have a starlight ceiling in my bedroom which is hard to see even by eye, so the camera would have no chance unless it was integrating over many frames.
And then I thought: these would be very small dots and would only be seen by a single pixel on the sensor. Say these stars were white, the resulting colour would be whatever colour filter that pixel happens to sit under, which is pretty weird. In my debayer algorithm I wanted to separate the high-frequency components anyway; right now they are added immediately to the colour we are trying to interpolate. I guess I should add the high-frequency component at the end of the debayer (after direction selection) as a white offset instead; see the sketch below. That would maintain white stars on a black background.
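A toy illustration of that white-offset idea, with purely hypothetical names (this is not Take's code): the high-frequency detail extracted from the single CFA sample is added equally to all three channels after the low-frequency interpolation, so a one-pixel white star stays white instead of taking on the colour of the filter it landed on.

```python
def add_white_offset(interpolated_rgb, high_freq):
    """interpolated_rgb: (r, g, b) from the low-frequency debayer result;
    high_freq: detail extracted from the single CFA sample at this site."""
    r, g, b = interpolated_rgb
    # add the detail achromatically instead of folding it into one channel
    return (r + high_freq, g + high_freq, b + high_freq)
```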
John,
I'm pretty sure that SI use CineForm all the way through; I will check with them, though. The CineForm RAW product lightly compresses the sensor data before debayering. Then you can choose a real-time debayer (with real-time grading possibilities) or render a final-quality debayer with the editing system; I believe the final render is bilinear-based. Now, what happens to the sensor data before it is given to CineForm may be the grey area we are talking about: what kind of sensor fixing is needed, hardware-based or software? Perhaps that's the area that Silicon Imaging have put their effort into as well? Is this the key area, hardware-based correction before the data? But there's a chance that some of those images are from the quick-and-dirty debayer, not the final one. You've mentioned in the past that you didn't like CineForm's debayer, so do you have some other samples you've seen? Cheers, Paul
Hi Paul,
A bilinear debayer is very low quality, so I hope they do something a little better. My own debayer is very slow, though; it takes around 5 seconds for a single image. The fast debayer is automatically selected by Final Cut Pro, so you can edit at real-time performance, and when you export from Final Cut Pro it automatically selects the high-quality debayer. Cheers, Take
Take,
Actually, I've since gone back to the images on the Silicon Imaging site and pixel-peeped, and I can quite easily see debayering artifacts (even without sharpening). So maybe it's not that good (the workflow is nice and the overall 'feel' of the images is great). I've had another look at John's images and the debayering is substantially better! Is your custom debayer based on bilinear too? Also, there's a thread about CineForm vs Red's debayering which is worth a read over on reduser.net. Cheers, Paul
Debayer quality is always a trade-off between suitability for post-processing and resolution. The expensive debayers are not very good for post-processing and can have small artifacts in motion; their intelligence is their weak point. The cheap debayers are not very good in terms of resolution and come with artifacts. You have to find a solution somewhere in the middle. The real-time debayers are a challenge of their own: for a production tool like a camera, you cannot have something that needs 50x or 150x real time to debayer. It's not practical.
We are using a debayer we have designed from scratch. Lots of different versions are used in the samples, some with bugs, some without. I have seen lots of images from the SI camera, and I'm sure they have better quality options for final output besides bilinear; I remember reading about that. The CineForm codec uses a basic bilinear debayer, I believe. The SI uses CineForm RAW, but the SI is a complete system with user interface, monitoring, extra processing, fine-tuned to the specific sensor, etc. The CineForm codec is just compression with a low-quality playback preview and medium-quality final output.
Paul, no, my debayer is not based on bilinear interpolation.
My interpolation is a blend of AHD, a posteriori selection, and some ideas of my own. AHD and the a posteriori approach interpolate the green plane twice: once horizontally and once vertically. I add a third pass that interpolates using all surrounding pixels. In all three passes the interpolation also uses the red and blue planes to retrieve the high-frequency component and transplant it into the green plane, which increases the resolution by quite a bit. Next, the red and blue planes are interpolated two or three times based on the green planes we have already reconstructed. When reconstructing blue values at red pixels (or red values at blue pixels) we again use directed interpolation. The red and blue planes are reconstructed by looking at the color difference relative to green; otherwise you get a lot of color aliasing, like in bilinear interpolation. Now we have two or three full-color images, and we select each pixel from one of them depending on the smoothness of the area around it. This is how the zipper artefacts are eliminated; my third image is used to get rid of the maze artefact, which I feel is important for cinema use. Then, for an encore, the resulting image is passed through a couple of median filters that work on the color differences, which reduces the color aliasing even further. Most debayer algorithms work with integer numbers, which can reduce quality by quite a bit and makes color grading difficult. I am doing all these calculations in 32-bit floating point to retain precision, which I hope will make color grading easy.
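A minimal sketch of the directional green step described above, assuming an RGGB mosaic and using a simple gradient test in place of AHD's homogeneity maps. It is not Take's actual code and leaves out the third candidate, the red/blue reconstruction, and the median filtering on color differences.

```python
import numpy as np

def green_candidates(cfa):
    """cfa: 2D float Bayer mosaic (RGGB assumed), values scaled to [0, 1]."""
    h, w = cfa.shape
    g_h = cfa.copy()   # horizontal candidate green plane
    g_v = cfa.copy()   # vertical candidate green plane
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            if (y + x) % 2 == 1:
                continue               # green site: keep the measured value
            c = cfa[y, x]              # red or blue sample at this site
            # horizontal: average of the left/right greens plus a high-frequency
            # correction taken from the same-colour neighbours (the "transplant")
            g_h[y, x] = (cfa[y, x - 1] + cfa[y, x + 1]) / 2 \
                      + (2 * c - cfa[y, x - 2] - cfa[y, x + 2]) / 4
            # vertical: the same idea along the column
            g_v[y, x] = (cfa[y - 1, x] + cfa[y + 1, x]) / 2 \
                      + (2 * c - cfa[y - 2, x] - cfa[y + 2, x]) / 4
    return g_h, g_v

def select_direction(cfa, g_h, g_v):
    """Pick, per pixel, the candidate whose direction is locally smoother
    (a crude stand-in for AHD's homogeneity-based selection)."""
    grad_h = np.abs(np.roll(cfa, 1, axis=1) - np.roll(cfa, -1, axis=1))
    grad_v = np.abs(np.roll(cfa, 1, axis=0) - np.roll(cfa, -1, axis=0))
    return np.where(grad_h <= grad_v, g_h, g_v)
```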
We use synthetic images to test the debayer, but mainly heavily downsampled DSLR images (using a custom filter to prevent aliasing), which results in very smooth and sharp 4:4:4 images. We then remove the color content that cannot be encoded by the debayer, send the images through the debayer, and compare the result to the original 4:4:4. It is a much better test than actual images from the sensor, because the MTF is extremely high and any aliasing and image quality issues show up.
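A rough sketch of this kind of round-trip test, not the actual tool: a clean full-color reference is sampled through an RGGB pattern, pushed through a debayer under test, and scored by PSNR against the reference. The `debayer` and image-loading calls are placeholders, and data are assumed to be scaled to [0, 1].

```python
import numpy as np

def mosaic_rggb(rgb):
    """rgb: (h, w, 3) float array -> single-plane RGGB Bayer mosaic."""
    h, w, _ = rgb.shape
    cfa = np.empty((h, w), dtype=np.float32)
    cfa[0::2, 0::2] = rgb[0::2, 0::2, 0]   # R
    cfa[0::2, 1::2] = rgb[0::2, 1::2, 1]   # G
    cfa[1::2, 0::2] = rgb[1::2, 0::2, 1]   # G
    cfa[1::2, 1::2] = rgb[1::2, 1::2, 2]   # B
    return cfa

def psnr(reference, result):
    mse = np.mean((reference - result) ** 2)
    return 10 * np.log10(1.0 / mse)        # assumes data scaled to [0, 1]

# usage sketch:
# reference = load_downsampled_dslr_image()   # clean 4:4:4, very high MTF
# result = debayer(mosaic_rggb(reference))    # debayer under test
# print(psnr(reference, result))
```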
John, Paul asked me if my algorithm is bilinear.
CineForm provides better demosaic options for final output.
Would you say the users actually prefer bilinear because of the cost of other methods? By CineForm, do you mean the generic codec or the one you use?
This may seem like a silly question; things such as debayering are over my head. But if it's a problem to get a high-quality debayer in real time, why not just do it in post on a high-end system?
Wouldn't it make sense to record in the highest possible quality format and debayer it later with a more complex algorithm? You'd also have it available for the future as better, more efficient, and sharper algorithms become available.
I'm not a pixel peeper, but in those you can clearly see artifacts in the eye highlights and other places with sharp transitions, without zooming. Thanks, Paul
This is some interesting DSP. We removed the bilinear debayer effects and did a debayer and lens correction from scratch. I didn't code the processing, but I find it amusing that you can get from A to B :)
http://img255.imageshack.us/img255/6387/rebuildaq5.jpg
What do you think of the Red debayer? It seems very naturalistic. Cheers, Paul
I found that I did some stupid things, like using a power function instead of the exponential function for the X-axis of my per-pixel LUT, as I used to do. Now I will try cubic interpolation to estimate the values better, and use adaptive pixel repair.
So I have some work to do.
Since you've been posting, your comments have made me rethink CCD, although I'm not *totally* convinced or converted yet :) because I only see CCD examples, with no like-for-like CMOS comparison to quantify the differences visually. In broad terms, with generic CMOS/CCD sensors I can see where you're coming from, but I'm interested in specific sensors, the Altasens ProCamHD vs the Kodak KAI-2093 for example. On paper the Altasens is actually more sensitive across a broader range of light, and CMOS, as I understand it, generally outputs a cleaner signal (less hardware is needed around the sensor because a lot of it is on-chip). CCDs bloom, and CMOS sensors usually have a rolling shutter (mitigated by making sure the sensor is running fast enough). CMOS frame rates are higher, and that is a valid narrative requirement sometimes, especially for model shots and so on. Have you found *any* CMOS that you're impressed with? Aside from rolling shutter, what about the other aspects? Paul
Well, the producer is usually not an engineer or even the photographer, so I guess every single one of them will want the Red because of the higher resolution even if that's not required by the project.
Most people involved with advertising or special effects will also prefer the Red, because the higher chroma resolution gives more options and you can use as much light as you want on those shoots. They will only need CCD if there is lots of real camera motion (not synthetic). Where CCD will always be better is in very active camera work, action shots, natural-light cinematography, etc. On a good CCD outputting uncompressed there is natural CCD noise that can be used creatively. You normally do not have to use gain, so there is no grain, but if you like it, it looks great and has excellent statistical properties that are very close to high-sensitivity film. On a compressed format or a CMOS sensor you would not want to do that.

On the other hand, not everyone has the same idea about what an image should look like. These days everything has too much grading, and an artificial image is very common and even considered cool by some people. I personally think film is still the reference for color quality, and this is what we are after. If the user is after desaturated, metallic, green/blue-looking science fiction looks, any camera will be equally good, I guess. But some people want realism out of the camera, and most people interested in the camera so far are involved in pseudo-documentaries, drama, comedy and horror. Which is nice, since that was the intended market for this camera. We also have interest from film people who have never used a digital camera, which made us extremely happy, I admit.

When comparing sensors of the same resolution, the CCD is usually superior in every image quality aspect except smear performance. You can expect the Kodak 2093 to significantly outperform any CMOS at 2K, including the SI and the Red in 2K crop. To answer your question: in a direct comparison, the CCD will have higher sensitivity, an excellent statistical character to its noise, higher saturation, better motion quality, more realism and a far more impressive image. In low-light situations the CMOS will quickly deteriorate to a flat, lifeless image. Every single CMOS I have seen so far has these characteristics. I have first-hand experience with many CMOS sensors, including the Altasens 1080p, but not with the Red sensor. We would only use CMOS if the reduction in cost was extremely important for the complete package. With an Altasens it can be very significant, so there is always room for CMOS, even in our product.

The issue of depth of field has many sides. f/1.4 is cheap on the 2/3" and 1" sensors, but how much does it cost on the full-frame Red? SLR lenses at that speed (beyond 50mm) do not exist, and film lenses are extremely expensive. You settle for f/2.8 or slower, and there you go: there is no DOF advantage and you also get an enormous loss of light. There are many 35mm shooters who prefer to stay above f/4 because of DOF limitations and consider working at larger apertures a problem. They will be very satisfied with a sensor like the Kodak that has much more sensitivity for the same DOF. 2/3" and 1" have good shallow-DOF capabilities. If a user comes from a 1/3" or 1/4" f/1.6 or f/1.8 camcorder he might be starved for some DOF flexibility, but Super 35mm is probably too much for most applications. Personally, I find 2/3" DOF annoying sometimes at large apertures because it is too shallow. CCDs are not made by startups or companies that were created yesterday; they are made by Sony and Kodak, companies that pioneered and have dominated the imaging (even film) market for decades.
There is some solid engineering behind these sensors, and it is obvious in their output. Why is our Sony 2/3" CCD sensor used in camera heads that cost 20,000 euro and output a frame every 10 seconds? Why is it considered the highest-quality low-light CCD for scientific applications if CMOS actually had a chance to compete at a fraction of the cost? Why do all serious microscopy cameras come with that sensor? Why does NASA choose CCDs for all space-based inspection instead of the affordable CMOS sensors? You would expect the engineers behind these solutions to have done their homework, to be able to read beyond the CMOS-related marketing, and to be free from cost restrictions of a few hundred dollars. And you would be right :)

I only see CMOS in a few places on the market: mobile phones, consumer camcorders, and a couple of digital cinema cameras. Why are Sony and 95% of other companies still using the expensive CCD sensors in even the cheapest still cameras if CMOS is up to the task? They don't want to save money or improve quality? CMOS is used in DSLRs now, but there is no camera motion involved in those, and the system can afford to do quite a lot of processing on the CMOS output. These cameras (body only) do not make any real profit; the companies survive on selling aftermarket lenses for their system, and there is obviously pressure to reduce cost and, why not, increase resolution on the side. The expensive medium format camera backs still use CCDs, just like any serious camcorder and digital cinema camera from the big manufacturers.

Red are very smart. I can get an Altasens implementation and do a direct comparison with a CCD; we have evaluated an Altasens head and are in the process of evaluating another one. But since Red claim the sensor is not available to anyone else, they can claim anything, and we would have to spend 20,000 to buy a Red One camera and wait many months to get it in order to compare directly. We would never be trusted when publishing such results, so there is not much we can do except wait for the users to discover the quality differences in actual use. Which will be hard. The Red users thought the NoX samples looked bad, but they look excellent IMHO. We have a DVX user on the team and, damn, it is like questioning one's religion sometimes when comparing anything to the DVX!
John,
I just wanted to say thank you for such a comprehensive, thought-out reply! I'd like to add that you talk about having real chroma and luma, and the nice thing about a Bayered 2K or 4K is the extra chroma information by the time it's downsampled to 2K, although I know your debayer is exceptionally good. You make a good point about the nature of 'uncompressed' noise. Can smear be 'fixed' or minimised in CCD sensor design by good supporting electronics and hardware? The point about lenses is important, especially with this sensor; I'm finding it quite difficult to locate glass that has a big enough image circle. Even the Zeiss Superspeeds and S16mm Cookes don't look like they'll cover it. There's a bit of a void here in the market, save for some machine vision lenses (like the Fujinons): SLR lenses are too big and the masses of 2/3" lenses are too small. Have you found some quality lens manufacturers?

>We have evaluated an altasens head and are in the process of evaluating another one.

I don't suppose you care to mention what you have tested and your thoughts? I suspect you're in a unique position of really being able to test and understand these heads. Thanks again, Paul
I've worked on my non-uniformity calibration algorithm and here is the result:
http://www.vosgames.nl/images/Mirage.../fr_cal_bp.png

I first had to find the best values for the x-axis of my per-pixel LUTs; as the errors are exponential on an exponential system (or is that logarithmic on a logarithmic system?), I had to choose the x-axis accordingly. Then I added a bad pixel detector. A pixel is marked as bad when it is non-uniform (after uniformity restoration) by more than 6% from the average (a threshold taken from a Kodak white paper). I do this for each white field that has been taken, and mark the bad pixel with the brightest value at which it is off by more than 6%. During rendering, the pixels are first made uniform, then each pixel value is compared against its own bad-pixel brightness value. If the pixel is bad, it is interpolated from neighbouring pixels that are good; if no good neighbouring pixel is found, an average is taken from all the neighbours (I guess I could do this using weights). There is also a bad line somewhere near the top of the image that goes all the way from left to right. Weirdly, this line does not show up in the white fields, otherwise the bad pixel detector would have detected it (I also checked it visually). I guess I will have to find a way to add pixels and lines to the bad pixel map manually. The next step is getting color conversion to work; I am thinking of using a 3D LUT for color-space conversion instead of a color conversion matrix. I am not sure how to implement one, but I guess I will find out.
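A simplified sketch of the flat-field and bad-pixel logic described above, not the actual implementation: it assumes a stack of white-field exposures, a single per-pixel gain instead of a full per-pixel LUT, and the 6% deviation threshold mentioned from the Kodak white paper.

```python
import numpy as np

def build_calibration(white_fields):
    """white_fields: list of 2D float arrays, each a defocused white exposure."""
    ref = white_fields[-1]                        # brightest field as reference
    gain = ref.mean() / np.maximum(ref, 1e-6)     # per-pixel gain that flattens it
    bad = np.zeros(ref.shape, dtype=bool)
    for field in white_fields:
        corrected = field * gain
        # mark a pixel bad if, even after correction, it deviates from the
        # field average by more than 6%
        bad |= np.abs(corrected - corrected.mean()) > 0.06 * corrected.mean()
    return gain, bad

def apply_calibration(frame, gain, bad):
    out = frame * gain                            # make the pixels uniform
    h, w = out.shape
    for y, x in zip(*np.nonzero(bad)):
        y0, y1 = max(y - 1, 0), min(y + 2, h)
        x0, x1 = max(x - 1, 0), min(x + 2, w)
        window = out[y0:y1, x0:x1]
        mask = ~bad[y0:y1, x0:x1]
        mask[y - y0, x - x0] = False              # never use the bad pixel itself
        if not mask.any():                        # no good neighbours at all:
            mask = np.ones_like(mask)             # fall back to all neighbours
            mask[y - y0, x - x0] = False
        out[y, x] = window[mask].mean()
    return out
```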
Hey Take,
3D LUTs are not hard per se; it just depends on where you need to apply one. The GPU will give you a 3D LUT for "free", at least sort of, meaning linear interpolation is typically a part of any GPU's architecture (i.e., it can linearly interpolate a texture map), so you can use a volumetric texture and the GPU will interpolate it trilinearly to get all the values you need out of the 3D LUT. If you have to rely on the CPU for the 3D LUT application, then trilinear gets a bit cumbersome since there are a lot of operations (you need a total of 8 points to construct the interpolated point). Tetrahedral interpolation can be a lot easier since you're now only dealing with the 4 points necessary for interpolation. Since you're not doing a scattered mesh, you don't need to worry about stuff like Delaunay triangulations, i.e., your 3D LUT should be a regular mesh of evenly spaced points, so you can simply split each cube into six tetrahedra using a single diagonal of the cube. In fact, it's probably a lot simpler than that, i.e., with an evenly spaced mesh, you can just pick the 4 closest points and make a tetrahedron out of them, but I think you will need to make sure that no point can be addressed by two separate tetrahedra (i.e., if you don't pick an axis of the cube for creating the tetrahedra, then there is a situation where one point can become assigned to one set of 4 points, another point right next to it might get assigned to another set of 4 points, and then the point right next to that will be assigned back to the first tetrahedron, so now you have this odd cross-over situation. Depending on the precision of the math used, this could cause interpolation inconsistencies. Theoretically it wouldn't, but in the real world it might). Thanks, Jason
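A minimal CPU sketch of the trilinear lookup Jason describes (the GPU does this "for free" with a volumetric texture). The LUT is assumed to be a regular N x N x N x 3 grid covering RGB in [0, 1]; the names are illustrative, not from any particular implementation.

```python
import numpy as np

def apply_3d_lut(rgb, lut):
    """rgb: (..., 3) floats in [0, 1]; lut: (N, N, N, 3) regular grid."""
    n = lut.shape[0]
    pos = np.clip(rgb, 0.0, 1.0) * (n - 1)
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    f = pos - lo                                # fractional position inside the cell
    out = np.zeros_like(rgb, dtype=np.float64)
    # blend the 8 corners of the surrounding cube (trilinear interpolation)
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                corner = lut[np.where(dr, hi[..., 0], lo[..., 0]),
                             np.where(dg, hi[..., 1], lo[..., 1]),
                             np.where(db, hi[..., 2], lo[..., 2])]
                weight = (np.where(dr, f[..., 0], 1 - f[..., 0]) *
                          np.where(dg, f[..., 1], 1 - f[..., 1]) *
                          np.where(db, f[..., 2], 1 - f[..., 2]))
                out += weight[..., None] * corner
    return out
```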
Hello Jason,
Thanks for the information. I started and stopped with trilinear interpolation; it was too messy. I was actually doing a scattered mesh, i.e. only the colors from the ColorChecker would be in the LUT. And I think I had it working quite well, until I needed to do extrapolation, and then it became extremely weird. So instead I am trying to find a 3x3 matrix for color conversion. I am using Gauss-Jordan elimination to find the matrix. I will repeat that for each combination of three ColorChecker colors, then take the median of all the results, and that will be my correction matrix. Cheers, Take Vos
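A rough sketch of that search, with np.linalg.solve standing in for the Gauss-Jordan elimination Take mentions and placeholder patch data: for every combination of three ColorChecker patches, solve the 3x3 system that maps the measured camera RGB to the reference RGB exactly, then take the element-wise median over all combinations.

```python
import numpy as np
from itertools import combinations

def estimate_matrix(camera_rgb, reference_rgb):
    """camera_rgb, reference_rgb: (24, 3) arrays of patch colors."""
    matrices = []
    for i, j, k in combinations(range(len(camera_rgb)), 3):
        cam = camera_rgb[[i, j, k]]             # 3x3, one patch per row
        ref = reference_rgb[[i, j, k]]
        try:
            # solve for M such that cam @ M.T == ref for these three patches
            m = np.linalg.solve(cam, ref).T
        except np.linalg.LinAlgError:           # degenerate patch combination
            continue
        matrices.append(m)
    return np.median(matrices, axis=0)          # element-wise median matrix
```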
Jason asked me for the exact steps used in creating the image fr_cal_bp.png, so here they are.
+ Read footage into Final Cut Pro
  - Camera Bayer 12-bit linear (already black-corrected, with a small offset to handle negative values)
  - Apply per-pixel uniformity correction (also linearizes each color channel)
  - Fix bad pixels
  - Debayer using a directional algorithm
  - <--------- Here is where the color conversion will be
  - Add rec709 gamma correction
  - Convert to YUV using the rec709 YUV conversion
+ Final Cut Pro with "3 way color correction filter", with neutral settings, just to force high dynamic range rendering when previewing (not needed for normal export)
+ Export to my own intermediate codec from Final Cut Pro
  - Convert to RGB with the rec709 YUV conversion
  - Remove rec709 gamma correction
  - Saved as 16-bit float linear RGB
+ Read back into Final Cut Pro
  - Add Apple native gamma correction (1.8) (because it is exported to .png)
  - Image is now in 24-bit RGB (because it is exported to .png)
+ Save as .png from Final Cut Pro

Just to reiterate, the "3 way color correction filter" is not doing anything to the image; it is just there to force high-quality rendering for debugging purposes.
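For reference, a small sketch of the two Rec.709 steps named in the list above (the transfer function and the RGB to Y'CbCr matrix); the rest of the pipeline is specific to Take's plug-ins and Final Cut Pro and is not reproduced here.

```python
import numpy as np

def rec709_gamma(linear):
    """Rec.709 transfer function ("gamma correction") for linear RGB in [0, 1]."""
    linear = np.asarray(linear, dtype=np.float64)
    return np.where(linear < 0.018,
                    4.5 * linear,
                    1.099 * np.power(linear, 0.45) - 0.099)

def rec709_rgb_to_yuv(rgb):
    """Non-linear R'G'B' to Y'CbCr using the Rec.709 luma coefficients."""
    m = np.array([[ 0.2126,  0.7152,  0.0722],
                  [-0.1146, -0.3854,  0.5000],
                  [ 0.5000, -0.4542, -0.0458]])
    return rgb @ m.T
```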