January 4th, 2008, 01:16 PM | #91 |
Major Player
Join Date: Mar 2007
Location: Amsterdam The Netherlands
Posts: 200
|
Hi John,
Yes, if the manufacturer releases firmware that does the black level correction the way I'd like, I could actually use all 940 lines for the image; right now 140 of those lines are covered by a piece of aluminium just to get a couple of black reference lines.
Cheers, Take |
January 4th, 2008, 01:18 PM | #92 |
Major Player
Join Date: Mar 2007
Location: Amsterdam The Netherlands
Posts: 200
|
Right now, I include a shoot-and-protect square for 16:9 720p in the 1920 x 800 frame preview. Scaling would be better but, as I said in the previous post, those other 140 lines are unusable.
|
January 6th, 2008, 12:50 PM | #93 |
Major Player
Join Date: Mar 2007
Location: Amsterdam The Netherlands
Posts: 200
|
Hi,
So I have started working on the calibration utility. I will share some statistics from before and after. First I point the camera at a white piece of paper, over-exposing it at 80 ms; Boom Recorder then creates a movie in which it automatically reduces the exposure time and records the results. The calibration program reads this movie and averages multiple images at each exposure time.

The results are below, split per color component. Between parentheses are: the average pixel intensity, the standard deviation of the pixel intensities (spatial noise, i.e. pixel non-uniformity), and the average deviation of each pixel over multiple images at the same exposure time (temporal noise, for use in ISO calculations). The values are in stopFS (FS stands for full scale), so -1.0 stopFS means 50% exposure and -2.0 stopFS means 25% exposure.

black movie, 0.0 ms: red (-10.7,-12.1,-10.1) stopFS; green (-10.7,-12.1,-10.1) stopFS; blue (-10.7,-12.0,-10.1) stopFS.
white movie, 0.1 ms: red (-8.1,-9.4,-9.4) stopFS; green (-7.7,-8.9,-9.3) stopFS; blue (-7.5,-9.1,-9.2) stopFS.
white movie, 0.2 ms: red (-7.8,-9.3,-9.3) stopFS; green (-7.3,-8.7,-9.1) stopFS; blue (-7.0,-9.0,-8.9) stopFS.
white movie, 0.3 ms: red (-7.4,-9.0,-9.2) stopFS; green (-6.7,-8.6,-8.9) stopFS; blue (-6.4,-8.8,-8.7) stopFS.
white movie, 0.4 ms: red (-7.0,-8.8,-9.2) stopFS; green (-6.2,-8.4,-8.8) stopFS; blue (-5.8,-8.5,-8.5) stopFS.
white movie, 0.5 ms: red (-6.6,-8.7,-9.0) stopFS; green (-5.7,-8.3,-8.4) stopFS; blue (-5.3,-8.3,-8.1) stopFS.
white movie, 0.7 ms: red (-6.2,-8.6,-8.9) stopFS; green (-5.2,-8.1,-8.2) stopFS; blue (-4.7,-8.1,-7.8) stopFS.
white movie, 1.0 ms: red (-5.7,-8.4,-8.6) stopFS; green (-4.7,-7.9,-7.8) stopFS; blue (-4.3,-7.9,-7.4) stopFS.
white movie, 1.4 ms: red (-5.3,-8.1,-8.2) stopFS; green (-4.2,-7.8,-7.5) stopFS; blue (-3.8,-7.6,-7.0) stopFS.
white movie, 2.0 ms: red (-4.8,-8.0,-8.0) stopFS; green (-3.8,-7.4,-6.9) stopFS; blue (-3.3,-7.2,-6.5) stopFS.
white movie, 2.8 ms: red (-4.3,-7.8,-7.6) stopFS; green (-3.3,-7.0,-6.5) stopFS; blue (-2.8,-6.8,-6.1) stopFS.
white movie, 3.9 ms: red (-3.9,-7.5,-7.1) stopFS; green (-2.8,-6.6,-6.0) stopFS; blue (-2.4,-6.3,-5.6) stopFS.
white movie, 5.5 ms: red (-3.4,-7.1,-6.7) stopFS; green (-2.3,-6.2,-5.6) stopFS; blue (-1.9,-5.9,-5.1) stopFS.
white movie, 7.6 ms: red (-3.0,-6.7,-6.2) stopFS; green (-1.9,-5.7,-5.1) stopFS; blue (-1.4,-5.4,-4.6) stopFS.
white movie, 10.7 ms: red (-2.5,-6.3,-5.7) stopFS; green (-1.4,-5.2,-4.6) stopFS; blue (-0.9,-4.9,-4.2) stopFS.
white movie, 14.9 ms: red (-2.0,-5.9,-5.3) stopFS; green (-0.9,-4.8,-4.2) stopFS; blue (-0.5,-4.5,-3.8) stopFS.
white movie, 20.8 ms: red (-1.5,-5.4,-4.7) stopFS; green (-0.4,-4.4,-3.9) stopFS; blue (-0.0,-4.6,-6.7) stopFS.
white movie, 29.0 ms: red (-1.0,-5.0,-3.2) stopFS; green (-0.1,-4.5,-4.9) stopFS; blue (-0.0,-8.0,-11.5) stopFS.
white movie, 40.5 ms: red (-0.0,-5.9,-11.9) stopFS; green (-0.3,-2.6,-5.4) stopFS; blue (-0.0,-8.5,-11.7) stopFS.
white movie, 56.6 ms: red (-0.2,-4.3,-5.6) stopFS; green (-0.0,-7.6,-11.4) stopFS; blue (-0.0,-8.6,-11.2) stopFS.
white movie, 79.0 ms: red (-0.0,-8.5,-12.3) stopFS; green (-0.0,-7.6,-12.5) stopFS; blue (-0.0,-8.9,-12.5) stopFS.

From these measurements a PPLUT (Per Pixel Look Up Table) with 8 values is calculated; this PPLUT is then applied to the above images to linearize the pixel values and reduce the spatial noise. As you can see, I gain around 2 stops of spatial noise reduction. Of course, when the pixels are over-exposed the spatial noise increases.
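For illustration, the three per-component statistics could be computed along these lines (a minimal numpy sketch, not the actual Boom Recorder or calibration code; the frame-stack layout and the full_scale parameter are my own assumptions):

import numpy as np

def stats_stopfs(frames, full_scale):
    # frames: (num_frames, height, width) raw values of one color component
    # at a single exposure time; full_scale: the maximum possible raw value.
    per_pixel_mean = frames.mean(axis=0)   # average over the repeated frames
    per_pixel_std = frames.std(axis=0)     # frame-to-frame variation of each pixel

    eps = 1e-12                            # avoid log2(0) on a pure black frame
    def to_stopfs(x):
        return float(np.log2(max(x, eps) / full_scale))

    average = to_stopfs(per_pixel_mean.mean())         # average pixel intensity
    spatial_noise = to_stopfs(per_pixel_mean.std())    # pixel non-uniformity
    temporal_noise = to_stopfs(per_pixel_std.mean())   # temporal noise, for ISO
    return average, spatial_noise, temporal_noise

With full_scale set to the sensor's maximum raw value, -1.0 stopFS comes out as exactly 50% exposure, matching the convention used in the table above.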
I have found that this algorithm makes pixels that would normally be counted as bad (too hot or too cold) usable again. The bad pixels that cannot be rescued can now be detected and restored with a simple average of other pixels. It would be cool to do the bad-pixel fixing inside the debayer algorithm, so that a pixel can be fixed horizontally as well as vertically and the debayer algorithm can figure out which is best.

calib movie, 0.0 ms: red (nan,-10.7,-inf) stopFS; green (-15.8,-8.6,-inf) stopFS; blue (-15.5,-11.7,-inf) stopFS.
calib movie, 0.1 ms: red (-8.6,-10.3,-inf) stopFS; green (-7.8,-8.5,-inf) stopFS; blue (-7.3,-10.1,-inf) stopFS.
calib movie, 0.2 ms: red (-8.3,-10.5,-inf) stopFS; green (-7.4,-9.6,-inf) stopFS; blue (-6.9,-10.3,-inf) stopFS.
calib movie, 0.3 ms: red (-7.8,-10.4,-inf) stopFS; green (-6.7,-10.2,-inf) stopFS; blue (-6.2,-10.8,-inf) stopFS.
calib movie, 0.4 ms: red (-7.2,-10.3,-inf) stopFS; green (-6.2,-10.9,-inf) stopFS; blue (-5.7,-10.5,-inf) stopFS.
calib movie, 0.5 ms: red (-6.8,-10.5,-inf) stopFS; green (-5.7,-10.5,-inf) stopFS; blue (-5.2,-9.8,-inf) stopFS.
calib movie, 0.7 ms: red (-6.3,-10.5,-inf) stopFS; green (-5.2,-9.9,-inf) stopFS; blue (-4.7,-10.4,-inf) stopFS.
calib movie, 1.0 ms: red (-5.8,-10.3,-inf) stopFS; green (-4.8,-10.3,-inf) stopFS; blue (-4.3,-11.8,-inf) stopFS.
calib movie, 1.4 ms: red (-5.3,-9.9,-inf) stopFS; green (-4.3,-11.2,-inf) stopFS; blue (-3.8,-10.5,-inf) stopFS.
calib movie, 2.0 ms: red (-4.8,-10.3,-inf) stopFS; green (-3.8,-10.1,-inf) stopFS; blue (-3.3,-10.5,-inf) stopFS.
calib movie, 2.8 ms: red (-4.4,-11.7,-inf) stopFS; green (-3.3,-10.2,-inf) stopFS; blue (-2.8,-11.1,-inf) stopFS.
calib movie, 3.9 ms: red (-3.9,-10.5,-inf) stopFS; green (-2.8,-11.8,-inf) stopFS; blue (-2.4,-10.0,-inf) stopFS.
calib movie, 5.5 ms: red (-3.4,-10.3,-inf) stopFS; green (-2.3,-9.9,-inf) stopFS; blue (-1.9,-10.4,-inf) stopFS.
calib movie, 7.6 ms: red (-2.9,-11.4,-inf) stopFS; green (-1.9,-8.9,-inf) stopFS; blue (-1.4,-9.5,-inf) stopFS.
calib movie, 10.7 ms: red (-2.4,-10.2,-inf) stopFS; green (-1.4,-9.5,-inf) stopFS; blue (-0.9,-8.8,-inf) stopFS.
calib movie, 14.9 ms: red (-2.0,-10.1,-inf) stopFS; green (-0.9,-8.7,-inf) stopFS; blue (-0.4,-8.3,-inf) stopFS.
calib movie, 20.8 ms: red (-1.5,-9.4,-inf) stopFS; green (-0.4,-7.0,-inf) stopFS; blue (-0.0,-5.1,-inf) stopFS.
calib movie, 29.0 ms: red (-1.0,-7.8,-inf) stopFS; green (-0.0,-4.5,-inf) stopFS; blue (0.0,-3.7,-inf) stopFS.
calib movie, 40.5 ms: red (0.1,-3.4,-inf) stopFS; green (-0.2,-2.6,-inf) stopFS; blue (0.0,-3.7,-inf) stopFS.
calib movie, 56.6 ms: red (-0.1,-4.8,-inf) stopFS; green (0.0,-3.4,-inf) stopFS; blue (0.0,-3.7,-inf) stopFS.
calib movie, 79.0 ms: red (0.1,-3.4,-inf) stopFS; green (0.0,-3.4,-inf) stopFS; blue (0.0,-3.7,-inf) stopFS. |
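To illustrate the simple-average rescue mentioned above, here is a rough numpy sketch (my own illustration, not the actual code; the bad-pixel mask and the choice of four same-color neighbours two photosites away are assumptions):

import numpy as np

def repair_bad_pixels(raw, bad_mask):
    # raw: 2-D Bayer mosaic; bad_mask: boolean map of pixels flagged as
    # unrecoverable. Each bad pixel is replaced by the average of its four
    # same-color neighbours, which sit two photosites away on a Bayer pattern.
    fixed = raw.astype(np.float64)              # work on a floating-point copy
    padded = np.pad(fixed, 2, mode='reflect')
    h, w = raw.shape
    up = padded[0:h, 2:w + 2]
    down = padded[4:h + 4, 2:w + 2]
    left = padded[2:h + 2, 0:w]
    right = padded[2:h + 2, 4:w + 4]
    neighbour_average = (up + down + left + right) / 4.0
    fixed[bad_mask] = neighbour_average[bad_mask]
    return fixed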
January 6th, 2008, 12:53 PM | #94 |
Major Player
Join Date: Nov 2007
Location: Athens Greece
Posts: 336
|
A before and an after image would be more useful:)
|
January 6th, 2008, 01:08 PM | #95 |
Major Player
Join Date: Mar 2007
Location: Amsterdam The Netherlands
Posts: 200
|
Hi John,
I guess it would, but I don't have them yet; I first need to improve my calibration program a bit. I have already made it a little better by moving my calibration point two stops down. I also want to calculate the color correction matrix/LUT from the ColorChecker image. I learned a new term yesterday, "daylight balanced film stock"; I guess this calibration of the camera would be pretty much like that. It should also be possible to calibrate the camera for tungsten, but you cannot really mix multiple calibrations.
Cheers, Take |
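For what it's worth, fitting a 3x3 color correction matrix from a ColorChecker shot could look roughly like this (only a sketch under my own assumptions; the measured and reference patch arrays are hypothetical inputs, not something the calibration program contains yet):

import numpy as np

def fit_color_matrix(measured, reference):
    # measured:  (24, 3) linearized camera RGB averages of each ColorChecker patch
    # reference: (24, 3) target RGB values of the chart for the chosen light source
    # Least-squares fit of a 3x3 matrix M so that measured @ M ~= reference.
    M, _, _, _ = np.linalg.lstsq(measured, reference, rcond=None)
    return M

# Later the matrix is applied as: corrected_rgb = camera_rgb @ M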
January 10th, 2008, 05:38 PM | #96 |
Major Player
Join Date: Mar 2007
Location: Amsterdam The Netherlands
Posts: 200
|
I've spoken with the manufacturer and they told me that it is probably a ground loop problem caused by the firewire cable, something about the chassis not being connected to the shield of the cable.
Although the computer was running on batteries and not connected to ground at the time, you can still get a "ground loop"-like effect where the firewire cable acts as an antenna, especially because my firewire cable is rather long. I guess running the camera on fiber with its own power supply will eliminate this problem, so I'm looking forward to testing it that way. The next step on the agenda is having the codec read the calibration data; then I can make a couple of comparison pictures.
Cheers, Take |
January 11th, 2008, 07:56 AM | #97 |
Major Player
Join Date: Nov 2007
Location: Athens Greece
Posts: 336
|
I remember that the Pike has a shading correction feature. Doesn't that handle the pattern noise problems?
In audio installations you usually disconnect the shield to break the loop, but I don't know much about electronics. Almost all firewire cameras I have seen have horizontal lines deep in the shadows, but you can't reveal them just by adding gain; you need very high gain values and extreme gamma settings. |
January 11th, 2008, 08:31 AM | #98 |
Major Player
Join Date: Mar 2007
Location: Amsterdam The Netherlands
Posts: 200
|
Hello John,
The standard way of reducing pattern noise is to take a dark and a bright image (each usually an average of many frames to suppress the random noise). Basically you then do a linear interpolation per pixel between the two points captured by the dark and bright images. However, this doesn't seem to work on my camera, at least not well enough for my taste (maybe the camera is broken, but the manufacturer says this is normal when you use gamma correction). What happens is that at low light levels the pixels start to behave non-linearly, and each pixel has a different non-linear curve. When it gets brighter the curve becomes linear again, but because of that first part of the curve the black image cannot properly be used to calculate the offset. Either the non-linearity of the curve is normal and caused by non-linear semiconductor effects on a per-pixel basis, or it is caused by the microlenses, which may introduce non-linear effects at such scales.

Instead of interpolating along a straight line using only a black and a white image, my algorithm interpolates along a curve measured from multiple gray images and a black image, and thus follows the non-linear part of the curve at dark levels. The algorithm works extremely well; even hot and cold pixels become usable again (as long as there is a little bit of life left in them).

As for ground loop problems: these occur when two pieces of equipment are connected together and there is a voltage difference between their grounds. This causes a current to flow between the two, which is normally not a problem and is in fact needed to eliminate the voltage difference. The actual noise problem arises when that current travels through the ground wire of a signal pair. Sound engineers used to break the shield on the cables to stop current travelling over the shield. Now that the actual cause of the problem is understood, well-educated sound engineers instead make proper ground connections between the pieces of equipment, using heavy-gauge wire laid out in a star pattern, eliminating any voltage difference between the equipment. This also makes everything a little bit safer, so fewer entertainers get electrocuted by their microphones and guitars (phantom power, which is a fairly high voltage, is sent over the microphone cables to feed the amplifiers inside the microphones).
Cheers, Take |
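To make the difference with the two-point method concrete, here is a minimal sketch of the per-pixel curve interpolation idea (my own illustration, not the actual implementation; the measured and target level arrays are assumed to come from the black and gray calibration movies):

import numpy as np

def linearize_pixel(raw_value, measured_levels, target_levels):
    # measured_levels: the raw values this particular pixel produced at the
    #                  black and gray calibration exposures (ascending order)
    # target_levels:   the linear values those exposures should have produced
    # Piecewise-linear interpolation along the pixel's own measured curve,
    # so the non-linear behaviour near black is followed as well.
    return np.interp(raw_value, measured_levels, target_levels)

# The classic two-point (black/white) correction is the special case where
# measured_levels and target_levels each contain only two entries,
# i.e. a single gain and offset per pixel.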
January 11th, 2008, 08:43 AM | #99 |
Major Player
Join Date: Mar 2007
Location: Amsterdam The Netherlands
Posts: 200
|
Oh, about the non-linear curve of the pixels:
Astronomers used to pre-flash a CCD before taking an image: when they start the integration they turn on a light in the telescope that gives off a known amount of light, filling the light buckets until they are no longer in the non-linear region. A dark frame is then taken with the same pre-flash and subtracted from photographs that are also pre-flashed, thus compensating for the pre-flash. I could have done the same, but that would have taken some work in mechanics, optics and electronics. Instead I've chosen to make a better algorithm.
Cheers, Take |
January 13th, 2008, 05:32 PM | #100 |
Major Player
Join Date: Mar 2007
Location: Amsterdam The Netherlands
Posts: 200
|
Hello everyone,
Here are some pictures, exported as PNG after rendering in Final Cut Pro.
The first one is without non-uniformity calibration data: http://www.vosgames.nl/images/Mirage...alibration.png
The second one is with non-uniformity calibration data: http://www.vosgames.nl/images/Mirage...calibrated.png
To see the difference, look at the black swatch on the color tester: in the first image there are some wiggly lines which are absent in the second image. The images are demosaiced using my own debayer algorithm, which preserves more noise than, for example, AHD. If you zoom in you will see short horizontal and vertical lines caused by the directional interpolation; this happens in other algorithms as well. I've made a modification to my algorithm that eliminates these short lines completely, but it makes the image slightly softer. I think I will need to teach my algorithm the difference between noise and lines. |
January 14th, 2008, 06:15 AM | #101 |
Major Player
Join Date: Nov 2007
Location: Athens Greece
Posts: 336
|
What gain is this frame?
I think the uncalibrated one looks more natural; the noise looks more random. I assume both noise types are static, not changing from frame to frame. The bayer artifact should fluctuate more than the sensor artifact because it depends on the frame content. I remember an old comparison using eAHD (I think) that had a much better signal-to-noise ratio and didn't have the maze artifact. Have you kept the bayer frames? |
January 14th, 2008, 07:30 AM | #102 |
Major Player
Join Date: Mar 2007
Location: Amsterdam The Netherlands
Posts: 200
|
0 gain; I do not allow the user to change the gain.
There is no static noise left in the calibrated frame; that is the whole purpose of the calibration. I actually want to keep all the noise that was originally on the sensor, since noise reduction normally means the image becomes less sharp. My algorithm uses high-frequency transplantation:
- First I find all the high-frequency noise/texture.
- Then I interpolate a low-frequency green.
- Then I interpolate red and blue using the full green image.
- Finally I add the high-frequency noise/texture back to all the color components.
A rough sketch of this transplantation step follows below. I have an idea of using square interpolation when there is no edge to be seen; this would smooth out the greek-restaurant pattern. I am not sure how to figure out which orientation to use, unless I do a homogeneity comparison like in AHD. I also have an idea for reducing the color aliasing even further, by using a weighted average so the colors stay on the correct side of an edge. |
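Here is the promised sketch of the transplantation step in numpy (my own illustration only; the low-frequency demosaic is assumed to come from elsewhere, and the box blur stands in for whatever frequency split the real algorithm uses):

import numpy as np

def box_blur(img, radius=1):
    # Simple box blur, used here to split the mosaic into low and high frequencies.
    padded = np.pad(img, radius, mode='reflect')
    out = np.zeros(img.shape, dtype=np.float64)
    size = 2 * radius + 1
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (size * size)

def transplant_high_frequency(raw, low_freq_rgb):
    # raw:          the Bayer mosaic
    # low_freq_rgb: (h, w, 3) result of a smooth, low-frequency demosaic of that
    #               mosaic, produced by whatever interpolation is used elsewhere
    # The high-frequency noise/texture of the mosaic is added back to every
    # color component, so sharpness and grain survive the interpolation.
    high_freq = raw.astype(np.float64) - box_blur(raw)
    return low_freq_rgb + high_freq[:, :, np.newaxis]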
January 14th, 2008, 07:33 AM | #103 |
Major Player
Join Date: Mar 2007
Location: Amsterdam The Netherlands
Posts: 200
|
I still have the bayer frames, but they are quite large and in my own special movie format.
|
January 14th, 2008, 11:53 AM | #104 |
Major Player
Join Date: Nov 2007
Location: Athens Greece
Posts: 336
|
Greek restaurant pattern:P
http://www.mi.sanu.ac.yu/vismath/morrison/
It's sometimes called the Greek key, but it was quite popular throughout the world. Someone found an ancient demosaicing description and thought it looked nice:) |
January 15th, 2008, 04:10 AM | #105 |
Major Player
Join Date: Mar 2007
Location: Amsterdam The Netherlands
Posts: 200
|
Hi,
I've been experimenting with a new algorithm for color interpolation to eliminate color aliasing. It worked, sort of: it removes a whole pixel of color aliasing in exchange for a zipper effect. But the zipper effect turned out to occur because the weird color changes were not color aliasing at all but chromatic aberration from the lens, which made my algorithm behave strangely.
Cheers, Take |