Mirage Recorder



Take Vos
August 14th, 2007, 02:00 AM
Solomon,

The end result will be uncompressed RAW 12 bit non-linear (I call this '1412') with a resolution of 1920 x 800 (2.40:1). And it will be directly editable in QuickTime applications like Final Cut Pro.

As for the codec, I think it is too late now; I have designed my own codec to read the calibration data and compensate the image during playback. I have done this so that the calibration is not fixed, i.e. you can redo the calibration or change the calibration method after you have shot the footage.

The bottleneck right now is the Firewire 800 speed, and after that the hard disk speed (although one could always make a stripe set). But I actually like the idea that you can use a single disk to record on.

I could also design a '1408' or 8 bit log encoding, which would easily allow for a 2048 x 1080 image, but I am not sure you really want to use only 8 bits to encode the image.
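To give an idea of what such an encoding boils down to: a lookup table that maps the linear A/D codes to fewer non-linear ones. A minimal sketch in C for the 14-to-12 bit case (the sqrt-style curve and all names are my own illustration, not the actual LUT):

#include <math.h>
#include <stdint.h>

/* Map 14-bit linear sensor codes to 12-bit non-linear codes. A pure
 * logarithm misbehaves near zero, so this sketch uses a sqrt-shaped
 * curve; a real LUT would be tuned to the sensor's noise floor. */
static uint16_t lin14_to_nonlin12[16384];

void build_lut(void)
{
    for (int v = 0; v < 16384; v++) {
        double n = v / 16383.0;                 /* normalize to 0..1 */
        lin14_to_nonlin12[v] = (uint16_t)(sqrt(n) * 4095.0 + 0.5);
    }
}

An 8 bit version ('1408') would be the same table with 255 instead of 4095, which shows why 8 bits is scary: the whole range has to fit in 256 codes.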

As you have probably noticed while pushing the image, there is quite a bit of "bias"; I think this is caused by reflections inside the camera.
The C-mount, which doubles as back-focus, is made of bare aluminum. The 35mm lens is probably projecting onto this bare aluminum, and the light gets reflected onto the sensor. I guess I will need to paint it, or design one that is black (but that may be prohibitively expensive for single runs).

I think there is also quite a lot of noise in the image, as you have seen. The noise is quite random, so it does give some information about the actual darkness when you view it. I think I will need to chill the camera so that the noise is reduced. With a bit of luck the noise is reduced by half for every 6 kelvin drop in temperature. Finally, one could make a temporal noise reduction algorithm running as a filter inside Final Cut Pro.
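If that rule of thumb holds, the arithmetic is encouraging:

noise(T) ≈ noise(T0) · 2^(-(T0 - T) / 6 K)

so cooling the sensor by 30 K (say from 35 °C inside the housing, my assumption, down to 5 °C) is five halvings, leaving about 1/32 of the thermal noise.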

Cheers,
Take

Take Vos
August 25th, 2007, 12:44 PM
Hello everyone,

Friday I received the new firmware for the Pike, which allows it to record RAW 12 bit (through a LUT).
Mirage Recorder can now receive 1920 x 850 @ 24 fps, 12 bit.

Next step is getting this format to disk.

Cheers,
Take

Solomon Chase
August 28th, 2007, 09:17 AM
Great!
I think 1920 x 850 and 1707 x 960 (or 1728 x 972?) should be good starting formats.

Unfortunately, as you have noticed, sensor temperature relates very much to noise. The Viper cinema camera has the same issue, and uses cooling to get better results. (Very good results, in fact, so even uncooled isn't too bad.)

Take Vos
August 28th, 2007, 12:42 PM
Solomon,

I will design a housing for the camera which will include Peltier cooling, so that I can cool the camera to around 5 degrees C or something like that.

With a bit of luck that is enough.

Cheers,
Take

Take Vos
September 2nd, 2007, 03:17 PM
Hello everyone,

I've just designed a new demosaic algorithm that works at near real time on a MacBook Pro GPU for 1920 x 850 @ 24 fps images.

I hope the description of the algorithm is easier to read than the AHD paper, but I am not really used to writing this kind of paper. It is only two pages, so don't be too shy; I would love to hear if anyone has any ideas.

http://www.vosgames.nl/products/MirageRecorder/FR_demosaic.pdf

Here is the GLSL (OpenGL Shader Language) reference implementation.

http://www.vosgames.nl/products/MirageRecorder/FR_demosaic.fs

Cheers,
Take

Igor Babic
September 3rd, 2007, 05:14 PM
I just want you to know that I greatly admire your work here, and I love to see that someone has the knowledge and the will to do it. I hope that many others will see it and buy your software to do wonderful things. Keep up the good work!

Take Vos
October 3rd, 2007, 12:29 AM
Here is a new update. Yesterday I got the QuickTime component working for the first time. This includes processing of the RAW frames with OpenGL.

I also received a reply from Apple development support; they told me that Final Cut Pro can handle 16 bits per component (gamma 1.8) if you write your own codec. Currently the highest quality codecs go up to 10 bits.

To handle high speeds in Final Cut Pro my codec has to be used as the timeline codec as well. To how many bits should I record the processed RGB frames? The full 16 bits, or would lower, like 12 or 10, suffice? Or record RGB as 10-12-10 (a nice 32 bit word)?
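Just to illustrate that last option: a 10-12-10 pixel is one 32 bit word, with green getting the extra bits because the eye is most sensitive there. A sketch, not a decided format:

#include <stdint.h>

/* Pack one RGB pixel as 10-12-10 bits in a single 32-bit word:
 * red in bits 31..22, green in bits 21..10, blue in bits 9..0. */
static inline uint32_t pack_10_12_10(uint16_t r10, uint16_t g12, uint16_t b10)
{
    return ((uint32_t)(r10 & 0x3FF) << 22)
         | ((uint32_t)(g12 & 0xFFF) << 10)
         |  (uint32_t)(b10 & 0x3FF);
}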

Take Vos
October 3rd, 2007, 12:38 PM
So I've got it working inside FCP; as you can see, I just dropped the video from the Finder straight onto the timeline. Of course I first configured the timeline to use the Animation codec with a resolution of 1920x850 @ 24fps.

Here is a simple screenshot:
http://www.vosgames.nl/products/MirageRecorder/FCP1.png

I will now have to make it possible to use this codec on the timeline so that the red bar above the footage hopefully disappears.

John C. Butler
October 9th, 2007, 11:01 AM
This project looks very promising. Take, how much is the Pike camera you're using? Someone on another thread said it costs several thousand dollars.

Take Vos
October 9th, 2007, 12:18 PM
John,

You are correct, the camera costs somewhere between 4000 and 6000 dollars, depending, I guess, on which reseller you can use. Maybe I can become a reseller for this camera myself in some way.

But I will also expand on this camera by adding cooling, a preview screen and tripod mountings. Also needed is some electronics or firmware change to get a stable clock; I was thinking of basing the clock on the word clock of a USB audio interface or an Ambient Lockit box.

I may also include some changes to the optics: the current camera has a C-mount with a mirrored finish, which should really be matt black. And maybe I want to optically create some black lines on the sensor for continuous calibration purposes.

Lastly, some kind of calibration setup is also needed, with a white light that has a very even surface and a specific light output at a specific distance.

John C. Butler
October 10th, 2007, 07:33 PM
A little on the expensive side, but this camera appears to have some real potential. With all those interesting add-ons you're developing I wonder what the final cost of such a product might be.

Take Vos
October 20th, 2007, 01:59 AM
I just found out that Final Cut Pro does not handle 16 bit per component RGBA pixel values (QuickTime does). It seems I have to support r4fl, which is a Y'CbCrA pixel format with 32 bit floats for each component.

For the debayer decompressor this is not that bad, as most debayer algorithms like to work in YUV/Y'CbCr/CIE Lab anyway, so I no longer have to convert back to RGB; it is a win :-)

For the timeline compressor/decompressor I will probably store 16 bit floats per component in Y'CbCrA. OpenEXR also uses 16 bit floats, but in RGBA.

I did, however, get the timeline codec functionality working and can say that my plan works: the camera footage no longer needs to be rendered by Final Cut Pro.

The only issue I have is that FCP likes to open and close my codec for each frame rendered as you scrub through the timeline. My open call takes a while, as it sets up an OpenGL context. I hope I can fix this by keeping a set of initialised contexts available for re-use.
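The re-use idea would look roughly like this (a sketch; create_gl_context() stands in for whatever platform call actually builds the context, and the pool size is arbitrary):

#include <pthread.h>
#include <stddef.h>

#define POOL_SIZE 4

typedef void *GLContextRef;                   /* opaque placeholder type */
extern GLContextRef create_gl_context(void);  /* hypothetical slow call  */

static GLContextRef pool[POOL_SIZE];
static int pool_count = 0;
static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

GLContextRef acquire_context(void)
{
    GLContextRef ctx = NULL;
    pthread_mutex_lock(&pool_lock);
    if (pool_count > 0)
        ctx = pool[--pool_count];             /* fast path: reuse */
    pthread_mutex_unlock(&pool_lock);
    return ctx ? ctx : create_gl_context();   /* slow path: create */
}

void release_context(GLContextRef ctx)
{
    pthread_mutex_lock(&pool_lock);
    if (pool_count < POOL_SIZE)
        pool[pool_count++] = ctx;             /* keep for the next open call */
    pthread_mutex_unlock(&pool_lock);
    /* a complete version would destroy ctx when the pool is full */
}

That way the codec's open call only pays the OpenGL setup cost the first few times.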

Take Vos
October 24th, 2007, 10:39 AM
I've received my new lens: a Fujinon CF16HA-1, 16mm, f/1.4.

Here are two pictures showing off a shallow depth of field, 1 and 2 meters from the subject.

http://www.vosgames.nl/products/MirageRecorder/new_lens1.png
http://www.vosgames.nl/products/MirageRecorder/new_lens2.png

Don't mind the noise, I am experimenting with the debayer and calibration system.

Take Vos
November 18th, 2007, 02:32 PM
Hi, I've just designed and ordered a gate that will fit in front of the sensor of the Pike. At least, I hope it will fit; I might have made it too thick for the space between the sensor and lens.

The gate will cover part of the sensor so that the black part of the image can be used for continuous calibration.

The gate will be around 870 pixels high; the camera will record 930 pixels in height. The final picture will be 1920 x 800 pixels (2.40:1 ratio), so you have some wiggle room.

Take Vos
November 30th, 2007, 03:52 AM
Hi,

I can now confirm that my codec will work in high quality. I implemented AHD in 32bit floating point and Final Cut Pro will use 32 bit floating point Y'CbCrA to render the image.

I tried using the Color Corrector 3-way filter and made some extreme modifications.
During preview the video scopes/histogram show quite a few gaps (color banding), which is logical as the preview is rendered in 8-bit.
After rendering the histogram is solid again and the image shows no color banding.

Take Vos
December 18th, 2007, 07:22 PM
I've been working on the recording application, specifically the preview screen.
The preview screen has three modes: color, focus and exposure.

- Color, simply the standard view most cameras have; it is very honest and shows what the sensor sees, including sensor imbalance, dead pixels and fixed pattern noise; no white balance or other color corrections either.

- Focus, a simple edge detection algorithm using only the green pixels. The image is black except where there are a lot of high frequency components. This should help with focusing without requiring a large high definition screen.

- Exposure, this shows in false color the amount of light falling on the sensor: blue is dark, green is medium, red is bright, white is almost clipping. Because it shows the value of the maximum color component at each location, it will allow for shooting colors that are brighter than maximum white (a rough sketch of the focus and exposure modes follows below).
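Both modes boil down to a few lines per pixel. A sketch in C (the real versions run as GPU shaders; the buffer layout, 16 bit samples and the thresholds here are my own example values):

#include <stdint.h>
#include <stdlib.h>

/* Focus mode: absolute difference between neighbouring green samples;
 * the result is bright only where there is high-frequency detail. */
uint16_t focus_value(const uint16_t *green, int w, int x, int y)
{
    if (x + 1 >= w)
        return 0;
    return (uint16_t)abs((int)green[y * w + x] - (int)green[y * w + x + 1]);
}

/* Exposure mode: false color from the maximum color component at a
 * location; thresholds are arbitrary examples on a 16-bit scale. */
void exposure_color(uint16_t max_component, uint8_t rgb[3])
{
    if (max_component > 64000)      { rgb[0] = 255; rgb[1] = 255; rgb[2] = 255; } /* almost clipping */
    else if (max_component > 45000) { rgb[0] = 255; rgb[1] = 0;   rgb[2] = 0;   } /* bright */
    else if (max_component > 20000) { rgb[0] = 0;   rgb[1] = 255; rgb[2] = 0;   } /* medium */
    else                            { rgb[0] = 0;   rgb[1] = 0;   rgb[2] = 255; } /* dark */
}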

The latency is pretty low. Because so little processing is going on the computer stays cool as well, which is kind to the sound mixer.

I also thought of a way to cool the camera without making noise: for PCs these days there are very efficient cooling blocks without a fan; together with a Peltier element I can probably get down to low temperatures (5 degrees Celsius). The camera will look very freaky with such a contraption attached to it, though.

Here are some pictures of these cooling blocks:

http://www.silentpcreview.com/article187-page1.html

http://www.frozencpu.com/products/6345/cpu-tri-44/Thermalright_HR-01_Plus_6-Heatpipe_CPU_Cooler_Socket_775_AM2.html?tl=g33c167s511&id=3Mns9ctm

http://www.tweaknews.net/reviews/zerotherm_btf95_fanless_copper_cpu_cooler/index3.php

http://www.thetechlounge.com/article/174-2/Thermaltake+Fanless103+Heatpipe+CPU+Cooler/

Daniel Lipats
December 18th, 2007, 09:59 PM
I was really looking into using one for my camera, but I found them to be huge. So big that a lot of people had trouble fitting them onto the motherboard and closing the PC case.

My next choice was water cooling. Not too big, definitely quiet. But the water adds considerable weight to the camera, which I had designed to be shoulder mounted.

In the end I decided to go with the Roswell Z2ex
http://www.newegg.com/Product/Product.aspx?Item=N82E16835200025&Tpk=z2ex

Performs very well, and for a CPU fan it's not loud. I use it on my Q6600. At a height of 71mm it's still big, but it fits very nicely into my design.

Interesting project by the way. Looking forward to more.

Take Vos
December 21st, 2007, 10:47 AM
I have some success with black clamping.

I have received the gate (that I designed) that fits in front of the sensor to get a few black lines for continuous calibration. I do get some black lines, but because there is room between the gate and the sensor (the sensor is covered by a piece of glass) the number of black lines decreases as the iris is opened.

I now record 140 extra lines at the bottom of the frame, for a total of 1920 x 940 pixels. I have to use my lens with the iris set to f/2 or higher to keep 8 black lines at the bottom of the frame.

If I average these 8 lines and then subtract them from all the other lines, I get lots of strobing vertical lines, which is quite annoying. By also averaging these black lines over the last 8 frames I get a much smoother image. The image looks as smooth as the original, but much darker of course.

With enough luck this, together with the linearity algorithm, will completely solve the balancing problems (between the two halves of the sensor).
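In code the clamp looks something like this (a sketch; the sizes follow the 1920 x 940 capture, everything else is illustrative):

#include <stdint.h>

#define W 1920
#define BLACK_LINES 8
#define HISTORY 8

static float col_hist[HISTORY][W];    /* per-column black averages */
static int   hist_pos = 0;

/* Average the 8 black lines per column (spatial), keep the last 8
 * frames of those averages (temporal), and return the smoothed black
 * level to subtract from every pixel in that column. */
void update_black_columns(const uint16_t *frame, int first_black_row,
                          float black_out[W])
{
    for (int x = 0; x < W; x++) {
        float sum = 0.0f;
        for (int y = 0; y < BLACK_LINES; y++)
            sum += frame[(first_black_row + y) * W + x];
        col_hist[hist_pos][x] = sum / BLACK_LINES;
    }
    hist_pos = (hist_pos + 1) % HISTORY;

    for (int x = 0; x < W; x++) {
        float sum = 0.0f;
        for (int i = 0; i < HISTORY; i++)
            sum += col_hist[i][x];
        black_out[x] = sum / HISTORY;
    }
}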

Cheers,
Take

Take Vos
December 24th, 2007, 07:29 AM
Because of the black clamping I can really see the actual noise of the sensor; it seems a lot of the noise is caused by electronic interference.
I especially noticed just now, when wiggling the firewire 800 connector, that the noise increased and decreased.

Luckily I ordered the fibre version of the Pike on purpose: I want to feed the camera with its own stabilised power and use optics to separate the electronics. I will also design the electronic trigger system to be galvanically isolated.

I also finally got the camera to smear, by pointing it at a halogen light source at a high shutter speed, heavily over exposed. The algorithm compensates for the smear in around 8 frames, or 1/3 of a second. So you will see the smear if you pan the camera, but if you lock the camera down the smear will not be noticeable.

Take Vos
December 29th, 2007, 09:25 AM
As I said, I now have color, focus and exposure modes on the viewfinder; here are some screenshots of it in action in Boom Recorder.

http://www.vosgames.nl/products/MirageRecorder/screenshots.shtml

I have also included a 1280 x 720 shoot-and-protect area, for people who like to shoot at a theatrical 1920 x 800 and at the same time get a full screen 720p for TV.

Emre Safak
December 29th, 2007, 10:54 AM
What do you think of the gamut? The color looked less than stellar in your recent pictures.

Take Vos
December 29th, 2007, 11:08 AM
Hi Emre,

I've decided to not compensate for anything except black level inside the recording application, so the colors are not white balanced and the color space is camera RGB.

Together with calibration data prepared by a calibration program, the codec will show much more correct colors, which should be at least as good as in the older pictures.

Calibration can be done using the sun, a white sheet of paper (how white doesn't even matter) and a GretagMacbeth color target. The calibration program will then linearise the pixels from the white sheet of paper, and derive the white balance and color matrix from the GretagMacbeth target.
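For the white balance part, the gains fall straight out of the averages over a neutral patch. A minimal sketch (normalising to green is the usual convention; the function itself is illustrative):

/* Derive white balance gains from the average RGB over a neutral
 * patch; green is left at 1.0 and red/blue are scaled to match it.
 * The color matrix fit to the 24 target patches is a separate
 * least-squares problem, not shown here. */
void white_balance_gains(double avg_r, double avg_g, double avg_b,
                         double *gain_r, double *gain_b)
{
    *gain_r = avg_g / avg_r;
    *gain_b = avg_g / avg_b;
}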

Take Vos
December 30th, 2007, 07:33 AM
Here is an image exported from QuickTime Player. As QuickTime Player uses the low quality, high speed playback path, the image has not been processed with calibration data and there are many artefacts from the "nearest neighbour" demosaic algorithm.

http://www.vosgames.nl/images/MirageRecorder/calibration-colorfield.png

I guess everyone will recognise the GretagMacbeth color target; it has been lit by natural light through a cloud. I guess that corresponds to a D50 white point, please correct me if I am wrong, because a lot of calculations will depend on this.

The image has been processed in 8 bit (from 14 bits linear from the sensor, through 12 bits non-linear during data transfer), with a gamma of 1.8 (native Apple gamma).

John Papadopoulos
December 30th, 2007, 09:45 AM
You should ideally get white noise with a very clean FFT transform. If you get diagonal lines that tend to move, it is interference. If you get horizontal lines moving up and down at extreme gain, it's probably the realistic limitations of the electronics. An ideal camera would reproduce only white noise from the sensor. If you test this with an FFT, do it before debayering, without any color balance.

Take Vos
December 30th, 2007, 09:58 AM
Hi John,

They are horizontal blocks moving/flickering, but the amplitude clearly changes when moving the firewire cable, which looks to me like low frequency interference. I think giving the camera its own power supply, not electrically connected to anything else, will reduce the problem significantly.

John Papadopoulos
December 30th, 2007, 10:03 AM
Strange. Put the lens cap on. Put everything at 0db and flat color and post an uncompressed sample at 1/50sec.

Take Vos
December 30th, 2007, 11:26 AM
Hello John,

1/50 sec shutter time, nearest neighbour debayer, gamma 1.8.

The first one is straight from QuickTime Player exported as png:
http://www.vosgames.nl/images/MirageRecorder/calibration-black.png

This one has been stretched with gimp:
http://www.vosgames.nl/images/MirageRecorder/calibration-black_processed.png

John Papadopoulos
December 30th, 2007, 11:45 AM
If you hadn't lowered the black levels a little bit (the right side a couple of codes above), it would have codes up to 16 with noise with a center around code 8. If this is 0dB, that's a lot. It should be like that at about 9dB of gain.

Take Vos
December 30th, 2007, 12:14 PM
Hello John,

This is indeed with 0 dB of gain (although how much that is in reality I don't know). It is black compensated by subtracting some black lines from the whole image, so I didn't lower the black level.

Left and right are of course from different halves of the sensor, with separate amplifiers and A/D converters, so there is bound to be a slight difference between the two sides.

I would have expected there to be no difference between left and right after subtracting the black lines, but there seems to be a slight vertical slope. This will probably be removed when I compensate for fixed pattern noise.

The fixed pattern noise is also contributing quite a bit to the noise you are seeing. In the end it will probably be very clean.

Cheers,
Take

John Papadopoulos
December 30th, 2007, 01:02 PM
The gain control for each tap is a true gain control. The only extra gain that could exist is for balancing the three colors. In some cameras, the manufacturer provides up to 12dB for this, which covers most lighting situations. Your red and blue could already have a few dBs of gain from the balance implementation. It is usually easy to tell in Photoshop by switching between the channels; the noise difference should be obvious.

The Sony Z1 has noise with a center at around code 4. It doesn't change significantly with gain, so they obviously do extra processing. It's the ugly blocky type of course due to HDV.

Take Vos
January 4th, 2008, 07:06 AM
Hello John,

You made me think again about my black level subtraction. As it is now, noise that is darker than black is clamped to black. Also, if parts of the sensor are darker than the black lines (non-uniformity), they will be clamped to black as well.

So I will be adding a constant offset to the sensor data, which will enable me to represent a small range of negative values in 12 bits.

This should increase the quality of my per-pixel non-linearity compensation algorithm.
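Something like this, where the pedestal value is an arbitrary example:

#include <stdint.h>

#define PEDESTAL 64   /* example value, not final */

/* Instead of clamping (raw - black) at 0, add a small pedestal so
 * that noise below the black level survives into the 12-bit file. */
static inline uint16_t encode12(int raw, int black)
{
    int v = raw - black + PEDESTAL;
    if (v < 0)    v = 0;        /* only extreme outliers still clip */
    if (v > 4095) v = 4095;
    return (uint16_t)v;
}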

Cheers,
Take

John Papadopoulos
January 4th, 2008, 08:57 AM
The noise is a little higher than it should be. Could it be an issue with your trigger? How do you power it?

Take Vos
January 4th, 2008, 09:01 AM
Power and trigger are over firewire.

John Papadopoulos
January 4th, 2008, 09:30 AM
You mean it's a software trigger or a firewire trigger? Or firewire just for power? If you are just connecting a camera, perhaps the computer ports are not properly designed.

Take Vos
January 4th, 2008, 09:42 AM
Hello John,

The trigger is based on the packet size you set and on how much data it wants to send. As the packets are put in the real-time stream on firewire, the trigger is basically the firewire clock. I will include a trigger based on SMPTE or word clock (simpler) at a later time.

I think the power/ground of the firewire connection is not clean. I've tried running the notebook from the internal battery, but there was no change, so the noise is picked up or caused by the notebook itself.

In any case, this is why I want to use fiber firewire and give the camera its own battery.

John Papadopoulos
January 4th, 2008, 10:26 AM
Depending on the camera and the interface, limiting free run with packet size is not dependable. You could do a long duration video to test. But this is not critical if you are just using a single camera and recording sync audio.

Take Vos
January 4th, 2008, 10:36 AM
John,

It is even worse: with the packet size I cannot even get it within 1% of 24 frames/sec. So I will have to make my own trigger.

Taking a word clock of 48000 Hz and dividing it by 2000 would be an easy route. Or maybe ask Ambient to make a firmware change to output a frame trigger (the box already does word clock, tri-level sync and SMPTE, and it is battery powered).

In all probability I will have to design some electronics, maybe based on a small microcontroller like a PIC; then I can add some buttons to the camera to change mode and shutter time. I also need a thermostat for the Peltier element.
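The divider itself is trivial; on a microcontroller it would be little more than this (sketch only, the interrupt hook and output pin are made-up names, not real PIC registers):

extern void pulse_trigger_pin(void);  /* hypothetical: fire the camera trigger */

static unsigned ticks = 0;

/* Called on every word clock edge: 48000 / 2000 = 24 triggers/sec. */
void word_clock_edge_isr(void)
{
    if (++ticks >= 2000) {
        ticks = 0;
        pulse_trigger_pin();
    }
}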

Cheers,
Take

John Papadopoulos
January 4th, 2008, 12:44 PM
1% is too much. What is the exact resolution and frame rate you want to get? 1920x800 24p 12bit?

Take Vos
January 4th, 2008, 12:59 PM
John,

Yip, that is too much. But it will be no problem once I make an external trigger.

I am capturing 1920 x 940 x 12 bit @ 24 fps.
Then I do black levelling in software and crop the image to 1920 x 800 x 12 bit @ 24 fps.

The 12 bit is of course a sort-of-logarithmic conversion from the 14 bit A/D converter.

Cheers,
Take

John Papadopoulos
January 4th, 2008, 01:05 PM
That's very close to full frame. Nice:) You can actually get a very sharp 720p if you crop the scope sides to 16:9 and downscale.

Take Vos
January 4th, 2008, 01:16 PM
Hi John,

Yes, if the manufacturer releases firmware that does the black level correction the way I like, I could actually use all those 940 lines for the image; as it is, 140 lines are covered by a piece of aluminium to get a couple of black lines.

Cheers,
Take

Take Vos
January 4th, 2008, 01:18 PM
Right now, I include a shoot-and-protect area for 16:9 720p in the 1920 x 800 frame preview. Scaling would be better, but as I said in the previous post, those 140 lines are unusable.

Take Vos
January 6th, 2008, 12:50 PM
Hi,

So I have started working on the calibration utility.
I will share some statistics from before and after.

First I point the camera at a white piece of paper, over exposed at 80 ms; then Boom Recorder creates a movie in which it automatically reduces the exposure time and records the results. The calibration program reads this movie and averages the images at each exposure time.

The results are below. The statistics are split per color component, and shown in parentheses are: the average pixel intensity, the standard deviation of the pixel intensity (spatial noise, or pixel non-uniformity), and the average deviation of each pixel across multiple images at the same exposure time (temporal noise, for use in ISO calculations). The values are in stopFS (FS stands for full scale), so -1.0 stopFS means 50% exposure and -2.0 stopFS means 25% exposure.
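In formula form: stopFS(v) = log2(v / fullscale), so half of full scale gives -1.0 stopFS and a quarter gives -2.0 stopFS.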

black movie, 0.0 ms:red (-10.7,-12.1,-10.1) stopFS; green (-10.7,-12.1,-10.1) stopFS; blue (-10.7,-12.0,-10.1) stopFS.
white movie, 0.1 ms:red (-8.1,-9.4,-9.4) stopFS; green (-7.7,-8.9,-9.3) stopFS; blue (-7.5,-9.1,-9.2) stopFS.
white movie, 0.2 ms:red (-7.8,-9.3,-9.3) stopFS; green (-7.3,-8.7,-9.1) stopFS; blue (-7.0,-9.0,-8.9) stopFS.
white movie, 0.3 ms:red (-7.4,-9.0,-9.2) stopFS; green (-6.7,-8.6,-8.9) stopFS; blue (-6.4,-8.8,-8.7) stopFS.
white movie, 0.4 ms:red (-7.0,-8.8,-9.2) stopFS; green (-6.2,-8.4,-8.8) stopFS; blue (-5.8,-8.5,-8.5) stopFS.
white movie, 0.5 ms:red (-6.6,-8.7,-9.0) stopFS; green (-5.7,-8.3,-8.4) stopFS; blue (-5.3,-8.3,-8.1) stopFS.
white movie, 0.7 ms:red (-6.2,-8.6,-8.9) stopFS; green (-5.2,-8.1,-8.2) stopFS; blue (-4.7,-8.1,-7.8) stopFS.
white movie, 1.0 ms:red (-5.7,-8.4,-8.6) stopFS; green (-4.7,-7.9,-7.8) stopFS; blue (-4.3,-7.9,-7.4) stopFS.
white movie, 1.4 ms:red (-5.3,-8.1,-8.2) stopFS; green (-4.2,-7.8,-7.5) stopFS; blue (-3.8,-7.6,-7.0) stopFS.
white movie, 2.0 ms:red (-4.8,-8.0,-8.0) stopFS; green (-3.8,-7.4,-6.9) stopFS; blue (-3.3,-7.2,-6.5) stopFS.
white movie, 2.8 ms:red (-4.3,-7.8,-7.6) stopFS; green (-3.3,-7.0,-6.5) stopFS; blue (-2.8,-6.8,-6.1) stopFS.
white movie, 3.9 ms:red (-3.9,-7.5,-7.1) stopFS; green (-2.8,-6.6,-6.0) stopFS; blue (-2.4,-6.3,-5.6) stopFS.
white movie, 5.5 ms:red (-3.4,-7.1,-6.7) stopFS; green (-2.3,-6.2,-5.6) stopFS; blue (-1.9,-5.9,-5.1) stopFS.
white movie, 7.6 ms:red (-3.0,-6.7,-6.2) stopFS; green (-1.9,-5.7,-5.1) stopFS; blue (-1.4,-5.4,-4.6) stopFS.
white movie, 10.7 ms:red (-2.5,-6.3,-5.7) stopFS; green (-1.4,-5.2,-4.6) stopFS; blue (-0.9,-4.9,-4.2) stopFS.
white movie, 14.9 ms:red (-2.0,-5.9,-5.3) stopFS; green (-0.9,-4.8,-4.2) stopFS; blue (-0.5,-4.5,-3.8) stopFS.
white movie, 20.8 ms:red (-1.5,-5.4,-4.7) stopFS; green (-0.4,-4.4,-3.9) stopFS; blue (-0.0,-4.6,-6.7) stopFS.
white movie, 29.0 ms:red (-1.0,-5.0,-3.2) stopFS; green (-0.1,-4.5,-4.9) stopFS; blue (-0.0,-8.0,-11.5) stopFS.
white movie, 40.5 ms:red (-0.0,-5.9,-11.9) stopFS; green (-0.3,-2.6,-5.4) stopFS; blue (-0.0,-8.5,-11.7) stopFS.
white movie, 56.6 ms:red (-0.2,-4.3,-5.6) stopFS; green (-0.0,-7.6,-11.4) stopFS; blue (-0.0,-8.6,-11.2) stopFS.
white movie, 79.0 ms:red (-0.0,-8.5,-12.3) stopFS; green (-0.0,-7.6,-12.5) stopFS; blue (-0.0,-8.9,-12.5) stopFS.

From these measurements a PPLUT (Per Pixel Look Up Table) with 8 values is calculated; this PPLUT is then applied to the above images to linearize the pixel values and reduce the spatial noise. As you can see, I gain around 2 stops of spatial noise reduction. Of course, when the pixels are over exposed the spatial noise increases.
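Applying the PPLUT amounts to a piecewise linear interpolation per pixel. A minimal sketch (layout and names are illustrative):

#define LUT_POINTS 8

/* Each pixel carries its own 8 measured control points: raw values
 * in ascending order and the linearised values they should map to. */
typedef struct {
    float in[LUT_POINTS];
    float out[LUT_POINTS];
} PixelLUT;

float linearise(const PixelLUT *lut, float raw)
{
    if (raw <= lut->in[0])              return lut->out[0];
    if (raw >= lut->in[LUT_POINTS - 1]) return lut->out[LUT_POINTS - 1];
    int i = 1;
    while (raw > lut->in[i])            /* find the enclosing segment */
        i++;
    float t = (raw - lut->in[i - 1]) / (lut->in[i] - lut->in[i - 1]);
    return lut->out[i - 1] + t * (lut->out[i] - lut->out[i - 1]);
}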

I have found that this algorithm makes pixels that are normally counted as bad (too hot or too cold) usable again. The bad pixels that cannot be rescued can now be found and restored with a simple average of other pixels.

It would be cool to do bad pixel fixing inside the debayer algorithm, so that pixels can be fixed horizontally and vertically and the debayer algorithm can figure out which is best.

calib movie, 0.0 ms:red (nan,-10.7,-inf) stopFS; green (-15.8,-8.6,-inf) stopFS; blue (-15.5,-11.7,-inf) stopFS.
calib movie, 0.1 ms:red (-8.6,-10.3,-inf) stopFS; green (-7.8,-8.5,-inf) stopFS; blue (-7.3,-10.1,-inf) stopFS.
calib movie, 0.2 ms:red (-8.3,-10.5,-inf) stopFS; green (-7.4,-9.6,-inf) stopFS; blue (-6.9,-10.3,-inf) stopFS.
calib movie, 0.3 ms:red (-7.8,-10.4,-inf) stopFS; green (-6.7,-10.2,-inf) stopFS; blue (-6.2,-10.8,-inf) stopFS.
calib movie, 0.4 ms:red (-7.2,-10.3,-inf) stopFS; green (-6.2,-10.9,-inf) stopFS; blue (-5.7,-10.5,-inf) stopFS.
calib movie, 0.5 ms:red (-6.8,-10.5,-inf) stopFS; green (-5.7,-10.5,-inf) stopFS; blue (-5.2,-9.8,-inf) stopFS.
calib movie, 0.7 ms:red (-6.3,-10.5,-inf) stopFS; green (-5.2,-9.9,-inf) stopFS; blue (-4.7,-10.4,-inf) stopFS.
calib movie, 1.0 ms:red (-5.8,-10.3,-inf) stopFS; green (-4.8,-10.3,-inf) stopFS; blue (-4.3,-11.8,-inf) stopFS.
calib movie, 1.4 ms:red (-5.3,-9.9,-inf) stopFS; green (-4.3,-11.2,-inf) stopFS; blue (-3.8,-10.5,-inf) stopFS.
calib movie, 2.0 ms:red (-4.8,-10.3,-inf) stopFS; green (-3.8,-10.1,-inf) stopFS; blue (-3.3,-10.5,-inf) stopFS.
calib movie, 2.8 ms:red (-4.4,-11.7,-inf) stopFS; green (-3.3,-10.2,-inf) stopFS; blue (-2.8,-11.1,-inf) stopFS.
calib movie, 3.9 ms:red (-3.9,-10.5,-inf) stopFS; green (-2.8,-11.8,-inf) stopFS; blue (-2.4,-10.0,-inf) stopFS.
calib movie, 5.5 ms:red (-3.4,-10.3,-inf) stopFS; green (-2.3,-9.9,-inf) stopFS; blue (-1.9,-10.4,-inf) stopFS.
calib movie, 7.6 ms:red (-2.9,-11.4,-inf) stopFS; green (-1.9,-8.9,-inf) stopFS; blue (-1.4,-9.5,-inf) stopFS.
calib movie, 10.7 ms:red (-2.4,-10.2,-inf) stopFS; green (-1.4,-9.5,-inf) stopFS; blue (-0.9,-8.8,-inf) stopFS.
calib movie, 14.9 ms:red (-2.0,-10.1,-inf) stopFS; green (-0.9,-8.7,-inf) stopFS; blue (-0.4,-8.3,-inf) stopFS.
calib movie, 20.8 ms:red (-1.5,-9.4,-inf) stopFS; green (-0.4,-7.0,-inf) stopFS; blue (-0.0,-5.1,-inf) stopFS.
calib movie, 29.0 ms:red (-1.0,-7.8,-inf) stopFS; green (-0.0,-4.5,-inf) stopFS; blue (0.0,-3.7,-inf) stopFS.
calib movie, 40.5 ms:red (0.1,-3.4,-inf) stopFS; green (-0.2,-2.6,-inf) stopFS; blue (0.0,-3.7,-inf) stopFS.
calib movie, 56.6 ms:red (-0.1,-4.8,-inf) stopFS; green (0.0,-3.4,-inf) stopFS; blue (0.0,-3.7,-inf) stopFS.
calib movie, 79.0 ms:red (0.1,-3.4,-inf) stopFS; green (0.0,-3.4,-inf) stopFS; blue (0.0,-3.7,-inf) stopFS.

John Papadopoulos
January 6th, 2008, 12:53 PM
A before and an after image would be more useful:)

Take Vos
January 6th, 2008, 01:08 PM
Hi John,

I guess it would, but I don't have them yet; I first need to make my calibration program a little better. I have already improved it a bit by moving my calibration point two stops down.

I also want to calculate the color correction matrix/LUT from the ColorChecker image.

I learned a new term yesterday, "daylight balanced film stock"; I guess this calibration of the camera is pretty much like that. It should also be possible to calibrate the camera to tungsten, but you cannot really mix multiple calibrations.

Cheers,
Take

Take Vos
January 10th, 2008, 05:38 PM
I've spoken with the manufacturer and they told me that it is probably a ground loop problem caused by the firewire cable, something about the chassis not being connected to the shield of the cable.

Although the computer was running on batteries and not connected to ground at the time, you can still have a "ground loop"-like effect where the firewire cable acts as an antenna, especially because my firewire cable is rather long.

I guess running the camera on fiber with its own power will eliminate this problem so I'll be looking forward to testing it that way.

Next step on the agenda is adding reading of the calibration data by the codec, then I can make a couple of comparison pictures.

Cheers,
Take

John Papadopoulos
January 11th, 2008, 07:56 AM
I remember that the Pike has a shading correction feature. Doesn't that handle the pattern noise problems?

In audio installations, you usually disconnect the shield to break the loop, but I don't know much about electronics. Almost all firewire cameras I have seen have horizontal lines deep in the shadows, but you can't show them by just using gain; you need gain at very high values and extreme gamma settings.

Take Vos
January 11th, 2008, 08:31 AM
Hello John,

The standard way of reducing pattern noise is by taking a dark and a bright image (usually several frames are averaged to remove the temporal noise). Basically you do linear interpolation using the two points captured by the dark and bright image.
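In code form, the two-point method is just a per-pixel gain and offset (names illustrative):

/* Classic two-point fixed pattern noise correction: dark and bright
 * are this pixel's values from the averaged calibration frames,
 * target is the intended output level of the bright frame. */
float two_point_correct(float raw, float dark, float bright, float target)
{
    return (raw - dark) * (target / (bright - dark));
}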

However, this doesn't seem to work (not well enough for my taste) on my camera (maybe the camera is broken, but the manufacturer says this is normal when you use gamma correction). What happens is that when there is a small amount of light, the pixels start to behave non-linearly, and each pixel has a different non-linear curve. When it gets brighter the curve becomes linear again, but because of the first part of the curve the black image cannot properly be used to calculate the offset.
Either the non-linearity of the curve is normal and caused by non-linear effects of semiconductors on a per pixel basis, or it is caused by the microlenses, which may cause non-linear effects at such scales.

Instead of interpolating over a line using only a black and a white image, my algorithm interpolates over a curve that is measured from multiple gray images and a black image, and thus follows the non-linear parts of the curve at dark levels. The algorithm works extremely well; even hot and cold pixels become usable again (as long as there is a little bit of life in them).

As for ground loop problems: these are caused when two pieces of equipment are connected together and there is a voltage difference between the two grounds. This causes current to travel between the two, which is normally not a problem and is in fact needed to eliminate the voltage difference. The actual noise problem occurs when the current travels in the ground wire of a signal pair.

Sound engineers used to break the shield on the cables to eliminate current travelling over this shield. As we now know the actual cause of the problem, well educated sound engineers now make proper ground connections between the pieces of equipment, using heavy gauge wires laid out in a star pattern, eliminating any voltage difference between the equipment. This makes everything a little bit safer, and fewer entertainers get electrocuted by their microphones and guitars (phantom power, which is a pretty high voltage, is sent over the microphone cables to feed the amplifiers inside the microphones).

Cheers,
Take

Take Vos
January 11th, 2008, 08:43 AM
Oh, about the non-linear curve of the pixels:
Astronomers used to pre-flash a CCD before taking an image. Basically, when they start the integration they turn on a light in the telescope that gives off a known amount of light, filling the light buckets until they are no longer non-linear. A dark image is then taken with the same pre-flash and subtracted from photographs that are also pre-flashed (thus compensating for the pre-flash).

I could have done the same, but that would have taken me some work in mechanics, optics and electronics. Instead I've chosen to make a better algorithm.

Cheers,
Take

Take Vos
January 13th, 2008, 05:32 PM
Hello everyone,

Here are some pictures; they were exported as PNG after rendering in Final Cut Pro.

The first one is without non-uniformity calibration data:
http://www.vosgames.nl/images/MirageRecorder/fr_nocalibration.png

The second one is with non-uniformity calibration data:
http://www.vosgames.nl/images/MirageRecorder/fr_calibrated.png

To see the difference, look at the black swatch on the color tester. In the first image there are some wiggly lines which are absent in the second image.

The images are demosaiced using my own debayer algorithm, which preserves more noise than, for example, AHD. If you zoom in you will see short horizontal and vertical lines caused by the directional interpolation; this happens in other algorithms as well.

I've made a modification to my algorithm that eliminates these short lines completely, but it makes the image slightly softer. I think I will need to teach my algorithm the difference between noise and lines.