View Full Version : Mirage Recorder



Take Vos
March 24th, 2007, 07:56 PM
Hello everyone,

This is the thread about the new Mac OS X software I am writing to record 16-bit raw Bayer data from an IIDC camera. For this I am planning to use the Pike 210C from www.alliedvisiontec.com.

At this moment I can already see a live image from a stand-in camera, the Fire-i from Unibrain. It does not do Bayer, but I wanted to start cheap.

I use libdc1394 for the camera feed. The 8-bit images are converted to 32-bit floating point using the vImage (accelerated image processing) library. These 32-bit images are then sent to OpenGL as a luminance texture map.

I am converting to 32-bit floating point images because OpenGL is extremely slow when working with 16-bit integers. There are also more accelerated functions that work on 32-bit floating point numbers than on 16-bit integers.

I am now planning to compress the images before they are added to the ring buffer. Compressing them beforehand leaves more room in the ring buffer.

I have not yet decided on the compression algorithm to use. I will probably make a prediction algorithm that uses the left and top neighbor pixels. The error values (real - predicted) are then converted using a sort of Huffman code.

I have this weird idea of creating a set of static Huffman dictionaries based on normal distributions. For each line or block of pixels the average error value is calculated, and this average selects the dictionary to use for compression. It would be better to calculate the median, but an average is much faster to calculate (it doesn't need a sort operation).

Cheers,
Take Vos
developer of Boom Recorder

Take Vos
March 25th, 2007, 04:42 PM
Hello everyone,

I just designed and implemented the compression algorithm.
It first predicts the value of a pixel by taking a sort of average of its four neighbor pixels, of the same color, positioned above and to its left.

The actual value is subtracted from the predicted value. When there is not too much high-resolution contrast in the image, the error values should on average be low.

These values are then written down using exponential-Golomb encoding, which writes small values in fewer bits than large values. With my 640x480 camera, which has lots of noise, I mostly get a compression ratio between 1:1 and 1:2.
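
As an aside, the two steps described here can be sketched in C (a minimal illustration of the idea, not the actual recorder code): signed prediction errors are first mapped to unsigned values so that small magnitudes get small numbers, and the order-0 exponential-Golomb code length then grows with the value.

```c
#include <stdint.h>

/* Map a signed prediction error to an unsigned value ("zigzag"), so
   that small magnitudes, positive or negative, get small numbers:
   0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...
   (e >> 31 relies on the usual arithmetic right shift.) */
static uint32_t zigzag(int32_t e)
{
    return ((uint32_t)e << 1) ^ (uint32_t)(e >> 31);
}

/* Bit length of the order-0 exponential-Golomb code for v: the code
   is floor(log2(v + 1)) zeros, a 1 bit, then floor(log2(v + 1))
   payload bits, so small values really do take fewer bits. */
static unsigned exp_golomb_bits(uint32_t v)
{
    unsigned n = 0;
    uint32_t x = v + 1;
    while (x > 1) {
        x >>= 1;
        n++;
    }
    return 2 * n + 1;
}
```

An error of 0 costs a single bit, while errors of +/-3 already cost five, which is why low-contrast images compress and noisy ones barely do.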

I've decided against implementing most of the compression pre-processing in OpenGL and will write it in straight C instead. The amount of CPU time used is still within limits, but tight.

Cheers,
Take

Take Vos
March 29th, 2007, 02:29 PM
Hi again.

I've made the compression algorithm simpler: it now just looks at the pixel two positions to the left of the current pixel (because of the Bayer pattern) and subtracts. I still use exponential-Golomb encoding. The CPU usage for 640x480 @ 15 fps is now between 10% and 11% for the conversion from RGB to Bayer (for testing with an RGB camera), compression, and displaying the video.
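
The simplified predictor boils down to a one-line delta per pixel; a sketch in C, assuming a row-major 16-bit buffer (the function name and layout are illustrative):

```c
#include <stdint.h>

/* Predict each pixel from the sample two positions to its left (the
   nearest same-colour pixel in a Bayer row) and keep the error; the
   first two pixels of a row have no such neighbour and are predicted
   as zero, i.e. stored verbatim. */
static void bayer_delta_row(const uint16_t *row, int width, int16_t *err)
{
    for (int x = 0; x < width; x++) {
        uint16_t pred = (x >= 2) ? row[x - 2] : 0;
        err[x] = (int16_t)(row[x] - pred);
    }
}
```

The errors would then be fed to the exponential-Golomb coder; skipping one pixel keeps same-colour samples together, so the deltas stay small on smooth gradients.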

I have just created a fragment program in OpenGL that does real-time demosaicing with linear interpolation. The linear interpolation is not only used for debayering, but also for getting a smooth image when it is displayed on the screen at a non-integer zoom factor.
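
The fragment program itself is GLSL, but the interpolation it performs can be sketched in plain C; this hypothetical helper estimates the green value at a red or blue Bayer site by averaging its four green neighbours:

```c
#include <stdint.h>

/* Bilinear estimate of green at a red or blue site: the average of
   the four green neighbours (up, down, left, right).  Only valid for
   interior pixels of a w-pixel-wide image. */
static uint16_t green_at(const uint16_t *img, int w, int x, int y)
{
    return (uint16_t)((img[(y - 1) * w + x] + img[(y + 1) * w + x] +
                       img[y * w + (x - 1)] + img[y * w + (x + 1)]) / 4);
}
```

The red and blue channels interpolate the same way from their diagonal or axial neighbours, which is why the whole thing maps so naturally onto a texture-fetching fragment shader.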

Next step, recording.

Cheers,
Take

Wayne Morellini
March 30th, 2007, 03:41 AM
Congratulations. Are there any HD webcams for the Mac this would work on? If you are using the portable or the mini, the Intel chipset emulates many GPU functions; the next chipset (GM965) hopefully should not.

Good luck.

Take Vos
March 30th, 2007, 04:52 AM
Hello Wayne,

Thank you, I am pretty happy with the results right now.

The software now is specifically written for the Fire-i camera, which is a 640x480 RGB webcam using the IIDC Firewire protocol.

An HD camera that uses IIDC needs a lot of bandwidth, so a Firewire 800 controller is required, which is currently only available on the MacBook Pro.
The sustained transfer rate required for the disks is around 60 MB/sec. This cannot be done over a Firewire interface, as it is already in use by the camera, so an eSATA ExpressCard is needed; ExpressCard is also only available on the MacBook Pro.
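
As a rough check on that figure, the sustained rate is just width x height x bytes per sample x frame rate; this little helper (purely illustrative arithmetic) computes it in MB/sec:

```c
/* Sustained data rate in MB/sec for an uncompressed raw stream:
   width x height x bytes per sample x frames per second. */
static double mb_per_sec(int w, int h, double bytes_per_sample, double fps)
{
    return (double)w * h * bytes_per_sample * fps / 1e6;
}
```

For a hypothetical 1800 x 750 frame at 16 bits (2 bytes) per sample and 24 fps this gives 64.8 MB/sec, or about 56.7 MB/sec with the 14 bits packed tightly, which is in the region of the 60 MB/sec quoted.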

Cheers,
Take

Juan M. M. Fiebelkorn
April 2nd, 2007, 03:17 PM
It sounds to me quite similar to how Huffyuv compresses video.

Take Vos
April 2nd, 2007, 03:22 PM
Indeed, it isn't exactly rocket science.

I am changing my mind slightly though.
To make it easy on myself I am thinking of actually letting my recorder write OpenEXR files directly, probably without compression :-(

Cheers,
Take

Take Vos
April 8th, 2007, 12:30 PM
Good and bad news.

I've been able to record the images to disk in OpenEXR format. The bad news: OpenEXR is quite slow; for something that should be a simple copy operation it uses a tad too much CPU power.

So it is back to exponential-Golomb coding and writing my own type of file.

Cheers,
Take

Take Vos
April 11th, 2007, 11:27 AM
Hello everyone,

The recorder is working; it is creating my own digital negative format.
I will publish the specifications of this file format at release, together with source code to read and write these files.

Of course I can not yet actually see if it is recording correctly, that is my next task.

Cheers,
Take

Solomon Chase
April 11th, 2007, 07:08 PM
Cool, keep posting updates :)

Take Vos
April 25th, 2007, 12:36 PM
Hello everyone,

Another update, I have ordered the Pike 210C as I am really interested to see if it will all work.

Also, I will build the video recorder into Boom Recorder instead of making it a separate program. This will allow you to enter metadata, such as scene and take numbers. It will also allow a timecode signal via an audio input, and a lot more features.

Cheers,
Take

Take Vos
May 3rd, 2007, 05:40 PM
I received the camera, it took me some time to get a picture from it, but it is working now. I currently have it set up to send quite a large 2.35:1 picture at 14 bits depth.

It seems that the camera by default does not do any compensation, as I hoped it would not, so the picture is quite a mess. It should not be much of a problem to remove all that sensor unevenness.

My current problem is that I had an old camera with a 1/3" CS-mount lens, and I am using that lens on this camera. That does not make for a sharp picture without vignetting (although the inside of the lens and the three diaphragm blades are in focus).

As for computer performance, I found out that my compression routine was using quite a lot of CPU time and did not actually compress the data much (actually, it became larger). So instead of Delta-Golomb compression it now simply packs the 14 bits tightly.
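
Tight packing of 14-bit samples can be sketched like this in C (a generic bit-packer for illustration, not the actual recorder code); eight samples (112 bits) fill exactly 14 bytes:

```c
#include <stdint.h>

/* Pack n 14-bit samples tightly into a byte stream using a bit
   accumulator; bits spill into output bytes as soon as eight are
   available, and a final partial byte is zero-padded. */
static void pack14(const uint16_t *in, int n, uint8_t *out)
{
    uint32_t acc = 0;   /* bit accumulator, never holds more than 21 bits */
    int bits = 0;       /* number of bits currently in the accumulator */
    for (int i = 0; i < n; i++) {
        acc = (acc << 14) | (in[i] & 0x3FFF);
        bits += 14;
        while (bits >= 8) {
            bits -= 8;
            *out++ = (uint8_t)(acc >> bits);
        }
    }
    if (bits > 0)
        *out++ = (uint8_t)(acc << (8 - bits));
}
```

Compared with the failed entropy coder, this is a guaranteed 14/16 = 0.875 ratio with almost no CPU cost.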

Next step is finding a good lens and start working on writing to disk.

Cheers,
Take

Take Vos
May 8th, 2007, 12:46 PM
Another update.

Boom Recorder can now write the sensor data to disk. I just did a full-scale test: 1800 x 750 (1:2.40 ratio), 16 bits @ 24 fps. I recorded for 9 minutes onto an external SATA disk, smooth as an android's bottom.

I only configured a video ring buffer of 128 MByte (I should put more memory in my computer), but it kept up rather well, as it was at most around 50% full.

It was also pretty light on the CPU, the computer did not even turn on the fans.

Next step: a QuickTime component so that you can natively edit the footage in Final Cut Pro. It must do real-time debayering and compensate for fixed sensor noise. I also have to make an intermediate encoder, so that you can render the footage in Final Cut Pro at 16-bit depth.

QuickTime does have 16-bit depth capability, but it is not linear; rather, it uses a gamma correction of 1.8. I guess Final Cut Pro converts it to linear floating point before doing any calculations. Too bad QuickTime does not support linear floating-point pixel types.

Cheers,
Take

Take Vos
May 11th, 2007, 06:53 AM
Hello everyone,

I finally have a decent lens; it is a zoom lens from an old Minolta 35mm photo camera. I had to do some back-focus adjustment (the Pike supports this by rotating the C-mount in its body mount) to keep it in focus through the zoom range. Strangely enough, the focus marks on the lens are off.

It zooms between 35 and 70 mm and the iris goes from f/4.5 to f/22, so it is rather slow.

Anyway, I made three screenshots of my application. The pictures are not corrected for sensor anomalies or color; the only processing applied is demosaicing with a simple linear algorithm, a gamma correction of 1.8, and resizing to the smaller screen size.

The first image is the park from my open window; it was raining outside. f/11, 35mm, focus at infinity.
http://www.vosgames.nl/products/MirageRecorder/park.png

The second image is of some bicycles in the park, same settings.
http://www.vosgames.nl/products/MirageRecorder/bicycle.png

And everyone likes to see a depth-of-field shot. f/4.5, 35mm, focused on the blue perfume sampler around 1 meter away. The green perfume sampler on the left is 20 cm nearer, and a mobile phone is 20 cm nearer than that. On the right, the orange lens-converter box is 20 cm further away and the small C/S-mount lens on the far right is 20 cm further than that.
The background wallpaper is around 3 meters from the lens. The photo camera bag is about 1.5 meters from the lens.
http://www.vosgames.nl/products/MirageRecorder/DOF.png

Cheers,
Take

Take Vos
May 27th, 2007, 01:02 AM
Hi there,

I have another update for you. The QuickTime component is working: it can read the raw digital negative and demosaic it during playback. It asks for a 16-bit-per-component drawable, so with a bit of luck Final Cut Pro can use all 14 bits from the camera.

I found in the Final Cut Pro manual that you only get fast (extreme) performance when the timeline uses the same codec and codec settings as the imported clips. So I will make this QuickTime component usable on the timeline as well. Footage that needs to be encoded by Final Cut Pro will use 48-bit RGB and not Bayer; in other words, it will silently use other settings without Final Cut Pro knowing about it.

At this time the QuickTime component is rather slow, as all the demosaicing and other processing is done in software. The plan is to use OpenGL for processing the footage before it enters Final Cut Pro. With a bit of luck you will be able to edit the footage in real time.

But the next programming challenge is an application that creates calibration data to be used by the QuickTime component and Mirage Recorder. The calibration data will consist of a dark-image offset and slope (based on the exposure time), a light-image offset and slope, and lastly a pixel-repair list.

The light-image calibration can be done with any "white" light that has enough intensity in each red, green and blue component, as long as the light is spread equally over the whole sensor. Some kind of diffuser would be handy, or take the lens off the camera. As the calibration is color agnostic, I can use the color conversion matrix for this camera without change. White balance is then the job of Final Cut Pro (or Color, if that works).
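
The compensation described above boils down to a per-pixel two-point correction; a hedged sketch, where the linear dark model and all names are my own illustration rather than the actual calibration format:

```c
/* Per-pixel two-point calibration sketch: subtract the dark level
   predicted for this exposure time, then scale by the pixel's gain
   derived from the flat-field (light) image. */
static double calibrate(double raw,
                        double dark_offset, double dark_slope,
                        double gain, double exposure_s)
{
    double dark = dark_offset + dark_slope * exposure_s;
    return (raw - dark) * gain;
}
```

Pixels on the repair list would bypass this entirely and be rebuilt from their neighbours instead.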

Wayne Morellini
May 27th, 2007, 06:01 AM
Luckily, I believe in helping people, even if they don't reply to my emails, so here goes. You may notice people are not responding here, but if you go to the forum view, it will show you how many times your page has been viewed. It is likely there are a number of people in the background keeping an eye on your progress.

To add some useful information, OpenGL has been reworked, with two new versions coming. Version 2.6 will have a reworked OpenGL, and version 3.0 will add DirectX 10-like GPU features that can work under XP (unlike standard DX10). So you can have compatibility under Mac OS, Linux, Windows XP and Vista (and handhelds).

If I have not already, I aim to add a link to your thread from my thread so that everybody still subscribed can come and have a look.

Now, over at the Elphel thread, Zolt is stopping his compressed raw Bayer project for the camera, and offering it up to whoever would like to take over.

I would like to ask some questions about your software that others might be interested in. Is it commercial? For what machines, OSes and cameras do you eventually aim to make it? Have you considered doing it for the many HD-video-enabled webcams coming out?

Thanks in advance


Wayne.

Take Vos
May 27th, 2007, 06:48 AM
Hello Wayne,

Did I forget to reply to an email? I am sorry about that. I do remember an email that asked me to post my findings as I am progressing with the application, which as you see I am doing.

OpenGL is sometimes quite messy and it is easy to get it into a not so desirable state, or even crash the computer, as there is little checking done in the OS for performance reasons. Anyway, I am using something along the lines of OpenGL 2, but I do not know all the extensions and ARBs that have been renamed in OpenGL 2. It doesn't really matter anyway.

As for your other questions:

Yes, it will be commercial, I will sell the license key for Mirage Recorder (an extension on top of Boom Recorder). I do not yet know if I will sell the complete system including camera head, mount, viewfinder, etc. If I don't sell the complete system I will include information in the manual on how to acquire and use all these components.

Currently I am using a MacBook Pro for development and as the target machine. A Mac Pro can probably also be used. As I use Boom Recorder as the base of this project I will not be able to target Windows or Linux yet, unless OpenStep makes strides to be as complete as Apple's APIs.

I am also writing the software for the Pike 210 for now, although I am keeping my options open for other IIDC cameras; for example, the file formats include information about the color space and pixel format. In the future I will only consider cameras that give access to the raw sensor data.

Why is Zolt stopping with his project?

Cheers,
Take

Djee Smit
May 27th, 2007, 08:34 AM
Hey Take, nice to see you here. I met you some time back (Jonathan 'Djee' Smit) when I was doing an internship at Pat's company. He mentioned recently that you are working on a camera. It all sounds interesting. Keep up the good work.

Take Vos
May 27th, 2007, 09:14 AM
Hello Jonathan,

It is indeed pretty exciting. I hope I will be able to let someone use the camera soon. With a bit of luck Patrick will find a project for it.

Cheers,
Take

Djee Smit
May 27th, 2007, 09:41 AM
If Pat doesn't have anything soon, I might have some things coming up: a video clip and a short film (might be interesting then). I will keep an eye on this thread for the development.

Wayne Morellini
May 28th, 2007, 08:51 AM
Hello Wayne,

Did I forget to reply to an email? I am sorry about that. I do remember an email that asked me to post my findings as I am progressing with the application, which as you see I am doing.

Nope, no replies here (did I even ask to keep us informed by email? there was at least one other email) or other replies.

OpenGL 2.6 is supposed to be cleaned up to be less messy.

Commercial, and 1394; I wish you luck. Pity Apple never released an HD webcam. A lot of people around here used USB and GigE cameras, because the Firewire ones were so horribly expensive. Automotive Firewire HD cameras might potentially be a lot cheaper, and some have very high latitude.

Zolt is ready to start filming, so he has run out of time to get the project ready, and needs to spend the money on hiring a camera.

Take Vos
May 28th, 2007, 10:06 AM
Hi Wayne,

Well, I don't have my screenplay ready for filming yet, so I think I will have a working solution before I start.

Anyway I have a question for everyone, I am making my calibration software and one of the features is that it shows some statistics about the footage that is loaded. This includes the average intensity, the amount of temporal noise and the signal to noise ratio.

In audio I would use dB and dBFS (decibel full scale) to show these values, which is probably also used in video signal processing. But I guess the people who would use this are more comfortable with stops.

So if a sensor clamps its values between 0.0 (black) and 1.0 (white), then I would say a value of 0.125 is: -18 dBFS or -3 stops.

What would you say?

Cheers,
Take

Take Vos
June 9th, 2007, 09:13 AM
Hello everyone,

I've made a calibration application, and the recording software can read this calibration data. One problem is that I do not yet seem to be able to remove the static noise from the raw image. I believe this has to do with the temperature of the sensor: when a typical CCD sensor rises in temperature by 5 kelvin, its dark current noise doubles.

The calibration data I took was made at room temperature, while the footage I was attempting to compensate was made with a hot camera. The Pike gets quite hot, to a temperature where you can no longer hold the camera by hand, so I guess the temperature at the sensor is at least 50 kelvin higher.

I will need to redo the sensor calibration after the sensor has been running for at least 30 minutes. I will also look into how to keep the temperature low and stable.

I will probably be using the "Adaptive homogeneity-directed interpolation" demosaic algorithm in the recording application and in the QuickTime component, as soon as I figure out all of the math. I am getting there.

As you know the resolution is 1800 * 750 * 14 bits, but I believe it will be possible to handle 2048 * 850 * 12 bits in the future.

Cheers,
Take

Take Vos
June 22nd, 2007, 03:34 AM
I have three new pictures from my camera.
These are screenshots; the images were scaled to fit the resolution of my screen by the recording application.

They show the new demosaic algorithm and use the color conversion matrix specified by the camera. The application also does dark current removal and compensates for the sensitivity of each pixel (although these don't work correctly yet).

http://www.vosgames.nl/products/MirageRecorder/AHDDA1.png
http://www.vosgames.nl/products/MirageRecorder/AHDDA2.png
http://www.vosgames.nl/products/MirageRecorder/AHDDA3.png

Cheers,
Take

Jose A. Garcia
June 22nd, 2007, 04:08 AM
Those images look great Take! Looking forward to seeing more of your work.

David Delaney
June 22nd, 2007, 06:01 AM
This might be old news, but I heard that Kodak has a new patterning system that is better than Bayer and leads to less artifacting in low light.
http://www.pcworld.com/article/id,132865-pg,1/article.html

Take Vos
June 22nd, 2007, 07:55 AM
Hello David,

Yes, I read it on Slashdot; it is kind of interesting. But you would have to modify the demosaic algorithm to handle it, and I do not yet have the qualifications to create a demosaic algorithm from scratch.

I have seen some other interesting sensors, where there are smaller pixels around the large pixels; the smaller pixels can handle high luminosity.

Cheers,
Take

Take Vos
July 2nd, 2007, 03:15 PM
Hello everyone,

Here is a new screenshot; it shows the R,G,B waveform of part of a GretagMacbeth chart.

http://www.vosgames.nl/products/MirageRecorder/waveform.png

It is a bit disappointing, as it uses too much CPU time. I don't like it when the fans of my notebook turn on.

But the simpler color and greyscale versions are fast and don't use too much CPU/GPU time.

Take Vos
July 5th, 2007, 01:22 PM
Hello everyone,

Could you please look at this noisy picture and tell me what you think?
Is this normal for an uncalibrated raw sensor image?

I have tried eliminating the noise with a dark image and a flat-field image, but it seems that this noise does not depend linearly on the pixel intensities.

http://www.vosgames.nl/products/MirageRecorder/FixedNoise.png

Jose A. Garcia
July 5th, 2007, 01:35 PM
Well... I would say no. I'm using CMOS and I know that's a CCD, but I can't believe the picture is so different between the two sensor types.

Take Vos
July 5th, 2007, 02:35 PM
Jose,

Are you sure your camera doesn't use some factory calibration data to compensate for non-uniformity?
In any case I have contacted the reseller of the Pike; I wonder what they will reply.

Cheers,
Take

Take Vos
July 6th, 2007, 12:11 PM
Hi,

I contacted the dealer, and they have a firmware update for me.
So with a little bit of luck my fixed-pattern noise will be gone soon.

Now I'll have to get my hands on Windows Vista, as I cannot update the firmware from Mac OS X. A little bit sad that I have to get an OS just to upload firmware :-(

Cheers,
Take

Take Vos
July 12th, 2007, 12:02 AM
Hi, I measured the linearity of the sensor in the Pike.
It seems that any value below 100 is trash, but above that it is linear. I guess it would be possible to use two flat-field images to compensate for anything above 100. But I am scared that it won't look right when clamping that much data.

Here are the results from 10 neighboring green pixels in a single column. Every fourth pixel is too hot (thus three pixels in this case).

http://www.vosgames.nl/products/MirageRecorder/PixelLinearity.png

Take Vos
July 12th, 2007, 12:19 PM
I've made another measurement using the camera flash light (in continuous mode) of my mobile phone. This makes for a much more stable measurement.

You can see here that 7 pixels are pretty much linear, but three pixels are slightly parabolic. I may be able to compensate for it.

I guess I will need to build some form of calibration light that can be screwed onto the camera, to make accurate automatic calibration easy.

http://www.vosgames.nl/products/MirageRecorder/PixelLinearity2.png

Take Vos
July 28th, 2007, 11:55 AM
It has been some time since I last posted, but I have an update.

I seem to have solved the dark-pixel non-uniformity, using a per-pixel 7-point lookup table with cubic interpolation.
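
A cubic lookup along those lines might look like this in C; this sketch uses a Catmull-Rom spline over a uniformly spaced 7-point table, which is an assumption on my part, since the actual spacing and spline are not specified here:

```c
/* Evaluate a per-pixel 7-point LUT with Catmull-Rom cubic
   interpolation.  Assumes the 7 samples are uniformly spaced over the
   input range, so x runs from 0.0 to 6.0; the 4-sample window index
   is clamped near the ends, where the spline extrapolates. */
static double lut_cubic(const double lut[7], double x)
{
    int i = (int)x;
    if (i < 1) i = 1;            /* keep lut[i-1] .. lut[i+2] in range */
    if (i > 4) i = 4;
    double t = x - i;
    double p0 = lut[i - 1], p1 = lut[i], p2 = lut[i + 1], p3 = lut[i + 2];
    /* Catmull-Rom spline between p1 and p2 */
    return p1 + 0.5 * t * ((p2 - p0) +
                 t * ((2 * p0 - 5 * p1 + 4 * p2 - p3) +
                 t * (3 * (p1 - p2) + p3 - p0)));
}
```

Catmull-Rom passes exactly through the stored samples, so the measured calibration points are reproduced and only the values in between are estimated.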

However there are still some bugs in my code, so I cannot show you a good image just yet.

Cheers,
Take

Take Vos
July 28th, 2007, 03:42 PM
Hello everyone,

I've been able to fix most of the problems now. Here is a picture of today's results.
http://www.vosgames.nl/products/MirageRecorder/FirstGoodPicture.png

This algorithm includes bad-pixel removal; the marking of bad pixels depends on brightness, so in dark patches more pixels are removed. This is to deal with unfixable non-linearities.

It seems not to have solved all the double-tap problems yet, which it should; maybe I need a more accurate LUT at higher brightness levels.

Cheers,
Take

Jose A. Garcia
July 28th, 2007, 06:29 PM
That image looks just great Take! Keep it up!

Solomon Chase
July 28th, 2007, 10:01 PM
Great job, thats a really smooth image :)

Take Vos
August 6th, 2007, 11:51 PM
Hello everyone,

I've made a new picture; there isn't much sun, as it is pretty early in the morning and overcast.

http://www.vosgames.nl/products/MirageRecorder/BalancingSolved.png

Here you can see that there is no longer any visible difference between the left and right side of the image.

I now use a 15-point LUT and a separate average black level for the left and right side of the picture. The black level is first subtracted from the image before the 15-point LUT is applied. The black level does drift, and right now you have to measure it manually (cover the lens, press a button) before each take. I hope that with new firmware on the Pike I will be able to access the left and right light-shielded pixels.

If one looks closely at a flat-field target with a flashlight pointed at it, color banding is visible. I am not sure what causes this; maybe the sharp edges in the LUT, which would be solved by using cubic interpolation instead of linear interpolation.

I'd also like to mention that the 15-point LUT now pulls the pixel values to the theoretical linear photo response. I do this according to this formula:
when the average intensity is around 90%:
intensity_per_second = average_intensity[90] / shutter_time[90]

for all the measurements taken I calculate, for 0% <= k <= 100%:
theoretical_intensity[k] = intensity_per_second * shutter_time[k]
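
That formula translates directly into code; a sketch with illustrative names:

```c
/* Theoretical linear photo response: take a reference measurement
   (the one near 90% intensity), derive the intensity per second,
   then predict every measurement from its shutter time. */
static void theoretical_response(const double *avg_intensity,
                                 const double *shutter_time,
                                 int n, int ref, double *theory)
{
    double intensity_per_second = avg_intensity[ref] / shutter_time[ref];
    for (int k = 0; k < n; k++)
        theory[k] = intensity_per_second * shutter_time[k];
}
```

The LUT then maps each measured average back onto its theoretical value, so a perfectly linear pixel would get an identity mapping.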


Cheers,
Take

Take Vos
August 7th, 2007, 12:17 PM
I've made a new picture that was better calibrated.

http://www.vosgames.nl/products/MirageRecorder/BalancingSolved2.png

I've done some white balancing and set the black level lower (the sun was shining into the lens).

Solomon Chase
August 7th, 2007, 02:16 PM
I've made a new picture that was better calibrated.

http://www.vosgames.nl/products/MirageRecorder/BalancingSolved2.png

I've done some white balancing and set the black level lower (the sun was shining into the lens).

The white seems to be clipping a bit too much, like on the bike lady's shirt. I did some color grading with your first picture, pushed it pretty far... (see attached JPG)

Take Vos
August 7th, 2007, 04:48 PM
The white seems to be clipping a bit too much, like on the bike lady's shirt. I did some color grading with your first picture, pushed it pretty far... (see attached JPG)

Yes, I noticed the lady's shirt; I was not really paying too much attention (I still have to add a zebra kind of thing). But I wanted something a little bit brighter than the image you color graded.

As you noticed, it still had a lot of latitude left. You may also have noticed the amount of black offset; there was quite a bit of light falling into the lens then as well, and the calibration I did was very shoddy (the white target was not evenly lit).

The second image should already be much better, even if it is clipping a little bit too much. Although it looks to me that this camera somehow handles white clipping more gracefully than the DV cameras I am used to.

Although that cornerstone of the arched window in the foreground is not really pretty.

The strange thing is that I seem to have compensated for what would normally be considered dead pixels (there are around 8 bad pixels in this image according to how the manufacturer would measure them, and they are really visible in an uncompensated image). I am doing a statistical analysis of each pixel after compensation compared to the theoretical value, and I come up with zero bad pixels for an average light intensity above 10%.

Solomon Chase
August 10th, 2007, 01:08 AM
Yes, I noticed the lady's shirt; I was not really paying too much attention (I still have to add a zebra kind of thing). But I wanted something a little bit brighter than the image you color graded.

As you noticed, it still had a lot of latitude left. You may also have noticed the amount of black offset; there was quite a bit of light falling into the lens then as well, and the calibration I did was very shoddy (the white target was not evenly lit).

The second image should already be much better, even if it is clipping a little bit too much. Although it looks to me that this camera somehow handles white clipping more gracefully than the DV cameras I am used to.

Although that cornerstone of the arched window in the foreground is not really pretty.

The strange thing is that I seem to have compensated for what would normally be considered dead pixels (there are around 8 bad pixels in this image according to how the manufacturer would measure them, and they are really visible in an uncompensated image). I am doing a statistical analysis of each pixel after compensation compared to the theoretical value, and I come up with zero bad pixels for an average light intensity above 10%.

Yeah, your 2nd image (with the clipped highlights) has tons of latitude and detail in the blacks, leading me to believe that you can underexpose a good bit without worrying about noise.

I'm really interested in getting a Pike 210c, as I have some ultra-fast C-mount lenses made for a 1" image sensor (Canon f0.95, Fujinon f0.85, Schneider f0.95, and some others). They would definitely give some beautiful shallow-DOF shots.

Solomon Chase
August 10th, 2007, 02:00 AM
What is the maximum resolution you are getting right now at 14-bit, 24 fps?
I read that AVT is releasing firmware with 12-bit support for the Pike 210c. What kind of resolution could you get at 12-bit instead of 14-bit?

Take Vos
August 10th, 2007, 02:14 AM
Hello Solomon,

I wish I had one of those fast lenses; I only have an f/4.5, and I cannot really use it in my room with just the room lights. But it looks like the depth of field I get with this old Minolta SLR lens is pretty nice.

As for the firmware, I was aware of this and I have requested it.

With 14 bits (16 bit transfer) I can do the following resolution:
1800 x 750

With 12 bits it should do:
1920 x 800 (maximum 1920 x 930)
With 12 bits and the larger Pike:
2048 x 850 (maximum 2048 x 870)

I want to get access to the light shielded columns, but for both these resolutions you would then still be able to get the full 2.40:1 ratio.

Take Vos
August 10th, 2007, 11:40 AM
Hello everyone,

There were some imperfections in the conversion from float to half, which caused a small bit of colour banding. The only colour banding I see now seems to be caused by the LCD display of my MacBook Pro.

For RAW12, I will have to convert the 14 bits from the DAC to the 12 bits in the data stream. So I will be using the LUT of the camera to keep precision in low light, but also keep the latitude for highlights.

The idea is to use a truncated version of 16-bit floats (half). I take the 14-bit integer and divide it by 16384.0 (normalising it to 0.0 - 1.0). Then I divide that result by 256.0, which causes the 16-bit float to:
- start with an exponent of zero and a mantissa that increments by 4;
- end with an exponent of 6 and a mantissa that increments by 1/8;
- sadly leave 1024 (512 after the shift) numbers unused;
- encode the first 2048 (1024 after the shift) numbers exactly.

Now I shift the 16-bit float one position to the right. I am only using 9 bits of mantissa and 3 bits of exponent and no sign: 12 bits in total.

By encoding it in the LUT like this, I only have to shift it left by one before passing it on to the graphics card.

Take Vos
August 11th, 2007, 03:04 PM
That was smart in theory, not in practice.
It seems that numbers with an exponent of zero are ignored in the calculations on the graphics card.

So I had to think of another encoding. I first thought of 12-bit log, but the problem with that is that it would displace the measured values to fit them on the log scale.

So I thought of an encoding where the first 1024 numbers are exact, after which it slowly loses accuracy but still has somewhat monotonic increments. The following shows how to define a table to convert 14-bit data into 12-bit:


uint16_t uint14_to_uint12[16384];
unsigned value, e, m;

for (value = 0; value < 16384; value++) {
    if (value < 512) {
        e = 0;
        m = (value - 0) / 1;
    } else if (value < 1024) {
        e = 1;
        m = (value - 512) / 1;
    } else if (value < 2048) {
        e = 2;
        m = (value - 1024) / 2;
    } else if (value < 3584) {
        e = 3;
        m = (value - 2048) / 3;
    } else if (value < 5632) {
        e = 4;
        m = (value - 3584) / 4;
    } else if (value < 8704) {
        e = 5;
        m = (value - 5632) / 6;
    } else if (value < 12288) {
        e = 6;
        m = (value - 8704) / 7;
    } else {
        e = 7;
        m = (value - 12288) / 8;
    }

    uint14_to_uint12[value] = (e << 9) | m;
}
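
For completeness, a matching decoder can be sketched by tabulating the base and step of each exponent range (my own illustration, derived from the table above; values below 1024 round-trip exactly, larger ones land within one step):

```c
#include <stdint.h>

/* Invert the 12-bit code: the 3-bit exponent selects a base value and
   a step size, and the 9-bit mantissa counts steps from the base. */
static uint16_t uint12_to_uint14(uint16_t code)
{
    static const uint16_t base[8] = {0, 512, 1024, 2048, 3584, 5632, 8704, 12288};
    static const uint16_t step[8] = {1, 1,   2,    3,    4,    6,    7,    8};
    uint16_t e = code >> 9;
    uint16_t m = code & 0x1FF;
    return base[e] + m * step[e];
}
```

Since each of the eight ranges holds exactly 512 mantissa values, every 12-bit code is used, unlike the truncated-half scheme that wasted 512 of them.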

Take Vos
August 12th, 2007, 11:45 PM
Hello everyone,

I thought I would share another picture, this one made during dusk, of the trees in the park. So the image is incredibly dark.
This is still a screenshot, so we still don't have the full 14 bits of latitude.

http://www.vosgames.nl/products/MirageRecorder/UnderExposed.png

I am now sending the picture as integers to the graphics card, so I am no longer converting it to 16-bit float first. This allows for a little more accuracy, as the graphics card internally converts it to a normalised 32-bit float.

Solomon Chase
August 13th, 2007, 11:58 PM
If you're going commercial with this, it needs to capture at 1920x1080. You'd probably have to go 10-bit log or 8-bit; 8-bit would work well in a 16-bit stream (no padding).

10-bit log is what Silicon Imaging is using for their high-end 1080p camera. The 10-bit Bayer capture codec they are licensing (CineForm RAW) is a very efficient, visually lossless wavelet codec. Since it is currently being ported to OS X with full QuickTime support, you might want to inquire about licensing. They are cool guys.

I "push"-graded the underexposed image you posted (attached). Looks like there's still a bit of information there, even if it's not 14-bit :)

Solomon Chase
August 14th, 2007, 12:09 AM
BTW, what is the bottleneck that is stopping you from getting the full 1920x1080? Hard drive speed? Firewire bandwidth? Sensor readout speed? CPU?