1 Attachment(s)
Quote:
As you noticed, it still had a lot of latitude left. You may also have noticed the amount of black offset; there was quite a bit of light falling into the lens then as well, and the calibration I did was done very shoddily (the white target was not evenly lit). The second image should already be much better, even if it is clipping a little bit too much. It looks to me that this camera somehow handles white clipping more gracefully than the DV cameras I am used to, although that corner stone of the arched window in the foreground is not really pretty. The strange thing is that I seem to have compensated for what normally would be considered dead pixels (there are around 8 bad pixels in this image according to how the manufacturer would measure them, and they are clearly visible in an uncompensated image). I am doing a statistical analysis of each pixel after compensation, compared to the theoretical value, and I come up with zero bad pixels for an average light intensity above 10%.
Quote:
I'm really interested in getting a Pike 210c, as I have some ultra-fast C-mount lenses made for a 1" image sensor (Canon f/0.95, Fujinon f/0.85, Schneider f/0.95, and some others). They would definitely give some beautiful shallow DOF shots.
What is the maximum resolution you are getting right now at 14-bit, 24fps?
I read that AVT is releasing firmware with 12-bit support for the Pike 210c. What kind of resolution could you get at 12-bit instead of 14-bit?
Hello Solomon,
I wish I had one of these fast lenses; I only have an f/4.5, and I cannot really use it in my room with just the lights. But it looks like the depth of field I get with this old Minolta SLR lens is pretty nice. As for the firmware, I was aware of this and I have requested it. These are the resolutions I can do:
- 14 bits (16-bit transfer): 1800 x 750
- 12 bits: 1920 x 800 (maximum 1920 x 930)
- 12 bits on the larger Pike: 2048 x 850 (maximum 2048 x 870)
I want to get access to the light-shielded columns, but at both these resolutions you would still be able to get the full 2.40:1 ratio.
Hello everyone,
There were some imperfections in the conversion from float to half, which were causing a small bit of colour banding. The only colour banding I have now seems to be caused by the LCD display of my MacBook Pro. For RAW12 I will have to convert the 14 bits from the ADC to the 12 bits in the data stream. So I will be using the LUT of the camera to keep precision in low light, but also have the latitude for highlights. The idea is to use a truncated version of 16-bit floats (half). I take the 14-bit integer and divide it by 16384.0 (normalising from 0.0 to 1.0), then I divide that result by 256.0, which causes the 16-bit float to:
- start with an exponent of zero and a mantissa that increments by 4;
- end with an exponent of 6, incrementing the mantissa by 1/8;
- sadly leave 1024 (512 after the shift) numbers unused;
- encode the first 2048 (1024 after the shift) numbers exactly.
Now I shift the 16-bit float one to the right. I am only using 9 bits of mantissa, 3 bits of exponent and no sign, 12 bits in total. By encoding it in the LUT like this, I only have to shift it left by one before passing it on to the graphics card.
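For reference, the encoding described above can be sketched in a few lines of Python, using the standard library's half-precision pack format (`'e'`) as the 16-bit float; the function names are mine:

```python
import struct

def encode_12bit(v14):
    """Encode a 14-bit sensor value as the upper 12 bits of a half float.
    Dividing by 16384.0 normalises to [0, 1); dividing by 256.0 pushes the
    range into the small end of the half-float scale, as the post describes."""
    half = struct.pack('<e', v14 / 16384.0 / 256.0)  # 'e' = IEEE binary16
    bits = struct.unpack('<H', half)[0]
    return bits >> 1  # lowest mantissa bit dropped; sign bit is always 0

def decode_12bit(code12):
    """Invert the encoding: shift left by one, reinterpret as half, rescale."""
    half = struct.pack('<H', code12 << 1)
    return struct.unpack('<e', half)[0] * 256.0 * 16384.0
```

As the post says, the first 1024 codes after the shift survive the round trip exactly, and the mapping stays monotone over the whole 14-bit range.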
That was smart in theory, not in practice.
It seems that numbers with an exponent of zero are ignored in the calculations on the graphics card, so I had to think of another encoding. I first thought of 12-bit log, but the problem is that it would displace the measured values to fit on the log scale. So I thought of an encoding where the first 1024 numbers are exact, after which it slowly loses accuracy while still having somewhat monotone increments. The following shows how to define a table to convert 14-bit data into 12-bit. Code:
for (value = 0; value < 16384; value++) {
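The loop above is cut off in the post. Purely as an illustration of the stated goals (first 1024 values exact, then monotone increments that coarsen with brightness), a table whose step size doubles per octave could be built like this; the breakpoints are my own guesses, not the actual curve used:

```python
def build_lut_14_to_12():
    """Map 14-bit sensor codes (0..16383) to 12-bit codes (0..4095).
    The first 1024 codes are exact; each following octave halves precision."""
    lut = []
    for value in range(16384):
        if value < 1024:                   # exact region
            code = value
        elif value < 2048:                 # step 2
            code = 1024 + (value - 1024) // 2
        elif value < 4096:                 # step 4
            code = 1536 + (value - 2048) // 4
        elif value < 8192:                 # step 8
            code = 2048 + (value - 4096) // 8
        else:                              # step 16
            code = 2560 + (value - 8192) // 16
        lut.append(code)
    return lut
```

Each octave contributes 512 output codes, so the table is monotone and tops out at 3071, leaving headroom within the 12-bit range.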
Underexposed image
Hello everyone,
I thought I would share another picture, this time one I made during dusk, of the trees in the park. So the image is incredibly dark. This is still a screen shot, so we still don't have the full 14 bits of latitude. http://www.vosgames.nl/products/Mira...derExposed.png I am now sending the picture as integers to the graphics card, so I am no longer converting it to 16-bit float first. This should allow for a little more accuracy, as the graphics card will internally convert it to a normalised 32-bit float.
1 Attachment(s)
If you're going commercial with this, it needs to capture at 1920x1080. You'd probably have to go 10-bit log or 8-bit; 8-bit would pack cleanly into a 16-bit stream (no padding).
10-bit log is what Silicon Imaging is using for their high-end 1080p camera. The 10-bit Bayer capture codec they are licensing (CineForm RAW) is a very efficient, visually lossless wavelet codec. Since it is currently being ported to OS X with full QuickTime support, you might want to inquire about licensing. They are cool guys. I "push" graded the underexposed image you posted (attached). Looks like there is still a bit of information there, even if it's not 14-bit :)
BTW, what is the bottleneck that is stopping you from getting the full 1920x1080? Hard drive speed? FireWire bandwidth? Sensor readout speed? CPU?
Solomon,
The end result will be uncompressed RAW 12-bit non-linear (I call this '1412') with a resolution of 1920 x 800 (2.40:1), and it will be directly editable in QuickTime applications like Final Cut Pro. As for the codec, I think it is too late now; I have designed my own codec to read the calibration data and compensate the image during playback. I have done this so that the calibration is not fixed, i.e. you can redo the calibration or change the calibration method after you have shot the footage.

The bottleneck right now is the FireWire 800 speed, and after that the hard disk speed (although one could always make a stripe set). But I actually like the idea that you can record onto a single disk. I could also design a '1408' or 8-bit log encoding, which would easily allow for a 2048 x 1080 image, but I am not sure you really want to use only 8 bits to encode the image.

As you have probably noticed while pushing the image, there is quite a bit of "bias"; I think this is caused by reflections inside the camera. The C-mount, which doubles as back-focus, is made of bare aluminium. The 35mm lens is probably projecting onto this bare aluminium, which gets reflected onto the sensor. I guess I will need to paint it, or design one that is black (but that may be prohibitively expensive for single runs).

I think there is also quite a lot of noise in the image, as you have seen. The noise is quite random, so it does give some information about the actual darkness when you view it. I think I will need to chill the camera so that the noise is reduced. With a bit of luck the noise is halved for every 6 kelvin drop in temperature. Finally, one could make a temporal noise reduction algorithm running as a filter inside Final Cut Pro. Cheers, Take
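A quick back-of-envelope check of the bottleneck claims above (the bus and disk figures in the comments are rough round numbers of mine, not measurements):

```python
def data_rate_mb(width, height, bits_per_pixel, fps):
    """Uncompressed Bayer data rate in MB/s (1 MB = 10**6 bytes)."""
    return width * height * bits_per_pixel * fps / 8 / 1e6

rate_1412 = data_rate_mb(1920, 800, 12, 24)    # 12-bit non-linear stream
rate_1408 = data_rate_mb(2048, 1080, 8, 24)    # hypothetical 8-bit log stream
rate_14in16 = data_rate_mb(1800, 750, 16, 24)  # 14 bits in a 16-bit transfer
```

The '1412' stream comes to about 55 MB/s and the 8-bit 2048 x 1080 variant to about 53 MB/s; both sit below FireWire 800's theoretical ~98 MB/s (less after protocol overhead) and within reach of a single fast disk, while the 16-bit transfer at 1800 x 750 already needs about 65 MB/s.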
Hello everyone,
Friday I received the new firmware for the Pike, which allows it to record RAW 12-bit (through a LUT). Mirage Recorder can now receive 1920 x 850 @ 24 fps, 12-bit. The next step is getting this format to disk. Cheers, Take
Great!
I think 1920 x 850 and 1707 x 960 (or 1728 x 972?) should be good starting formats. Unfortunately, as you have noticed, sensor temperature relates very much to noise. The Viper cinema camera has the same issue and uses cooling to get better results. (Very good results, so even uncooled isn't too bad.)
Solomon,
I will design a housing for the camera which will include Peltier cooling, so that I can cool the camera to around 5 degrees C or so. With a bit of luck that is enough. Cheers, Take
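The "noise halves per 6 kelvin" rule of thumb from the earlier post gives a feel for what Peltier cooling to 5 degrees C might buy. Note this only applies to thermal (dark-current) noise, not to interference; the starting temperature in the example is my assumption:

```python
def thermal_noise_factor(delta_t, halving_step=6.0):
    """Remaining fraction of thermal noise after cooling by delta_t kelvin,
    assuming noise halves every `halving_step` kelvin."""
    return 0.5 ** (delta_t / halving_step)

factor = thermal_noise_factor(25.0)  # e.g. from a warm ~30 C down to 5 C
```

A 25 K drop leaves about 5-6% of the thermal noise, i.e. roughly an 18x reduction, if the rule of thumb holds.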
New demosaic algorithm
Hello everyone,
I've just designed a new demosaic algorithm that works at near real time on a MacBook Pro GPU for 1920 x 850 @ 24 fps images. I hope the description of the algorithm is easier to read than the AHD paper, but I am not really used to writing these kinds of papers. It is only two pages, so don't be too shy; I would love to hear if anyone has any ideas. http://www.vosgames.nl/products/Mira...R_demosaic.pdf Here is the GLSL (OpenGL Shader Language) reference implementation. http://www.vosgames.nl/products/Mira...FR_demosaic.fs Cheers, Take
I just want you to know that I have great admiration for your work here, and I love to see that someone has the knowledge and the will to do it. I hope that many others will see it and buy your software to do wonderful things. Keep up the good work!
QuickTime component is working
Here is a new update: yesterday I got the QuickTime component working for the first time. This includes processing of the RAW frames with OpenGL.
I also received a reply from Apple developer support; they told me that Final Cut Pro can handle 16 bits per component (gamma 1.8) if you write your own codec, while currently the highest quality codecs go up to 10 bits. To handle high speeds in Final Cut Pro, my codec has to be used as the timeline codec as well. To how many bits should I record processed RGB frames? The full 16 bits, or would lower, like 12 or 10, suffice? Or record RGB as 10-12-10 (nice 32-bit words)?
It is working inside FCP
So I've got it working inside FCP; as you can see, I just dropped the video from the Finder straight onto the timeline. Of course I first configured the timeline to use the Animation codec with a resolution of 1920x850 @ 24 fps.
Here is a simple screenshot: http://www.vosgames.nl/products/MirageRecorder/FCP1.png I will now make it possible to use this codec on the timeline, so that the red bar above the footage hopefully disappears.
This project looks very promising. Take, how much is the Pike camera you're using? Someone on another thread said it costs several thousand dollars.
John,
You are correct, the camera costs somewhere between 4000 and 6000 dollars, depending, I guess, on which reseller you can use. Maybe I can become a reseller for this camera myself in some way. But I will also expand on this camera by adding cooling, a preview screen and tripod mountings. Also needed is some electronics or firmware change to get a stable clock; I was thinking of basing the clock on the word clock of a USB audio interface or an Ambient Lockit box. I may also include some changes to the optics: the current camera has a C-mount with a mirrored finish, which should really be matt black. And maybe I want to optically create some black lines on the sensor for continuous calibration purposes. Lastly, some kind of calibration setup is also needed, with a white light that has a very even surface and a specific light output at a specific distance.
A little on the expensive side, but this camera appears to have some real potential. With all those interesting add-ons you're developing I wonder what the final cost of such a product might be.
I just found out that Final Cut Pro does not handle 16-bit-per-component RGBA pixel values (QuickTime does). It seems I have to support r4fl, which is a Y'CbCrA pixel format with a 32-bit float for each component.
For the debayer decompressor this is not that bad, as most debayer algorithms like to work in YUV/Y'CbCrA/CIE Lab anyway, so I no longer have to convert back to RGB; it is a win :-) For the timeline compressor/decompressor I will probably store 16-bit floats per component in Y'CbCrA. OpenEXR also uses 16-bit floats, but in RGBA. I did get the timeline codec functionality working and can say that my plan is working: the camera footage no longer needs to be rendered by Final Cut Pro. The only issue I have is that FCP likes to open and close my codec for each frame rendered as you scrub through the timeline. My open call takes a while, as it sets up an OpenGL context. I hope I can fix this by keeping a set of initialised contexts available for re-use.
I've received my new lens: a Fujinon CF16HA-1, 16mm, f/1.4.
Here are two pictures showing off a shallow depth of field, at 1 and 2 meters from the subject. http://www.vosgames.nl/products/Mira.../new_lens1.png http://www.vosgames.nl/products/Mira.../new_lens2.png Don't mind the noise; I am experimenting with the debayer and calibration system.
Hi, I've just designed and ordered a gate that will fit in front of the sensor of the Pike. At least, I hope it will fit; I might have made it too thick for the space between the sensor and the lens.
The gate will cover part of the sensor so that the black part of the image can be used for continuous calibration. The gate opening will be around 870 pixels high, while the camera records 930 pixels in height. The final picture will be 1920 x 800 pixels (2.40:1 ratio), so there is some wiggle room.
Hi,
I can now confirm that my codec will work in high quality. I implemented AHD in 32-bit floating point, and Final Cut Pro will use 32-bit floating point Y'CbCrA to render the image. I tried the Color Corrector 3-way filter and made some extreme modifications. During preview, the video scopes/histogram show quite a few gaps (color banding), which is logical as the preview is rendered in 8-bit. After rendering, the histogram shows solid again and the image shows no color banding.
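The banding-then-solid behaviour is easy to reproduce numerically. A small sketch (my own illustration, not part of the codec) counting how many 8-bit output codes can never occur when a gain is applied inside an 8-bit pipeline:

```python
def missing_codes(gain, levels=256):
    """Count output codes an 8-bit pipeline can never produce after
    applying `gain`; these are the gaps visible in the histogram."""
    produced = {min(levels - 1, round(v * gain)) for v in range(levels)}
    return levels - len(produced)
```

With a gain of 2.0, an 8-bit pipeline leaves 127 of the 256 codes unused (every odd code below the clip point), while rendering in 32-bit float and quantising only at the end fills the histogram back in.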
I've been working on the recording application, specifically the preview screen.
The preview screen has three modes: color, focus and exposure.
- Color: simply the standard view most cameras have. It is very honest and shows what the sensor sees, including sensor imbalance, dead pixels and fixed pattern noise; no white balance or other color corrections either.
- Focus: a simple edge detection algorithm using only the green pixels. The image is black except for the places with a lot of high-frequency components. This should help with focusing, without requiring a large high-definition screen.
- Exposure: shows in false color the amount of light falling on the sensor: blue is dark, green is medium, red is bright, white is almost clipping. Because it shows the value of the maximum color component at each location, it allows shooting colors that are brighter than maximum white.
The latency is pretty low, and because so little processing is going on, the computer stays cool as well, which is kind to the sound mixer. I also thought of a way to cool the camera without making noise: for PCs there are now very efficient cooling blocks without a fan; together with a Peltier element I can probably get to low temperatures (5 degrees Celsius). The camera will look very freaky with such a contraption attached to it, though. Here are some pictures of these cooling blocks: http://www.silentpcreview.com/article187-page1.html http://www.frozencpu.com/products/63...11&id=3Mns9ctm http://www.tweaknews.net/reviews/zer...ler/index3.php http://www.thetechlounge.com/article...pe+CPU+Cooler/
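The exposure mode described above amounts to a small false-color lookup on the maximum channel. The real implementation is a GLSL shader; this Python sketch with made-up thresholds just shows the idea:

```python
def exposure_false_color(r, g, b):
    """False-color a pixel by its brightest component (values in 0.0-1.0).
    The thresholds here are illustrative guesses, not Mirage Recorder's."""
    v = max(r, g, b)               # judge exposure on the hottest channel
    if v < 0.25:
        return (0.0, 0.0, 1.0)     # blue: dark
    if v < 0.60:
        return (0.0, 1.0, 0.0)     # green: medium
    if v < 0.95:
        return (1.0, 0.0, 0.0)     # red: bright
    return (1.0, 1.0, 1.0)         # white: almost clipping
```

Keying on the maximum component is what lets the mode warn about a single clipping channel, i.e. colors brighter than maximum white.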
I was really looking into using one for my camera, but I found them to be huge: so big that a lot of people had trouble fitting them onto the motherboard and closing the PC case.
My next choice was water cooling: not too big, and definitely quiet. But the water adds considerable weight to the camera, which I had designed to be shoulder mounted. In the end I decided to go with the Roswell Z2ex http://www.newegg.com/Product/Produc...00025&Tpk=z2ex It performs very well, and for a CPU fan it's not loud. I use it on my Q6600. At a height of 71mm it's still big, but it fits very nicely into my design. Interesting project, by the way. Looking forward to more.
I have had some success with black clamping.
I have received the gate (that I designed) that fits in front of the sensor, to get a few black lines for continuous calibration. I do get some black lines, but because there is room between the gate and the sensor (the sensor is covered by a piece of glass), the number of black lines decreases as the iris is opened. I now record 140 extra lines at the bottom of the frame, for a total of 1920 x 940 pixels, and I have to use my lens with the iris set to f/2 or higher to keep 8 black lines at the bottom of the frame. If I simply average these 8 lines and subtract them from all the other lines, I get lots of strobing vertical lines, which is quite annoying. By also averaging these black lines over the last 8 frames I get a much smoother image. The image looks as smooth as the original, but much darker of course. With enough luck this, together with the linearity algorithm, will completely solve the balancing problems (between the two halves of the sensor). Cheers, Take
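The scheme above, a per-column average of the 8 masked lines smoothed over the last 8 frames, can be sketched like this (the class name and frame layout, rows as lists with the black lines at the bottom, are my own):

```python
from collections import deque

class BlackClamp:
    """Per-column black subtraction, temporally smoothed over `history`
    frames to avoid the strobing a single-frame average produces."""

    def __init__(self, history=8):
        self.past = deque(maxlen=history)

    def process(self, frame, black_rows=8):
        # per-column average of this frame's masked (black) lines
        cols = list(zip(*frame[-black_rows:]))
        self.past.append([sum(c) / black_rows for c in cols])
        # smooth the black estimate over the recorded history
        n = len(self.past)
        smooth = [sum(f[i] for f in self.past) / n for i in range(len(cols))]
        # subtract the smoothed estimate from the active lines only
        return [[px - smooth[i] for i, px in enumerate(row)]
                for row in frame[:-black_rows]]
```

Averaging over frames trades a little responsiveness for a black estimate that no longer flickers from frame to frame.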
Because of the black clamping I can really see the actual noise of the sensor; it seems a lot of the noise is caused by electronic interference.
I especially noticed just now, when wiggling the FireWire 800 connector, that the noise became stronger or weaker. Luckily I ordered the fibre version of the Pike on purpose: I want to feed the camera with its own stabilised power and use optics to separate the electronics. I will also design the electronic trigger system to be galvanically separated. I also finally got the camera to smear, by pointing it at a halogen light source at a high shutter speed, over-exposed. The algorithm will compensate for the smear in around 8 frames, or 1/3 of a second. So you will see the smear if you pan the camera, but if you lock the camera down the smear will not be noticeable.
As I said, I now have color, focus and exposure modes in the viewfinder; here are some screenshots of it in action in Boom Recorder.
http://www.vosgames.nl/products/Mira...eenshots.shtml I have also included a 1280 x 720 shoot-and-protect area, for people who like to shoot at a theatrical 1920 x 800 and at the same time a full-screen 720p for TV.
What do you think of the gamut? The color looked less than stellar in your recent pictures.
Hi Emre,
I've decided not to compensate for anything except black level inside the recording application, so the colors are not white balanced and the color space is camera RGB. Together with calibration data prepared by a calibration program, the codec will show much more correct colors, which should be at least as good as the older pictures. Calibration can be done using the sun, a white sheet of paper (how white it is doesn't even matter) and a GretagMacbeth color target. The calibration program will then linearise the pixels using the white sheet of paper, and derive the white balance and color matrix from the GretagMacbeth.
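The white-balance half of that calibration can be sketched very simply: per-channel gains derived from a neutral patch of the chart. Normalising to green is my own assumption, and the actual program also fits a full color matrix, which is not shown here:

```python
def white_balance_gains(neutral_rgb):
    """Gains that map a neutral (grey) patch to equal R, G, B,
    with green as the reference channel."""
    r, g, b = neutral_rgb
    return (g / r, 1.0, g / b)

gains = white_balance_gains((0.4, 0.5, 0.25))  # a camera-RGB grey patch
```

Applying these gains to every pixel makes the grey patch neutral; the remaining cast is then handled by the color matrix step.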
Here is an image that was exported from the QuickTime Player. As the QuickTime Player uses the low-quality, high-speed playback path, the image has not been processed using calibration data, and there are many artefacts from the nearest-neighbour demosaic algorithm.
http://www.vosgames.nl/images/Mirage...colorfield.png I guess everyone will recognise the GretagMacbeth color target; it was lit by natural light through a cloud. I guess that corresponds to a D50 white point; please correct me if I am wrong, because a lot of calculations will depend on this. The image has been processed in 8 bits (from 14 bits linear from the sensor, through 12 bits non-linear during data transfer), with a gamma of 1.8 (the native Apple gamma).
Ideally you should get white noise with a very clean FFT transform. If you get diagonal lines that tend to move, it is interference. If you get horizontal lines moving up and down at extreme gain, it's probably due to realistic limitations of the electronics. An ideal camera would reproduce only white noise from the sensor. If you test this with an FFT, do it before debayering and without any color balance.
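John's FFT test can be sketched with a plain DFT over one row of black-frame data (slow but dependency-free; the flatness metric is my own choice):

```python
import math

def spectral_peak_ratio(row):
    """Peak-to-mean magnitude over a row's spectrum, DC bin excluded.
    White noise gives a small ratio; an interference tone a large one."""
    n = len(row)
    mean = sum(row) / n
    mags = []
    for k in range(1, n // 2 + 1):          # positive-frequency bins
        re = sum((x - mean) * math.cos(2 * math.pi * k * i / n)
                 for i, x in enumerate(row))
        im = sum((x - mean) * math.sin(2 * math.pi * k * i / n)
                 for i, x in enumerate(row))
        mags.append(math.hypot(re, im))
    return max(mags) / (sum(mags) / len(mags))
```

A row dominated by a single interference frequency scores far higher than one whose energy is spread across the spectrum.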
Hi John,
They are horizontal blocks moving/flickering, but the amplitude clearly changes when I move the FireWire cable, which looks to me like low-frequency interference. I think its own power supply, not electrically connected to anything else, will reduce the problem significantly.
Strange. Put the lens cap on, set everything to 0 dB and flat color, and post an uncompressed sample at 1/50 sec.
Hello John,
1/50 sec shutter time, nearest-neighbour debayer, gamma 1.8. The first one is straight from QuickTime Player, exported as PNG: http://www.vosgames.nl/images/Mirage...tion-black.png This one has been stretched with GIMP: http://www.vosgames.nl/images/Mirage..._processed.png
If you hadn't lowered the black levels a little bit (the right side is a couple of codes above), it would have codes up to 16, with noise centred around code 8. If this is 0 dB, that's a lot; it should be like that at about 9 dB of gain.
Hello John,
This is indeed with 0 dB of gain (although how much that is in reality I don't know). It is black compensated by subtracting some black lines from the whole image, so I didn't lower the black level. Left and right are of course from different halves of the sensor, with separate amplifiers and A/D converters, so there is bound to be a slight difference between the two sides. I would have expected no difference between left and right after subtracting the black lines, but there seems to be a slight vertical slope. This will probably be removed when I compensate for fixed pattern noise. The fixed pattern noise is also contributing quite a bit to the noise you are seeing. In the end it will probably be very clean. Cheers, Take
The gain control for each tap is a true gain control. The only extra gain that could exist is for balancing the three colors. In some cameras the manufacturer provides up to 12 dB for this, which covers most lighting situations. Your red and blue could already have a few dB of gain from the balance implementation. It is usually easy to tell in Photoshop by switching between the channels; the noise difference should be obvious.
The Sony Z1 has noise centred at around code 4. It doesn't change significantly with gain, so they obviously do extra processing. It's the ugly blocky type of course, due to HDV.
DV Info Net -- Real Names, Real People, Real Info!
1998-2025 The Digital Video Information Network