View Full Version : Mirage Recorder



Paul Curtis
January 22nd, 2008, 02:48 AM
This is some interesting DSP. We removed the bilinear debayer effects and did a debayer and lens correction from scratch. I didn't code the processing, but I find it amusing that you can get from A to B :)

http://img255.imageshack.us/img255/6387/rebuildaq5.jpg

Very impressive work John, especially the edge reconstruction! So when can we buy it :) (I know...!)

What do you think of the Red debayer? It seems very naturalistic.

cheers
paul

Take Vos
January 22nd, 2008, 03:07 AM
I found that I did some stupid things, like using a power function instead of the exponential function for the X-axis of the per-pixel LUT I have been using. Now I will try cubic interpolation to guess the values better, and use adaptive pixel repair.

So I have some work to do.

John Papadopoulos
January 22nd, 2008, 07:39 AM
Very impressive work John, especially the edge reconstruction! So when can we buy it :) (I know...!)

What do you think of the Red debayer? It seems very naturalistic.

cheers
paul

It looks good, but I wonder how fast it is. I also do not think anyone needs 4K. Hollywood has been quite happy with 2K, and many cinematographers are happy with 16:9 low compression or uncompressed SD. We have to think about those who care about productivity. 800p is 10 times cheaper in CPU resources compared to 4K, and it will easily outperform any small-pixel solution (especially CMOS ones) in terms of low light performance and pixel quality. I have seen a night clip using the Red with a car moving through a well lit part of the city. It was only 1024 pixels wide, more than 16 pixels summed to 1, but the sky was a noisy dark blue mess. A CCD in that situation would look OK at 1:1 pixel, and it would still look excellent with lots of gain if the output was uncompressed. I have also seen ISO 800-1000 shots (which were a true ISO 300-500 according to some users) and the output wasn't usable at all. When designing a practical solution, it's not as easy as maximising the most commercial feature (pixels); the goal is to improve the performance of the package as a whole. When designing something for the independent and low budget markets, you have to think about low light quality, affordability of fast lenses, ease of editing and so on, because that market cannot afford large crews, lots of lighting or expensive editing facilities. A digital cinema camera for low budget features that has lighting requirements similar to low sensitivity film is not very useful in the real world.

Paul Curtis
January 22nd, 2008, 09:26 AM
It looks good, but I wonder how fast it is. I also do not think anyone needs 4K. Hollywood has been quite happy with 2K, and many cinematographers are happy with 16:9 low compression or uncompressed SD. We have to think about those who care about productivity. 800p is 10 times cheaper in CPU resources compared to 4K, and it will easily outperform any small-pixel solution (especially CMOS ones) in terms of low light performance and pixel quality. I have seen a night clip using the Red with a car moving through a well lit part of the city. It was only 1024 pixels wide, more than 16 pixels summed to 1, but the sky was a noisy dark blue mess. A CCD in that situation would look OK at 1:1 pixel, and it would still look excellent with lots of gain if the output was uncompressed. I have also seen ISO 800-1000 shots (which were a true ISO 300-500 according to some users) and the output wasn't usable at all. When designing a practical solution, it's not as easy as maximising the most commercial feature (pixels); the goal is to improve the performance of the package as a whole. When designing something for the independent and low budget markets, you have to think about low light quality, affordability of fast lenses, ease of editing and so on, because that market cannot afford large crews, lots of lighting or expensive editing facilities. A digital cinema camera for low budget features that has lighting requirements similar to low sensitivity film is not very useful in the real world.

In my experience cinematographers sadly don't always get to choose (producers usually do), but in principle I agree with you. The Varicam still outputs very nice images at 720. One aspect of the Red that draws people, though, is the shallow depth of field offered by a 35mm-sized sensor. With the right lenses the same applies to 2/3" and 1", but unless you're at Zeiss Superspeed quality levels a lot of lenses aren't so good around T1.3. I'm just starting to experiment with C-mount, so who knows what's possible.

Since you've been posting, your comments have made me rethink CCD, although I'm not *totally* convinced or converted yet :) because I only see CCD examples, with no like-for-like CMOS comparison to quantify the differences visually.

In broad terms, with generic CMOS/CCD sensors, I can see where you're coming from, but I'm interested in specific sensors, the Altasens ProCam HD vs the Kodak KAI2093 for example. On paper the Altasens is actually more sensitive across a broader range of light, and CMOS, as I understand it, generally outputs a cleaner signal (less hardware is needed around the sensor because a lot of it is on-chip). CCDs bloom and CMOS sensors usually have a rolling shutter (mitigated by making sure the sensor is running fast enough). CMOS frame rates are higher, and that is a valid narrative requirement sometimes, especially for model shots and so on.

Have you found *any* CMOS that you're impressed with? Aside from rolling shutter, what about the other aspects?

paul

John Papadopoulos
January 22nd, 2008, 11:42 AM
Well, the producer is usually not an engineer or even the photographer, so I guess every single one of them will want the Red because of the higher resolution even if that's not required by the project.

Most people involved with advertising or special effects will also prefer the Red because the higher chroma resolution gives more options, and you can use as much light as you want in those shoots. They will only need CCD if there is lots of real camera motion (not synthetic).

Where CCD will always be better is in very active camera work, action shots, natural light cinematography and so on. On a good CCD outputting uncompressed there is natural CCD noise that can be used creatively. You normally do not have to use gain, so there is no grain, but if you like it, it looks great and has excellent statistical properties that are very close to high sensitivity film. On a compressed format or a CMOS sensor you would not want to do that.

On the other hand, not everyone has the same idea about what an image should look like. These days everything has too much grading and an artificial image is very common and even considered cool by some people. I personally think film is still the reference for color quality and this is what we are after. If the user is after desaturated, metallic, green/blue looking science fiction looks, any camera will be equally good I guess. But some people want realism out of the camera and most people interested in the camera so far are involved in pseudodocumentaries, drama, comedy and horror. Which is nice, since that was the intended market for this camera. We also have interest from film people who have never used a digital camera. Which made us extremely happy I admit.

When comparing sensors of the same resolution, the CCD is usually superior in every image quality aspect except smear performance. You can expect the Kodak 2093 to significantly outperform any CMOS at 2K, including the SI and the Red in 2K crop.

To answer your question, in a direct comparison the CCD will have higher sensitivity, an excellent statistical nature in its noise, higher saturation, better motion quality, more realism and a far more impressive image. In low light situations the CMOS will quickly deteriorate to a flat, lifeless image. Every single CMOS I have seen so far has these characteristics. I have first hand experience with many CMOS sensors, including the Altasens 1080p, but not with the Red sensor. We would only use CMOS if the reduction in cost was extremely important for the complete package. With an Altasens it can be very significant, so there is always room for CMOS even in our product.

The issue of depth of field has many sides. F1.4 is cheap on the 2/3" and 1" sensors, but how much does it cost on the full frame Red? SLR lenses at that speed (beyond 50mm) do not exist and film lenses are extremely expensive. You settle for f2.8 or slower and there you go, there is no DOF advantage and you also get an enormous loss of light. There are many 35mm shooters who prefer to stay above f4 because of DOF limitations and consider working at larger apertures a problem. They will be very satisfied with a sensor like the Kodak that has much more sensitivity for the same DOF. 2/3" and 1" have good shallow DOF capabilities. If a user comes from a 1/3" or 1/4" f1.6 or f1.8 camcorder he might be starved for some DOF flexibility, but Super 35mm is probably too much for most applications. Personally I find 2/3" DOF annoying sometimes at large apertures because it is too shallow.

CCDs are not made by startups or companies that were created yesterday; they are made by Sony and Kodak, companies that pioneered and have dominated the imaging (even film) market for decades. There is some solid engineering behind the sensors and it is obvious in their output. Why is our Sony 2/3" CCD sensor used in camera heads that cost 20,000 euro and output a frame every 10 seconds? Why is it considered the highest quality low light CCD for scientific applications if CMOS actually had a chance to compete at a fraction of the cost? Why do all serious microscopy cameras come with that sensor? Why does NASA choose CCDs for all space based inspection instead of the affordable CMOS sensors? You would expect the engineers behind these solutions to have done their homework, to be able to read beyond the CMOS related marketing, and to be free from cost restrictions of a few hundred dollars. And you would be right :)

I only see CMOS in a few places on the market: mobile phones, consumer camcorders, and a couple of digital cinema cameras. Why are Sony and 95% of other companies still using the expensive CCD sensors in even the cheapest still cameras if CMOS is up to the task? They don't want to save money or improve quality? CMOS is used in DSLRs now, but there is no camera motion involved in those and the system can afford to do quite a lot of processing on the CMOS output. These cameras (body only) do not make any real profit, the companies survive on selling aftermarket lenses for their system, and there is obviously pressure to reduce cost and, why not, increase resolution on the side. The expensive medium format camera backs still use CCDs, just like any serious camcorder and digital cinema camera from the big manufacturers.

Red are very smart. I can get an Altasens implementation and do a direct comparison with a CCD; we have evaluated an Altasens head and are in the process of evaluating another one. But since Red claim the sensor is not available to anyone else, they can claim anything, and we would have to spend 20,000 to buy a Red One camera and wait many months to get it in order to compare directly. We would never be trusted when publishing such results, so there is not much we can do except wait for the users to discover the quality differences in actual use. Which will be hard. The Red users thought the NoX samples looked bad, but they look excellent IMHO. We have a DVX user in the team and, damn, it is like questioning one's religion sometimes when comparing anything to the DVX!

Paul Curtis
January 23rd, 2008, 03:55 PM
John,

I just wanted to say thank you for such a comprehensive, thought-out reply!

I'd like to add that you talk about having real chroma and luma, and the nice thing about a bayered 2K or 4K is the extra chroma information by the time it's downsampled to 2K. Although I know your debayer is exceptionally good.

You make a good point about the nature of 'uncompressed' noise.

Can smear be 'fixed' or minimised in CCD sensor design by good supporting electronics and hardware?

The point about lenses is important, especially with this sensor; I'm finding it quite difficult to locate glass that has a big enough image circle. Even the Zeiss Superspeeds and S16mm Cookes don't look like they'll cover it. There's a bit of a void in the market here, save for some machine vision lenses (like the Fujinons): SLR lenses are too big and all the masses of 2/3" lenses are too small. Have you found some quality lens manufacturers?

>We have evaluated an Altasens head and are in the process of evaluating another one.

Don't suppose you care to mention what you have tested and your thoughts? I suspect you're in a unique position of really being able to test and understand these heads?

thanks again
paul

Take Vos
January 25th, 2008, 07:33 AM
I have been working on my non-uniformity calibration algorithm and here is the result:
http://www.vosgames.nl/images/MirageRecorder/fr_cal_bp.png

I first had to find the best values for the x-axis of my per-pixel LUTs; as the errors are exponential on an exponential system (or is that logarithmic on a logarithmic system) I had to choose the x-axis accordingly.
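
For illustration, geometrically (exponentially) spaced knots for the LUT x-axis would look roughly like this; a quick sketch of the idea only, not the actual calibration code, and the names are made up:

#include <math.h>

/* Fill "knots" with n geometrically spaced sample positions between
 * min_code and max_code (min_code must be > 0), so the knot density
 * follows errors that grow exponentially with signal level.           */
static void exp_knots(float *knots, int n, float min_code, float max_code)
{
    for (int i = 0; i < n; i++) {
        float t = (float)i / (float)(n - 1);                 /* 0..1 */
        knots[i] = min_code * powf(max_code / min_code, t);  /* geometric spacing */
    }
}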

Then I added a bad pixel detector. A pixel is marked bad when, after uniformity restoration, it is non-uniform by more than 6% from the average (a figure taken from a Kodak white paper). I do this for each white field that has been taken, and I mark the bad pixel with the brightest value at which it is off by more than 6%.

During rendering, the pixels are first made uniform. Then each pixel value is compared to its own bad-pixel-brightness value. If the pixel is bad, it is interpolated from neighbouring pixels that are good; if no good neighbour is found, an average is taken from all the neighbours (I guess I could do this using weights).
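
To make the repair step concrete, it is roughly like this (a quick sketch rather than the exact code; it assumes a precomputed bad-pixel mask and +/-2 offsets so the neighbours stay on the same Bayer colour):

/* Replace a flagged pixel with the average of its good neighbours on the
 * same colour plane; fall back to all neighbours if none are good.        */
static float repair_bad_pixel(const float *img, const unsigned char *bad,
                              int width, int height, int x, int y)
{
    static const int dx[4] = { -2, 2,  0, 0 };
    static const int dy[4] = {  0, 0, -2, 2 };
    float sum_good = 0.0f, sum_all = 0.0f;
    int   n_good = 0, n_all = 0;

    for (int k = 0; k < 4; k++) {
        int nx = x + dx[k], ny = y + dy[k];
        if (nx < 0 || nx >= width || ny < 0 || ny >= height)
            continue;
        float v = img[ny * width + nx];
        sum_all += v; n_all++;
        if (!bad[ny * width + nx]) { sum_good += v; n_good++; }
    }
    if (n_good > 0) return sum_good / n_good;   /* prefer good neighbours  */
    if (n_all  > 0) return sum_all  / n_all;    /* fall back to everything */
    return img[y * width + x];                  /* nothing usable: keep it */
}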

There is also a bad line somewhere at the top of the image that goes all the way from left to right. Weirdly, this line does not show up in the white fields, otherwise the bad pixel detector would have detected it (I also checked it visually). I guess I will have to find a way to add manual pixels and lines to the bad pixel map.

Next step is getting color conversion to work. I am thinking of using a 3D LUT for color-space conversion instead of a color conversion matrix. I am not sure how to implement one, but I guess I will find out.

Jason Rodriguez
January 25th, 2008, 10:36 AM
Hey Take,

3D LUTs are not hard per se, it just depends on where you need to apply one.

The GPU will give you a 3D LUT for "free", at least sort of, meaning linear interpolation is typically part of any GPU's architecture (i.e., it can linearly interpolate a texture map), so you can use a volumetric texture and the GPU will interpolate it trilinearly to get all the values you need out of the 3D LUT.

If you have to rely on the CPU for the 3D LUT, then trilinear gets a bit cumbersome since there are a lot of operations (you need a total of 8 points to construct the interpolated value). Tetrahedral interpolation can be a lot easier, since you're then only dealing with the 4 points necessary for interpolation. Since you're not doing a scattered mesh, you don't need to worry about stuff like Delaunay triangulations; your 3D LUT should be a regular mesh of evenly spaced points, so you can simply split each cube into six tetrahedra using a single diagonal of the cube. In fact, it may be even simpler than that: with an evenly spaced mesh you could just pick the 4 closest points and make a tetrahedron out of them, but you need to make sure that no point can be addressed by two separate tetrahedra. If you don't pick a fixed diagonal of the cube for creating the tetrahedra, one sample can get assigned to one set of 4 points, the sample right next to it to another set, and the one after that back to the first tetrahedron, giving an odd cross-over situation. Depending on the precision of the math used, this could cause interpolation inconsistencies; theoretically it wouldn't, but in the real world it might.
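
For what it's worth, a CPU version of tetrahedral interpolation on a regular NxNxN LUT can stay fairly small. The sketch below is only an illustration of the technique (my own names, an assumed x-fastest float-RGB layout, inputs in 0..1), not code from either project:

/* Tetrahedral interpolation of a regular NxNxN 3D LUT of float RGB triplets. */
static void lut3d_tetrahedral(const float *lut, int N,
                              const float in[3], float out[3])
{
    int   base[3];
    float frac[3];
    for (int a = 0; a < 3; a++) {
        float g = in[a] * (N - 1);
        int   i = (int)g;
        if (i < 0)     i = 0;
        if (i > N - 2) i = N - 2;          /* keep i+1 inside the grid      */
        base[a] = i;
        frac[a] = g - i;                   /* position inside the cell      */
    }

    /* Sort the axes by fractional part (descending); this picks which of
     * the cube's six tetrahedra contains the point.                        */
    int ord[3] = { 0, 1, 2 };
    for (int i = 0; i < 2; i++)
        for (int j = i + 1; j < 3; j++)
            if (frac[ord[j]] > frac[ord[i]]) {
                int t = ord[i]; ord[i] = ord[j]; ord[j] = t;
            }

    /* Walk from (x,y,z) to (x+1,y+1,z+1) one axis per step and blend the
     * four visited corners with barycentric weights.                       */
    float w[4] = { 1.0f - frac[ord[0]],
                   frac[ord[0]] - frac[ord[1]],
                   frac[ord[1]] - frac[ord[2]],
                   frac[ord[2]] };
    int idx[3] = { base[0], base[1], base[2] };
    out[0] = out[1] = out[2] = 0.0f;
    for (int k = 0; k < 4; k++) {
        const float *c = &lut[3 * ((idx[2] * N + idx[1]) * N + idx[0])];
        out[0] += w[k] * c[0];
        out[1] += w[k] * c[1];
        out[2] += w[k] * c[2];
        if (k < 3) idx[ord[k]] += 1;       /* step along the next axis      */
    }
}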

Thanks,

Jason

Take Vos
January 25th, 2008, 12:06 PM
Hello Jason,

Thanks for the information.

I started and stopped with trilinear interpolation, it was too messy. I was actually doing a scattered mesh, i.e. only the colors from the ColorChecker would be in the LUT. And I think I had it quite good, until I needed to do extrapolation and then it became extremely weird.

So instead I am trying to find a 3x3 matrix for color conversion.
I am using Gauss-Jordan to find the matrix.
Then I am going to repeat that for each combination of 3 ColorChecker colors.
Then I take the median of all the results and that will be my correction matrix.
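
A rough sketch of the per-combination solve, just to show the idea (my own illustration rather than the actual code; it assumes cam[i] and ref[i] hold the camera and reference RGB of patch i):

#include <math.h>

/* Find the 3x3 matrix M with M * cam_i = ref_i for three patches, by
 * Gauss-Jordan elimination with the reference colors as right-hand sides.
 * Returns -1 if the three patches are (nearly) linearly dependent.        */
static int solve_color_matrix(const float cam[3][3], const float ref[3][3],
                              float M[3][3])
{
    float A[3][3], B[3][3];
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) { A[i][j] = cam[i][j]; B[i][j] = ref[i][j]; }

    for (int col = 0; col < 3; col++) {
        /* partial pivoting: bring the largest entry into the pivot row     */
        int piv = col;
        for (int r = col + 1; r < 3; r++)
            if (fabsf(A[r][col]) > fabsf(A[piv][col])) piv = r;
        if (fabsf(A[piv][col]) < 1e-9f) return -1;
        for (int j = 0; j < 3; j++) {
            float t;
            t = A[col][j]; A[col][j] = A[piv][j]; A[piv][j] = t;
            t = B[col][j]; B[col][j] = B[piv][j]; B[piv][j] = t;
        }
        float d = A[col][col];                 /* normalise the pivot row   */
        for (int j = 0; j < 3; j++) { A[col][j] /= d; B[col][j] /= d; }
        for (int r = 0; r < 3; r++) {          /* eliminate the other rows  */
            if (r == col) continue;
            float f = A[r][col];
            for (int j = 0; j < 3; j++) { A[r][j] -= f * A[col][j]; B[r][j] -= f * B[col][j]; }
        }
    }
    for (int i = 0; i < 3; i++)                /* B now holds M transposed  */
        for (int j = 0; j < 3; j++) M[i][j] = B[j][i];
    return 0;
}

Repeating this over every combination of three patches and taking the per-element median should reject the combinations that happen to be badly conditioned.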

Cheers,
Take Vos

Take Vos
January 25th, 2008, 12:37 PM
Jason asked me for the exact steps used when creating the image fr_cal_bp.png, so here they are.

+ Read footage into Final Cut Pro
- Camera bayer 12 bit linear (already black corrected with a small offset to handle negative values)
- Apply Per Pixel uniformity correction (also linearizes each color channel).
- Fix bad pixels
- Debayer using a directional algorithm
- <--------- Here is where the color conversion will be.
- Add rec709 gamma correction
- Convert to YUV using rec709 YUV conversion
+ Final Cut Pro with "3 way color correction filter", with neutral settings, just to force high dynamic range rendering when previewing (not needed for normal export)
+ Export to my own intermediate codec from Final Cut Pro
- Convert to RGB with rec709 YUV conversion
- Remove rec709 gamma correction
- Saved as 16 bit float linear RGB
+ Read back into Final Cut Pro
- Add Apple native gamma correction (1.8) (because it is exported to .png)
- image is now in 24 bit RGB (because it is exported to .png)
+ Save as .png by Final Cut Pro.

Just to reiterate, the "3 way color correction filter" is not doing anything to the image, it is just there for forcing high quality rendering for debugging purposes.

Jason Rodriguez
January 25th, 2008, 12:39 PM
I started and stopped with trilinear interpolation, it was too messy. I was actually doing a scattered mesh, i.e. only the colors from the ColorChecker would be in the LUT. And I think I had it quite good, until I needed to do extrapolation and then it became extremely weird.

Eek, yeah, that's gonna be messy . . . first off, you can't really do effective trilinear on a scattered data set; trilinear likes evenly spaced meshes. More effective and accurate would be something like tetrahedral interpolation with only 4 points and a triangulation of the mesh (like Delaunay). Secondly, the placement of the ColorChecker colors in device-independent color spaces like CIELab is not beneficial for scattered data-set interpolation (at least not using literal interpolation, i.e. you need to move to some regression method). What typically happens is that after triangulation you have tetrahedra that do not let you weight the points in the color space appropriately relative to the training point: you can end up with saturated colors that are also interpolating through the mid-tones, so your mid-tone range gets skewed as you move around the convex hull of the training points, which are the colors from the ColorChecker chart.

In an ideal scattered data-set training sample, the colors are spaced so that a nice even lattice/solid can be used to create the interpolated data, but the ColorChecker series typically "clumps" samples around the convex hull of the color space, and 4 points are not enough to weight the interior correctly. So what typically happens is that colors near the exterior of the color space and closely aligned with the ColorChecker points look fine, but as you move into the interior of the solid the points are all skewed incorrectly, as there simply aren't enough samples to create a nice interpolation lattice.

I think a better chart would be the IT-8 or something of that nature that creates a more "even" interpolation lattice with evenly spaced samples throughout the volume of the color-space, not clumps along the convex hull like the ColorChecker series.

Jason Rodriguez
January 25th, 2008, 12:45 PM
Just to reiterate, the "3 way color correction filter" is not doing anything to the image, it is just there for forcing high quality rendering for debugging purposes.

So if I understand correctly that image then only has had gamma correction applied, there has been no saturation added to the image? For instance, you mentioned:

- Add rec709 gamma correction
- Convert to YUV using rec709 YUV conversion

So these two steps are not applying some form of color saturation multiplier on the image (they shouldn't, but was just wondering)?

I'm just wondering if that's the level of saturation you're getting straight from the camera head, or if there is a multiplier somewhere in your color-conversion steps to give the more saturated image I'm seeing as the end product. It sounds from your description like there aren't any saturation stages.

Thanks,

Jason

Take Vos
January 25th, 2008, 12:51 PM
So if I understand correctly that image then only has had gamma correction applied, there has been no saturation added to the image? For instance, you mentioned:

- Add rec709 gamma correction
- Convert to YUV using rec709 YUV conversion

So these two steps are not applying some form of color saturation multiplier on the image (they shouldn't, but was just wondering)?

Indeed, these two steps are NOT adding color saturation.

There has not been any color saturation added anywhere, nor has any white balancing taken place; the colors are still in camera RGB.

The picture was taken in natural light from an overcast sky at noon in the Netherlands.

Jason Rodriguez
January 25th, 2008, 12:58 PM
first-off you can't really do effective trilinear on a scattered data-set . . . trilinear likes evenly spaced meshes.

BTW, I'd just like to clarify the "tone" of that statement . . . it sounds a little harsh and didactic, and I wasn't meaning for it to sound like that . . . I'm sure trilinear can be done (and you obviously said it was working to some extent), but according to a lot of papers and sources I've read, scattered data sets that are not evenly spaced tend to get better results from tetrahedral rather than trilinear interpolation.

Thanks,

Jason

Jason Rodriguez
January 25th, 2008, 01:01 PM
There has not been any color saturation added anywhere, nor has any white balancing taken place; the colors are still in camera RGB.

Wow, that's pretty impressive then . . . it hopefully won't take you too much work to align those results to a proper color space . . . a lot of your color vectors are already in the right spot or very close to it, so you shouldn't need any crazy matrices like those needed to uncouple sensors with a lot of color-channel cross-talk.

Thanks,

Jason

Take Vos
January 25th, 2008, 01:04 PM
Hello Jason,

Well, the color LUTs weren't working yet. I thought I would be able to implement them quite easily, but just when I got to the point where it also should be able to extrapolate, it became really complicated, ugly code.

I thought I would show you the YUV and gamma correction formulas.


/* Per pixel: linear RGB -> Rec.709 gamma -> YPbPr, packed as AYUV floats. */
R = gamma_correction(input_row[(x * 3) + 0]);   /* gamma_correction() is presumably the rec709_gamma() below */
G = gamma_correction(input_row[(x * 3) + 1]);
B = gamma_correction(input_row[(x * 3) + 2]);

rec709_RGB_to_YPbPr(R, G, B, &Y, &Pb, &Pr);

output_row[(x << 2) + 0] = 1.0f;                /* alpha */
output_row[(x << 2) + 1] = (Y * 0.85882352941176465f) + 0.062745098039215685f;   /* Y scaled to video range, 16..235 out of 255 */
output_row[(x << 2) + 2] = (Pb * 0.8784313725490196f) + 0.5f;                    /* Pb/Pr scaled to 224/255, centred on mid grey */
output_row[(x << 2) + 3] = (Pr * 0.8784313725490196f) + 0.5f;


/* Rec.709 transfer function: linear segment below 0.018, power 0.45 above. */
static inline float rec709_gamma(float L)
{
    if (L < 0.018f) {
        return 4.5f * L;
    } else {
        return 1.099f * powf(L, 0.45f) - 0.099f;
    }
}


/* Rec.709 luma coefficients; Pb/Pr are the scaled B-Y and R-Y differences. */
static inline void rec709_RGB_to_YPbPr(float R, float G, float B,
                                       float *Y, float *Pb, float *Pr)
{
    const float Kr = 0.2126f;
    const float Kg = 0.7152f;
    const float Kb = 0.0722f;

    *Y = Kr * R + Kg * G + Kb * B;
    *Pb = (B - *Y) / (2.0f - 2.0f * Kb);
    *Pr = (R - *Y) / (2.0f - 2.0f * Kr);
}

John Papadopoulos
January 25th, 2008, 01:13 PM
So if I understand correctly that image then only has had gamma correction applied, there has been no saturation added to the image? For instance, you mentioned:

- Add rec709 gamma correction
- Convert to YUV using rec709 YUV conversion

So these two steps are not applying some form of color saturation multiplier on the image (they shouldn't, but was just wondering)?

I'm just wondering if that's the level of saturation you're getting straight from the camera head, or if there is a multiplier somewhere in your color-conversion steps to give the more saturated image I'm seeing as the end product. It sounds from your description like there aren't any saturation stages.

Thanks,

Jason


Welcome to natural CCD saturation :) If you're used to CMOS, the difference might be striking.

Take Vos
January 25th, 2008, 01:20 PM
Hi John,

I find it strange that there is a difference between CCD and CMOS with regard to color saturation. If CCD and CMOS pixels are the same size and have the same color filter on them, then the same amount of light will fall into each photon well.

Maybe you can't have the same filters for CCD and CMOS?

Cheers,
Take

John Papadopoulos
January 25th, 2008, 01:36 PM
Hi John,

I find it strange that there is a difference between CCD and CMOS with regard to color saturation. If CCD and CMOS pixels are the same size and have the same color filter on them, then the same amount of light will fall into each photon well.

Maybe you can't have the same filters for CCD and CMOS?

Cheers,
Take

They are not the same size at all! The Kodak pixel is 7.4um. The pixel on the Red and the SI, and on any low frame rate CMOS, is much smaller; the pixel on the SI sensor is 5um or something like that. The Kodak pixel has roughly twice the area. The filters are also different depending on the manufacturer's technology and experience. Kodak came from a huge film colorimetry background and Sony absolutely dominates the CCD market with its CCD technology. CMOS includes more processing on chip and comes with a higher noise floor. Sensor filters play a large part in saturation: the overlap, the relative balance and so on.

I believe the CMOS low saturation is inherent to the technology and the complexity of the sensor pixels. I have seen lots of unprocessed images from CMOS sensors and this appears to be universally true.

Jason Rodriguez
January 25th, 2008, 01:43 PM
Actually, there are a number of CMOS manufacturers who get excellent color from their sensors . . . for instance, Micron can get the same level of color saturation and accuracy from their camera-native RGB image as what I'm seeing from the Kodak CCDs, and so can Canon.

So I don't think it's fair to state that CCD = good color while CMOS = bad color. A lot of it has to do with the manufacturing process, the pigments used, the color-fastness of the pigments (a trade-off of less saturation for more long-term robustness), and the compatibility of the color pigments with the manufacturing process.

Also, the pixel size is 5um on the Altasens in order to get 1920x1080 in a 2/3"-compatible format. And from seeing the work that Micron has done, small pixels (<5um) do not mean poor color saturation out of camera.

Thanks,

Jason

John Papadopoulos
January 25th, 2008, 01:47 PM
Yes, lots of tiny-pixel CCDs still get excellent saturation, tiny-sensor multi-megapixel still cameras and so on.

But any time I tried to get low light saturation in a CMOS I had to process a lot. With a CCD I think I should even reduce saturation in good light; the saturation is usually natural at mid levels.

I do believe a Canon 350D is poor in color performance compared to a D70s. But that's just personal preference.

Jason Rodriguez
January 25th, 2008, 02:02 PM
But any time I tried to get low light saturation in a CMOS I had to process a lot.

Have you tried the Microns? The "low-light" performance of those sensors might not be as excellent as a large-sensor CCD, but the color saturation is very nice.

Another thing to realize is that CCDs are clock-constrained . . . for instance, if you want a single camera that can be as "film-like" as possible across the range of frame rates you need to cover, you can't do that with CCDs at the moment.

Also, CCDs can get very hot compared to a similar CMOS, and the hotter they get, the noisier. They also use a lot of power, which gets dissipated at some point along the line as heat. Various off-chip generated bias voltages, etc. can also cause issues, especially as the sensor head gets hotter and more current must be drawn.

So CCDs have their shortcomings as well.

John Papadopoulos
January 25th, 2008, 02:04 PM
Part of a Micron 2048xsomething CMOS frame I found on the web:

http://img107.imageshack.us/img107/7257/79093717ls7.jpg

It's just not realistic. This could be a bright red car and we will never find out.

On the Silicon Imaging site there is a page with LUT tables. There is a "no look" file and a sample:

http://www.siliconimaging.com/DigitalCinema/downloads/no_look.zip

I don't believe the camera is desaturating on purpose, so this must be the out of camera saturation. What kind of processing is applied with a look file? Is there saturation processing?

John Papadopoulos
January 25th, 2008, 02:11 PM
Yes, CCD is harder to design, more expensive and problematic to get right with all the extra components, and it costs a lot more in development and materials. This is also true for most Italian supercars; still, many people will prefer one of those over a BMW with an equivalent engine :)

EDIT: I think the car analogy suits the situation. We all know that a top of the line BMW might be a better tool for most transport applications compared to something with Italian engineering. But the Italian car still has its market because many people like the sound of the engine, the engineering mentality, the way these things work and look. And even if the specifications might be similar, the Italian car can certainly be a lot more enjoyable and handle better in extreme scenarios, even though the engineering is much simpler, the technology is not as advanced and it doesn't come with 20 three-letter acronyms for its various systems/technologies. This type of car is a financial nightmare for any automotive company, but engineers and management know there are reasons to maintain the production.

Jason Rodriguez
January 25th, 2008, 02:46 PM
I don't believe the camera is desaturating on purpose, so this must be the out of camera saturation. What kind of processing is applied with a look file? Is there saturation processing?

Definitely . . . if you download the XML, there is a saturation matrix in there, and you can see all the settings that are being applied to the camera image.

In the end I feel that both technologies have their place, with advantages and disadvantages on either side . . . it's not just "marketing" falsehoods that have created the popularity around CMOS, as you have described in your other posts. There are advantages, and ways to mitigate the disadvantages.

Choice is a good thing.

John Papadopoulos
January 25th, 2008, 02:51 PM
Increasing saturation with post look files does come at a cost though. It's better to get more from the camera directly so you avoid boosting noise etc. I made a comparison of the out of camera, neutral (looks undersaturated to me) and film look.

http://img178.imageshack.us/img178/1217/lookcompdz4.jpg

Jason Rodriguez
January 25th, 2008, 03:09 PM
Yes, it does, but as noted, it's a "mitigated" loss, meaning that for a little more noise you get the "good" saturation we've been talking about, along with the benefits of flexible frame rates, low power, high temperature tolerance, an all-digital data pipeline (on-board A/D converters), optical format compatibility with 2/3" and S16mm, up to 2K resolution, etc., etc.

Technology is always moving, and tomorrow's CMOS will make today's CCDs look bad and vice versa . . . both technologies will have their respective places for some time as far as I can see.

There is one thing though that I am seeing, and that is a lot more R&D and intellectual property being applied toward improved CMOS designs than what I'm seeing with CCD . . . I think a lot of this has to do with the ability of "fabless" firms to design CMOS sensors, compared to the difficulties involved in creating CCDs. As such, I think we will probably see CMOS out-pacing CCD design in the long run, with the end result being a bit of a "pseudo-CMOS/CCD sensor", that is, CMOS designs created on the very high-end mixed-signal processes that are typical of CCD designs. At that point you'll get the advantages of both, with less of the disadvantages of either.

Take Vos
January 25th, 2008, 03:35 PM
As you may have gathered, I am not trying to give my camera a certain look; I am taking a more scientific viewpoint. This is why I have taken so long to get the camera output perfectly linear (within 6%) and to get the colors as exact as possible.

This would allow the most consistent image in post and give you the most control over the colors.

I think one of the reasons the colors are pretty good already is linearity and getting the black level correct. I've seen the same thing when calibrating a CRT projector and doing the greyscale tracking using a photosensor and a voltage meter instead of trying to do the same thing by eye.

Jason Rodriguez
January 25th, 2008, 03:43 PM
Nope, you're right Take, and I'm sorry for hijacking your thread . . . I didn't want to get into a CCD vs. CMOS discussion, but just wanted to point out you've definitely done some very fine work here, and the images from your software look really nice.

Take Vos
January 25th, 2008, 03:45 PM
I don't mind the thread hijacking.

John Papadopoulos
January 25th, 2008, 03:49 PM
Black level and contrast will boost the sense of saturation, just like with analog imaging systems. That's why this looks more saturated:

http://img265.imageshack.us/img265/9996/frcalibrated2pushed1coplo0.jpg

I would set the CRT black by eye. The proper setting depends on room lighting, reflectance of surface and many other parameters. I wouldn't trust anything except the eye for this.

Take Vos
January 25th, 2008, 03:59 PM
Black level, yes, you do that by eye; greyscale tracking should only be done using instruments.

Take Vos
January 26th, 2008, 09:08 AM
Hi, so I've been working on calculating the color matrix from a picture of a ColorChecker.

Here is the result:
http://www.vosgames.nl/images/MirageRecorder/fr_cal_bp_col.png

Take Vos
January 26th, 2008, 09:10 AM
I actually thought when I first saw the picture that it was too much, but then I actually looked at the ColorChecker under light and compared it to the screen and it is really close.

I guess you get used to the desaturated look after working on something a long while.

Now of course I also need to look at some real life pictures before I can truly say the picture is correct.

Cheers,
Take

Take Vos
January 26th, 2008, 09:53 AM
So, I have shot some footage of the park from my window, to see how everything will hold up in real life.

I dropped it into Final Cut Pro and started editing it. I'm happy to say that FCP will run the timeline (edited footage) in real time when the image is zoomed to 50% or smaller; at 100% it will skip frames but still works pretty well. The footage is on the hard disk that I recorded to, a single SATA Western Digital SA16 250 GB disk.

Because this is a third party codec it does not do real time rendering of things like transitions, but scrubbing through a transition works pretty smoothly.

In any case I am currently exporting the edit to my intermediate codec (so it renders in high quality). It says it needs two hours to render 1 minute and 35 seconds (I will need to solve some performance issues :-). I actually thought that FCP would render on both cores of my computer, but only 50% of the CPU time is used.

In any case I will try to put the footage online for all to see. Does anyone have a preference for which codec I should use to export it? I thought I would use the codec Apple also uses to encode their movie trailers.

John Papadopoulos
January 26th, 2008, 11:44 AM
So it's 75:1 real time. What CPU is in your laptop? Any multithreading?

The noise looks very nice on the last sample.

I think the look is more important than precision. We know it's uncompressed video. WMV or MOV is ok.

Don't get the desaturated look syndrome:)

Take Vos
January 26th, 2008, 05:55 PM
Hi there, it took a little longer because there was a bug in my codec stopping the last conversion. I didn't know about a certain data transfer method.

First I would like to apologise for the shaky image; I have no tripod mount made for my camera and was a bit excited, it being my first real footage.

Anyhow, I finally made a conversion to H.264, which is the codec that Apple really likes. I did notice that when the image is particularly smooth and out of focus it tends to color band quite a bit, therefore I also added some still images to show the difference.

This was the workflow:
- Create calibration data for camera using DNFCalibrator (DNF stands for Digital Negative Format)
- Capture footage with Boom Recorder
- Drop footage on the Final Cut Pro timeline and edit (the timeline automatically reconfigures to the DNF Intermediate codec)
- Export footage as QuickTime with the DNF Intermediate codec
- Using Compressor.app (Apple), convert from the DNF codec to the H.264 codec

You may want to right click and "save as" for the movie file, otherwise it will show inside the browser.

http://www.xs4all.nl/~takev/ThePark-H.264.mov
http://www.xs4all.nl/~takev/TheParkImage1.png
http://www.xs4all.nl/~takev/TheParkImage2.png
http://www.xs4all.nl/~takev/TheParkImage3.png

Take Vos
January 27th, 2008, 08:55 AM
So, this footage was made with a CF16HA1 (16mm), which has a 43 degree horizontal viewing angle. I think that is a normal lens for this camera.
If I were to add two other lenses, which should I order?

These lenses are available, third number is the 35 mm equivalent:
12.5 mm, 54 degrees, 35 mm
16 mm, 43 degrees, 45 mm
25 mm, 28 degrees, 70 mm
35 mm, 20 degrees, 100 mm
50 mm, 14 degrees, 140 mm
75 mm, 9 degrees, 200 mm

Paul Curtis
January 27th, 2008, 10:31 AM
Take, congratulations, you're doing an amazing job! It's great to watch your progress.

The PNGs show a lot of pixelation, is this a product of the debayering? There's some FPN too, is that right? If you look at the tree going across the yellow advertising there's a white outline in the blue channel; I'm trying to work out whether this is a debayering artifact or whether it's the lens. (You can see chromatic aberrations quite clearly, which have to be the lens - check the roof of the Mini.)

I would love to see some bright sunny sky and darker areas together sometime.

So the lens isn't so hot; I wonder what other choices there are for this sensor size? This is the Fujinon, right? This has been a concern for me with this particular sensor - getting a good lens on there. (Unless of course the aberrations are correctable?)

It also looks like you're shooting through a window as well? That can't be helping at all!

Does the camera have an OLPF? I'm not sure it does; do you think this will become a problem when the camera is moving a lot?

The colours look very good and the range seems pretty good too. Very promising!

I'd like to see a wide lens, the 12.5, because wide is more difficult than zoom. On 16mm a 40mm is usually the 'normal' lens, so you're pretty much there already.

So when does the Windows version come out ;)

cheers
paul

Take Vos
January 27th, 2008, 10:48 AM
Thanks paul,

The pixelation is, I think, indeed from the debayer; maybe I should teach it how to do diagonal interpolation. Right now it seems to think, "this is neither horizontal nor vertical, let's use the box interpolator".

I am thinking of a new design for a debayer, but it probably won't work. I would use a median filter because, as they say, "median filters are edge preserving"; I just have to see how true that is for demosaicing a Bayer pattern.

There is quite a lot of color aberration, and it seems to be caused by the lens. I first thought it was my debayer algorithm, but the color shift was extending beyond the edges. Also, in the video you can see the color aberration on a tree: the color shift swaps around when the tree is panned from the left to the right side of the sensor/lens.

Yes, I was shooting through the window, it is cold outside.

There is no OLPF, and as you see I am moving the camera a lot. I think the motion blur takes care of the aliasing. I am not sure what happens when I reduce the shutter time, I guess I will have to try.

There won't be a Windows version until Microsoft decides to implement a lot of APIs from Apple. It may be possible that at some point the QuickTime component will work on Windows, so that at least it becomes possible to edit on Windows.

Take Vos
January 27th, 2008, 10:51 AM
Can you fix chromatic aberrations by just looking at the distance from the center of the screen and shifting red and blue back?

Paul Curtis
January 27th, 2008, 12:35 PM
Can you fix chromatic aberrations by just looking at the distance from the center of the screen and shifting red and blue back?

Sometimes; it depends on the lens and the type of aberration. The magenta/green type you're seeing changes depending on the contrast, so the top of the Mini shows more. You can also see on the tree trunk that where the lighter path behind crosses, it is more obvious. So it couldn't be a global shift. Point it through a tree at the sky, that'll show it up the worst.

Is that Fujinon designed for a single sensor or for a 3 CCD block? Do you have anything else to compare with?

http://www.vanwalree.com/optics.html is a nice reference.

A camera like the EX has CA correction built in (so it must be algorithmically possible for a given lens). You can see when zooming the lens that the aberrations appear and then disappear when you stop.

Shooting through the window really isn't helping your contrast in the scene.

The effect of having no OLPF will most likely show more on textured materials when you're moving slowly with a higher shutter speed. I suspect it might end up being an issue. Just slowly move the camera pointing at some patterned material perhaps, and create a small region-of-interest uncompressed movie from it.

Have you seen any heat or thermal issues with the Pike?

Why did you choose the Pike in the end, did you look at some others first?

cheers
paul

Take Vos
January 27th, 2008, 12:55 PM
Hello Paul,

I saw an article about fixing chromatic aberrations. It seems it basically scales the color channels. You have to specify the amount on a per lens basis, so it will probably be a Final Cut Pro filter.
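
Something like this is what I have in mind (just a sketch of the channel-scaling idea, with nearest-neighbour resampling to keep it short; the scale factor is only an example, not a measured value):

/* Resample one colour plane with a small magnification around the image
 * centre, so its edges line up with the green plane again.                */
static void correct_ca_channel(const float *src, float *dst,
                               int w, int h, float scale)
{
    float cx = 0.5f * (w - 1), cy = 0.5f * (h - 1);
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            /* pull the sample from a slightly magnified or shrunk position */
            int sx = (int)(cx + (x - cx) / scale + 0.5f);
            int sy = (int)(cy + (y - cy) / scale + 0.5f);
            if (sx < 0) sx = 0; if (sx > w - 1) sx = w - 1;
            if (sy < 0) sy = 0; if (sy > h - 1) sy = h - 1;
            dst[y * w + x] = src[sy * w + sx];
        }
    }
}

/* e.g. correct_ca_channel(red, red_out, w, h, 1.0015f); the scale would
 * come from a per lens measurement, bilinear sampling would look better.  */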

From what I understand the Fujinon is a single-CCD lens. I have an old Minolta 35mm reflex camera lens.

The Pike can become quite hot, I am planning to make a cooling system.

I have chosen the Pike because it was pretty much the only camera that handles 1920x800 at high bit depth at 24 fps over IIDC, could be connected to an Apple, and had drivers for it.

Paul Curtis
January 27th, 2008, 03:36 PM
Hello Paul,
From what I understand the Fujinon is a single-CCD lens. I have an old Minolta 35mm reflex camera lens.

I don't know ultimately what you intend to do with your system, but if you're hoping to use it out there in the field then the lenses are going to be key.

I've been looking for suitable lens options for this sensor. It's between 2/3" and 35mm and there's quite a gap here. Some S16 lenses *may* be big enough, but for the most part the image circle will be too small, as this sensor is bigger than S16. (It's also difficult to get technical details on a lot of older lenses.)

You can stick 35mm glass on with a crop factor of around 2.5, I think. So a good 14mm lens works out to around a 35mm FOV, but the fastest I've found is f2.8, which would be like having the DOF of f7 on 35mm. I think the better 35mm lenses would be fine resolution-wise even though you're only using the centre.

There are a few lenses specifically for APS-C sized sensors; I have a Canon 10-22 which is actually a pretty nice lens. But it's not manual, so unless you have a Birger mount it's not going to work. I don't know if there are any manual lenses for APS-C, but they would be an interesting option because they're designed for the smaller sensor size and they'd be plenty big enough for this sensor. 10mm would be a nice wide FOV on it. Again, these tend not to be very fast.

So machine vision lenses are the most obvious choice. The Fujinons look nice on paper but so far aren't performing too well in the real world. Do you know what aperture you had that at - probably smaller than f1.4 I'd guess? The Pentax are probably at the same level, and the Schneider Kreuznach are a lot more expensive; perhaps they'll perform better?

A lot of SLR lenses don't have much travel on the barrels for focus; I'm not sure about all the machine vision lenses.

So unless I'm missing something blindingly obvious (not the first time!), I think it might be a struggle to get some quality glass in front of the sensor that operates at a reasonable FOV with a flexible range of DOF for narrative purposes.

Conversely, I don't think there's a suitable 1920x1080 CCD sensor in 2/3" format -- which would open up the choices enormously.

cheers
paul

Take Vos
January 27th, 2008, 04:11 PM
Hello Paul,

All the 2/3" lenses I've seen are designed for 3CCD, not for a single sensor. 35mm SLR is interesting, especially if you need a long lens, but it is almost impossible to get really short 35mm SLR lenses.
S16 lenses would also be interesting if we can find a couple that produce a 1" circle (although at a 2.40:1 ratio the circle may be slightly smaller).

Machine vision lenses remain the only solution if you want to buy new. The Fujinons are not that expensive at 250 USD per lens, the number of blades in the aperture is pretty good and the build seems very solid. The focus travel is pretty short but smooth; the aperture setting is also smooth, with a light click at each stop. I think you can find a follow focus that is geared for these short-travel lenses.

I switched aperture quite a lot in the movie; it was quite dark outside, so I had it open pretty wide. I also wasn't that careful with focus.

Take Vos
January 28th, 2008, 02:02 PM
When a dark image is shown, my bad pixel detector basically disables all pixels and thus each pixel will be interpolated from its neighbouring pixels, basically dropping the resolution.

This is not really a problem, as it removes a lot of fixed pattern noise and smooths out the image. But it will also show blobs of noiseless dark patches which are a bit distracting compared to the rest of the image.

Luckily I have already measured the amount of white noise during calibration, so the only thing to do was to add a random amount of that white noise back onto each interpolated pixel. Now the patches are gone and it looks a bit more uniform.
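
The noise injection itself is roughly something like this (a quick sketch rather than the exact code; "sigma" here stands for the per-pixel noise amplitude from calibration):

#include <stdlib.h>

/* Put a random amount of the calibrated white noise back onto an
 * interpolated pixel so repaired patches match the grain around them.     */
static float add_calibrated_noise(float value, float sigma)
{
    float u = (float)rand() / (float)RAND_MAX;      /* uniform 0..1        */
    return value + sigma * (2.0f * u - 1.0f);       /* spread to +/- sigma */
}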

Cheers,
Take

Paul Curtis
January 29th, 2008, 03:32 AM
Take,

I hope I'm not polluting the thread (well, I am), but this lens looks like it might be a good one; the charts look pretty good and on a 1" it should be better.

http://stilar.de/hp32638/Stilar_-2_8_8.htm?ITServ=C818e7fbX117bd6e8392XY25be

No idea how much though...

cheers
paul

Take Vos
January 29th, 2008, 08:47 AM
Hi Paul,

This is not really pollution. I like to have a lively discussion on my thread.
Anyway, interesting lens; too bad there is only a single lens and not a whole series available, and it is also very wide.

I think I've seen 1.2" lenses before, which could be interesting if I ever go to the 2048-pixel-wide sensor. But of course the alias filter for a 1.2" lens is probably wrong.

Cheers,
Take

Paul Curtis
January 29th, 2008, 09:30 AM
Take

http://docter-optics.de/hp834/TEVIDON_-CCD-lenses-.htm

From the same people, so there are some others (of which only a couple apply to a 1").

When you say alias filter do you mean an OLPF? Does the Pike have something like that already? Does it have an IR filter? (You can get filters that do both.)

cheers
paul

Take Vos
January 30th, 2008, 08:29 AM
Paul,

Yes, I do mean OLPF when I say alias filter.

The IR filter for the Pike is glued (or something) to the C-mount. The C-mount itself is mounted into a wider screw mount, with very narrow threads for back focus.

I do not see a way to add to or change the built-in IR filter, but I may not have looked at it closely enough.