View Full Version : Mirage Recorder
John Papadopoulos January 14th, 2008, 06:15 AM What gain is this frame?
I think the uncalibrated one looks more natural. The noise looks more random. I assume both noise types are static, not changing from frame to frame. The bayer artifact should fluctuate more than the sensor artifact because it depends on the frame content. I remember an old comparison using eAHD (I think) that had a much better signal-to-noise ratio. It didn't have the maze artifact.
Have you kept the bayer frames?
Take Vos January 14th, 2008, 07:30 AM 0 gain, I do not allow the user to change the gain.
There is no more static noise in the calibrated frame; that is the whole purpose of the calibration.
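In essence this is the classic two-point dark-frame / flat-field correction; a minimal numpy sketch of the idea (pre-averaged dark and flat frames assumed, the real calibration application may well differ):

import numpy as np

def calibrate(raw, dark, flat):
    # raw:  frame to correct (float array of linear sensor values)
    # dark: average of many lens-capped frames  -> per-pixel offset FPN
    # flat: average of many evenly lit frames   -> per-pixel gain FPN
    signal = np.maximum(flat - dark, 1e-6)   # guard against dead pixels
    gain = signal.mean() / signal            # per-pixel gain map
    return (raw - dark) * gain               # offsets removed, gains equalised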
I actually want to keep all the noise that was originally on the sensor; noise reduction normally means that the image becomes less sharp. My algorithm actually uses high-frequency transplantation (a rough sketch follows the list):
- I first find all the high-frequency noise/texture.
- Then I interpolate a low-frequency green.
- Then I interpolate red and blue using the full green image.
- Then I add the high-frequency noise/texture to all the color components.
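A rough numpy sketch of the transplantation idea, assuming an RGGB mosaic and a plain normalized-convolution plane fill in place of the directional, green-first interpolation described above (so a simplification, not the actual algorithm):

import numpy as np
from scipy.ndimage import convolve

def bayer_masks(shape):
    # boolean masks for an assumed RGGB layout
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    r = (yy % 2 == 0) & (xx % 2 == 0)
    g = (yy + xx) % 2 == 1
    b = (yy % 2 == 1) & (xx % 2 == 1)
    return r, g, b

def demosaic_hf_transplant(raw):
    raw = raw.astype(np.float32)
    r, g, b = bayer_masks(raw.shape)
    k = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], np.float32)
    planes = []
    for m in (r, g, b):
        # fill the holes in each colour plane -> low-frequency estimate
        planes.append(convolve(raw * m, k) / convolve(m.astype(np.float32), k))
    lf = np.stack(planes, axis=-1)
    # high-frequency noise/texture: the raw sample minus the smooth
    # estimate of its own colour, defined at every pixel
    own = planes[0] * r + planes[1] * g + planes[2] * b
    hf = raw - own
    # transplant the same detail into all three colour components
    return lf + hf[:, :, None]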
I have an idea of using square interpolation when there is no edge to be seen; this would smooth out the greek-restaurant pattern. I am not sure how to figure out which of the orientations to use, unless I do a homogeneity comparison as in AHD.
I also have an idea for reducing the color aliasing even further, by using a weighted average, so the colors stay on the correct side of an edge.
Take Vos January 14th, 2008, 07:33 AM I still have the bayer frames, but they are quite large and in my own special movie format.
John Papadopoulos January 14th, 2008, 11:53 AM Greek restaurant pattern:P
http://www.mi.sanu.ac.yu/vismath/morrison/
It's sometimes called the greek key, but it was quite popular throughout the world. Someone found an ancient demosaicing description and thought it looked nice:)
Take Vos January 15th, 2008, 04:10 AM Hi,
I've been experimenting with a new algorithm for color interpolation to eliminate color aliasing. It worked, sort of: it removes a whole pixel of color aliasing in exchange for a zipper effect.
But the zipper effect was actually caused by the fact that the weird color changes were not color aliasing but chromatic aberration from the lens. This made my algorithm behave strangely.
Cheers,
Take
Djee Smit January 15th, 2008, 04:52 AM Hey Take, it's looking more promising every day. Maybe a bit too early to ask, but do you have any idea of what the camera as a total package is going to look like? Not in an aesthetic way, more in a practical sense. Something like the SI-2K (mini)?
Take Vos January 15th, 2008, 05:25 AM Hello Djee,
Yes, I am pretty sure how it will look.
The camera head will be:
- The Pike 210C fiber
- A gate in front of the sensor for 2.40:1 filming
- An aluminum block that will be the:
tripod mount
lens rods mount
the trigger and temperature controller casing
a mount for the exposure rotary switch and mode switch.
a mount for the screen
a mount for the battery
and will also function as a cooling block.
- Temperature Controller
- Peltier element
- Battery for 4-8 hours running time
- Trigger controller for a stable 24fps
The camera head will only be connected to the computer by a fiber cable, which can be rather long.
The computer:
- MacBook Pro
- Firewire 800 fiber hub
- eSATA controller in the PCI Express Card slot
- single SATA disk to record on
- VGA->composite converter
- video transmitter
- A sort of docking station to put all the extra equipment in.
- A high-quality USB audio interface.
The edit computer could attach to the recording computer over Ethernet and use its own SATA disk, so it could make backups of the data and start editing on-set.
The monitor:
- LCD monitor
- Battery mount
- Battery for 4-8 hours of operation
- video receiver
Of course things can change, I am not sure yet about the monitor solution.
Djee Smit January 15th, 2008, 06:25 AM Sounds good, good luck with it. When do you think it'll be ready for use?
Take Vos January 15th, 2008, 06:38 AM Djee,
I will probably start using it with my friends in a month or two. In principle the recording application, calibration application and QuickTime component all seem to be functioning pretty well.
So I could already start filming and improve the quality of the calibration application and QuickTime component as I gain experience with them.
But I would love to electrically separate the camera from the computer before I start filming. And a tripod mount and screen would also be nice.
Creating a package that someone will be able to buy is a whole other can o' worms. I would love to assemble a package which includes everything (excluding the computer), even with a nice carrying case.
Take Vos January 20th, 2008, 11:22 AM So, I was quite annoyed by the maze pattern and built a new debayer algorithm that interpolates horizontally, vertically and crosswise.
The crosswise interpolation is used when there is no edge in the neighbourhood. This means that noise now shows as noise, not as a decoration of a greek restaurant.
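In spirit, the direction choice looks something like this (a toy per-pixel sketch with a made-up threshold, not the real implementation):

def green_at(raw, y, x, edge_thresh=8.0):
    # missing green at a red/blue site: its four direct neighbours are green
    dh = abs(float(raw[y, x - 1]) - float(raw[y, x + 1]))   # horizontal gradient
    dv = abs(float(raw[y - 1, x]) - float(raw[y + 1, x]))   # vertical gradient
    if dh < edge_thresh and dv < edge_thresh:
        # no edge nearby: crosswise average, so noise stays noise-shaped
        return (raw[y, x - 1] + raw[y, x + 1] + raw[y - 1, x] + raw[y + 1, x]) / 4.0
    if dh <= dv:
        return (raw[y, x - 1] + raw[y, x + 1]) / 2.0        # interpolate along the edge
    return (raw[y - 1, x] + raw[y + 1, x]) / 2.0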
Of course, after this I found that the noise was actually fixed pattern noise caused by my own fixed-pattern-elimination algorithm. Ironic, I guess.
Somehow the green pixels on the red lines were brighter than the green pixels on the blue lines. I seem to have fixed this bug, somehow.
The system now works in 12-bit linear, instead of 12-bit non-linear. Although the A/D converter is 14-bit, the sensor is only 12-bit. Linear makes the processing much easier, though.
Cheers,
Take
Take Vos January 20th, 2008, 12:05 PM Hi,
Here is a new image. I am not entirely happy with the calibration in the near-blacks, like on the black/white pillows on the right.
Also, it seems my debayer algorithm can't handle blacker-than-black (negative) values, so I will need to find out how to solve that.
http://www.vosgames.nl/images/MirageRecorder/fr_calibrated2.png
Cheers,
Take
John Papadopoulos January 20th, 2008, 01:53 PM I think you should leave resolution aside and compare with a commercial CCD camcorder. Try reducing the resolution and see if the image has any punch or realism in comparison. The output reminds me of a CMOS sensor. The lighting looks normal, but there is a lack of saturation and the blacks have a very unnatural character. If you try to push the shadows you will not get the natural grain-like pattern of a CCD outputting uncompressed video. If you add saturation in post you will add more problems. If this happens at 0dB of gain, how will a pushed 12dB image handle the processing? The loss of power in the blue and green spectrum of an incandescent shot will quickly make it look very noisy. Generally, it is a good idea to check all algorithms at high gain with some CCD noise present; it's easier to fix the problems there.
Take Vos January 20th, 2008, 02:58 PM Hello John,
I cannot test my algorithms at high gain; I would need to do all the calibrations again.
The lack of saturation is normal: no color conversion to Rec. 709 space has been done yet, so the RGB is still in camera color space.
As I explained, the fixed pattern noise in the dark areas is still apparent; with some tweaking I hope it will be gone. That is what you mean by unnatural: it is the fixed pattern noise of the sensor.
It also seems that, because the fixed pattern noise here has actual zero values, it doesn't look good when the contrast in the dark areas is increased.
If you look at the black patch of the ColorChecker, it does hold up well after pushing the contrast.
Below I include the same image pushed a bit in Final Cut Pro. The original bayer image was first rendered by Final Cut Pro into its 16-bit float intermediate format. Then the 3-Way Color Corrector filter was used on this intermediate: some saturation was added, the mids were pushed (increasing contrast in the blacks) and the image was white balanced.
http://www.vosgames.nl/images/MirageRecorder/fr_calibrated2_pushed.png
Cheers,
Take
John Papadopoulos January 20th, 2008, 04:05 PM You have a bug which creates a positive offset in the Red channel.
Take Vos January 20th, 2008, 04:11 PM Hi John,
How did you notice that? I did notice there is some mathematical/physical seepage of red into the green channels on the red/green lines.
I think I found more proof of the bug: my calibration program should operate almost as well with or without a black field, yet when I don't add a black field it goes completely bonkers. I hate it; I've been looking at this bug for a couple of days now.
Cheers,
Take
John Papadopoulos January 20th, 2008, 05:41 PM I just looked at the frame! There is a red cast all over the frame.
If you correct it, dip the noise to black and change gamma to something approaching video gamma, it will look like this:
http://img265.imageshack.us/img265/9996/frcalibrated2pushed1coplo0.jpg
John Papadopoulos January 20th, 2008, 05:49 PM Btw, did you get the color correction coefficients from the manufacturer preset or calculate them yourself?
Take Vos January 21st, 2008, 01:34 AM Hello John,
I do have the color conversion matrix from the manufacturer, but I would rather calculate it myself. First, though, I need my other algorithms to function correctly.
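Fitting such a matrix from a ColorChecker shot is a small least-squares problem; a sketch with stand-in data (patch extraction and linearization assumed done):

import numpy as np

# stand-ins for real measurements: mean linear camera RGB of the 24
# ColorChecker patches, and the published linear reference values
camera_rgb = np.random.rand(24, 3)
target_rgb = np.random.rand(24, 3)

# solve camera_rgb @ M ~ target_rgb for the 3x3 conversion matrix M
M, _, _, _ = np.linalg.lstsq(camera_rgb, target_rgb, rcond=None)

def apply_matrix(img, M):
    # img: linear H x W x 3 image in camera colour space
    h, w, _ = img.shape
    return (img.reshape(-1, 3) @ M).reshape(h, w, 3)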
The red cast is strange, but I've got all sorts of strange things happening now; it almost seems like something is overwriting the calibration values. The red cast could be caused by wrong calibration values as well.
Cheers,
Take
Paul Curtis January 21st, 2008, 04:52 AM Take,
Great work. I assume all the later images are via the Pike 210? This uses the KAI-2093 CCD sensor at 1920x1080?
Are you able to get the full raw bayer data from this over FireWire (at 30/32 fps)? I didn't think there would be enough bandwidth over FireWire for this (or is this the reason for 2.40?).
Do you have any problems getting lenses to cover the sensor (14.2mm x 7.9mm)? The image circle needed for that is bigger than 16mm and a little bigger than S16. Most C-mount and older-style cine lenses would vignette on that size.
cheers
paul
Take Vos January 21st, 2008, 04:58 AM Hello Paul,
Bandwidth is the reason for the 2.40 ratio, but I actually transfer at a 2.00 ratio to get some black bars. It is FireWire 800 at 12 bits, and my frame rate is only 24 fps. Also, I get bayer data; debayering is done in the QuickTime codec.
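The back-of-envelope numbers, using nominal rates only (isochronous overhead and pixel packing ignored):

width, height = 1920, 960       # 2.00:1 transfer window
bits, fps = 12, 24
rate = width * height * bits * fps / 1e6
print(rate)                      # ~531 Mbit/s; a full 1080 lines would be
                                 # ~597 Mbit/s, against FireWire 800's
                                 # nominal 786 Mbit/s signalling rate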
I have a lens designed for 1" sensors, and it doesn't show vignetting.
All my images are from the Pike 210, although earlier images were screenshots, which had lower resolution. The newer images are exported from Final Cut Pro.
Paul Curtis January 21st, 2008, 05:42 AM Take,
Thanks. Have you considered one of the many GigE versions out there? What was the reason for going with the Pike vs the other camera heads? Or, in fact, why not the Pike with the 2/3" Sony ICX285 sensor (which claims better range but lower resolution)?
It seems that you've had quite a few calibration problems; is this down to the Pike itself, or all the work involved in getting your recorder to work? I've seen images from the Pike via StreamPix/CineForm RAW that *seem* to look pretty good.
That particular sensor is slightly wider than the usual 1", hence the question. Which lens have you been using?
many thanks
paul
Take Vos January 21st, 2008, 05:55 AM Paul,
Yes, I first considered GigE, but I could not find the specifications of that protocol, which is why I now use an IIDC camera: IIDC has an open protocol and drivers for OS X.
I like the HD resolution, and at some point I would like to get the 2K sensor, which is why I use the 1" version.
I am not sure why I have these non-uniformity problems; maybe it is a bad sensor, or it is caused by the micro lenses (which according to the literature cause all sorts of problems). The non-uniformity only shows in dark scenes with gamma correction, and the manufacturer says that I should use gamma correction and that this is normal operation.
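The gamma connection is easy to see numerically: two dark pixels that are nearly identical in linear light end up clearly different after encoding (toy values):

a, b = 0.001, 0.002             # neighbouring linear values, 0.1% of full scale apart
g = 1 / 2.2                     # generic display gamma, for illustration
print(a ** g, b ** g)           # ~0.043 vs ~0.059: an invisible linear
                                # difference becomes visible striping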
At one time I was able to fix all the non-uniformity problems, but I am redesigning some things to work better; it seems my new algorithm has a bug somewhere, which is why I have so many problems now.
I use the Fujinon CF16HA-1:
http://www.fujinon.com/Security/ProductCategory.aspx?cat=47
Paul Curtis January 21st, 2008, 06:32 AM There's an open standard, GigE Vision; I wonder if it's worth your while taking a look at that, because FireWire, I would think, is going to seriously constrict your options. Most of the cameras I've looked at also have their own drivers and software, to a greater or lesser extent. I have to say, though, that I've not tried developing with any of them yet, so most of what I'm writing is guesswork based on specs and any information I can gather. Please take it with a nice pinch of salt, as the real world is often quite different!
http://www.prosilica.com/gigecameras/index.html
That's one of the many manufacturers that use the KAI-2093 (others include Basler and JAI, for example).
At the moment I've not found any CCD-based sensor at 2K with decent frame rates; quite slow readout (compared with CMOS) is one of the limitations of CCD.
But then, I thought CCDs were much better with regard to FPN? I understand that because each cell on CMOS is addressable individually, it's much more likely to show variations in amplification. Hence the question about the problems you've been experiencing.
I thought that Cesar Rubio on his site had just plugged a Pike into StreamPix and CineForm and output some pretty decent-looking images.
cheers!
Take Vos January 21st, 2008, 06:40 AM Hello Paul,
All the drivers for these cameras are non-OS X.
And the "open standard" is nowhere available for download.
I may switch to GigE at some point, but for now I have IIDC working.
I am afraid I may need to design my own camera at some point.
That is also why I think my particular camera is just bad, because Cesar gets nice pictures out of them. But then, his nice pictures are not very black; I also get nice pictures outdoors without any processing.
Cheers,
Take
John Papadopoulos January 21st, 2008, 08:12 AM The quality of cameras will vary within a series, even for serial numbers that change only a little, so it's always a serious bet that can cost 5k or 9k euro for the large sensors. You cannot trust the manufacturers to replace the cameras you don't like. In a production environment, you need to test a number of cameras in a scientific way, one by one, and only use the ones that cover your image quality requirements. Digital cinema is an application that is well above anything in machine vision in terms of required quality. You need to be sure the sensor will perform because it will possibly be used in natural light and with lots of gain and post processed in extreme ways.
You can't expect a produced image out of a camera. It takes work.
Rubio's samples use quite a lot of commercial software (a recording app and CineForm), cannot maintain precise frame rates, and have audio sync problems. The lenses are very soft, which hides the image quality problems of the simple debayer, but you can still see there is lots of chroma aliasing if you zoom. Generally, the package is expensive for an unfinished solution: there is no user interface, no focus aids, no real usable control. If you do not want a solution that uses the camcorder form factor, Take's solution will be cheaper and better than anything that can be put together from off-the-shelf software components, because Take actually writes software:)
It took Rubio quite a while to realise that the ISO speeds of the bus (ISO400, ISO800, etc., the isochronous transfer rates) have nothing to do with sensitivity, even though the frame rate halved when the ISO speed was changed.
John Papadopoulos January 21st, 2008, 08:32 AM A JAI Kodak HD GigE is about 9,000 euro with tax and shipping, btw, and it's practically a good small webcam that outputs uncompressed. There are many technicalities in designing a camera and lots of software and hardware engineering issues to solve: thousands of man-hours in user interface design, testing, processing algorithms, troubleshooting, etc. It's not as simple as buying a head and using a computer. It would cost as much as a complete, properly engineered solution, and it would still be completely useless in a video production situation due to user interface, image quality, and basic implementation problems such as frame rate and audio sync.
Paul Curtis January 21st, 2008, 08:40 AM John,
While I agree about the artifacts in most examples, the Elmo Raw 12 looked much nicer. I wonder if StreamPix had fixed some aspects of the CineForm integration by that stage. Also, the DivX of him with his kids is nice in terms of frame-to-frame consistency.
The examples, lighting and environments are not ideal for testing, though! I'm not sure what lens he was using; some photos show an f/1.2 50mm Nikon, which would have an angle of view similar to 125mm (I think!) on this sensor, and some of these examples look wider than that.
Some of the companies I've spoken to about various cameras imply that some machine vision applications are beyond digital cinematography; it depends on the camera and the supporting hardware behind the sensor. Can't beat creating your own, though (which you're doing).
I've found CineForm to work very well for us (Prospect, though), but I have no experience of the RAW version. SI footage looks very good, though.
Take, I understand the Mac OS X issue now; I hadn't taken that into account. It's been wonderfully enlightening reading your reports.
I have no problem with development, but I'll always try to avoid reinventing the wheel. If I can take someone else's and smooth it off a bit, that'd make me happier!
cheers
paul
Take Vos January 21st, 2008, 08:46 AM Hi John,
The difference between sensors is what I thought the problem would be as well. Anyway, I am designing my system so that even bad sensors will be good enough for digital cinema. 6000 euro was a pretty expensive bet, so I have no choice but to continue what I have started.
In a sort of weird way I am lucky I got such a bad sensor; the work I am doing will be a benefit later on.
Cheers,
Take
John Papadopoulos January 21st, 2008, 08:54 AM I think SI are using their own debayer algorithms.
The recording app is just moving data; I believe it has nothing to do with image quality in this case.
Machine vision applications are designed for processing images in scientific or industrial settings. StreamPix can do practically nothing in that area. There are better packages which work great, but not in video applications; it's not the intended market. The software is not designed for streaming video, so you have to build everything yourself.
Anything will look OK if highly compressed to an MPEG-4 variant: the format cannot preserve texture detail, shadow detail is eliminated, noise is reduced because the format cannot code it, etc.
The elmo sample with a sharper lens would look like this: (200% zoom)
http://img182.imageshack.us/img182/3417/elmoraw12uncomct2.png
Probably a lot worse, actually, because some alias is already filtered out by the ultra-soft lens.
John Papadopoulos January 21st, 2008, 09:04 AM In a sort of weird way I am lucky I got such a bad sensor; the work I am doing will be a benefit later on.
Knowing the baseline performance is a good thing. We always test in extreme situations and bad case scenarios for the same reason.
John Papadopoulos January 21st, 2008, 09:09 AM Take a look at these cameras. They cost about 3,600 euro apiece and are 2/3", using the manufacturer-provided video recording apps.
http://img168.imageshack.us/img168/5671/comparison2gx8.jpg
Notice the debayer quality problems of the app and the uniformity problems (vertical lines) of the sensors. Also the difference in sensitivity. Both are using the same expensive sensor. I don't believe someone used to even $200 consumer-quality camcorders would find this quality acceptable, but it is acceptable in most machine vision applications and these cameras are quite popular.
Take Vos January 21st, 2008, 09:16 AM Hi John,
I don't have any fancy lights or anything, so I can only use daylight. That particular ColorChecker was shot indoors with natural light from outside, on an overcast day with lots of rain. Exposure was 0.02 seconds, and I think the lens was set to its third stop (f/2.4?).
Cheers,
Take
John Papadopoulos January 21st, 2008, 09:20 PM The usual incandescent indoor lighting and candles are good for testing. You might have noticed we use them a lot:)
Take Vos January 21st, 2008, 09:20 AM Wow John, those images are pretty bad. I do have that striping as well, but horizontally. This makes me a little more comfortable.
I wonder what they do in consumer cameras. Do they just make sure the sensors produce an acceptable picture, or do they solve it in software?
Take Vos January 21st, 2008, 09:23 AM I noticed your candle pictures. I am planning to support multiple calibration data sets, so that you can use a daylight- or tungsten-balanced one. You will have to select which calibration data to use in the recording application beforehand, though (or modify the movie file with a hex editor :-).
John Papadopoulos January 21st, 2008, 09:27 AM They process the consumer camcorder output a lot. But the formats themselves save the day: there is not much in the shadows of most compressed video formats, so the problems sit below the format limits. With uncompressed video there is nothing to hide the flaws.
The test I posted was quite extreme, though. It was lit by my mobile phone screen from a long distance, it is a 200% zoom, and it uses 24dB of gain:) What would a professional camcorder look like with 24dB in this situation? Totally black, perhaps. I have tried XDCAM HD at 12dB and it was already very noisy.
We would never sell these camera heads to someone, of course. There are better cameras out there. And better samples, even for these particular cameras.
Take Vos January 21st, 2008, 09:48 AM I was just thinking of low-light situations. I have a starlight ceiling in my bedroom which is hard to see even by eye, so the camera would have no chance unless it was integrating over many images.
And then I thought: these would be very small dots that would only be seen by a single pixel on the sensor. Say these stars were white; the resulting color would come from that one pixel's filter, which is pretty weird.
In my debayer algorithm I wanted to separate the high-frequency components anyway; right now they are added immediately to the color we are trying to interpolate. I guess I should instead add the high-frequency component at the end of the debayer (after direction selection) as a white offset.
This would maintain white stars on a black background.
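A toy numeric example of the difference (hypothetical values):

# a one-pixel white star lands on a red-filtered photosite
star, bg = 0.80, 0.02
hf = star - bg                   # high-frequency residual at that pixel
# folded into the red channel only:   RGB ~ (0.80, 0.02, 0.02) -> a red star
# added as a white offset at the end: RGB ~ (0.80, 0.80, 0.80) -> stays white
print((bg + hf, bg + hf, bg + hf))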
Paul Curtis January 21st, 2008, 10:29 AM John,
I'm pretty sure that SI use CineForm all the way through; I will check with them, though. The CineForm RAW product lightly compresses the sensor data before debayering. Then you can choose a real-time debayer (with real-time grading possibilities) or render a final-quality debayer with the editing system. I believe the final render is bilinear-based.
Now, what happens to the sensor data before giving it to CineForm may be the grey area we are talking about: what kind of sensor fixing is needed, hardware-based or software? Perhaps that's the area that Silicon Imaging have put their effort into as well? Is this the key area, hardware-based correction before the data?
But there's a chance that some of those images are from the quick-and-dirty debayer, not the final one.
You've mentioned in the past you didn't like CineForm's debayer, so do you have some other samples you've seen?
cheers
paul
Take Vos January 21st, 2008, 11:05 AM Hi Paul,
A bilinear debayer is very low quality, so I hope they do something a little better.
My own debayer is very slow, though; it takes around 5 seconds for a single image. The fast debayer is automatically selected by Final Cut Pro, so you can edit at real-time performance. When you export, Final Cut Pro automatically selects the high-quality debayer.
Cheers,
Take
Paul Curtis January 21st, 2008, 11:13 AM Take,
Actually I've since gone back to the images on Silicon Imaging and pixel-peeped, and I can quite easily see debayering artifacts (even without sharpening). So maybe it's not that good (the workflow is nice and the overall 'feel' of the images is great).
I've had another look at John's images and the debayering is substantially better!
Is your custom debayer based on bilinear too?
Also, there's a thread about CineForm vs RED's debayering which is worth a read over on reduser.net.
cheers
paul
John Papadopoulos January 21st, 2008, 11:33 AM Debayer quality is always a trade-off between suitability for post processing and resolution. The expensive debayers are not very good in postprocessing and can have some small artifacts in motion; their intelligence is their weak point. The cheap debayers are not very good in terms of resolution and come with artifacts. You have to find a solution somewhere in the middle. The realtime debayers are a challenge of their own. For a production tool like a camera, you cannot have something that needs 50x or 150x realtime to debayer; it's not practical.
We are using a debayer we have designed from scratch. Lots of different versions are used in the samples, some with bugs, some without.
I have seen lots of images from the SI camera. I'm sure they have better quality options on final output besides bilinear; I remember reading about that. The CineForm codec is using a basic bilinear, I believe. SI uses CineForm RAW, but the SI is a complete system with user interface, monitoring and extra processing, fine-tuned to the specific sensor, etc. The CineForm codec is just compression, with a low-quality playback preview and medium-quality final output.
Take Vos January 21st, 2008, 12:49 PM Paul, no my debayer is not based on bilinear interpolation.
My interpolation is a blend of AHD, posteriori and some of my own ideas.
AHD and posteriori interpolate the green plane twice: once horizontally and once vertically. I add a third pass that interpolates using all the surrounding pixels. In all three passes the interpolation also uses the red and blue planes to retrieve the high-frequency component and transplant it into the green plane; this increases the resolution by quite a bit.
Then the red and blue planes are interpolated two or three times, based on the green planes we already reconstructed. When reconstructing blue values at red pixels (or red values at blue pixels) we again use directed interpolation. The red and blue planes are reconstructed by looking at the color difference relative to green; otherwise you get a lot of color aliasing, as in bilinear interpolation.
Now we have two or three full-color images, and we select each pixel from one of them depending on the smoothness of the area around it. This is how the zipper artifacts are eliminated. The third image is used to get rid of the maze artifact, which I feel is important for cinema use.
Then, for an encore, the resulting image is passed through a couple of median filters that work on the color differences. This reduces the color aliasing even further.
Most debayer algorithms work with integer numbers, which can reduce quality by quite a bit and makes color grading difficult. I do all these calculations in 32-bit floating point to retain precision, which I hope will make color grading easy.
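The median step on the color differences might be sketched like this (filter size and number of passes invented for illustration):

import numpy as np
from scipy.ndimage import median_filter

def chroma_median(rgb, size=3, passes=2):
    # filter R-G and B-G, leaving the (detail-carrying) green untouched;
    # colour fringes shrink while luma resolution is preserved
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    for _ in range(passes):
        r = g + median_filter(r - g, size=size)
        b = g + median_filter(b - g, size=size)
    return np.stack([r, g, b], axis=-1)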
John Papadopoulos January 21st, 2008, 01:32 PM Paul, no my debayer is not based on bilinear interpolation.
My interpolation is a blend of AHD, posteriori and some of my own ideas.
I never said it was bilinear. It doesn't look bilinear at all. It does look like an AHD variant in the shadows.
We use synthetic images to test the debayer, but mainly heavily downsampled DSLR images (using a custom filter to prevent alias), which results in very smooth and sharp 4:4:4 images. Then we remove the color portions that cannot be encoded by the debayer, send the images through the debayer and compare to the original 4:4:4. It is a much better test than actual images from the sensor, because the MTF is extremely high and any alias and image quality issues show up.
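That kind of harness is easy to reproduce; a minimal sketch, where your_debayer stands for whatever implementation is under test:

import numpy as np

def mosaic(rgb):
    # sample an RGGB bayer mosaic out of a ground-truth H x W x 3 image
    raw = np.empty(rgb.shape[:2], rgb.dtype)
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]    # R
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]    # G
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]    # G
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]    # B
    return raw

def psnr(a, b, peak=1.0):
    return 10 * np.log10(peak ** 2 / np.mean((a - b) ** 2))

# truth = a downsampled, alias-free 4:4:4 image scaled to [0, 1]
# print(psnr(your_debayer(mosaic(truth)), truth))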
Take Vos January 21st, 2008, 01:38 PM John, Paul asked me if my algorithm is bilinear.
Jason Rodriguez January 21st, 2008, 01:46 PM Actually I've since gone back to the images on Silicon Imaging and pixel-peeped, and I can quite easily see debayering artifacts (even without sharpening). So maybe it's not that good (the workflow is nice and the overall 'feel' of the images is great).
The images for "pixel-peeping" on the stills gallery are unfortunately only bilinear, and this was what we got from that user (i.e., they exported the footage for us, we didn't shoot that ourselves).
CineForm provides better demosaic options for final output.
John Papadopoulos January 21st, 2008, 01:59 PM The images for "pixel-peeping" on the stills gallery are unfortunately only bilinear, and this was what we got from that user (i.e., they exported the footage for us, we didn't shoot that ourselves).
CineForm provides better demosaic options for final output.
Hi Jason,
Would you say the users actually prefer bilinear because of the cost of other methods?
And by CineForm do you mean the generic codec or the one you use?
Daniel Lipats January 21st, 2008, 02:02 PM This may seem like a silly question; things such as debayering are over my head. But if it's a problem to get a high-quality debayer in real time, why not just do it in post on a high-end system?
Wouldn't it make sense to record in the highest possible quality format and debayer it later with a more complex algorithm? And also have it available for the future, as better, more efficient and sharper algorithms become available?
John Papadopoulos January 21st, 2008, 02:07 PM This may seem like a silly question; things such as debayering are over my head. But if it's a problem to get a high-quality debayer in real time, why not just do it in post on a high-end system?
Wouldn't it make sense to record in the highest possible quality format and debayer it later with a more complex algorithm? And also have it available for the future, as better, more efficient and sharper algorithms become available?
It is always done in post; the data in the video files is still bayer. But it still makes a difference: if you render a 20-minute video and it takes 40 minutes vs 40 hours, I would say that's important:)
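The arithmetic lines up with the ~5 seconds per frame figure quoted earlier in the thread:

frames = 20 * 60 * 24            # a 20-minute video at 24 fps
print(frames * 5 / 3600)         # 28,800 frames at 5 s each: 40.0 hours
print(frames / 12 / 60)          # a ~12 fps fast debayer: 40.0 minutes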
Paul Curtis January 21st, 2008, 04:29 PM The images for "pixel-peeping" on the stills gallery are unfortunately only bilinear, and this was what we got from that user (i.e., they exported the footage for us, we didn't shoot that ourselves).
CineForm provides better demosaic options for final output.
Jason, the next question is: do you have any samples up anywhere? I thought I remembered a whole batch of images on the SI site, but not anymore.
I'm not a pixel-peeper, but in those you can clearly see artifacts in the eye lights and other places of high transition, without zooming.
thanks
paul
John Papadopoulos January 21st, 2008, 09:13 PM This is some interesting DSP. We removed the bilinear debayer effects and did a debayer and lens correction from scratch. I didn't code the processing, but I find it amusing that you can get from A to B:)
http://img255.imageshack.us/img255/6387/rebuildaq5.jpg