January 21st, 2008, 09:27 AM | #136 |
Major Player
Join Date: Nov 2007
Location: Athens Greece
Posts: 336
|
Consumer camcorders process their output a lot, but the formats themselves save the day: most compressed video formats keep very little in the shadows, so the problems sit below the format's limits. With uncompressed video there is nothing to hide the flaws.
The test I posted was quite extreme, though: it was lit by my mobile phone screen from a long distance, it is shown at 200% zoom, and it uses 24dB of gain :) What would a professional camcorder look like with 24dB in this situation? Totally black, perhaps. I have tried XDCAM HD at 12dB and it was already very noisy. Of course we would never sell these camera heads to someone. There are better cameras out there, and better samples even for these particular cameras. |
January 21st, 2008, 09:48 AM | #137 |
Major Player
Join Date: Mar 2007
Location: Amsterdam The Netherlands
Posts: 200
|
I was just thinking of low-light situations, and I have a starlight ceiling in my bedroom which is hard to see even by eye, so the camera would have no chance unless it was integrating over many images.
And then I thought: these would be very small dots, each seen by only a single pixel on the sensor. Even if the stars were white, the resulting color would be that of the pixel's filter, which is pretty weird. In my debayer algorithm I wanted to separate the high-frequency components anyway; right now they are added immediately to the color we are trying to interpolate. I guess I should instead add the high-frequency component at the end of the debayer (after direction selection) as a white offset. That would keep white stars on a black background. |
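Roughly what I have in mind, as an untested sketch. The function name and the crude 3x3 box low-pass are just placeholders, not what I would actually use:
Code:
import numpy as np

def add_highfreq_as_white(rgb, raw):
    """Sketch: take the high-frequency residual of each raw photosite and
    add it equally to R, G and B after the directional interpolation, so a
    star that lands on a single photosite stays white instead of taking on
    that photosite's filter color."""
    raw = raw.astype(np.float32)
    # crude 3x3 box low-pass of the raw mosaic (borders wrap for brevity)
    lowpass = sum(np.roll(np.roll(raw, dy, 0), dx, 1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    highfreq = raw - lowpass              # the detail the debayer would tint
    # the same offset on all three channels is colorless, so white stays white
    return rgb.astype(np.float32) + highfreq[..., None]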
January 21st, 2008, 10:29 AM | #138 |
Major Player
Join Date: Mar 2005
Location: canterbury
Posts: 411
|
John,
I'm pretty sure that SI use CineForm all the way through; I will check with them, though. The CineForm RAW product lightly compresses the sensor data before debayering. Then you can choose a real-time debayer (with real-time grading possibilities) or render a final-quality debayer with the editing system. I believe the final render is bilinear based. Now, what happens to the sensor data before it is handed to CineForm may be the grey area we are talking about. What kind of sensor fixing is needed, hardware based or software? Perhaps that's the area Silicon Imaging have put their effort into as well. Is this the key area, hardware-based correction before the data? But there's a chance that some of those images are from the quick-and-dirty debayer, not the final one. You've mentioned in the past that you didn't like CineForm's debayer, so do you have some other samples you've seen? cheers paul |
January 21st, 2008, 11:05 AM | #139 |
Major Player
Join Date: Mar 2007
Location: Amsterdam The Netherlands
Posts: 200
|
Hi Paul,
a bilinear debayer is very low quality, so I hope they do something a little better. My own debayer is very slow, though: it takes around 5 seconds for a single image. The fast debayer is automatically selected by Final Cut Pro, so you can edit at real-time performance; when you export, Final Cut Pro automatically switches to the high-quality debayer. For reference, I've put a minimal sketch of what "bilinear" actually does below the post. Cheers, Take |
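"Bilinear" means more or less this: every missing color is just the average of the nearest samples of that color, with no edge direction and no help from the other channels. A rough illustration (RGGB layout and wraparound borders assumed; this is not anyone's shipping code):
Code:
import numpy as np

def bilinear_demosaic(raw):
    """Minimal bilinear debayer for an RGGB mosaic (float in, float out)."""
    raw = raw.astype(np.float32)
    h, w = raw.shape
    y, x = np.mgrid[0:h, 0:w]
    r_mask = (y % 2 == 0) & (x % 2 == 0)
    b_mask = (y % 2 == 1) & (x % 2 == 1)
    g_mask = ~(r_mask | b_mask)

    def interp(mask):
        # normalized 3x3 average of the known samples of one color; for a
        # Bayer mosaic this reproduces the usual 2- and 4-tap bilinear weights
        vals = np.where(mask, raw, 0.0)
        wts = mask.astype(np.float32)
        num = sum(np.roll(np.roll(vals, dy, 0), dx, 1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        den = sum(np.roll(np.roll(wts, dy, 0), dx, 1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        out = num / den
        out[mask] = raw[mask]             # keep the measured samples as-is
        return out

    return np.dstack([interp(r_mask), interp(g_mask), interp(b_mask)])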
January 21st, 2008, 11:13 AM | #140 |
Major Player
Join Date: Mar 2005
Location: canterbury
Posts: 411
|
Take,
Actually, I've since gone back to the images on Silicon Imaging's site, pixel-peeped, and can quite easily see debayering artifacts (even without sharpening). So maybe it's not that good (the workflow is nice and the overall 'feel' of the images is great). I've had another look at John's images and the debayering is substantially better! Is your custom debayer based on bilinear too? Also, there's a thread about CineForm vs RED's debayering over on reduser.net which is worth a read. cheers paul |
January 21st, 2008, 11:33 AM | #141 |
Major Player
Join Date: Nov 2007
Location: Athens Greece
Posts: 336
|
Debayer quality is always a trade-off between resolution and how well the result holds up in post-processing. The expensive debayers are not very good for post-processing and can show small artifacts in motion; their intelligence is their weak point. The cheap debayers are not very good in terms of resolution and come with artifacts of their own. You have to find a solution somewhere in the middle. The real-time debayers are a challenge on their own. For a production tool like a camera, you cannot have something that needs 50x or 150x realtime to debayer; it's not practical (see the quick numbers below).
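Just to put rough numbers on that (the frame rate and running time are purely illustrative figures, not a measurement of any particular debayer):
Code:
# what "50x realtime" means for a finished project, back of the envelope
fps = 24                                   # assumed frame rate
runtime_min = 90                           # assumed running time
frames = fps * 60 * runtime_min            # 129,600 frames
seconds_per_frame = 50.0 / fps             # 50x realtime ~ 2.1 s per frame
hours = frames * seconds_per_frame / 3600.0
print(f"{frames} frames -> about {hours:.0f} hours just to debayer")  # ~75 h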
We are using a debayer we have designed from scratch. Lots of different versions are used in the samples, some with bugs, some without. I have seen lots of images from the SI camera, and I'm sure they have better-quality options for final output besides bilinear; I remember reading about that. The CineForm codec itself uses a basic bilinear, I believe. The SI uses CineForm RAW, but the SI is a complete system with a user interface, monitoring, extra processing, fine-tuning to the specific sensor, etc. The CineForm codec on its own is just compression with a low-quality playback preview and a medium-quality final output. |
January 21st, 2008, 12:49 PM | #142 |
Major Player
Join Date: Mar 2007
Location: Amsterdam The Netherlands
Posts: 200
|
Paul, no, my debayer is not based on bilinear interpolation.
My interpolation is a blend of AHD, posteriori and some ideas of my own. AHD and posteriori interpolate the green plane twice: once horizontally and once vertically. I add a third pass that interpolates using all surrounding pixels. In all three passes the interpolation also uses the red and blue planes to retrieve the high-frequency component and transplant it into the green plane; this increases the resolution by quite a bit.

Now we can interpolate the red and blue planes two or three times, based on the green planes we already reconstructed. When reconstructing blue values at red pixels (or red values at blue pixels) we again use directed interpolation. The red and blue planes are reconstructed by looking at the color difference compared to green, otherwise you get a lot of color aliasing, as in bilinear interpolation.

Now we have two or three full-color images, and for each pixel we select a value from one of them depending on the smoothness of the area around it. This is how we eliminate the zipper artefacts. The third image is used to get rid of the maze artefact, which I feel is important for cinema use. Then, for an encore, the resulting image is passed through a couple of median filters that work on the color differences; this reduces the color aliasing even further.

Most debayer algorithms work with integer numbers, which can reduce quality by quite a bit and makes color grading difficult. I am doing all these calculations in 32-bit floating point to retain precision, which I hope will make color grading easy. |
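To make the structure a bit more concrete, here is a heavily simplified sketch in Python/numpy. It is not my actual code: the RGGB layout, the plain averaging for the green candidates, the box interpolation of the color differences and the crude roughness measure are all placeholders for the real AHD-style directional logic and homogeneity test, and borders simply wrap. It only shows the skeleton: three green candidates, red/blue via color differences, per-pixel selection by smoothness, and a median pass on the color differences.
Code:
import numpy as np
from scipy.ndimage import median_filter

def demosaic_sketch(raw):
    """Structural sketch only: three green candidates (H, V, all-around),
    chroma via color differences, smoothness-based selection, median pass.
    Everything stays in 32-bit float, no integer rounding anywhere."""
    raw = raw.astype(np.float32)
    h, w = raw.shape
    y, x = np.mgrid[0:h, 0:w]
    is_r = (y % 2 == 0) & (x % 2 == 0)
    is_b = (y % 2 == 1) & (x % 2 == 1)

    def shift(a, dy, dx):                 # neighbor access, borders wrap
        return np.roll(np.roll(a, dy, 0), dx, 1)

    # green plane, interpolated three times: horizontal, vertical, all-around
    missing_g = is_r | is_b
    g_h, g_v, g_o = raw.copy(), raw.copy(), raw.copy()
    g_h[missing_g] = (0.5 * (shift(raw, 0, 1) + shift(raw, 0, -1)))[missing_g]
    g_v[missing_g] = (0.5 * (shift(raw, 1, 0) + shift(raw, -1, 0)))[missing_g]
    g_o[missing_g] = (0.25 * (shift(raw, 0, 1) + shift(raw, 0, -1)
                              + shift(raw, 1, 0) + shift(raw, -1, 0)))[missing_g]

    # red/blue reconstructed from color differences against each green candidate
    def full_color(g):
        out = np.empty((h, w, 3), np.float32)
        out[..., 1] = g
        for chan, mask in ((0, is_r), (2, is_b)):
            diff = np.where(mask, raw - g, 0.0)      # R-G or B-G at known sites
            wts = mask.astype(np.float32)
            num = sum(shift(diff, dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            den = sum(shift(wts, dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            plane = g + num / np.maximum(den, 1e-6)  # slow-varying difference + G
            plane[mask] = raw[mask]                  # keep the measured samples
            out[..., chan] = plane
        return out

    candidates = [full_color(g_h), full_color(g_v), full_color(g_o)]

    # per-pixel selection: prefer the candidate whose local color differences
    # are smoothest (a crude stand-in for a real homogeneity map)
    def roughness(img):
        d = img[..., 0] - img[..., 1]
        return sum(np.abs(d - shift(d, dy, dx))
                   for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)))

    choice = np.argmin(np.stack([roughness(c) for c in candidates]), axis=0)
    result = candidates[2].copy()                    # default: all-around pass
    for k in (0, 1):
        result[choice == k] = candidates[k][choice == k]

    # finally, median-filter the color differences to knock down color aliasing
    r_g = median_filter(result[..., 0] - result[..., 1], size=3)
    b_g = median_filter(result[..., 2] - result[..., 1], size=3)
    result[..., 0] = result[..., 1] + r_g
    result[..., 2] = result[..., 1] + b_g
    return result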
January 21st, 2008, 01:32 PM | #143 | |
Major Player
Join Date: Nov 2007
Location: Athens Greece
Posts: 336
|
Quote:
We use synthetic images to test the debayer, but mainly heavily downsampled DSLR images (using a custom filter to prevent aliasing), which results in very smooth and sharp 4:4:4 images. Then we remove the color portions that cannot be encoded by the debayer, send the images through the debayer and compare the result to the original 4:4:4. It is a much better test than actual images from the sensor, because the MTF is extremely high and any aliasing or image-quality issues show up. |
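In outline it looks something like this (a toy harness, not our actual test code; the anti-alias downsampling and the step that removes colors the debayer cannot encode are assumed to have already been applied to reference_rgb, and the RGGB layout and PSNR metric are assumptions):
Code:
import numpy as np

def evaluate_debayer(reference_rgb, debayer):
    """Mosaic a clean 4:4:4 reference, run it through the debayer under
    test and measure the error against the original (values in [0, 1])."""
    h, w, _ = reference_rgb.shape
    y, x = np.mgrid[0:h, 0:w]

    # simulate an RGGB sensor: keep one color sample per photosite
    mosaic = np.where((y % 2 == 0) & (x % 2 == 0), reference_rgb[..., 0],
             np.where((y % 2 == 1) & (x % 2 == 1), reference_rgb[..., 2],
                      reference_rgb[..., 1]))

    reconstructed = debayer(mosaic)

    mse = np.mean((reconstructed - reference_rgb) ** 2)
    return 10.0 * np.log10(1.0 / mse)     # PSNR in dB, higher is better
Any debayer that takes a single-channel mosaic and returns an RGB image can be dropped in as the debayer argument.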
|
January 21st, 2008, 01:38 PM | #144 |
Major Player
Join Date: Mar 2007
Location: Amsterdam The Netherlands
Posts: 200
|
John, Paul asked me if my algorithm is bilinear.
|
January 21st, 2008, 01:46 PM | #145 | |
Trustee
Join Date: Mar 2003
Location: Virginia Beach, VA
Posts: 1,095
|
Quote:
CineForm provides better demosaic options for final output. |
|
January 21st, 2008, 01:59 PM | #146 | |
Major Player
Join Date: Nov 2007
Location: Athens Greece
Posts: 336
|
Quote:
Would you say that users actually prefer bilinear because of the cost of the other methods? And by CineForm, do you mean the generic codec or the one you use? |
|
January 21st, 2008, 02:02 PM | #147 |
Major Player
Join Date: Feb 2006
Posts: 260
|
This may seem like a silly question, since things such as debayering are over my head, but if it's a problem to get a high-quality debayer in real time, why not just do it in post on a high-end system?
Wouldn't it make sense to record in the highest possible quality format and debayer it later with a more complex algorithm, and also have it available for the future, as better, more efficient and sharper algorithms become available? |
January 21st, 2008, 02:07 PM | #148 | |
Major Player
Join Date: Nov 2007
Location: Athens Greece
Posts: 336
|
Quote:
|
|
January 21st, 2008, 04:29 PM | #149 | |
Major Player
Join Date: Mar 2005
Location: canterbury
Posts: 411
|
Quote:
I'm not a pixel peeper, but in those you can clearly see artifacts in the eye lights and in other places with high transitions, even without zooming. thanks paul |
|
January 21st, 2008, 09:13 PM | #150 |
Major Player
Join Date: Nov 2007
Location: Athens Greece
Posts: 336
|
This is some interesting DSP. We removed the bilinear debayer effects and did a debayer and lens correction from scratch. I didn't code the processing, but I find it amusing that you can get from A to B :)
http://img255.imageshack.us/img255/6387/rebuildaq5.jpg |