View Full Version : 4:4:4 10bit single CMOS HD project



Obin Olson
July 31st, 2004, 04:21 PM
Ben, your idea is awesome: a JPEG-type codec at 10-bit! I don't care about JPEG as long as we keep it at 10-bit for color work! The bit depth is the most important thing IMHO...and the most lacking from the NLE people

Well, for us, Jason, all we need is the high-quality codec...everything we master is SD, so if we can work in HD, do effects in HD and color work in HD, then the last step is SD. I don't want or need all that gear...not yet anyway

Jason Rodriguez
July 31st, 2004, 04:28 PM
If you're doing everything to SD, then you might want to try out the PhotoJPEG codec from Apple or the DVCProHD codecs from Apple/Panasonic.

FCP should be able to edit with Sheer Video, unless of course Sheer doesn't really support HD frame-sizes in FCP (which I think was our problem).

I don't know of any 10-bit JPEG codecs out there except for something based on JPEG2000.

At 720p sizes you could also probably make your own HD array with 4 SATA drives; that might be enough.

<<<-- Originally posted by Obin Olson : the bit depth is the most important thing IMHO...and the most lacking from the NLE people -->>>

The reason it's the most lacking is because almost every tape format is 8-bit; the exceptions are D-5, HDCAM-SR, D-6 Voodoo and Digibeta. Everything else is 8-bit.

Juan M. M. Fiebelkorn
July 31st, 2004, 04:33 PM
Well, that is what I was saying the last time I talked about JPEG2000 (the open-source projects JasPer, libj2k, etc.), and most of the people here condemned me.
JPEG2000 supports bit depths all the way up to 16 bits.
Ben, weren't you the guy who always talks against compression??
It looks to me like things are changing very fast here, hehehe :)


BTW, if anybody is interested in this, the nearest colorspace to a Bayer pattern is YUV 4:2:0
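A rough sanity check on that claim: counting stored samples per 2x2 block of pixels (one sample per photosite, an assumption that ignores bit depth and compression), 4:2:0 is indeed the subsampling scheme closest to a Bayer mosaic's data rate, and 4:2:2 carries exactly twice as much as Bayer:

```python
# Samples stored per 2x2 block of pixels, for a Bayer mosaic vs.
# common YUV chroma-subsampling schemes (one sample per photosite;
# bit depth and compression ignored).
def samples_per_2x2(scheme):
    table = {
        "bayer": 4,   # 2 green + 1 red + 1 blue photosites
        "4:2:0": 6,   # 4 luma + 1 Cb + 1 Cr
        "4:2:2": 8,   # 4 luma + 2 Cb + 2 Cr
        "4:4:4": 12,  # 4 luma + 4 Cb + 4 Cr
    }
    return table[scheme]

for s in ("bayer", "4:2:0", "4:2:2", "4:4:4"):
    print(s, samples_per_2x2(s))
```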

Ben Syverson
July 31st, 2004, 05:02 PM
Juan, I just think there should be a 100% lossless pathway for people who want to maintain everything. However, realistically, this isn't necessary for most people. So something like JPEG at a high quality in 10bit is enough.

I agree with Obin -- I don't want a bunch of high-end gear that depreciates hundreds of dollars every minute. I have absolutely no idea how I'll be producing an HD master in the future, but I'm not concerned. For now, no one I know (including me) has an HD display (besides a computer monitor), so what's the point of producing an HD final product? I'm interested in finishing on SD DVDs. When HD DVDs are available, that will be my output option of choice.

The point is flexibility. With uncompressed 10bit 720p, you have tons of options. But I won't be outputting to HD tape until there's a good, inexpensive option (better than HDV anyway), or someone with lots of money wants to see my movie on the big screen. :)

- ben

Les Dit
July 31st, 2004, 09:22 PM
I think I've mentioned before that I've used 12-bit JPEG for years for feature work. It's close enough to lossless. It's not gonna be real time, though. See the public-domain JPEG code; it's free for you to compile however you want.
-Les

Jason Rodriguez
July 31st, 2004, 09:27 PM
How hard would it be to adapt DCRAW?

Juan and I were talking, and this seems like the best conversion out there that already has the source-code available.

you can find the source code here: http://www.cybercom.net/~dcoffin/dcraw/dcraw.c

I was testing this out on some images, and they look really good.

I've also emailed David to see whether he'll implement converting TIFF files that are greyscale Bayer into color files. He might, and he might not, so we'll see.

Jason Keenan
August 1st, 2004, 08:24 PM
Ok, here I go again putting forward info I know little about.

Just found a multimedia library, unfortunately only for windows, which seems to include bayer conversion. It's here http://www.gromada.com/mcl.html

I'm not sure if this will work in this case though.

(ok, I just downloaded their app and it doesn't convert automatically. That's not to say that the library couldn't be made to do it)

I was also mucking around at home, and had a look in a hex editor at the raw_cap.bin file and at the same file converted to an uncompressed TIFF. It's just a matter of cutting the header and end section from the TIFF and applying them to the bin file. I was going to put together a very basic Windows assembly tool, but I'm a bit rusty. (Don't get excited; my 'programming' skills come from cracking in my younger years. Anything more than a hack job is beyond me.)

Obviously, this is just the Bayer greyscale image. I'm still trying to get my head around Bayer, but I found a site that explains it a bit, with some formulas: http://www.siliconimaging.com/RGB%20Bayer.htm. This one is better, with MATLAB scripts: http://www-ise.stanford.edu/~tingchen/main.htm

In windows you can put the following in a .bat file

COPY /B header.bin /B + raw_cap.bin /B + tail.bin /B file.tif

header.bin - being a binary file containing the start of the tiff
raw_cap.bin - being the still capture
tail.bin - being the tail bit from the tiff

You will of course need to generate your header and tail bins.

Do this by getting your image file, open it in photoshop, save it as an uncompressed tif.

Open the raw image and your TIFF in a hex editor. Search for the first 5 or so bytes of the raw image within the TIFF file, delete from there onward, then save the result as header.bin. Open the TIFF again, search for the last 5 or so bytes of the raw file, delete everything ahead of that point, and save it as tail.bin.

In theory, this should make your raw bayer file into a bayer tiff.

You can do a similar thing in a Unix script so OSX users can do it too.
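That "similar thing" can be sketched in Python, which covers OS X (and Windows) in one go; the file names are taken from the .bat example above:

```python
# Cross-platform equivalent of:
#   COPY /B header.bin /B + raw_cap.bin /B + tail.bin /B file.tif
# Splices a TIFF header and tail (as produced by the hex-editor
# steps above) around the raw Bayer capture.
def splice_tiff(header, raw, tail, out):
    with open(out, "wb") as dst:
        for name in (header, raw, tail):
            with open(name, "rb") as src:
                dst.write(src.read())

# Example: splice_tiff("header.bin", "raw_cap.bin", "tail.bin", "file.tif")
```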

I can post the binary files somewhere if you like, but until then you can give it a bash yourself.

Raavin :)

Jason Rodriguez
August 2nd, 2004, 01:03 AM
Hey Guys,

I'm not sure if this is a problem or not,

But as of right now, if you're trying to convert the RAW images (especially from the Altasens) to a color 16-bit TIFF, etc., even if it takes 1 second per frame (which is pretty quick), it's going to take 24 hours to process 1 hour of shot footage!

Another thing to consider is that you can't edit with TIFF sequences, so you'll have to render some offline format, and again, that's going to take at least 1 second per frame to shrink the HD down to SD resolutions for editing (or to encode to Sheer, etc.), meaning again, at least another full 24 hours for 1 hour of shot footage.

2 seconds per frame would take 48 hours, etc.
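The arithmetic behind those figures, as a quick sketch:

```python
# Offline processing time for shot footage at a given per-frame cost.
def processing_hours(seconds_per_frame, shot_hours=1, fps=24):
    frames = shot_hours * 3600 * fps   # 86,400 frames per hour at 24fps
    return frames * seconds_per_frame / 3600.0

print(processing_hours(1))  # 1 sec/frame -> 24 hours per shot hour
print(processing_hours(2))  # 2 sec/frame -> 48 hours per shot hour
```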

I'm trying to work out whether that will be a problem: namely, a minimum of 48 hours on some of today's fastest gear (I have a dual 2GHz G5, so it's no slouch) to get your frames output, unless you invest in a render farm. That render farm is also going to need some very fat networking pipes, because with plain Ethernet through a cheap hub (not Jumbo Frames on gigabit; those switches cost $5,000), the amount of information you'll be passing around will eat up processor cycles and ruin the effectiveness of the render farm. That's why I mentioned a good switch with Jumbo Frames, which again can easily cost $5K or more.

Anyway, I'm just wondering how this would apply to shooting a film. I guess you would develop as-you-go? Also be aware that the uncompressed footage converted to RGB will take up around 1.2TB at 12 bits per channel for the 1920 Altasens and 1 hour of shot footage. Packed RAW greyscale should be about a quarter of that amount; at roughly 72MB/s, an hour of RAW Bayer footage from the Altasens at 24fps works out to around 260GB.
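A back-of-envelope check on those numbers. Two assumptions here, since the thread doesn't pin down the container formats: the converted RGB is stored in 16-bit-per-channel containers (holding the 12-bit data), and the RAW Bayer is tightly packed 12-bit samples:

```python
W, H, FPS, SECONDS = 1920, 1080, 24, 3600  # one hour of 1080p24 footage

# Converted RGB: 3 channels, 12-bit data in 16-bit (2-byte) containers
rgb_bytes = W * H * 3 * 2 * FPS * SECONDS
print(rgb_bytes / 1e12)             # ~1.07 TB per hour

# RAW Bayer: one packed 12-bit sample per photosite
bayer_rate = W * H * 12 / 8 * FPS   # bytes per second
print(bayer_rate / 1e6)             # ~74.6 MB/s
print(bayer_rate * SECONDS / 1e9)   # ~268.7 GB per hour
```

So packed RAW Bayer comes out to roughly a quarter of the 16-bit RGB size.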

Not a show-stopper, but I guess Bayer image processing and offline processing time-frames are things that need to be considered for a film production workflow.

Ben Syverson
August 2nd, 2004, 01:11 AM
You could process footage way, way, way faster than 1fps on reasonably decent hardware. Even if you're doing pretty sophisticated de-Bayering. Especially if the code is optimized (possibly with vectorization/SIMD).

Totally unoptimized, it would probably run at about 2-3fps. Vectorized, your main bottleneck becomes the drive speed. I'm sure you could get between 10 and 15fps.

Jason Rodriguez
August 2nd, 2004, 01:15 AM
With LinBayer I'm only getting around 1.2fps on my dual 2Ghz G5

Les Dit
August 2nd, 2004, 01:43 AM
Ben, what are you basing this frame-rate estimate on? How long does the software package the still-photo guys use to demosaic take? I forget the name of the app, but it's for working on still-camera RAW images.
-Les



<<<-- Originally posted by Ben Syverson : You could process footage way, way, way faster than 1fps on reasonably decent hardware. Even if you're doing pretty sophisticated de-Bayering. Especially if the code is optimized (possibly with vectorization/SIMD).

Totally unoptimized, it would probably run at about 2-3fps. Vectorized, your main bottleneck becomes the drive speed. I'm sure you could get between 10 and 15fps. -->>>

Juan M. M. Fiebelkorn
August 2nd, 2004, 01:55 AM
Well, a long time ago I made a resizing filter for Avisynth that ran at around 18 fps on an Athlon 2000 for a resulting image of 1600x1200. Obviously, it worked on 8-bit images.
It was really unoptimized and very alpha, with lots of comparisons, multiplications and divisions. It uses some techniques that are well suited to demosaicking (in fact, I was upsampling color vectors based on luma info, or red and blue based on green for RGB). Maybe if I can get the time to code again I could release something in a month or two......

Ben Syverson
August 2nd, 2004, 01:59 AM
linBayer is totally unoptimized, and 100% single-thread -- ie, it's only using one of your processors, and using it extremely inefficiently. I won't optimize it until the code is a little more finalized.

Multithreading alone will give you an almost 100% speed increase, because it'll start using that second processor -- that would put you anywhere from 2fps to 2.4fps. Simple code optimization could probably give at least a 25% boost, to around 2.75 - 3fps.

But linBayer is a prime candidate for vectorization, which could boost the speed even more, and for each processor. On a 2x2ghz G5, I wouldn't expect anything less than 8 - 10fps once it's fully optimized, and that's conservative.

The real bottleneck is disk speed, not processing speed. AE is not designed for fast disk reads -- rather than reading a bunch of frames into memory at a time, it determines file dependencies on a frame-by-frame basis. So. Every. Frame. Is. Read. In. Separately. That's necessary to keep AE flexible, but if you have a specialized need like we do, it's unnecessary.

A well-coded standalone app could read in a bunch of frames at a time, churn through them with multithreaded and possibly vectorized code, and spit them to disk in sequence. That way, the drive gets to do a nice big sequential read (which is where you get your burst data transfer rates), your processor gets to work directly from RAM, and then it goes back to disk in a big fast sequential write.

Additionally, processing will probably be faster in general than in AE, because you won't have quite as much overhead associated with being a plug-in within a large app.

These are the reasons why 10 - 15fps is totally possible, given some nice disk throughput.

Add to this fact that MPEG-4 compresses in better than realtime on a G5, and you could probably write out a low-res editing proxy at the same time as your Bayer->RGB conversion with a small speed hit.

But the bigger question is, why convert Bayer->RGB this way? It seems to me that the data should be stored as Bayer information, and de-Bayered by a codec as it's played back. That way you get the storage benefits of compressed Bayer (lossless or lossy, either way), but you can see the image in RGB in all your apps. (Including AE, FCP, etc)

It makes absolutely no sense to store uncompressed 4:4:4 RGB versions of Bayer images on your hard drive, unless you just love eating up 3X the disk space and throughput for absolutely NO gains...

Rai Orz
August 2nd, 2004, 02:13 AM
Today I will order the Sumix SMX-150C, because we think the IBIS5 sensor is much better than the others in this price range (2/3" and global shutter). Ben, how much did you pay for it, exactly? It seems Sumix wants to co-operate with us (software and hardware changes).
I know about the bad software and the slow USB 2.0, but this will not be a problem; we have other solutions, so we will use the camera at 24fps with 10-bit and global shutter.

Global shutter was the reason for the Sumix decision, because the SI rolling shutter is absolutely unfit. We made tests with a German camera head (with the same sensor characteristics). At 24fps, but also at 48fps, the rolling-shutter artifacts destroy every moving picture.
The Altasens will also have a rolling shutter, but a much faster one (not the whole frame time), which produces far fewer artifacts. The Altasens also supports an external shutter, so that way the artifacts will disappear completely.

Ben Syverson
August 2nd, 2004, 02:19 AM
Rai,

I agree that 2/3" is totally key. The rolling shutter is not such a show-stopper for me, but the IBIS-5 can indeed do global shutter. What remains to be seen is whether the SMX-150c can deliver 24fps of global shutter over USB2. I have my doubts, but I'll be interested to hear how your experiments go.

If you email Sumix and mention my name, they should give you the same deal they gave me. They're very eager to work with filmmakers and figure out our needs.

I should note that they're taking a bit of a summer vacation right now, so you may not get an immediate response. I think they get back in full force in a week or two.

- ben

Wayne Morellini
August 2nd, 2004, 02:49 AM
I thought we were working together towards an eventual cheap solution, step by step? I think we are still going well, with more of the steps getting done. If anyone wants to jump the gun and experiment, or wants broadcast compliance before then, it is going to cost you more.

My 2 cents: we are fine, and if you read Rob's development log, he is making progress on speeding up the software (I did warn everybody that it wasn't easy to program for the best speed).

Juan and Ben, it is obvious you want to do Bayer and compression codecs, so why don't you volunteer to work on them with Rob? He is already doing the capture software; you could each volunteer to do Bayer and compression. It may be crazy economics, but I think it is worth it.

Juan, you wrote about people shooting you down over JPEG2000, but I say go ahead, great (but remember the BBC version). We are looking at universal codec support, but for now we could use a few codecs: the fastest (HuffYUV??) and the best (wavelet-based lossless, near-lossless, and 20:1).

Everybody has different needs; all we need to do is support them.

Ben, you wrote a lot of wisdom there: unless we are passing to broadcast, a display, or some non-Bayer NLE, or using 3 chips, there is no reason to de-Bayer.

Rai, I've been sick in bed for days because of a stomach bug; I will try to email you by the end of the day.

Somebody posted here (or in one of the other threads, even the photo-camera-to-video mod threads) that there are a number of SD cameras you can catch "frames" from live through FireWire. Which models I don't know, but it's worth asking around about.

Thanks

Wayne.

http://www.cameraaction.com.au/, should be able to do a better price on an XL1S.

Jason Rodriguez
August 2nd, 2004, 04:45 AM
Unless your app captures directly to QuickTime, Ben, you're going to have to convert from RAW to QuickTime. Maybe Rob's app could capture directly to QT, although dropped frames will be a new concern (a la FCP captures).

Keep in mind that QT on Windows (where the framegrabbers are) is horribly slow, much slower than on the Mac.

Also, for HD 1920x1080 I'm definitely NOT getting better than real-time MPEG4 transcodes. On SD footage, yes, but not HD. It could be disk-write speeds, but again, I'm back at the good ol' 2-3fps for pure transcodes/resizing.

Rai Orz
August 2nd, 2004, 05:24 AM
<<<-- Originally posted by Ben Syverson: What remains to be seen is whether the SMX-150c can deliver 24fps of global shutter of USB2. I have my doubts, but I'll be interested to hear how your experiments go-->>>

First, we can change the hardware inside the camera. For example, in the past we modified Canon video cameras (upside-down) to work with our 35mm solution, but without a prism. So we can get around the USB 2.0. But...

...second, read this part of a email from Sumix:
"...In the present SMX-150C pixels are sampled at 10 bit. Then a look up table (which can be arbitrarily programed) converts the 10bit to 8bit. In effect at present SMX-150 user has access to all 10 bit by choosing the look up table accordingly. However we have a new version (a software/firmware upgrade) that transfers directly 10 bit pixel data to PC. This upgrade will be ready in 2-3 weeks. In addition this version has the option of using the multi-gain (muti-slope) sampling features in the IBIS5 sensor used in the cameras. This muti-sampling at different gains provides defective 12 bit pixels (compressed in 10 bits for transfer to PC.) This upgrade will be free of charge for people who have purchased the present SMX-150C...
...Both global and rolling shutters can be used for video streaming..."

Juan M. M. Fiebelkorn
August 2nd, 2004, 05:28 AM
I would like to see how the dual-slope operation will look..... :)

Defective or effective??

I'm still thinking they should give us an SDK. It would be good both for us and for them....

Rob Scott
August 2nd, 2004, 05:43 AM
Jason Rodriguez wrote:
<<<-- Unless your app captures directly to Quicktime Ben, you're going to have to convert from RAW to Quicktime, but maybe Rob's app could capture directly to QT, although dropped frames will be a new concern (ala FCP captures). -->>>
I suppose that is possible, but my plan right now is to write to a very simple raw format for speed. QuickTime conversion will be done offline.

Ben Syverson wrote:
<<<-- It seems to me that the data should be stored as Bayer information, and de-Bayered by a codec as it's played back. -->>>
If we had such an on-the-fly Bayer codec, the offline conversion would be very fast; it would just rewrite the video stream into QuickTime format without any further processing.

Mostly, though, I expect the conversion stage to be used to Bayer-process the footage, apply gamma correction (etc.) and then write to QuickTime using a near-lossless codec for a reasonable compression ratio.

Rai Orz
August 2nd, 2004, 06:25 AM
Writing to a very simple RAW format is the key.

I don't look for other ways, because all other ways will "change" the image information (different decoder = different Bayer pixel resolution). If you store the original RAW format, you can select the best Bayer decoder for your images later.

I think a real-time decoder is needed only for the viewfinder or camera display. But maybe you can live with the B&W RAW format there.

Juan M. M. Fiebelkorn
August 2nd, 2004, 07:02 AM
What's the problem with viewing a 1280x720 camera at 640x360 in color on the viewfinder??
Anyway, you would need to view a MODIFIED version of the RAW image on the viewfinder, unless you have some kind of gamma correction suitable for this kind of data inside your brain...

I'm getting lost again.....

And what's the problem with LOSSLESS transformations on the RAW Bayer? (LOSSLESS means you always have the EXACT original data.)

Ben Syverson
August 2nd, 2004, 09:38 AM
@Jason: Unless your app captures directly to Quicktime Ben

The Sumix app can capture directly to AVI, a frame sequence, or a "SMX" video stream, which is basically a stream of raw data. The easiest solution is to generate a grayscale Bayer AVI, and convert that to Quicktime. I guess I don't understand what the problem with that is.

@Jason: Keep in mind that QT on Windows (where the framegrabbers are) is horribly slow, much slower than the Mac.

Doesn't matter much to me -- my PC laptop is just a capture slave. Everything else in my pipeline happens on a Mac.

@Jason: Also for HD 1920x1080 I'm definitely NOT getting better than real-time for MPEG4 transcodes.

I was talking about SD (low res) proxy footage for editing purposes. Just a throwaway thought. It's just one possibility...

@Wayne: Juan and Ben, it is obviouse you want to do bayer and compression codecs, why don't you volunteer to work on them with Rob.

I already have -- I have an open invitation to anyone working on a free or open-source integrated solution to use my de-Bayering code. But it's just as well -- I'd like to get it into better shape before I let anyone see it.

But I'd rather not work directly on Rob's app, as I think he's focusing on a PC-based solution, whereas everything I do is on the Mac. Also, I think Rob's sw is basically capture software specifically for CameraLink (Rob, correct me if I'm wrong), which doesn't apply to my camera. It may make sense to share certain techniques and code snippets, but if I decide to go forward with a Quicktime codec, a lot of that code will be fairly specific.

I don't believe Rob is working on a codec, but maybe he has plans for one...

@Wayne: the fastest codec (huffy??)

Wayne, remember that HuffYUV is... YUV. To get YUV from Bayer, you have to do Bayer->RGB->YUV. And remember that YUV @ 4:2:2 takes up TWICE as much space as Bayer. There's no reason to use HuffYUV.

I'm not sure how well Huffman encoding would work on raw Bayer channels (ie, 1/2 size G, 1/4 size R+B). HuffYUV probably works so well because U and V are mostly gray....

A 10 or 12 bit JPEG/Wavelet would be fine for 99.9999999999% of the people on this board, but it would be slower than lossless, so what's the point?


The single best option in my eyes is RLE/Huff on Bayer data, inside a codec that displays RGB for you. That way, you get the 15-20MB/sec file size, and you never have to worry about de-Bayering, because the codec does it for you. That would be the way to go, and it's what I'm currently investigating via a Quicktime Component. If anyone wants to help, they're more than welcome to.

- ben

Steve Nordhauser
August 2nd, 2004, 10:16 AM
At the risk of supporting my competitors' sales, here is an application note on dual slope with the IBIS-5:
http://www.siliconimaging.com/Specifications/Dual-Slope%20Ap%20Note%20001.pdf

Here is another test:
I just did a quick experiment with the dual slope capabilities of the SI-1280F using XCAP. I placed a 20W halogen in a fairly dark scene, pointing directly at the camera.

(all files are 1.3MB Tiff files)
http://www.siliconimaging.com/Samples/DualSlope/1280%20Single%20slope%2024%20microsec%20exp.tif
http://www.siliconimaging.com/Samples/DualSlope/1280%20Single%20slope%2013msec%20exp.tif
Then I turned on the dual slope:
http://www.siliconimaging.com/Samples/DualSlope/1280%20dual%20slope%2024%20microsec%20bright%20exp.tif

I didn't spend a lot of time tweaking this but you can see that the details of the lamp are there along with the background image in the dual slope capture. I'm sure it would be possible to improve the contrast of the dark image, but this will give you an idea of what can be done.

Having provided this public service, I'll drop in a commercial. If you find that when you move to 10 bit and global shutter, USB 2.0 can't hack it anymore, think about our camera link version. It is true 12 bit and clocks to 60MHz (important in global shutter mode). It is still an IBIS-5 so it comes with all the other baggage (noise and washed out colors), but that is sensor level stuff.

Ben Syverson
August 2nd, 2004, 10:19 AM
Woah -- Steve, what was your gain set to? I've never seen footage that noisy unless my gain was set through the roof...

Ben Syverson
August 2nd, 2004, 10:23 AM
Hopefully there's some way to control the slopes (Sumix mentioned a multislope setup, with control over each color), because as it is, the dual-slope looks extremely unnatural...

Steve Nordhauser
August 2nd, 2004, 12:52 PM
Ben:
This is one of my beefs with the IBIS-5. In global-shutter mode, the integration time is sequential with the readout time. Let's say they can move data over USB 2.0 at 40MB/sec and that the 10-bit data is packed. So you are streaming at 320Mbps, or a 32Mpix/sec max rate. The real pixel rate you need is 1280x720 x 24 = 22Mpix/sec. The leftover time is all you get for integration. The IBIS-5 isn't that sensitive to start with, so crank up the gain or the lights in global-shutter mode.
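Steve's bandwidth arithmetic as a sketch (the 40MB/sec sustained USB 2.0 figure is his working assumption):

```python
# IBIS-5 global shutter: readout and integration are sequential, so
# whatever fraction of each frame time isn't spent reading out is
# all that's left for exposure.
usb_bytes_per_sec = 40e6            # assumed sustainable USB 2.0 rate
bytes_per_pixel   = 10 / 8          # packed 10-bit samples
max_pix_per_sec   = usb_bytes_per_sec / bytes_per_pixel

needed_pix_per_sec = 1280 * 720 * 24    # 720p at 24fps

readout_fraction = needed_pix_per_sec / max_pix_per_sec
print(max_pix_per_sec / 1e6)        # 32.0 Mpix/sec ceiling
print(needed_pix_per_sec / 1e6)     # ~22.1 Mpix/sec required
print(1 - readout_fraction)         # ~31% of frame time left to integrate
```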

With CCD interline transfer or CMOS TrueSNAP you can overlap the integration time and readout time.

The IBIS-5 doesn't exactly control slope. You can set a knee point and any pixel that is saturated at the knee is reset. In my lamp example, the light is reset very shortly before closing the shutter. This lets the bright areas integrate for a short period while allowing the full integration for the rest of the image. Unless it is finely tuned, it does look very artificial. Very nice for machine vision though.

Ben Syverson
August 2nd, 2004, 03:46 PM
Steve -- interesting. Thanks for the illumination. I did do a couple of global shutter tests recently (with still frames) and I found it to be much noisier (and darker) than rolling shutter with the same settings (even without gain). I'm not 100% sure what's going on there, but it was enough to dissuade me from global shutter for now.

I'm implementing a 16 pixel Spline interpolation for linBayer. The results are extremely promising.

This is the normal (bilinear) interpolation (http://www.bensyverson.com/hd/images/tests/bilinear.jpg)

This is the Spline16 interpolation (http://www.bensyverson.com/hd/images/tests/spline.jpg)

I should note that the process is still 100% lossless, even with Spline16 -- the original pixel values are not changed, so you could extract the original sensor data from the RGB data and do a different de-Bayer routine in the future if you so desired...

It's an interesting problem, because any interpolator has to fill in 75% of the image data in the R + B channels. Spline seems to do a really good job. I don't think Spline36 would be worth it, for all the extra processing.

- ben

Jason Rodriguez
August 2nd, 2004, 03:49 PM
Hi Ben,

That Spline16 looks very nice. Will this also be implemented on the green channel as well as the red and blue?

Ben Syverson
August 2nd, 2004, 03:56 PM
Jason, I'll give it a shot. Maybe Spline is all we need to get rid of our gridding and zippering problems, since it sees a neighborhood of 16 pixels for every pixel it interpolates. However, it means that processing 720p really will be slow, and 1080 even more so. Bilinear is an integer operation, whereas spline really needs to take place in floating point. That in itself is not the speed hit; it's the type conversion to/from integer from/to float -- 512 times for every pixel.

At this point, it might make sense to read the image into a floating point array and do everything in float. No type conversion means no slowdown...

Jason et al,

Spline doesn't help at all with zippering. In fact, it looks almost identical to bilinear, since it doesn't have to do very much interpolating (50% of the data is already there).

Our de-zippering stuff still beats anything I've seen, but I'm open to suggestions as always!

Jason Keenan
August 2nd, 2004, 07:37 PM
Wayne:

Yes Raavin is Jason Keenan. I'm using my normal handle just so when people reply to me, others don't get confused with the other Jason. I've seen cameras cheaper from www.videoguys.com.au. They have a good selection too.

Ben:

From a lay point of view, I agree with you about leaving the raw data alone and converting it with a codec. It just seems to make sense. I think it was you who replied to my comment about using avisynth. I suppose that's more what I was thinking.

I'm still trying to get my head around the 8-bit versus 10-bit stuff. Having a look at the raw file in a hex editor and using some flawed logic, it seemed to me that when you drop the raw file to 8-bit you halve the size of the original, because the original 10 bits need a full 16 bits to fit into. Is that right?

On another line, just thinking out loud: is it possible to stream the file through a fantasy 8-bit conversion codec, basically just dropping the 2 extra bits on the fly for preview and editing, and then change codecs for colour grading and final rendering? I'm assuming, though, that you need at least most of the bits to do the Bayer conversion. It just seems that in order to keep things flexible and compatible you need to be able to open the file in anything, at the full frame rate.

Anyone got any links to how bitmaps are arranged in hex format for me???

Raavin :)

Jason Rodriguez
August 2nd, 2004, 08:02 PM
Spline doesn't help at all with zippering

Did it help at all with the "gridding"? I thought your de-zipper system was already really good, so I definitely couldn't see much improvement there, but the "gridding" in low contrast areas/out-of-focus/gradient areas was really the problem for me at least.

Ben Syverson
August 2nd, 2004, 08:06 PM
The gridding is an artifact of the de-zippering that only occurs when the two green offsets or gains don't quite match. So since I wasn't doing any de-zippering, there wasn't any gridding. However, since the two greens on the shot I was working on didn't quite match, you could see a slight horizontal line pattern (just as you would with bilinear)...

Basically, the solution is to correctly calibrate your shot before you record. Otherwise, you'll get either a horizontal line pattern, or gridding when you de-zipper.

- ben

Jason Keenan
August 2nd, 2004, 10:31 PM
Another couple of stupid questions

1) In the '10 bit' raw Bayer file, does that mean each pixel has 1024 shades of grey?

2) Anyone care to give a brief explanation of how these pixel values are represented in the file???

Raavin :)

Ben Syverson
August 2nd, 2004, 10:41 PM
Raavin, yeah -- 10 bit represents 1024 values. You can always determine the number of values by doing
2^x
Where x = the number of bits. So 2^10 == 1024

I don't know what you mean by "represented in the file," but any native 10-bit format will do bit packing. That is, since a byte is 8 bits, and since bytes are how we write to files, uncompressed 10-bit data would be written like this. I'll use "1" to represent the first 10-bit value, "2" to represent the second 10-bit value, etc. Each group of 8 digits is one byte written to the file.

11111111____11222222____22223333____33333344____44444444

etc

That means it won't make much sense in a hex viewer. However, one extremely inefficient way to store 10bit data is to pad it to 2 bytes (16bit). That would look like this:

0000001111111111___0000002222222222___0000003333333333___0000004444444444

As you can see, we're using far more bits/bytes to represent the same data.
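That packing scheme can be sketched in a few lines (a hypothetical helper, not from any camera SDK), emitting the MSB-first layout shown above:

```python
# Pack a sequence of 10-bit values into a tight byte stream,
# MSB-first, matching the "11111111 11222222 ..." layout above.
def pack_10bit(values):
    out, bitbuf, nbits = bytearray(), 0, 0
    for v in values:
        bitbuf = (bitbuf << 10) | (v & 0x3FF)  # append 10 bits
        nbits += 10
        while nbits >= 8:                      # emit every full byte
            nbits -= 8
            out.append((bitbuf >> nbits) & 0xFF)
    if nbits:                                  # zero-pad the tail
        out.append((bitbuf << (8 - nbits)) & 0xFF)
    return bytes(out)

# Four alternating white/black 10-bit pixels pack into 5 bytes.
print(pack_10bit([1023, 0, 1023, 0]).hex())
```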

- ben

Wayne Morellini
August 2nd, 2004, 11:22 PM
<<<-- Originally posted by Ben Syverson :
But I'd rather not work directly on Rob's app, as I think he's focusing on a PC-based solution, whereas everything I do is on the Mac. Also, I think Rob's sw is basically capture software specifically for CameraLink

@Wanye: the fastest codec (huffy??)

A 10 or 12 bit JPEG/Wavelet would be fine for 99.9999999999% of the people on this board, but it would be slower than lossless, so what's the point?

- ben -->>>

I only mentioned it because those two are popular topics here and support the range of compression ratios that people want to see.

I think it was Juan who spoke about a Bayer version of Huff.

The wavelet stuff is not the fastest, but it's among the best liked by people here, and it can do completely lossless.

Rob is trying to do a universal version of the capture software that works with any codec, on the PC first, then Mac/Linux. I think he intends to support USB and Gigabit versions of the Camera Link cameras, but I am unsure. So the codec and capture sides would still be separate.

Wayne.

Wayne Morellini
August 2nd, 2004, 11:23 PM
<<<-- Originally posted by Jason Keenan : Wayne:

confused with the other Jason. I've seen cameras cheaper from www.videoguys.com.au. They have a good selection too.
-->>>

Thanks.

Jason Keenan
August 2nd, 2004, 11:35 PM
Ok, I think I've got it.

So, in your example-

"11111111____11222222____22223333____33333344____44444444"

-if we had 4 pixels which were alternating white and black, it would be

11111111____11000000____00001111____11111100____00000000

yeah???

Would this then show in a hex editor as

ff C0 0f fc 00 ?????? if it was true 10 bit and not padded to 16 bit????

Has it been ascertained whether the bin captures are padded to 16 bit or not? Every other byte has a leading zero, so if it's the 'rawest' format available, I'm guessing they are.

eg "00 00 90 05 ac 06 18 05 54 06 38 06 28 07 28"
from raw_cap.bin

Thanks for the answers, I'm just trying to get my head around it.

Cheers,

Raavin

Ben Syverson
August 2nd, 2004, 11:48 PM
@Raavin: Would this then show in a hex editor as ff C0 0f fc 00 ?????? if it was true 10 bit and not padded to 16 bit????

Yes, precisely. As Rob clarified, the 10-bit data coming off the SI cameras is padded to 16 bits in the following manner (this represents one value):
0000##########00
Where the hashes represent bits with data. So it's 10 bits of data, padded out to 12 bits, within a 16-bit number. It seems very strange to me.

But yes, that's why every pixel will look like 0X XX in hex, because the first four bits are always 0. And the last two bits are always zeroes, so the last hex digit of every pixel will always be 0, 4, 8 or C.
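A quick way to verify this is a few lines of Python (a hypothetical sketch, assuming the 16-bit words are stored little-endian, which matches the raw_cap.bin excerpt quoted above):

```python
import struct

def decode_si(raw):
    """Turn SI-camera bytes into 10-bit pixel values: each pixel is a
    little-endian 16-bit word of the form 0000dddddddddd00, so we
    shift right by 2 and mask to 10 bits."""
    words = struct.unpack("<%dH" % (len(raw) // 2), raw)
    return [(w >> 2) & 0x3FF for w in words]

# The first five words of the raw_cap.bin excerpt:
sample = bytes.fromhex("00009005ac0618055406")
print(decode_si(sample))  # -> [0, 356, 427, 326, 405]
```

Every decoded value lands in the 0-1023 range, as expected for 10-bit data.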

But who cares about hex? :)

- ben

Edgar Rodriguez
August 3rd, 2004, 12:16 AM
Hi, this is my first post but I've been following this thread since the start. Congratulations everyone for the great work!!!

-Avid has a codec called DNxHD http://www.avid.com/DNxHD/
6:1 compression at full raster (sampling every pixel of the image). It works at 8-bit and 10-bit, 4:2:2.
This would give us near-uncompressed quality at almost uncompressed-SD file sizes.

I really think this would be the best codec to go for with this project.

Jason Keenan
August 3rd, 2004, 12:18 AM
The reality is that it's likely I won't get this camera. I'm just a geek, so I like to learn geeky stuff. I'm looking at other cameras, though, so this stuff will probably still apply.

Re the hex stuff, I've done a little bit of assembly in my younger days. In reality it was mainly 'disassembly', so fairly basic, but I just wanted to know what I was dealing with if I decided to have a play at a basic assembly-based Bayer filter. It wouldn't be very good, but hey, it's just geek fun.

As you say, I remember this discussion earlier in the thread, but at the time I had no idea what it meant. Now I do, sort of. If I understand how the image is constructed, this will obviously be easier.

Cheers

Raavin ;)

Ben Syverson
August 3rd, 2004, 12:31 AM
Raavin: I totally understand. My own interest in this project is about 50% film geek, 50% computer geek. :) That's the ratio I like to maintain. I guess that adds up to 100% geek in any case. :)

Juan M. M. Fiebelkorn
August 3rd, 2004, 12:33 AM
Ben,
Do you have an approximate ASA equivalent for the Sumix camera you got?

Obin?

Ben Syverson
August 3rd, 2004, 12:38 AM
Juan, I'd put it at between 50 and 100 ASA. It depends on whether and how you're using the LUT -- which will also affect dynamic range. But at 8-bit, your choices for how to use the range are more limited.

There are definitely times when it seems equivalent to 100 ASA film (which I've shot a ton of), but with certain settings/light levels, it feels a full stop slower.

Please note this is a totally subjective appraisal, because there's no good way to rate a CMOS/CCD sensor in terms of ASA. These sensors are linear, whereas film has a distinctive "S"-curve response to light. So an ASA/ISO rating is only a small part of the picture.

Ben Syverson
August 3rd, 2004, 01:20 AM
I just got my 10mm Schneider Cinegon f1.8 lens in the mail today... This is a gorgeous lens. It's only single coated (multicoating is overrated), but it kicks the pants off of the multicoated Computar 8mm I've been using, and it's nearly as wide. It's sharp, high contrast, and has none of the barrel distortion I feared it might exhibit. I think this will become my main shooting lens, with the 8mm used for extra wide shots, the 25mm Angenieux for close ups, and the 75mm Angenieux for telephoto shots. God I love primes. The best thing about this whole HD camera setup is getting to work with primes again, and only using zooms when I want to.

I'm anxious for Sumix to come out with their new software -- I'm ready to shoot!

Juan M. M. Fiebelkorn
August 3rd, 2004, 01:24 AM
Ben,
Do you agree with the 640 ASA rating given to the DVX100?
(I ask because I want to make a correlation between your camera and the DVX to give myself a better idea of its sensitivity)

Ben Syverson
August 3rd, 2004, 01:27 AM
I haven't used the DVX100 extensively, but a 640 ASA rating is downright laughable, unless you crank the gain.

The DVX100 gives a much darker image than my GL1 at 0 gain, and I would put the GL1 at 0 gain at around 400 ASA. Shooting with the DVX100 is like shooting with about 200 ASA film in terms of the amount of light you need...

Juan M. M. Fiebelkorn
August 3rd, 2004, 01:31 AM
I compared it with a photometer, and to me that 640 rating was right. I used it every day for six weeks, with and without the Mini35 adapter and Zeiss f/1.2 primes.
The whole movie was shot at night with only the light coming from sodium street lights, and on some occasions the camera was more sensitive than my own eyes...
I must accept that what you get is very dependent on gamma adjustments, pedestal, etc.
I didn't use more than the 0 dB gain setting.

So you would say your camera is one stop less sensitive than a DVX100?

Ben Syverson
August 3rd, 2004, 01:42 AM
Juan, I haven't played with the DVX100 for months, so I can't give you a hard and fast comparison, but I would guess that the SMX-150c is anywhere from one to 2.5 stops slower than the DVX100... maybe more. But again, I'm not sure stop math is the way to think about the differences.

For example, once the 10-bit software is available, it may be possible to shoot 100% linear, which looks dark, and then apply a gamma curve in post to bring the image out and make it look "normal." If we can do that, it will effectively make the camera one to two stops faster, because we'll need less than half as much light to get a good image.
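That gamma-in-post idea can be sketched in Python (an illustration only -- the 2.2 exponent is an assumed example value, not a figure from the thread):

```python
def apply_gamma(linear, gamma=2.2, max_val=1023):
    """Lift a dark, linear 10-bit value with a power curve --
    the kind of gamma adjustment described above for post."""
    return round((linear / max_val) ** (1.0 / gamma) * max_val)

# A linear value at ~18% of full scale comes out near 46% after the curve:
print(apply_gamma(184))  # -> 469
```

This is why linear footage looks dark straight off the sensor: the power curve pushes midtones up dramatically while leaving black and white untouched, which is also why the extra precision of 10 bits matters in the shadows.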

But again, this is all non-linear for now, so it's rather nebulous. I'm withholding judgement until I see what an extra two bits (four times as many gray values) can get us...

Juan M. M. Fiebelkorn
August 3rd, 2004, 01:44 AM
Thank you, Ben.
Very useful info.
It's just that I'm too used to film terms and scales, and that's the only way I can compare things :)