View Full Version : 4:4:4 10bit single CMOS HD project



Juan M. M. Fiebelkorn
September 16th, 2004, 08:37 PM
Obin,
I'm working on it.
Don't expect me to give you results now, cause my converter is very alpha and I'm a lazy coder :).
I need to correct some problems I got with 1280x720 mode.
If you have 1920x1080 I can give you results now! :)

BTW your test3 image has its blue channel messed up (at least in my Combustion)

Frank Berndt
September 16th, 2004, 09:28 PM
> original post by David Newman
> We compressed gamma corrected bayer data to around 4:1.
> That means around 15 MBytes/s for a 12bit stream.

David,
Am I reading this right?
12-bit bayer -> gamma remapping (log, I assume) -> 10-bit, then a mathematically lossless 4:1 compression on a single frame of bayer data? From my own experiments, I don't see that there is that much redundancy in an image frame unless it is exposed wrong or the lens is crap. Even the most compute-intensive algorithms stop at around 2..3:1. If you can prove your claim, then I have to congratulate you on this breakthrough!

... some time later ...

Well, I found the technology backgrounder on your website, which states that it is visually lossless, not math lossless. 4:1 lossless would have been nice :-)

Juan M. M. Fiebelkorn
September 16th, 2004, 09:39 PM
Where did you read "mathematically lossless" ??

Wayne Morellini
September 16th, 2004, 10:40 PM
David, I know we have had differences in the past, but thanks for the effort with the bayer compressor/editor I asked for on this project (and also the capture, which I didn't expect). I hope you do very well out of it.

<<<-- Originally posted by David Newman : Yes, CineForm has been quietly working on our capture and high bit-depth bayer-based editing solution. One day we do intend to support the Mac, yet today we can get better performance out of a PC (for our architecture -- not a PC vs Mac thing.)

We intend to support compressed capture up to 30Hz for 1080p and 70+Hz for 720p. This will be on a single CPU system over standard PCI, to a single drive (of any type). Higher frame rates will be obtainable on multi-CPU systems. -->>>

How fast was the cpu?

Thanks.

David Newman
September 17th, 2004, 09:29 AM
Yes, CineForm's codecs are visually lossless, not mathematically lossless (ML). The quality difference in all practical workflows is insignificant, yet we do base the codec on an ML transform, then we mildly quantize for bit-rate control. An ML version is possible, yet at 2:1 compression the data rate and entropy encoding times start to become as inconvenient as uncompressed. We find the workflow is best using visually lossless compression.

For performance testing I'm using a 2.8GHz latest-series P4 (800MHz memory system). I have only preliminary figures from this system; in a couple of weeks I should be able to give exact encoding rates straight from CameraLink.

Wayne Morellini
September 17th, 2004, 12:23 PM
Thanks, I say again; extremely good news. ;} This gives the project new vigor and direction, as with such a quality professional NLE/capture system, everybody now has a good, suitable workflow choice.

Obin Olson
September 19th, 2004, 08:42 AM
Juan how is it going with the .raw files?

Juan M. M. Fiebelkorn
September 19th, 2004, 02:33 PM
Obin,
I told you, I need to check everything again for 1280x720, and I'm a bit busy with two films I'm transferring to 35mm these days.
Also I'm a bit lazy at coding (I prefer to work slowly but get things right on the first shot :) ).
If you could give me raw images from the 3300 at 1920x1080 I can give you results right now.

Also I'm online here: fiebelk a t hot-mail d o t com (address obfuscated to avoid spam)

Obin Olson
September 19th, 2004, 11:02 PM
OK, what would be a good chip with lots of range for shooting 1080p black and white? I have been working with a director who wants to shoot a feature in black and white... any ideas, crew?

Just so you know, we are still hard at work coding the bayer converter, TIFF saver, and color bayer preview side of things for CineLink.

Jason Rodriguez
September 20th, 2004, 12:12 AM
Very cool,

can't wait to see the end results that you get.

BTW, what algorithm are you going to use for the final bayer conversion to TIFF?

Juan M. M. Fiebelkorn
September 20th, 2004, 12:13 AM
Well, I guess that using the SI 3300 or the SI 1920 would be a very good choice.
Remember the Monochrome versions are a lot more sensitive and usually have less noise...

Obin Olson
September 20th, 2004, 07:24 AM
Juan:

Did you send me an email? If so, please read my reply.

Jason:
We had a site floating about on this list for a while that had 3 or 4 algorithms on it... we will use that for an offline conversion.

Steve:

Do you have the 3300 in black and white? If so, is it the same as the RGB version, so we could use it without any more code writing?

Steve Nordhauser
September 21st, 2004, 08:44 AM
Obin:
The SI-3300 is color only - a Micron decision. The SI-1300 and SI-1920HD both come in mono and color. It would be as simple as leaving off the color mask step, but they chose not to offer this option.

Juan:
I completely agree that monochrome is more sensitive (you aren't putting color filters in the way that remove 2/3 of the spectrum at each pixel site). Other than needing more gain (which amplifies noise), I'm not sure how color is noisier. Related to this, a three-sensor prism-based camera is more efficient than other methods like a color wheel because it splits the spectrum up and sends the RGB to the appropriate sensors without filter losses. The prisms are only 60-70% efficient, but that is better than the filter losses.
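To put rough numbers on that (purely back-of-envelope, using the figures above rather than measured data), the comparison looks something like this in Python:

# Back-of-envelope only: a Bayer color filter passes very roughly 1/3 of the
# visible light at each photosite, while the prism block above is quoted at
# 60-70% transmission per channel.
bayer_efficiency = 1.0 / 3.0
for prism_efficiency in (0.60, 0.70):
    advantage = prism_efficiency / bayer_efficiency
    print("prism %.0f%% vs filter ~33%%: about %.1fx more light per sensor"
          % (prism_efficiency * 100, advantage))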

Obin Olson
September 21st, 2004, 09:12 AM
thx Steve

We now have color bayer preview coded and are working on Sheer/QuickTime export.

Jason Rodriguez
September 21st, 2004, 09:19 AM
Awesome Obin!

Rob Lohman
September 22nd, 2004, 02:46 AM
I've split off the "lights" discussion to a new thread:

http://www.dvinfo.net/conf/showthread.php?s=&threadid=32334

Jason Rodriguez
September 23rd, 2004, 09:20 AM
Hey,

Just curious, are any of you here proficient in Python?

If so, I'm wondering how feasible it would be to use PIL to process these RAW images (like .IHD) and do bayer conversions, etc. I'm assuming that using the "point" method you could theoretically make a bayer image processor, and then you'd have to write your own file importer using the raw importer, and maybe even the bit converter (for the packed bits).

BTW, to process 16-bit images, do you actually use floating point numbers or still use integers for the pixel values? In PIL there does not seem to be any provision for 16-bit integers, just 8-bit integers; the rest are floating point values, so I'm wondering if that's a typical approach taken in software design.
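For what it's worth, a bare-bones de-mosaicer along those lines is not much code if you lean on numpy for the pixel math instead of PIL's point() (which is limited for 16-bit data). This is only a sketch under assumed conditions (a headerless little-endian 16-bit RGGB mosaic and a made-up file name), not the actual .IHD layout:

import numpy as np

WIDTH, HEIGHT = 1920, 1080   # assumed frame size, not from any spec

def load_mosaic(path):
    return np.fromfile(path, dtype='<u2').reshape(HEIGHT, WIDTH).astype(np.float32)

def demosaic_nearest(raw):
    """Fill every output pixel with the nearest sample of each colour."""
    rgb = np.zeros((HEIGHT, WIDTH, 3), dtype=np.float32)
    # Red lives on even rows/even cols; replicate it into its 2x2 block.
    rgb[:, :, 0] = np.repeat(np.repeat(raw[0::2, 0::2], 2, axis=0), 2, axis=1)
    # Blue lives on odd rows/odd cols.
    rgb[:, :, 2] = np.repeat(np.repeat(raw[1::2, 1::2], 2, axis=0), 2, axis=1)
    # Green: average the two green samples in each 2x2 block, then replicate.
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0
    rgb[:, :, 1] = np.repeat(np.repeat(g, 2, axis=0), 2, axis=1)
    return rgb   # float32, values still in the 0..65535 range

if __name__ == '__main__':
    frame = demosaic_nearest(load_mosaic('frame0001.raw'))   # hypothetical file name
    print(frame.shape, frame.dtype)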

Rob Scott
September 23rd, 2004, 09:32 AM
Jason Rodriguez wrote:
> are any of you here proficient in Python?
Not proficient, but I've done a little bit. I'm not sure how feasible your suggestion is, but I suspect it will be quite slow, since Python (IIRC) is interpreted.
> BTW, to process 16-bit images, do you actually use floating point numbers or still use integers for the values of the pixels?
I'm using floating-point in order to preserve as much information as possible. With 16 bits, you lose some information through "quantization" errors each time you process the data. Of course, floating-point slows things down too.
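A toy illustration of that quantization point (nothing to do with any particular codec, just made-up numbers): apply a gain and its inverse in 16-bit integer math versus float, and compare the round-trip error.

import numpy as np

x = np.arange(0, 65536, 257, dtype=np.uint16)   # sample 16-bit code values

# Gain of 3/7 and back again in integer math: each step rounds down.
as_int   = ((x.astype(np.uint32) * 3) // 7).astype(np.uint16)
back_int = ((as_int.astype(np.uint32) * 7) // 3).astype(np.uint16)

# Same round trip in 32-bit float, rounded only once at the end.
as_f   = x.astype(np.float32) * (3.0 / 7.0)
back_f = np.round(as_f * (7.0 / 3.0)).astype(np.uint16)

print("max integer round-trip error:", int(np.abs(back_int.astype(int) - x.astype(int)).max()))
print("max float round-trip error:  ", int(np.abs(back_f.astype(int) - x.astype(int)).max()))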

Obin Olson
September 23rd, 2004, 09:34 AM
I may get a demo of CineLink today to play with... if so, I will keep everyone posted on how it's working. We are still working on the QuickTime/Sheer Video save ;)

Jason Rodriguez
September 23rd, 2004, 10:06 AM
Yah, I was just thinking that depending on what the options for bayer conversion are out there, I may or may not want to write something. I don't know C++, but I'm decent at Tcl/Tk, and Python seems to be similar to those scripting languages. With PIL I'm thinking I may be able to write my own bayer de-mosaicing app. It's not to override anybody here on this list, or to say that what they're doing isn't good enough; it's just simple curiosity, and maybe the ability to get myself out of tougher programming situations where I don't really have the programming experience.

Jason Rodriguez
September 23rd, 2004, 10:09 AM
BTW, Obin and Rob S.,

What's the preview delay like on your apps? In other words, in quick motion, can you keep up with the subject, or is there a noticeable "lag" in the picture that's being previewed? If converting the image to color using a simple bayer de-mosaicer is slowing things down, you may want to simply provide a way to get the black-and-white raw image to the preview screen as fast as possible. Image lag is a real killer, or at least can be when trying to be precise with your camera moves.

Eliot Mack
September 23rd, 2004, 12:15 PM
<<<-- Originally posted by Jason Rodriguez : Image lag is a real killer, or at least can be when trying to be precise with your camera moves. -->>>

This sounds like a good point. There will always be a little lag due to processing needs, but it would be great to know how many frames something can lag before it becomes noticeable or difficult to work with. I'm sure nobody will notice 1 frame, but 5?

Eliot

Steve Nordhauser
September 23rd, 2004, 12:18 PM
Jason:
I would think that a 1/4 scale preview using quadlets (R, (G1+G2)/2, B) for color would be very fast and color-accurate for the raw video - analog white balance but no gamma or saturation adjustment.
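That quadlet preview is only a few lines in numpy. A sketch, assuming an RGGB layout and full-scale 16-bit samples (swap the slices for other layouts):

import numpy as np

def quadlet_preview(raw):
    """1/4-area preview: one RGB pixel per 2x2 RGGB quadlet, as described above.

    `raw` is a 2D uint16 bayer mosaic; layout is assumed to be RGGB.
    """
    r  = raw[0::2, 0::2].astype(np.uint32)
    g1 = raw[0::2, 1::2].astype(np.uint32)
    g2 = raw[1::2, 0::2].astype(np.uint32)
    b  = raw[1::2, 1::2].astype(np.uint32)
    rgb16 = np.dstack([r, (g1 + g2) // 2, b])   # the (R, (G1+G2)/2, B) quadlet
    # Assumes full-scale 16-bit data; scale differently for 10/12-bit samples.
    return (rgb16 >> 8).astype(np.uint8)        # drop to 8 bits for the display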

Jason Rodriguez
September 23rd, 2004, 11:03 PM
Hey Steve,

If it was possible though, I would much rather have gamma than color, because I'm not necessarily trying to use the viewfinder for color, but to adjust the exposure. With linear images being so dark, it would make it very hard to get proper exposure when you can't see what you're doing.

BTW, five frames behind would be an awful lot, almost to the point of unusability for any fast work. One frame isn't bad, but five is not really acceptable.

That's why I'm thinking, if this becomes the case with color previews, etc., the simple thing is to just show the decimated raw image in black and white with a gamma adjustment or a custom viewLUT applied (actually a custom viewLUT, even if it's a text file that the program is accessing and not a nice GUI, is the preferred method of viewing; that way, no matter what the exposure of the linear file, we could see what's happening. The best thing would be a color preview plus a custom viewLUT).

Obin Olson
September 23rd, 2004, 11:06 PM
Jason, why not adjust the gamma on your screen to counter the camera's raw preview? I did it here and it worked great... just boost the screen gamma way up and it looks like you're looking at a gamma-corrected image in post...

Jason Rodriguez
September 23rd, 2004, 11:09 PM
Hey Obin,

That does work to a point, but it would be nice again to have a custom viewLUT, to sort of preview how you want things to look. A custom viewLUT could simply be a custom RGB curve (that adjusts all three channels the same, not necessarily differently for each channel) that's accessed from a tab-delimited text file of from/to values (like an initial value of 10 gets mapped to 20, etc.).
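One possible way such a text-file viewLUT could work; the two-column from/to format here is just the idea above, not any existing spec, and intermediate values are filled in by linear interpolation:

import numpy as np

def load_view_lut(path, in_max=65535, out_max=255):
    """Build a full lookup table from a tab-delimited 'from<TAB>to' text file.

    Each line holds two code values, e.g. "10\t20"; entries must be listed
    in increasing order of the 'from' column.
    """
    pairs = np.loadtxt(path, delimiter='\t', ndmin=2)
    src, dst = pairs[:, 0], pairs[:, 1]
    lut = np.interp(np.arange(in_max + 1), src, dst)   # fill in between the points
    return np.clip(lut, 0, out_max).astype(np.uint8)

def apply_view_lut(image16, lut):
    """Map a linear 16-bit preview image through the view LUT for display."""
    return lut[image16]   # numpy fancy indexing does the per-pixel remap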

Jason Rodriguez
September 24th, 2004, 11:34 PM
Obin,

How did your latest version of CineLink turn out?

Obin Olson
September 26th, 2004, 08:38 AM
OK... we are having issues with redraw because we're not using hardware for the screen draw. The 1/4-screen color preview is great, but the window redraw is really bad. Working on that now.

Jason Rodriguez
September 26th, 2004, 09:02 AM
Are you getting any delays in your on-screen preview (you know, when you pan, etc., the screen preview is behind the camera motion)?

BTW Obin, what format is CineLink outputting? Is it a RAW format, or something else with a custom header?

Obin Olson
September 26th, 2004, 03:13 PM
not much delay at all...I think that issue is fine
RAW data output

Jason Rodriguez
September 26th, 2004, 07:21 PM
Sorry, don't mean to beat a dead horse, but would you say 1-2 frames behind in delay, or 5-7+?

Obin Olson
September 27th, 2004, 12:47 AM
Best guess is 1-3 frames from what I have seen (I have been loaded with work and not much time for testing). I will do some things in the morning and see how it looks... and try the new bayer filter. We may work with DirectX for a faster screen refresh in CineLink... not sure yet.

Jason Rodriguez
September 27th, 2004, 06:37 AM
Yah Obin, do some tests . . .

if it's 1-3 frames, that's quite good, especially if it's towards the 1 frame mark :)

BTW, what's the new bayer filter?

Obin Olson
September 27th, 2004, 07:52 AM
not even sure yet!

I will see today :)

Rob or Rob... why are we getting what I call image shear with camera movement, like a pan, on the preview with color bayer? It's like the image is sliced up when you pan the camera. What causes this and how can it be fixed?

TIA for your help

The 1/4 pixel quad preview works well! But this image "shear" is bad... BTW, it is much better at a 1/15 shutter speed... is this a clue?

Rob Lohman
September 27th, 2004, 08:42 AM
I haven't got a camera so I can't see what it would be, but it sounds like a simple rolling shutter issue? Although at 1/15 it should be worse, not less.

Obin Olson
September 27th, 2004, 08:51 AM
No, no, not that at all... it's an issue with redraw, I think.

It is the screen not refreshing completely before the picture is totally displayed, and the next frame being superimposed on the previous. I will try to force a refresh after dumping things to the screen; this might help somewhat.

...from Luc

Rob Scott
September 27th, 2004, 09:10 AM
Obin Olson wrote:
> Rob or Rob... why are we getting what I call image shear with camera movement, like a pan, on the preview with color bayer? It's like the image is sliced up when you pan the camera. What causes this and how can it be fixed?
You could try double-buffering.

Jason Rodriguez
September 27th, 2004, 09:12 AM
I don't think that's "shear"; I think it's more like "tearing", like when you're trying to run a video game and it's not keeping up with the refresh on the screen.

If that's the case, then it might have to do with not using DirectX or the video hardware and trying to do too much in software.

Jason Rodriguez
September 27th, 2004, 09:13 AM
<<<-- Originally posted by Rob Scott : You could try double-buffering. -->>>

Isn't double-buffering a hardware issue though? I thought Obin said he's doing this all in software, hence no double-buffer (unless you program one).

Rob Scott
September 27th, 2004, 09:19 AM
Jason Rodriguez wrote:
> Isn't double-buffering a hardware issue though?
Possibly, but in software you still have to indicate when you're done with buffer A and are starting on buffer B. IOW, the system won't magically know when you're finished with one frame and are starting on the next one.
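A toy sketch of what that software double-buffer could look like (illustrative only, not CineLink's code): the capture thread draws into the back buffer and then announces the swap, so the display thread never blits a half-written frame, which is what shows up as tearing.

import threading
import numpy as np

class DoubleBuffer:
    def __init__(self, height, width):
        self._buffers = [np.zeros((height, width, 3), np.uint8) for _ in range(2)]
        self._front = 0
        self._lock = threading.Lock()

    def back(self):
        return self._buffers[1 - self._front]   # capture thread draws the next frame here

    def swap(self):
        with self._lock:                        # "I'm done with this frame"
            self._front = 1 - self._front

    def front(self):
        with self._lock:                        # display thread blits this one
            return self._buffers[self._front]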

Jason Rodriguez
September 27th, 2004, 01:43 PM
Hey Guys,

Adobe has just introduced a proposed open universal camera RAW file format called .DNG.

you can read about it here: http://www.adobe.com/products/dng/main.html

This might be something to look into. I know that we've been working on .IHD, but if Adobe's also coming to market with a proposed universal RAW format, then that might be something we'd want to get in on for future format compatibility.

Jason Rodriguez
September 27th, 2004, 01:55 PM
Also here's a big plus to supporting Adobe's RAW format:

We get to use all those NICE RAW converters out there, like Photoshop's, etc. In other words, we don't have to hack our own.

And if this file format catches on, there'll be MANY raw format converters with $$$'s behind them for R&D. While ours may be nice with open source stuff (actually .DNG is an open standard), I'm not sure how many coders out there are willing to put in the dirty work and try to get the stuff to work right. With this we can get some great algorithms, custom algorithms, etc. that will come with nice converters fueled by the money to be had in the digital photo market.

Also it supports packed bits (big endian), so we can save 10- or 12-bit files.
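For reference, packing and unpacking 12-bit big-endian samples (two samples per three bytes) is straightforward. This is only an illustration of the packing idea; check the DNG/TIFF spec for the exact bit order before relying on it:

import numpy as np

def unpack_12bit_be(buf):
    """Unpack big-endian 12-bit packed samples (two samples per 3 bytes)."""
    b = np.frombuffer(buf, dtype=np.uint8).reshape(-1, 3).astype(np.uint16)
    first  = (b[:, 0] << 4) | (b[:, 1] >> 4)         # high 8 bits + high nibble
    second = ((b[:, 1] & 0x0F) << 8) | b[:, 2]       # low nibble + low 8 bits
    return np.column_stack([first, second]).ravel()  # back into sample order

def pack_12bit_be(samples):
    """Inverse of unpack_12bit_be for an even number of 12-bit samples."""
    s = np.asarray(samples, dtype=np.uint16).reshape(-1, 2)
    out = np.empty((s.shape[0], 3), dtype=np.uint8)
    out[:, 0] = s[:, 0] >> 4
    out[:, 1] = ((s[:, 0] & 0x0F) << 4) | (s[:, 1] >> 8)
    out[:, 2] = s[:, 1] & 0xFF
    return out.tobytes()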

Tell me what you guys think, but so far I'm thinking this might be MUCH better than quicktime, .IHD, etc., because of the money behind it, and the potential for a lot of industry support.

I'm not downing anyone's hard work on this list, only that a universal RAW format that's an open standard (and based on TIFF), will greatly help us get these little projects off the ground.

Obin Olson
September 27th, 2004, 02:10 PM
maybe

Rob Lohman
September 27th, 2004, 02:12 PM
I don't see from a quick scan how this would help us much, since I couldn't see a way to implement a real-time compression system in these formats (if they support some compression, I doubt it will be real-time and lossless).

Do these formats support more than one image per file as well?

The basic idea (at least for IHD) was to have an interim format that suits what we are doing and works as fast as possible, until it can be transformed into something else.

As far as I know there is no NLE program yet that supports RAW or the DNG format.

Jason Rodriguez
September 27th, 2004, 02:33 PM
How hard will .IHD be to interpret if we want to write our own converters? Just curious how easy it will be, in case a couple of years from now (6-7, maybe even further) there's no updated software support.

Also, the .DNG file is a type of RAW file. So instead of .IHD, you'd be using .DNGs and raw conversion software that understands how to interpolate those files.

I'm just thinking that there are a lot of nice commercial demosaicing algorithms out there, including the one that's bundled with Photoshop, that either already do or will understand and work with .DNG files. So it might not be a bad choice to have the ability to use these commercial converters, and furthermore, if this format really catches on, there'll be more converters, etc. that'll take advantage of this format.

The NLE's will support the TIFF's, etc. that come out of these .DNG converters.

Jason Rodriguez
September 27th, 2004, 09:12 PM
Rob L. and S.,

What are the specs on .IHD going to be? Initially I simply thought it was going to be a RAW file with some header information for reading the file, making it fairly simple work to create your own de-mosaicer, but with the inclusion of multiple frames and compression, this sounds a lot more complicated.

Are there any planned specs for this file format, and if so, how hard will it be to "roll your own" converter?

Jason

Rob Lohman
September 28th, 2004, 01:42 AM
The format has reached version 1.0 so to speak and the "spec"
(although not in full document form) is done. It is still very simple
and contains the following:

1. a simple bare-bones format to store multiple frames in bayer format, for now in two formats

2. these two formats are packed bayer and a compression algorithm I am working on called IDC

Now this IDC compression is very simple, especially at the
decoding end. We are planning to release the converter source
as it stands now very soon and it will include a full C header
file for the IHD format including some documentation.

The IDC engine will not be there yet since I'm still working on it,
but it should be out if all goes well not too long after the initial
release. So everyone can take a look at it then.

As I said, DNG/RAW is certainly interesting, but in the quick scan I did I could not see anything about more than one picture or different compression algorithms.

It might be a thing we want to implement (perhaps not in the camera head but later on), but more research has to be done before going down another/different path.

Jason: why would multiple frames be more difficult to process?
This is what you need to do in the end anyway. And whether
you read multiple files from the file system or multiple frames
from one (or multiple) IHD files is basically the same. The main
engine to do this kind of processing is already done in the
converter engine. Basically it works with plugins to do de-bayering
and other image manipulation including output to the final format.

Just for everyone wondering how large these files can get, we have set the mark at 1 GB at the moment. This is to facilitate different (file) systems in the future.
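The point above that one container file and an image sequence read the same to the converter can be sketched like this; the fixed-size, headerless container below is invented purely for illustration, since the real IHD file has its own header and an IDC-compressed variant:

import os
import numpy as np

FRAME_BYTES = 1920 * 1080 * 2   # one 16-bit 1080p bayer frame (illustrative size)

def frames_from_files(folder):
    """Yield bayer frames stored one per file (e.g. frame0001.raw, ...)."""
    for name in sorted(os.listdir(folder)):
        yield np.fromfile(os.path.join(folder, name), dtype='<u2')

def frames_from_container(path):
    """Yield bayer frames stored back-to-back in one big file (toy layout only)."""
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(FRAME_BYTES)
            if len(chunk) < FRAME_BYTES:
                break
            yield np.frombuffer(chunk, dtype='<u2')

def convert_all(frames):
    # The downstream engine doesn't care which generator it is handed.
    for frame in frames:
        pass  # de-bayer, apply the view LUT, write TIFF/DV/QuickTime here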

Jason Rodriguez
September 28th, 2004, 03:36 AM
Hey Rob,

I'm not so sure that 1GB file-size limit is a good thing.

AVID Xpress Pro on Windows has to do this with their OMF files and it's an extreme PITA!

Will there be a bit (or bits) in the header that tells us how many frames are in the 1GB "big" file, or will we simply keep skipping to the next frame till there are no more?

I like the idea of single frames from the file system a lot more if that's still possible to keep.

Plus, if I ever want to match back to a specific frame, I'm gonna have to search through some IHD file for an obscure frame, and I can't search through the file since it has to be converted first (it's not a QuickTime), etc. In other words, there is no one-file-to-one-frame mapping that easily facilitates going back to the original RAW frame should I decide to do window burns in an offline edit. Instead I'll have x amount of files with a single window burn from file XXXX.ihd, and have to count back the frames? Gosh, that sounds like something I really don't want to spend my time hacking away at.

Please again, don't embed more than one image in a single IHD file if you can help it; that would help us out a lot more on the editing end and for archival and offline-online matchback.

Rob Lohman
September 28th, 2004, 03:46 AM
As it stands now, that is not the way Rob S. and I have designed it. The IHD system has intelligence to know which files belong to which. Keep in mind that the IHD system was set up as a digital negative or original negative, so to speak.

It is not meant to be used for editing or any other processing
except to convert it into another format. It is not in full RGB, it
is in bayer format, which an NLE cannot handle.

The original idea was to have a convert application that converts these original negatives into something you can work with. My idea is to be able to output multiple formats in one go. So it can output 16-bit TIFF sequences, a DV file, and at the same time a highly compressed QuickTime file (low-resolution, for example) to send off to other people who need to see what you recorded with the camera.

So then you have (for example):

1. 3 files with the exact same naming convention so you know which scene it is for example

2. a low-end file to send over the web for example to other people who need to see the dailies

3. a DV "offline" file to start editing and create an EDL

4. a high-end 16-bit TIFF sequence for use in effects applications or conforming to the EDL

That's an "idea" for example. The idea is that a IHD file or sequence
(depending on length) will be created for each scene. So they have
an increasing number and we have been thinking about allowing
for custom scene numbers to be set or extra information to be
embedded. This way sets of files can be more easily identified,
BUT this will mean more work for each shoot with a camera
(you will need some way to input this).

The last thing above is not in the spec but there is enough time
to include the possability at least.

Of course the way to the future is completely open. But for various reasons (file system performance being one, which is VERY important in our real-time camera system!) a single file per frame was not an option in our minds. Again, this is not the final delivery format as we envisioned it!

The 1 GB splitting is a non-issue since the convert application will
natively handle this without a problem as long as all the files are
present (which is true in a single image sequence as well!!). You
should never have to deal with this in the final format if your
system supports large files.

If the DNG/RAW format is something that is good for this system, it could first be implemented in the convert application to see how well it works. That's relatively easy to do, and performance is less of an issue there as well. Then if it gets adopted and can perform in a real-time environment, it might be time to put it in the camera directly. For now I personally don't see many benefits at this point in time. But it is a very interesting development!

Wayne Morellini
September 28th, 2004, 05:28 AM
<<<-- Originally posted by Jason Rodriguez : I don't think that's "sheer", I think it's more like "tearing", like when you're trying to run a video game and it's not keeping up with the refresh on the screen.

If that's the case, then it might have to do with not using DirectX or the video hardware and trying to do too much in software. -->>>

I don't know how things are programmed here, but this sort of problem also exists in 3D games. In the 3D control panel tab of my ATI card, there is a "wait for vertical sync" slider that can be forced on or set to "application preference". So there must be a DirectX method of using it. Using this together with double buffering, mentioned elsewhere here, helps with program workflow too.

Hope this helps.