Well, I guess that using the SI-3300 or the SI-1920 would be a very good choice.
Remember, the monochrome versions are a lot more sensitive and usually have less noise...
Juan:
Did you send me an email? If so, please read my reply.
Jason:
We had a site floating about on this list for a while that had 3 or 4 algorithms on it... we will use that for an offline conversion.
Steve:
Do you have the 3300 in black and white? If so, is it the same as the RGB version, so we could use it without any more code writing?
Obin:
The SI-3300 is color only - a Micron decision. The SI-1300 and SI-1920HD are both mono and color. It would be as simple as leaving off the color mask step, but they chose not to offer this option.
Juan:
I completely agree that monochrome is more sensitive (you aren't putting color filters in the way that remove 2/3 of the spectrum at each pixel site). Other than needing more gain (which amplifies noise), I'm not sure how color is noisier. Related to this, a three-sensor prism-based camera is more efficient than other methods like a color wheel because it splits the spectrum up and sends the RGB to the appropriate sensors without filter losses. The prisms are only 60-70% efficient, but that is better than the filter losses.
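To put rough numbers on that (my own back-of-the-envelope arithmetic from the figures above, not measured data), a quick Python comparison of per-pixel light throughput:

bayer_throughput = 1.0 / 3.0    # each Bayer site keeps roughly 1/3 of the spectrum
prism_throughput = 0.65         # prism path quoted at 60-70% efficient
print(prism_throughput / bayer_throughput)   # roughly 2x the light per pixel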
thx Steve
We now have color Bayer preview coded and are working on SheerVideo/QuickTime export
Awesome, Obin!
I've split off the "lights" discussion to a new thread:
http://www.dvinfo.net/conf/showthrea...threadid=32334
Hey,
Just curious, are any of you here proficient in Python? If so, I'm wondering how feasible it would be to use PIL to process these RAW images (like .IHD) and do Bayer conversions, etc. I'm assuming that using the "point" method you could theoretically build a Bayer image processor, and then you'd have to write your own file importer using the raw importer, and maybe even the bit converter (for the packed bits).
BTW, to process 16-bit images, do you actually use floating-point numbers or still use integers for the pixel values? In PIL there doesn't seem to be any provision for 16-bit integers, just 8-bit integers; the rest are floating-point values, so I'm wondering if that's a typical approach in software design.
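Not an answer from the SI software itself, but here's roughly how I'd keep 16-bit data as integers in Python, using numpy alongside PIL since PIL's 16-bit support is thin. The file name, frame size, and the assumption of unpacked little-endian 16-bit samples are all made up for illustration - they're not the actual .IHD layout:

import numpy as np
from PIL import Image

# Made-up frame size and file name, purely for illustration.
width, height = 1280, 720
raw = np.fromfile('frame.raw', dtype='<u2', count=width * height)
frame = raw.reshape(height, width)            # still 16-bit integers, no precision lost

# Float is only needed as an intermediate step, e.g. for normalization:
normalized = frame.astype(np.float32) / 65535.0

# Quick 8-bit preview via PIL (drop the low byte):
Image.fromarray((frame >> 8).astype(np.uint8), mode='L').save('preview.png')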
I may get a demo of CineLink today to play with... if so I will keep everyone posted on how it's working. We are still working on QuickTime/SheerVideo save ;)
Yah, I was just thinking that depending on what the options for Bayer conversion are out there, I may or may not want to write something. I don't know C++, but I'm decent at Tcl/Tk, and Python seems to be similar to those scripting languages. With PIL I'm thinking I may be able to write my own Bayer de-mosaicing app. It's not to override anybody here on this list, or to say that what they're doing isn't good enough - just simple curiosity, and maybe a way to get myself out of tougher programming situations where I don't really have the experience.
BTW, Obin and Rob S.,
What's the preview delay like on your apps? In other words, in quick motion, can you keep up with the subject, or is there a noticeable "lag" in the picture that's being previewed? If converting the image to color using a simple Bayer de-mosaicer is slowing things down, you may want to simply provide a way to get the black-and-white raw image to the preview screen as fast as possible. Image lag is a real killer, or at least can be when trying to be precise with your camera moves.
<<<-- Originally posted by Jason Rodriguez : ... Image lag is a real killer, or at least can be when trying to be precise with your camera moves. -->>>
This sounds like a good point. There will always be a little lag due to processing needs, but it would be great to know how many frames something can lag before it becomes noticeable or difficult to work with. I'm sure nobody will notice 1 frame, but 5?
Eliot
Jason:
I would think that a 1/4-scale preview using quadlets (R, (G1+G2)/2, B) for color would be very fast and color-accurate for the raw video - analog white balance but no gamma or saturation adjustment.
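For what it's worth, here's a rough sketch of that quadlet idea in Python/numpy. The RGGB mosaic order and even frame dimensions are assumptions on my part, not something from the SI documentation:

import numpy as np
from PIL import Image

def quadlet_preview(bayer, wb=(1.0, 1.0, 1.0)):
    # bayer: 2D uint16 array straight from the sensor (RGGB assumed).
    r  = bayer[0::2, 0::2].astype(np.float32) * wb[0]
    g1 = bayer[0::2, 1::2].astype(np.float32)
    g2 = bayer[1::2, 0::2].astype(np.float32)
    g  = (g1 + g2) / 2.0 * wb[1]
    b  = bayer[1::2, 1::2].astype(np.float32) * wb[2]
    rgb = np.dstack([r, g, b])
    # Scale the 16-bit range down to 8 bits for display; no gamma or
    # saturation adjustment, matching the raw-preview idea above.
    return Image.fromarray(np.clip(rgb / 256.0, 0, 255).astype(np.uint8), 'RGB')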
Hey Steve,
If it were possible, though, I would much rather have gamma than color, because I'm not necessarily trying to use the viewfinder for color, but to adjust the exposure. With linear images being so dark, it would be very hard to get proper exposure when you can't see what you're doing.
BTW, five frames behind would be an awful lot, almost to the point of unusability for any fast work. One frame isn't bad, but five is not really acceptable. That's why I'm thinking, if this becomes the case with color previews, etc., the simple thing is to just show the decimated raw image in black and white with a gamma adjustment on it, or a custom viewLUT. Actually a custom viewLUT, even if it's just a text file the program reads and not a nice GUI, is the preferred way of viewing - that way, no matter what the exposure of the linear file is, we could see what's happening. The best thing would be a color preview plus a custom viewLUT.
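Something like the sketch below is what I have in mind for the text-file viewLUT - just an illustration, not anything Obin or Steve has actually implemented; the one-value-per-line file format is my own invention:

import numpy as np

def load_view_lut(path):
    # Hypothetical format: 65536 lines, one 0-255 output value per
    # 16-bit input code. Not an existing spec - just an example.
    return np.loadtxt(path, dtype=np.uint8)

def gamma_view_lut(gamma=2.2):
    # Fallback: build the LUT from a plain gamma curve instead of a file.
    codes = np.arange(65536, dtype=np.float32) / 65535.0
    return np.clip(codes ** (1.0 / gamma) * 255.0, 0, 255).astype(np.uint8)

def apply_view_lut(frame16, lut):
    # Straight table lookup for display only; the linear data on disk
    # is never modified.
    return lut[frame16]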
Jason, why not adjust the gamma on your screen to counter the camera's raw preview? I did it here and it worked great... just boost the screen gamma way up and it looks like you're looking at a gamma-corrected image in post...