4:4:4 10bit single CMOS HD project
Ben Syverson
August 8th, 2004, 03:33 PM
I should add that unless the image coming directly off the Altasens sensor is as bad as the IBIS5 or Microns, you shouldn't need to even use 12-bit -- and 10-bit may even be overkill if the image is nice enough.

Juan M. M. Fiebelkorn
August 8th, 2004, 04:01 PM
Logarithmic, you mean?

Les Dit
August 8th, 2004, 04:23 PM
Here is that dollar bill image grab from my JVC. I think to really compare, we need to post a slightly moving camera image, as the potential for aliasing and crawl is high with the details on the bill. I put Ben's grab on there too, for convenience.
http://s95439504.onlinehome.us/dollar-bills.tif
The tif is about 3 meg.
-Les

Jason Rodriguez
August 8th, 2004, 04:37 PM
Hey Ben, I want the full 12 bits off the Altasens. My Canon D60 is 12-bit and it looks phenomenal, with plenty of room to modify the image any way I want.
BTW, if you're recording a frame every N milliseconds, can't you get off a bit if you're not recording at the beginning of every frame? Say I end up grabbing at the middle of a frame, won't that screw things up? Or will it wait till the beginning of the next frame? But again, you're probably going to drop a frame somewhere like that. Ideally you would grab both frames and then just dump one from the image buffer to disk.
Also, the camera has to run at 48 MHz initially, with that info streaming into the card, unless they're doing something on the camera to knock that down. But unless they're padded bits, like you said, it's going to be 16 bits transferring across the Camera Link cable.

Rob Scott
August 8th, 2004, 05:16 PM
Jason Rodriguez wrote: BTW, if you're recording a frame every N milliseconds, can't you get off a bit if you're not recording at the beginning of every frame? Say I end up grabbing at the middle of a frame, won't that screw things up?
With the EPIX SDK, this won't happen because the SDK handles breaks between the frames for you.
Ideally you would grab both frames and then just dump one from the image buffer to disk.
Yes, I'm planning to support straight 24 fps and "48 fps, drop every other frame."

Ben Syverson
August 8th, 2004, 06:15 PM
Jason, I'm not sure what the methodology is, but I mentioned the need for arbitrary FPS to Sumix, and they seemed to think it wouldn't be a problem.
- ben

Juan M. M. Fiebelkorn
August 9th, 2004, 01:24 AM
Obin, the expose-to-the-right advice isn't so right!! Be careful with that; they make some assumptions about how the binary scale works that are not correct... Anyway, the rest of the info is good... So don't expose too much to the right... Right? :)
Ben and others, what do you think if, using a 12-bit sensor, I apply through a LUT a conversion to 10-bit log on the raw Bayer data? Would it be nonsense? It would give us a data reduction of around 5 megabytes per second at 24 fps for a 1280x720 source, and around 12 MB/s for 1920x1080 at the same fps.
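A minimal sketch of the kind of 12-bit-linear to 10-bit-log lookup table Juan describes above. The curve (a simple log1p normalisation) and its endpoints are placeholders, not anything from a real camera; the only point is that a 4096-entry LUT applied to the raw Bayer samples before packing drops two bits per sample.

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

// Build a 4096-entry table mapping 12-bit linear sensor codes to 10-bit
// log codes.  The log shape here is illustrative only.
std::vector<uint16_t> buildLinToLogLUT()
{
    std::vector<uint16_t> lut(4096);
    const double maxIn  = 4095.0;
    const double maxOut = 1023.0;
    for (int code = 0; code < 4096; ++code) {
        // log(1 + x) keeps code 0 at 0 and compresses the highlights.
        double v = std::log1p(static_cast<double>(code)) / std::log1p(maxIn);  // 0..1
        lut[code] = static_cast<uint16_t>(v * maxOut + 0.5);
    }
    return lut;
}

int main()
{
    std::vector<uint16_t> lut = buildLinToLogLUT();

    // Apply the LUT to a (made up) strip of raw Bayer samples.
    uint16_t raw[8] = { 0, 16, 64, 256, 512, 1024, 2048, 4095 };
    for (uint16_t s : raw)
        std::printf("12-bit %4u -> 10-bit %4u\n",
                    static_cast<unsigned>(s), static_cast<unsigned>(lut[s]));
    return 0;
}
```

Packed, the two saved bits per sample work out to roughly the 5 MB/s (1280x720) and 12 MB/s (1920x1080) reductions Juan quotes for 24 fps.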
Wayne Morellini
August 9th, 2004, 03:59 AM
<<<-- Originally posted by Jason Rodriguez : Hey Wayne, can you share that with the group? -->>>
Not much, except things are developing as I predicted, and if we wait for the next cameras, or Rob's software, we should have a wider range of good options in cameras, main-boards, and maybe compression, to choose from. It really doesn't pay to rush out before suitable components are available, unless you want to experiment. I personally wouldn't mind a $1000 camera head; I'm still waiting to find out how much that $2K?? 3-chip Altasens 1080i/720p JVC HD-SDI camera head will cost.
You know there are multi-megapixel still cameras for prices close to $100 (none really any good that I know of), but does a good 720p/1080i camera head really have to cost $1000-$3000?
<<<-- Originally posted by Obin Olson : that would be GREAT Jason..lets hope!! I see that the VariCam has lots of range even though it's 8bit...I wonder how they did that? it has way more range than run-of-the-mill video cameras. VariCam is HEAVY...I wonder if I should add weight to our camera at some point? sure makes for a steady shot -->>>
The problem is that range has little to do with the number of bits used to represent it. 12 bits will go into 5 stops or 50 stops of range; the bits represent the maximum number of levels in the overall range, and you won't get those unless the camera is sensitive enough and noise doesn't wipe them out. Hence all my questions about sensitivity, range, noise and response curves for each colour (add highest resolution as well) for different gains and target luminances (3 levels of each should do it: lowest, medium and highest). Only that (and lack of blooming/smearing) can completely reveal the performance of a camera, even without a test image.
Thanks
Wayne.

Jason Rodriguez
August 9th, 2004, 04:55 AM
that would be GREAT Jason..lets hope!! I see that the VariCam has lots of range even though it's 8bit...I wonder how they did that? it has way more range than run-of-the-mill video cameras
You can read about how they did it in this paper at http://www.hpaonline.com/i4a/pages/index.cfm?pageid=236
Basically, in a nutshell, they've remapped the output of the CCDs so that it's not so high-contrast. They also added automatic gain in the darks, so you have to watch out when you're CC'ing the stuff, since the shadows already have gain added and you don't want to add any more.

Wayne Morellini
August 9th, 2004, 05:55 AM
For the "flash" among us, a new card standard: 120 MB/s and 2 terabyte capacity, USB 2.0 compatible, hide it behind a large stamp, etc. Now if only they were out and we could afford one ;):
http://www.digitimes.com/news/a20040805A4013.html
Tom's have reviewed the "sticky pod", for us to mount our cameras onto moving vehicles (though would I mount a $3K camera to such a thing? I don't think so).
http://graphics.tomshardware.com/video/20040806/index.html
They have done noise tests on small cases; ouch, that is a lot noisier than my tower case, which is itself too noisy for portable apps.
http://www.tomshardware.com/howto/20040804/barebones-38.html
Thanks
Wayne.

Jason Rodriguez
August 9th, 2004, 06:08 AM
Crap! That new card is smaller than Compact Flash and stores up to 2TB at 120MB/s!!!
Of course they're finalizing the spec next year, so I don't think we're going to see that until 2006 with our 6GHz dual-core Pentiums and PPCs.

Juan M. M. Fiebelkorn
August 9th, 2004, 06:08 AM
WOWWWWWWWWWWWWW!!!! Is this the death of hard drives??
BTW, could someone point me to a raw capture sequence of at least 3 frames, to begin testing of a conversion and compression applet I'm working on?

Rob Scott
August 9th, 2004, 06:52 AM
Juan M. M. Fiebelkorn wrote: Is this the death of Hard drives??
I wouldn't get your hopes up too high (yet). It doesn't use any new technology that I could see; it appears to just be another packaging of standard Flash. The 2TB and 120 MB/sec are just attributes of the interface. A 1GB Flash card costs around $300, and I don't see this affecting the price. 2TB at that price would be around $600K (!)
Wayne Morellini
August 9th, 2004, 06:53 AM
"Mass production for the new card will begin in early 2005." But it is still going to cost way too much, unless you back up to tape or something. That 2TB is probably going to take years to get here.
Also, reading New Scientist lately, somebody has got the 3D 1cm cube with 2000 gigabytes; the stated limitation was reading and writing speed, which has been the limitation stopping it going to market for many years. Somebody posted this on that news article: 10 to >100 Terabytes of Storage
http://www.nanonewsnet.com/index.php?module=pagesetter&func=viewpub&tid=4&pid=5
It's a bit hard to glance through, but it still looks like 3 to 5 years away, but then we can finally do Surround Super Ultra HD cheap ;)
Now, back to the final minutes, of the final episode, of The Apprentice.

Jason Rodriguez
August 9th, 2004, 04:45 PM
BTW Wayne, those small enclosures you're talking about have big AMD or P4 processors in them, so they are going to be very loud. Try something with a Pentium M, that should be quiet.

Obin Olson
August 9th, 2004, 08:33 PM
Good stuff. Today we have found our problem with the SDK and are now on a good path to a capture program... I'll keep you posted... Automatic GAIN in the darks!?!?!?!? That scares me bad.

Juan M. M. Fiebelkorn
August 9th, 2004, 08:35 PM
Obin, could you give me a three-frame RAW capture to work on?

Jason Rodriguez
August 9th, 2004, 10:26 PM
automatic GAIN in the darks!?!?!?!? that scares me bad
Yeah, they have to do this because they're trying to cram 10-11 stops onto an 8-bit tape. Not only do they add gain to the shadows, but 18% grey is supposed to lie at 24 IRE!! So all your images are SUPPOSED to look really underexposed, but then with certain images, when you try to restore the contrast range (like you would with any "dark" file, like the linear files we're working with), you always end up adding even more gain to the blacks (I've tried my best to isolate the values under 12 IRE and it becomes a really big PITA), and getting some real, real nice large compression macro-blocking. Looks so good ;-)
I would only shoot Panasonic if I didn't have enough money for Sony post production. For off-speed shooting I would try to get an uncompressed recorder for the Panasonic, like maybe the RaveHD; that would solve a lot of the problems of F-REC, because then you're recording a 10-bit uncompressed output from the camera.
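A small illustration of the effect Jason describes. The 24 IRE figure is from his post; the target level for 18% grey after grading (taken here as roughly 45 IRE) is an assumption, used only to show the order of magnitude of the extra gain a contrast restore applies on top of the gain the camera already added in the darks.

```cpp
#include <cstdio>

int main()
{
    // Per Jason's post, the camera curve parks 18% grey at about 24 IRE.
    const double greyAsRecorded = 24.0;  // IRE, from the post
    const double greyAfterGrade = 45.0;  // IRE, assumed "normal looking" target

    // A simple linear stretch that brings grey back up multiplies everything
    // below it by this factor, on top of the camera's own shadow gain.
    const double postGain = greyAfterGrade / greyAsRecorded;
    std::printf("extra gain applied to the shadows: %.2fx\n", postGain);

    // The quantisation step of the 8-bit tape is stretched by the same
    // factor, which is why noise and compression macro-blocking in the
    // blacks become so much more visible after grading.
    std::printf("effective step size in the shadows: %.2f of an 8-bit code\n", postGain);
    return 0;
}
```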
Jason Rodriguez
August 9th, 2004, 10:28 PM
BTW Obin, are you able to get locked frame rates with your capture app (this question also applies to Rob)? At least at 24 fps, accurately, so that there's sound sync?

Rob Scott
August 10th, 2004, 03:49 AM
Jason Rodriguez wrote: Are you able to get locked frame-rates with your capture app? At least at 24fps accurately so that there's sound sync?
I cannot answer your question directly, because I have not yet used it in any sort of production. I do anticipate being able to get a very accurate frame rate.
Also: update to my development blog (http://www.obscuracam.com/wiki/static/DevelopmentBlog.html).

Wayne Morellini
August 10th, 2004, 07:48 AM
<<<-- Originally posted by Jason Rodriguez : BTW Wayne, those small enclosures you're talking about have big AMD or P4 processors in them, so they are going to be very loud. Try something with Pentium M, that should be quiet. -->>>
I didn't say I agreed with it, but some people want this sort of thing (but now the Pentium M is here maybe that will change, though even then the combination of a good Pentium M may be outweighed by processing-hungry Microsoft Windows or C). I can backtrack it: I'm running a fairly low-noise system for home theatre (and I find it a lot easier to work with low noise). I think it is likely to be around the same as mine if a 2.4GHz Pentium M was available (still too loud, but tolerable for a transportable case), around 23-26dB, and you really need 17dB or less for shoulder mount.
Bingo: a transportable case with a built-in 720p/1080i-capable LCD. I forgot about them; BSI Computers was a top maker, but there are others (I think Dolch was one). With a modern LCD it could be very good. They can fit standard motherboards and cards, they are around the size of a small desktop (not as small as a cube), and you would sit a reference monitor on it for shooting (this is not a portable solution, but one for those with big rigs). If anybody wants to do lots of res, and lots of drives (or just flash disks), then you can do better with the YY (Yeong Yang) legendary Cube Case (something like a 43cm square black cube server case). And before anybody says so, that's not most of us.
Looked them up, www.bsicomputer.com; the styling looks worse, not 16:9, not stereo speakered ;(. But they do have single-board computers (around the size of a full-length add-in card), rugged portables, and panel PCs, if that interests anybody. The tablet model looks really nice. Some of the models almost look like pro video equipment in styling. Oh yes, an award from NASA -- that sure blows hopes of it being as cheap as a laptop; at least when you see an astronaut float by with one on TV you can tell your family it might be a BSI model ;):)
You know, there is a local tender centre (Cairns) that has two Compaq SCSI-2 Wide RAID drive towers (I was offered one for something like $30). Are these things any use for us, and is anybody near me interested?

Jason Rodriguez
August 10th, 2004, 09:40 AM
and you really need 17db or less for shoulder mount
Not really true. The Aaton S-16 XTRprod is rated at 19dB, while the A-Minima is 29dB, and those are both sync-sound cameras, used for sound shooting all the time.

Wayne Morellini
August 10th, 2004, 10:36 AM
I am not familiar with those products. I know if you can direct the noise away from the user (which I plan to do) then it shouldn't do too much to the ambient noise level and should also be suitable for shoulder mount. As a mounted camera, or a box on the floor, there shouldn't be any problems with 19dB; even 29dB will fade in a noisy environment or open space, but I would like to aim for good natural sound sampling in quiet areas for film and docos.
Wayne.

Joonas Kiviharju
August 10th, 2004, 11:29 AM
I wrote a little program the other day. It does debayering in the way described by Rob, or was it Ben? It is written in C++ and it's probably very slow (this isn't because of C++, but because of bad/fast coding). I don't really know, because my computer is 500MHz. It works on 8-bit TGA files, so no 16-bit at the moment. Sorry about that.
The interesting bit is that it can open Obin's .RAW files, debayer them and save them to .TGA. I'm not quite sure about Obin's RAW files, but I used chars to open the data, and to my knowledge chars are 8 bit. So maybe I'm missing something here... Also, the program works from the command line, and it has this totally useless SDL/OpenGL preview, which you can of course disable.
The downside to all this is the fact that, since my computer broke down, I haven't had Windows. So I wrote it in Linux, but it should be totally portable and compile fine on any compiler, for a person who knows how to use one. The only external library you need to compile it for Windows is LibSDL <http://www.libsdl.org>. So if someone could get it to compile, you could use it. When you get it to compile, you can use it on a sequence of files named picture0001.raw, picture0002.raw... When you run it you get some kind of help describing the syntax of the command-line instructions. You can get the GPL'd source code from <http://pupuedit.sourceforge.net/camera/pihlajadeb.zip>. And remember that GPL means you can't use the source code in your proprietary programs, but you can use it for anything, as long as you make the result GPL.
The code could be a lot faster, but the debayering looks quite good to me. It doesn't have any softening, so it has some jagged pixels (or what do you call them...). But I actually think the result is exactly the same as Ben's method.
Another thing is the mounting of cameras. I found this interesting future thing: <http://www.four-thirds.org/en/index.html>. It could be the future sensor size, and the mounting system of choice in a couple of years.
Joonas Kiviharju

Steve Nordhauser
August 10th, 2004, 11:54 AM
I finally did some testing with the SI-3300. I put some images in this directory: http://www.siliconimaging.com/Samples/SI3300/
This includes a 10-frame raw sequence at 1920x1080, 12 bit. This was done with EPIX, so the camera is 10 bit, padded two to the right, 4 to the left with zeros. Some warnings: I'm not too sure about the number of bits of the images -- Paint Shop Pro seemed confused by them but opened them as 8 bit. The file sizes seem to indicate larger files for the 12-bit files. The color images were colorized by the EPIX software -- no promises for which Bayer algorithm was used. The lens was a Canon zoom; OK but not great. There was no correction done on these at all -- no white balance, black offset or gain. I purposely left some hot spots in the image so you could see the lack of smearing. The EPIX software was smart enough to know this camera doesn't come in monochrome, so I switched to a different model after setting up the image. This gave me less control (OK, no control). The monochrome images have some trash around the 1920x1080.
I also posted a couple of 3.2Mpix images. Remember all, this is a 1/2" format camera with small (3.2 micron) pixels. It will run at a *max* of 24fps at 1920x1080.
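A minimal sketch of how a capture or convert tool might pull the 10-bit sample back out of the 16-bit words Steve describes ("10 bit, padded two to the right, 4 to the left with zeros"). The little-endian byte order and the test values are taken from Rob Lohman's hex dump a few posts below; treat the whole layout as an assumption until it is verified against a real file, which is exactly what that dump ends up questioning.

```cpp
#include <cstdint>
#include <cstdio>

// One 16-bit little-endian word per photosite.  If the stated padding is
// right, the low two bits of every word are zero and the sample is word >> 2.
int main()
{
    // First row of the hex dump posted below (little-endian byte pairs).
    const uint8_t bytes[] = { 0xA9, 0x00, 0x8B, 0x00, 0xA9, 0x00, 0x95, 0x00,
                              0x9F, 0x00, 0x95, 0x00, 0xA9, 0x00, 0x8B, 0x00 };

    for (size_t i = 0; i + 1 < sizeof(bytes); i += 2) {
        uint16_t word = static_cast<uint16_t>(bytes[i]) |
                        static_cast<uint16_t>(bytes[i + 1] << 8);
        bool looksShifted = (word & 0x3) == 0;      // are the pad bits really zero?
        uint16_t sample   = looksShifted ? static_cast<uint16_t>(word >> 2) : word;
        std::printf("word 0x%04X  padded-as-described: %s  sample %u\n",
                    static_cast<unsigned>(word), looksShifted ? "yes" : "no",
                    static_cast<unsigned>(sample));
    }
    return 0;
}
```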
Rob Scott
August 10th, 2004, 12:03 PM
Joonas Kiviharju wrote: It does debayering in the way described by rob, or was it ben?
That was Ben. So you're using linear interpolation? If I can get it to work with a REAL (floating point) data type, I might try using it in the Convert app (which I am also releasing under the GPL). Interestingly, I was planning to use libSDL for that one too. Thanks!
--- EDIT ---
By the way, my comment about a real data type wasn't meant to be insulting. I just meant real numbers vs. integer (I can hear it now -- dang it, we use REAL numbers, not those toy "char" things! :-)

Les Dit
August 10th, 2004, 01:17 PM
Steve, I took a quick look at the 3Mpix 16-bit uncorrected image, and it looks like it may have less than 8 bits. Does this camera have 2 taps? The reason I ask is that the histograms for RGB have 2 distinct and separate shapes to them, like we are looking at 2 pictures interlaced or something! I like the resolution.
Maybe the image can be flattened with software, but leaving how many bits is the question.
-Les
<<<-- Originally posted by Steve Nordhauser : Some warnings: I'm not too sure about the number of bits of the images - PaintShop pro seemed confused by them but opened them as 8 bit. The file sizes seem to indicate larger files for the 12 bit files. -->>>

Juan M. M. Fiebelkorn
August 10th, 2004, 08:56 PM
I'm with you, Les Dit. I was expecting a dark image like the ones from an SLR (just debayered, not gamma corrected or anything else). Steve, do you remember the example Jason posted a long time ago of a digital SLR image?

Joonas Kiviharju
August 11th, 2004, 03:16 AM
<<<-- Originally posted by Rob Scott : So you're using linear interpolation? If I can get it to work with a REAL (floating point) data type I might try using it in the Convert app. ... By the way, my comment about a real data type wasn't meant to be insulting. I just meant real numbers vs. integer (I can hear it now -- dang it, we use REAL numbers, not those toy "char" things! :-) -->>>
Yes, I understood you about the real data type. I think it is very easy to just make the internal data floats, and maybe get an external library to write 16-bit TIFFs. I might be doing that myself, if I get the time... Later on I plan to put it all into my pre-alpha non-linear editor called Pihlaja. It will probably be using GStreamer, so it will be a plugin for that. But this is probably a couple of years from now (because I don't do it professionally)...
I don't really know about linear interpolation. I'm really not a programming professional, so I don't know the terms. (I think I used linear interpolation when I was making a 3D engine that filled polygons.) As hinted in Ben's wiki entry about debayering, I just averaged the 2 nearest neighboring pixels to get the values of RGB for every pixel -- first horizontally, then vertically. I'm not sure if there is some kind of better method than just pure averaging: ([i-1] + [i+1]) / 2. And I'm not sure if Ben's method in the plugin actually is this simple. So I meant that my method is the same as in the wiki.

Rob Lohman
August 11th, 2004, 04:21 AM
This includes a 10 frame raw sequence at 1920x1080, 12 bit. This was done with Epix so the camera is 10 bit, padded two to the right, 4 to the left with zeros.
Are you sure of this? The file is 62,914,560 bytes long, or 31,457,280 pixels. If you divide this by 10 (for frames) you get a 3,145,728-pixel image. That is much more than 1920x1080 (which is only 2,073,600 pixels). The math doesn't add up any way I try it. If you divide the file size by 1920 x 1080 x 2 (4,147,200) you get 15.17 frames, which is fractional?!
Also, I looked in the file with a hex editor and I find numbers that have their last bit turned on! The first row is:
A9 00 8B 00 A9 00 95 00 - 9F 00 95 00 A9 00 8B 00
That's: 0x00A9 = 169 (?), 0x008B = 139 (?), 0x0095 = 149 (?), 0x009F = 159 (?)
How can this be two bits shifted to the left? It can't be 0xA900 either, because that would mean the high 4 bits are set as well, which can't be right either. So neither the encoding format nor the file size seems to match up!

Rob Lohman
August 11th, 2004, 04:26 AM
Just after posting the previous blurb I figured it out. This is not 1920x1080 but the full chip's resolution of 2048x1536!
2048 x 1536 = 3,145,728
So it is 2048 x 1536 for 10 frames. BUT, that still leaves the pixel packing/encoding "problem"...
Steve: could you read this chip out at 2048 x 1152?
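A short sketch of the sanity check Rob Lohman just walked through: given the file size, the bytes per sample and a guess at the frame geometry, confirm that the numbers divide out evenly. The candidate resolutions are just the ones mentioned in this thread.

```cpp
#include <cstdint>
#include <cstdio>

int main()
{
    const uint64_t fileBytes      = 62914560;  // size reported by Rob Lohman
    const uint64_t bytesPerSample = 2;         // one 16-bit word per photosite
    const uint64_t frames         = 10;

    const uint64_t samplesPerFrame = fileBytes / bytesPerSample / frames;

    // Try the resolutions that have come up in the thread.
    const struct { uint64_t w, h; } candidates[] = {
        { 1920, 1080 }, { 2048, 1152 }, { 2048, 1536 }
    };
    for (const auto& c : candidates) {
        bool match = (c.w * c.h == samplesPerFrame);
        std::printf("%llu x %llu -> %llu samples per frame %s\n",
                    (unsigned long long)c.w, (unsigned long long)c.h,
                    (unsigned long long)(c.w * c.h), match ? "(matches)" : "");
    }
    return 0;
}
```

Only 2048x1536 divides the 62,914,560-byte file into ten whole frames, which is how the full-chip resolution falls out.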
Jason Rodriguez
August 11th, 2004, 04:46 AM
BTW Rob, what algorithms are you still planning on using? Last I remember it was Variable Number of Gradients, Pattern Recognition, and I guess Linear would be a nice addition if you need something quick for preview/offline. For anything being blown up or put on a big screen, though, I think we need a good heavy-duty algorithm like Variable Number of Gradients (which, from the Stanford paper, will fully interpolate the Red, Green, and Blue channels to the best of its ability).

Rob Lohman
August 11th, 2004, 05:57 AM
Jason: what algorithms in what regard? To do what? Are you talking about de-bayering? Basically it doesn't matter. A de-bayer algorithm is fairly quickly developed and integrated, and we will probably have quick and more complex ones. Ben has made one; Rob S. and myself have made a near-neighbour and a half-resolution (preview) model, etc.
The main focus is working with the camera, getting that code to run as fast as possible and getting everything stored to disk. Rob S. and myself are also working on the convert/processing application, but that will just be a basic version when it hits the GPL/open-source state and will be developed further. Anyone can join in on that to develop things like Bayer algorithms and whatnot.
My personal spear points are working out the digital-negative format, seeing if we can incorporate at least some lossless on-the-fly compression to lower bandwidth usage, and getting the data to disk. After that I'll turn to the post-processing algorithms and other stuff if others haven't finished that yet.
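For anyone following the de-bayer discussion, here is a minimal sketch along the lines of the plain averaging Joonas describes above (each missing colour taken as the mean of the nearest samples of that colour), assuming an RGGB pattern and an 8-bit buffer. This is the quick "linear" option Jason mentions, not VNG or pattern recognition, and it is not the project's actual code.

```cpp
#include <cstdint>
#include <vector>

// Bayer colour at (x, y) for an RGGB mosaic: 0 = R, 1 = G, 2 = B.
static int bayerColour(int x, int y)
{
    if ((y & 1) == 0) return (x & 1) == 0 ? 0 : 1;  // R G R G ...
    return (x & 1) == 0 ? 1 : 2;                    // G B G B ...
}

// Plain averaging de-bayer: each output channel is the mean of all samples
// of that colour inside the 3x3 neighbourhood.  Output is interleaved RGB.
std::vector<uint8_t> debayerAverage(const std::vector<uint8_t>& raw, int w, int h)
{
    std::vector<uint8_t> rgb(static_cast<size_t>(w) * h * 3);
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            int sum[3] = {0, 0, 0}, count[3] = {0, 0, 0};
            for (int dy = -1; dy <= 1; ++dy) {
                for (int dx = -1; dx <= 1; ++dx) {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    int c = bayerColour(nx, ny);
                    sum[c]   += raw[static_cast<size_t>(ny) * w + nx];
                    count[c] += 1;
                }
            }
            uint8_t* out = &rgb[(static_cast<size_t>(y) * w + x) * 3];
            for (int c = 0; c < 3; ++c)
                out[c] = static_cast<uint8_t>(count[c] ? sum[c] / count[c] : 0);
        }
    }
    return rgb;
}
```

A half-resolution preview (one RGB pixel per 2x2 Bayer cell) falls out of the same averaging idea and is much cheaper, which is presumably why it serves as the preview model Rob mentions.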
Wayne Morellini
August 11th, 2004, 06:29 AM
<<<-- Originally posted by Joonas Kiviharju : Another thing is the mounting of cameras. I found this interesting future thing: <http://www.four-thirds.org/en/index.html> It could be the future sensor size, and the mounting system of choice in a couple of years. -->>>
Joonas, thanks for posting this. It looked disappointing at first, but as I read further, about straightening oblique light rays (at what cost to light) and making the three primaries focus at the same distance rather than on three film layers (is that good for the Foveon X3?), it looked promising, and further on it looked disappointing again. This looks like a cheapened excuse not to pay for, and manufacture, 35mm sensors, and to get us to upgrade our lens systems. How bogus can they get? And we (the consumers) will probably have to be dragged along with it. They claim that the lens need only be half as long to get the same image size and brightness; that is an indication of a sensor that is half the size getting half the light (though over half the area it is the same). But what of DOF, or what of convergent lines, etc.? Their graphics depict it as neatly nestled between different abilities (which means flat-out compromise).
Now to get more confusing, they mention a number of formats. 4/3 (four thirds) of what? 4/3rds of an inch (an inch and a quarter, or 34mm)? That would be good compared to 35mm film. Nope, it is 18mm across, that's nearly 19mm, about 3/4 of an inch. No, it's the aspect ratio: 4 across and 3 down, 18 x 13.5mm. But if they said "18mm format", people would immediately twig that it is inferior to 35mm, and that they should have developed a Digital-35mm format instead. Here I was under the naive notion that a 2/3rd lens meant 2/3rd of an inch, but it is much smaller (anybody with a link to a good format guide? I need to brush up). One small step for us, one great leap for camera manufacturers' profits in getting us to upgrade our lenses.
Why don't they do Digital 35mm and Digital 18 (16 would be better) instead, and make the 35mm conform to one of the existing SLR lens mounts, with lens refinements for digital use? With an adaptor to condense and straighten the D35mm rays and feed them to the D18mm, or a straight 35mm format adaptor, and adapt it for pro video and cinema camera use. In other words, a twin standard across film and video. But of course that would flatten profits from pro video and cinematic lenses. Anyway, meandering rant over (I just don't like slick marketing stuff that doesn't really help).

Wayne Morellini
August 11th, 2004, 08:45 AM
This post is only for completeness, as it does not suit some people. I have been chatting to VIA about the ITX products. From what I can gather there is nothing really suitable for us until next year. So the Pentium M ITX boards are still the best for now. But he has told me of a chipset that could be used as a development platform until then. It has twin 8-bit video inputs; the chipset also mentions a 12-bit capture, but my source only mentioned 8-bit and DVI input, and I don't know whether they are HD or not. If it is HD then it is good for acquiring third-party 8-bit footage to use in programs. It has 4-drive SATA RAID, 8 gigabytes of memory support, gigabit ethernet support, and the processor maxes out at 1.4GHz (though I expect more before the end of the year, but I don't know whether this board would take it -- my comment, not theirs). It is a consumer HDTV type thing, so it has all that support and a DSP for MPEG-4/2 decoding and deblocking -- not compression, but some sub-functions may help.
http://www.via.com.tw/en/c-series/cn400.jsp
This chipset has been mentioned before, and is not really up our alley for live lossless compression (unless the DSPs are reprogrammable and helpful).

Jason Rodriguez
August 11th, 2004, 09:11 AM
BTW guys, that 5.25" board that I was looking at from Axiomtek with Pentium M (Dothan), PCI-X, and SATA is $489.
I was also looking around at power sources, and it seems like the best bet for the size/weight is the Anton Bauer HyTRON 120, which can sustain 175W output for 120 watt-hours at 9-16V (normal operation at 14.4V). So that should easily power this board with all the fix'ns :-)

Wayne Morellini
August 11th, 2004, 09:20 AM
Good on you Jason, we need more of this. With the 5 1/4 inch computers, car computers, ISA card upgrade computers, and Mini/Nano-ITX single boards, we have a wide selection of choices. That VIA chipset mentioned something about 2.5W power consumption as well, but I've closed the window now.
Thanks
Wayne.

Jason Rodriguez
August 11th, 2004, 09:23 AM
Also, I was doing some number crunching, and you guys will have to tell me if I'm wrong or right on this.
Basically with 12 bits linear, you can cram in a maximum of around 10 stops without banding -- 9 stops to play it safe and keep the noise levels down, depending on how noisy the chip is. Hopefully the Altasens will have hardly any noise, giving us around 10 stops of total dynamic range (1000:1). This is based on 10 stops being around a 1000:1 contrast ratio. When you split that up between 4096 levels, you get around 409 levels for the bottom 100% of the image, which is where the black and white cards lie (90% of the total image is the white card, with the extra 10 percent of perceptual brightness being superwhites, but those superwhite areas actually should occupy 90% of a digital linear image).
At 409 levels, you're sort of playing it close with banding issues and noise, so if you increase that to 800 levels for the 100:1 contrast range, then you have removed a stop from the top, so what was at 500:1 is now at 1000:1. So you've dropped from covering 10 stops to covering 9 stops. If the bottom stop is too noisy, then you'll have to increase exposure some in the capture process, or set the black point higher, so you are now at 8 stops.
With 10 bits, you only have around 100 levels for the 100:1 contrast range in a linear image. This will not do, so you have to double it twice to get to 400, and if you want 800 steps, you have to double it three times, so you've lost 3 stops. That now gives you 7 stops captured in a 10-bit linear file. Since the Micron 1300 has the streaking problem, you have to crush the blacks, so on the bottom you're losing another stop or two, which leaves us with the 5-6 (maximum) observed stops that we're currently seeing.
If anyone is looking for a reference to what I'm talking about, read "Digital Compositing for Film and Video", the chapters concerning log-linear conversions and log versus linear files/encoding issues.
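A worked version of the arithmetic in Jason's post, as a small sketch. It only restates his assumptions (diffuse white at the 100:1 point, roughly 1000:1 for ten stops); it is not a claim about any particular sensor.

```cpp
#include <cmath>
#include <cstdio>

// How many code values land below diffuse white in a linear encoding,
// given the bit depth and the total contrast ratio being captured.
// Everything above the white card is highlight headroom.
static double codesBelowWhite(int bits, double totalContrast)
{
    double fullScale = std::pow(2.0, bits);     // e.g. 4096 for 12 bits
    double headroom  = totalContrast / 100.0;   // how far clip sits above white
    return fullScale / headroom;
}

int main()
{
    // 12-bit linear over ~10 stops (about 1000:1): ~409 codes below white.
    std::printf("12-bit, 1000:1 -> %.0f codes below white\n",
                codesBelowWhite(12, 1000.0));
    // Giving up a stop at the top (500:1) doubles that to ~819.
    std::printf("12-bit,  500:1 -> %.0f codes below white\n",
                codesBelowWhite(12, 500.0));
    // 10-bit linear at 1000:1 leaves only ~102 codes below white,
    // which is why several stops have to be traded away.
    std::printf("10-bit, 1000:1 -> %.0f codes below white\n",
                codesBelowWhite(10, 1000.0));
    return 0;
}
```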
Steve Nordhauser
August 11th, 2004, 11:47 AM
OK, I went to the clue shop and got about half a clue. I've put new images at: http://www.siliconimaging.com/Samples/SI3300/
The 10-frame sequence (12-bit mono, 1920x1080) is now 40MB, which makes sense. The color images are 12MB (2Mpix * 2 bytes per color * 3 colors, I hope) and the monochrome (color camera in mono mode) is 4MB per frame. The only correction done on these (all done in the sensor, not post-processed, so I think of these as RAW) was to add blue gain to balance the Bayer response and adjust the black level offset. I used a Canon zoom and a bizarre mix of tungsten, halogen and fluorescent lighting. I'm shooting a calendar, so there is some limitation there (I don't have my Macbeth right now).
What is confusing me is that PSP7 still thinks these are 8-bit images, but it also says the size is only 2MB for the 4MB file.

Spencer Houck
August 11th, 2004, 01:36 PM
Hey everyone, this is my first time chiming in on this giant (and amazing) thread. For starters I guess I should explain that I consider myself quite "techy", but this stuff is way beyond me. So excuse me for any ignorance of the complexities involved, but has anyone figured out the amount of storage required to make these cameras an option in the field? I would assume that one would have a sort of cartridge system of hard drives... possibly in RAID pairs?... that they would use interchangeably just as they do with tapes. At night the raw footage could then be transferred to a storage RAID array and the recording HDDs reused the next day. Does that sound about right?
Also, I'm just starting to wrap my head around all the HD formats and the hardware/software needed for editing with them. Reading through the various trade magazines (Post, Film & Video, DV, etc.) about the workflow of the film (I should say video) Collateral has been quite informative. Shot on 2 Vipers and a couple of Sony F900s, Collateral used hard drive recording in some instances, but preferred using Sony's SRW-5000 decks, which introduced compression to the Viper's 4:4:4 RGB. The result was still perceptually lossless and therefore an acceptable compromise for the flexibility it allowed the DP and the art of the project. Now I saw the film, video... whatever, in theaters last weekend, projected on regular 35mm film (nowhere near here had a DLP projection of it), and it looked fantastic.
So, I guess my question is: where is the point of diminishing returns when it comes to quality/ease of use/affordability? When do our dreams of perfect 4:4:4 uncompressed 12-bit become an "unreality" for people in our monetary/equipment situation? I read in these magazines about Smoke, Quantel, and Avid systems prepped for uncompromised HD, as well as the hardware necessary for such a thing, and it's all a tad overwhelming... especially in cost. How do we plan to do all this on our standard PCs and Macs, and will we have the same output capabilities as those huge, way-out-of-budget systems?
The reason I'm bringing all of this up is that myself and a large crew are gearing up to pitch a feature film to investors, which we hope to shoot next summer. We figure the budget will allow video, but we hope to have the option of going to film if distribution is found, so miniDV is not our preferred format. So, for a group of independent film-making college students (we're actually video-production majors, not film guys), what is the feasibility of shooting, capturing, storing, editing, and outputting these massive formats? The workflow is certainly a scary one, and I need to be able to pitch this as a viable, and doable, option. I am the would-be editor for the project and am trying to initiate as much learning/planning for the format choice early on in pre-pro, so we are set up to go to whatever format our hearts desire (depending on our budget of course ;) ).
Sorry for that huge post, I hope I made some sound arguments/questions? Oh yeah, and thanks go to all of the DVinfo community... this is certainly a mind-blowing idea, and it seems to actually be working!! Keep it up guys.
Spencer Houck
EDIT: I should add that I'm a PC guy workin' with Adobe Premiere Pro currently, but am unhappy with its current stability. Do you think a Mac with FCP HD should be included in the budget to work with this media, or will I be OK with my PC and possibly a future version of Adobe's PPro? I don't have any experience working with QuickTime files in Adobe on my PC; is that all good to go?

Jason Rodriguez
August 11th, 2004, 01:37 PM
Also forgot: we'll probably lose 1/2 to a full stop from the 12-bit image trying to white-balance, so it looks like we'll be around 9 2/3 stops, same as the Viper before white-balance. So theoretically we should get a little better than the Viper dynamic-range wise (the Viper's S/N ratio is only 54dB, whereas the Altasens is supposedly 68dB).

Jason Rodriguez
August 11th, 2004, 01:44 PM
Well, this isn't going to be like editing video-tape, but I don't think it'll be quite the beast that it looks like initially.
If you store your files as RAW files with unpadded bits, you actually have fairly small file sizes. 1080/24p will only be 74.6MB/s at 12-bit, and around half that for 720/24p. So if you're going for 720p, then you're looking at around the same storage needs as uncompressed 10-bit 4:2:2 NTSC.
There will be a lengthy "processing" time to convert the raw files to 16-bit TIFF, but that's not dissimilar to developing film neg, so while this isn't a "shoot and here's your tape" workflow, it's not quite that bad either. Once in 16-bit TIFF, go out to whatever QuickTime/AVI format you want.
The only problem I see right now is some sort of house-keeping functionality you'll need to incorporate to keep track of your file sequences. But once you get to a QuickTime format (such as the 10-bit Uncompressed RGB codec from Blackmagic), you should be fine. Just have plenty of fast disks and lots of offline FireWire/USB drives.
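The storage figures Jason quotes come straight from the frame geometry; here is a minimal sketch of the arithmetic for anyone budgeting disks. "Unpadded" means bits packed tight rather than one sample per 16-bit word, and the resolutions and bit depths are the ones discussed in the thread.

```cpp
#include <cstdio>

// Sustained data rate for tightly packed raw Bayer frames.
static double megabytesPerSecond(int width, int height, int bitsPerSample, double fps)
{
    double bytesPerFrame = width * height * bitsPerSample / 8.0;
    return bytesPerFrame * fps / 1e6;   // decimal megabytes, as in the 74.6 figure
}

int main()
{
    std::printf("1920x1080, 12-bit, 24 fps: %.1f MB/s\n",
                megabytesPerSecond(1920, 1080, 12, 24));   // ~74.6
    std::printf("1280x720,  12-bit, 24 fps: %.1f MB/s\n",
                megabytesPerSecond(1280, 720, 12, 24));    // ~33.2
    std::printf("1280x720,  10-bit, 24 fps: %.1f MB/s\n",
                megabytesPerSecond(1280, 720, 10, 24));    // ~27.6
    return 0;
}
```

Rob Lohman's 26.37 MB/s figure a few posts below is this same packed 720p/10-bit stream (about 27.6 million bytes per second) expressed in 1024-based megabytes.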
Rob Scott
August 11th, 2004, 02:29 PM
Jason Rodriguez wrote: Once in 16-bit TIFF, go out to whatever quicktime/avi format you want.
16-bit TIFF is only the first format that we'll be supporting directly. It was the simplest, for one thing :-) Soon I hope we can directly output to any QuickTime codec you like. There's no reason to use hard-drive-space-sucking 16-bit TIFF intermediates when you don't want TIFF as your final format.

Aaron Shaw
August 11th, 2004, 02:40 PM
Just curious -- I have never quite understood how this was supposed to work. Clearly we will need to assemble the images into a movie file of some sort once captured. How do we go about doing this, especially if we use a RAID system where these files may be sequentially written across various drives?

Rob Scott
August 11th, 2004, 02:44 PM
Aaron Shaw wrote: Clearly we will need to assemble the images into a movie file of some sort once captured. How do we go about doing this, especially if we use a raid system where these files may be sequentially written upon various drives?
With a real RAID system, this isn't an issue. With a "simulated" RAID (like I'm supporting in my Capture app), each frame has a global frame number on it. The Convert app then reads all the files, notes the frame numbers and reassembles the clip properly.

Rob Lohman
August 12th, 2004, 02:32 AM
The general idea is to have the camera as a separate system from your processing computer, so you kind of download the things you shot from your camera (much like you capture now) and then clear the hard disks for your next recording.
Let's consider this. If you were to record 1280x720 @ 24 fps in 10-bit, the camera is sending us 16 bits per sample, which gives us a data rate of 42.19 MB/s. However, we will at least pack this before it goes to the hard disk, and this will yield 26.37 MB/s. Now you put a 200 GB hard disk in your system (or two 100 GB drives, for example) and you can then record for 7766 seconds or 129 minutes, which is just over 2 hours.
I am working on a fast and simple compression algorithm to hopefully lower the bandwidth a bit, to perhaps allow recording without RAID and a bit longer recording times.
Just to give you an idea of the massive amount of data: if we were to decode this to FULL RGB without ANY COMPRESSION, then you would be looking at a data rate of 126.56 MB/s, and 200 GB recorded will expand to 960 GB [2 hours]. So that basically rules out uncompressed for final edit use as well, and we will need to look at some form of compression there too. Some people have opted to use real-time Bayer in a codec, but I'm not sure about the quality vs. speed on that one.

Rob Lohman
August 12th, 2004, 02:34 AM
Spencer: personally I would not pitch such a system at this point in time. It isn't ready and it will not be for a while to come. It is all in the development stages and nobody has even shot a movie with any of this. It is not ready for primetime at this point in time. As you correctly identified, the workflow is still somewhat of a problem.
Steve: regarding the SI-3300, can you read this chip out at a resolution of 2048 x 1152?

Steve Nordhauser
August 12th, 2004, 04:25 AM
Rob: Native is 2048x1536, so anything less can be read out. Frame rate is the problem. You would have to run the chip at over 70MHz to get to 24fps. I'll try it later today if I can.
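A rough sketch of the "simulated RAID" reassembly Rob Scott describes above: frames carry a global frame number, land on whichever plain drive was free during capture, and the convert step sorts them back into one clip. The file naming, directory layout and struct used here are illustrative assumptions, not the actual Capture/Convert format.

```cpp
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

struct FrameFile {
    unsigned long frameNumber;   // global frame number stamped at capture time
    std::string   path;          // wherever the file landed (any of the drives)
};

// Sort by the global frame number and report any gaps so a dropped frame
// is caught before conversion.
std::vector<FrameFile> assembleClip(std::vector<FrameFile> frames)
{
    std::sort(frames.begin(), frames.end(),
              [](const FrameFile& a, const FrameFile& b) {
                  return a.frameNumber < b.frameNumber;
              });
    for (size_t i = 1; i < frames.size(); ++i) {
        if (frames[i].frameNumber != frames[i - 1].frameNumber + 1)
            std::printf("gap after frame %lu\n", frames[i - 1].frameNumber);
    }
    return frames;   // now in playback order, ready for the convert step
}

int main()
{
    // Frames interleaved across two hypothetical drives.
    std::vector<FrameFile> frames = {
        { 2, "D:/clip/frame0002.raw" }, { 0, "C:/clip/frame0000.raw" },
        { 3, "D:/clip/frame0003.raw" }, { 1, "C:/clip/frame0001.raw" },
    };
    frames = assembleClip(frames);
    for (const FrameFile& f : frames)
        std::printf("frame %lu -> %s\n", f.frameNumber, f.path.c_str());
    return 0;
}
```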
Laurence Maher
August 12th, 2004, 07:31 AM
Spencer,
For what it's worth, I was editing on a PC platform for 7 years and just now bought a G5 with FCP HD... and let me tell you... it's like I died and went to heaven. Stability is rock solid. User-friendliness is awesome. Capabilities are awesome. Speed is awesome. Macs are awesome. Only 2 months with it and I am a complete and total convert. If you can afford it, the Mac is WELL WORTH IT. Just IMHO.

Rob Scott
August 12th, 2004, 07:56 AM
Spencer Houck wrote: Do you think a Mac with FCP HD should be included in the budget to work with this media, or will I be ok with my PC and possibly a future version of Adobe's PPro? I don't have any experience working with Quicktime files in Adobe on my PC, is that all good to go?
I haven't used a Mac extensively in several years, but I've heard very good things about FCP, and it supports 10+ bits per channel directly, so it's a good match for a camera like this. In fact, this weekend a cousin of mine (actually a cousin-in-law, who oddly enough is also named Rob!) is going to be in town and we're going to try out some clips with FCP HD. I'll let you know how it goes. Unless I can get native QuickTime support in there in the next few days, we'll probably be transcoding from a 16-bit TIFF sequence into QuickTime.
--- EDIT ---
Minor update to the development blog (http://www.obscuracam.com/wiki/static/DevelopmentBlog.html)