Ben,
Maybe we should pay attention to the tablet PCs. Is there a very small one? Losing the keyboard entirely and gaining a touchscreen is probably the best way to go. I can't imagine an easy-to-use camera attached to a full laptop. There are some really nice LCD panels for around $400-$600 w/ full VGA or better displays. The RS-170 panels are way down in price, but other than framing a shot they would be pretty useless. I think a key point is the real-time compression. If you don't do it, you probably need a RAID for anything over 25MB/sec with a standard drive, and about 45MB/sec with a SATA drive. If you put in the processing power for RT compression, you can write to a small drive, but you've probably dropped 100W of power to get there.
The smallest tablet is the Sony VAIO U70. But the micro HD inside will never keep up with the data.
These small screens can be had at 800x600 or even 1024x768 res. I have an older Sony U3 and its 6.4" screen is 1024x768... The biggest problem with small solutions is that they don't have 2.5" drives in them, so you can't ramp up to 7200rpm. You need a full notebook/tablet for that. In my book, a tablet is every bit as cumbersome as a notebook, perhaps more so. At least you can put a laptop on a surface and read its screen -- a tablet will lie flat, forcing you to stand directly over it or pick it up. If you want internal RAID, you are essentially fscked when it comes to notebooks. However, I've been thinking lately that it should be possible to daisy chain a few 2.5" Firewire drives together and do software RAID. Since these drives are powered over the FW bus, you wouldn't need to plug them in. Each FW bus is supposed to deliver 45W, but even if it's less, it's probably enough to drive four 2.5" drives. The trick would be figuring out the power/speed tradeoff. If you have enough power, you could do four 7200rpm drives. If you aren't getting much power, you could swap them out for 4200rpm drives... - ben
Realtime compression
I think it's totally feasible to do realtime compression. Here's how it would work for a 1280x720 image.
1. Read in the Bayer image off the camera.
2. March through the pixels, separating the image into 3 channels -- R & B at 640x360, and G at 640x720.
3. Compress each channel losslessly.
4. Throw that data into a file.

In reality, steps 2 + 3 would be happening at the same time. If you're shooting 1280x720 at 24fps at 10bit, you're only using 26.3MB/sec. You might not need compression or RAID to keep up with this if the Hitachi 7200rpm drive is fast enough. But if we assume we can get at least 1.5:1 compression on each channel, that drops us to 17.5MB/sec. We can definitely handle that. Lossless compression is very fast -- if you built it into the frame-writing code, it wouldn't take too much longer than writing an uncompressed frame. - ben
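Steps 2 and 3 can be sketched in a few lines. This is a hedged illustration, not the actual capture code: it assumes an RGGB mosaic layout (the camera's real pattern may differ), and zlib stands in for whatever fast lossless codec the frame writer would really use.

```python
import zlib

def split_bayer(frame, width, height):
    """Split a flat list of Bayer mosaic samples into R, G and B channels.
    Assumes an RGGB pattern: even rows are R,G,R,G..., odd rows G,B,G,B..."""
    r, g, b = [], [], []
    for y in range(height):
        for x in range(width):
            v = frame[y * width + x]
            if y % 2 == 0:
                (r if x % 2 == 0 else g).append(v)
            else:
                (g if x % 2 == 0 else b).append(v)
    return r, g, b   # for 1280x720: R and B are 640x360, G is 640x720

def compress_channel(channel):
    """Losslessly compress one channel.  Each 10-bit sample is packed into
    a 16-bit word, then deflated; zlib level 1 is the fastest setting."""
    raw = b''.join(v.to_bytes(2, 'little') for v in channel)
    return zlib.compress(raw, level=1)
```

In a real frame writer the split and the compression would run in the same pass over the pixels, as the post says.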
Steve do you have a reply for the above?
Does anyone know if recording to RAM is a good idea? This would allow a laptop to be an all-in-one capture device.
Eric,
I think it's a great idea, if you can handle the limitations. The max RAM a laptop can hold is 2GB. You'll want at least 256-512MB allocated as "real" RAM, and then you can make the rest into a RAM disk. That leaves you with about 1536MB. Recording at 26.3MB/sec (1280x720 @ 24fps, 10bit), you'll have just under a minute of shooting time. Then you'll have a delay as it writes out to disk. A better idea might be a portable RAID. I suggested this in another thread earlier today, and I've since done a bit more research. The basic idea is to strap three or four 2.5" external Firewire hard drives together and daisy chain them. Because laptop drives can be powered from the FW bus, it could be totally portable. 2.5" laptop hard drives draw a maximum of 5 watts during start up (5.5 in the case of the Hitachi 7200rpm drive) and around 2.5 - 2.7 watts during use. The Firewire bus should be able to supply a maximum of 45 watts -- I don't know if you get less on laptops. Regardless, if you use three drives, your maximum power draw during operation will be around 8-9 watts. Hopefully the Firewire bus on laptops would provide you with that much. Even a 4200rpm drive can write at about 10MB/sec in the worst-case scenario (random drive locations), and around 20MB/sec in the best-case (sequential drive locations). With software-based RAID 0 across 3 drives, I don't think there would be any problem reaching 30MB/sec. Burst rates (which are improved the most by RAID 0) would probably be faster than the bus could handle (50MB/sec), unless you were working with FW800 enclosures. Steve, what do you think of all this? - ben
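The two back-of-envelope calculations above are easy to sanity-check. A quick sketch, using the figures quoted in this post (2GB RAM, 26.3MB/sec, a nominal 45W FireWire budget) as assumptions rather than measurements:

```python
def ram_disk_seconds(total_ram_mb, reserved_mb, rate_mb_s):
    """Seconds of footage that fit on a RAM disk made of the leftover memory."""
    return (total_ram_mb - reserved_mb) / rate_mb_s

def fw_raid_power_ok(n_drives, watts_per_drive, bus_watts=45.0):
    """Rough check: can the FireWire bus power the whole drive chain?"""
    return n_drives * watts_per_drive <= bus_watts

# 2GB laptop, 512MB reserved, 26.3MB/sec capture:
print(round(ram_disk_seconds(2048, 512, 26.3), 1))  # ~58.4 sec -- "just under a minute"
```

Three drives at the 2.7W operating figure (8.1W total), or even four at the 5.5W spin-up peak (22W), stay comfortably inside the nominal 45W bus budget.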
Ben and Steve,
The Eden processor consumes less than a laptop's. If you don't believe me, go check the specs of the Eden and the Pentium M or Athlon Mobile. About the compression system you are right, that's the way!! (I've been saying that since I entered this board, and now I'm trying to develop a solution for that.) Be aware that for every normal drive you add to the system you add around 15 watts to your power requirements, so keeping one drive and going compression is a more power friendly solution. About displays, there are many 7-inch touchscreen LCD displays with a VGA connector, with a resolution of 1024x768, for around $300 (see the Mini-ITX website). My Maxtor disk, 7200rpm, says: around 900 milliamps for 12V and 630 milliamps for 5V. Explanation of huffyuv internals: http://home.pcisys.net/~melanson/codecs/huffyuv.txt
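Those amp ratings convert straight to watts (rail voltage times current). A quick check of the quoted Maxtor figures -- a sketch, not a measurement:

```python
def drive_watts(amps_12v, amps_5v):
    """Drive power draw summed over its 12V (motor) and 5V (logic) rails."""
    return 12.0 * amps_12v + 5.0 * amps_5v

print(drive_watts(0.9, 0.63))  # ~13.95W -- roughly the "15 watts per drive" above
```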
By the way, I just got a Mac G5 with OSX and FCP HD . . . you bet your butt I'd buy that camera!!!!!!
I just bought an 8-inch touch-screen. I will keep you posted when it arrives!
On the SI-3300:
Yes, you will get a small imaging area -- probably only useful with C-mount lenses or ground glass -- it would be 1920 x 3.2 microns in length. This is not meant to be a great solution, just a cheap one -- this camera is only $300 more than the SI-1300. That would be good if the smearing goes away and you can do 1920x1080 @ 24fps @ 10bit. At the high res, the imaging area is about the same as the SI-1300, so *I think* the DOF should be the same. Juan: Yes, we will have something better -- the Altasens SI-1920HD. This is 1920x1080 up to 60fps, full 12 bit, 5 micron pixels. On storage: Someone with good system sense, armed with a compression processing benchmark, needs to review: how much CPU is needed to do lossless and visually lossless compression; how much power, space and $$ that represents; how much power and $$ that saves on the disk drive/array; and how much space that saves. I'm thinking that the CPU might have to be 2+GHz for real-time compression, requiring a larger mobo. You maybe lose the extra hard drive, so there is a space, power and $$ savings. At the least, a few systems (not individual parts) need to be compared: maybe an Eden with a FW RAID, a micro-itx with a powerful CPU and a single drive, a Shuttle/Epox. The answer may be different for different people, but it could spell things out a bit.
Altasens? at the FG company? sounds like that is moving along at a good pace
Also, I see an ITX mobo for the P4! This would take care of the size issue and have enough power to do compression in real time... hmm. What you're saying, Steve, is that you can't do some sort of pixel binning on that 3300?
I've been noticing questions and issues previously covered, but forgotten, here is a summary of some answers:
History: Originally, Steve I in the Viper thread got Sumix interested in making a camera for us. In the meantime Obin got his own version of this project going with the Silicon Imaging camera in the Russian film camera case, and gratefully got Steve N's support. The main project boils down to this: to make a low cost camera system that is suitable for independent film production and low end professional, and prosumer, video production. The aim is a system that consists of any cameralink box type HD camera connected to a portable computer system, preferably in shoulder ENG and handheld casings. Rob was doing software to make the capture, compression, and storage transparent, professional and simple, with universal codec support for transparent file format, transmission and standard NLE video editing. By using this glue software, and working out the best parts, we hope to make a well integrated, simple to put together and use system, not a hack. At the moment we are all focussing on prototyping on specific cameras interfaced to normal computers with specific codecs, compression and NLEs. Sumix is planning a compression based camera, as well as a 3 chip. Silicon Imaging would like to do a compression based camera, if somebody else provides the finished FPGA design. I have another manufacturer looking at the compression issue, and Obin (I think) has also approached somebody. Sumix and SI currently think that Altasens chips are the best. Silicon Imaging, Sumix and many others have non-Altasens cameralink cameras. Our own compression, codecs and FPGA are future projects, after the software is set up. Many alternatives have been discussed and suggested, and there is a separate Cinema camera FPGA thread. Gigabit Ethernet is what we are looking at instead of Firewire. It is also forwards compatible with 10 Gigabit Ethernet, which is way above Firewire's 3.2Gbit/s optical.
I have also suggested the cheap consumer HDMI (5Gbit/s DVI in a USB type plug) standard, USB 3, and PCI-E Desktop. With USB2 and Gigabit Ethernet, standard drivers are inefficient and won't get near the max data rate; you need custom drivers to get close to the bus bandwidth. USB2 has been discussed extensively with Steve N of SI. The problem is that you get lost frames, because the USB hardware requires a lot of extra processing power, pixels are packed in 8 or 16 bits at a time, and the burst frame bandwidth is controlled by the shutter speed. When you read 10 bits, it is sent across as a 16 bit value (I know, really poor efficiency). Using a 1/48sec shutter requires double the bandwidth, and with overheads that gets close to saturation. Altogether unreliable. We are looking at ITX because of the cheap consumer based mini-itx and nano-itx form factors, and very low power requirements. There are faster processors and extra processing capabilities being developed by VIA that might negate the use of a P4. Or something like that. Sounds right, guys? You will find more information about components and configurations here: 3 channel 36 bit 1280 X 720 low $ camera - Viper? 4:4:4 10bit single CMOS HD project Home made camera designs? The detailed guide to this project is presently at: www.obscuracam.com I have set up some additional threads if anybody wants to use them in future: Home Made HD Cinema Cameras - General Discussion Home Made HD Cinema Cameras - Problems and Performance Home Made HD Cinema Cameras - Technical Discussion
Obin:
Yes, we have running cameras for the SI-1920HD. The SI-3300 does have subsampling, but only in integer steps -- 2048x1536, 1024x768, 682x512, and so on. The Altasens is pretty unique. It has an interline mixing mode to get down to 1280x720 at the full FOV.
So the SI-1920HD is a camera based on the Altasens???
I'm getting lost.... Anyway, the SI-3300 looks quite good for me, looking for a 1920x1080 camera at 24fps -- if it is cheap enough and can have a 1/24 exposure, it is right for me; I like the mechanical shutter solution. So, what is the problem with the SI-3300? Low sensitivity?? I'm not getting it, sorry, got lost with your other posting.... If the problem is sensitivity, can't any of these things be used to improve the camera for our requirements??? http://www.o-eland.com/faceplate.htm
Wayne, >Thank you. You say: "Is to make a low cost camera system that is suitiable for independent film production and low end, and prosumer, video production...
As you know, we have NOW the chance to make a movie with those cameras. My company has made a real working 35mm DOF solution for HDTV, and we have a focus follow system for all still camera lenses; we can also make a movie-camera-like case. This is our vision of a low-cost camera design: the HD camera head as part one. The second part is the controller+computer+HDD+power unit. These two parts can be connected together into a single unit, but used on a Steadicam it is a very low weight steady system, if the controller+computer+HDD+power unit serves as the counterweight. This two-unit design brings some advantages. The head (with the SI-1300 inside), with all optical and mechanical parts, can be made NOW. It will have a 35mm movie-camera look. The cameraman will work with this unit. This design will not change. The second unit is, at the beginning, a PC on the end of a 10m cameralink cable. This unit can be changed, but with this system they can start making the movie NOW. We will work together with all the people here. But my problem is, we need a working system now. As I say, it can be a PC, but what hardware? Who has done tests with an industrial camera (SI-1300), except Obin? Silicon Imaging sells the SI-1300 camera with the Epix PIXCI-CL1 grabber card. This card has no memory. They also sell a card from Matrox, with 32MB. What about lost frames? What hardware will work? I need answers. And I need short videos, not still frames, to see what picture quality the software can write to HDD (24fps/10bit).
Juan,
The processor itself consumes less than (for example) a Pentium M, but I think the motherboard as a whole consumes more than a laptop motherboard. Laptop components are lower-power and higher-price... - ben
I believe Obin has posted some footage.
Re: Realtime compression
Lots of non-FPGA discussion happening here, a fair bit covered in the other threads. So I'll have to drop in here as well. The Cinema threads are available if you want to keep non-FPGA discussion out of this and the 10bit thread.
Have a good day. Thanks Wayne.
<<<-- Originally posted by Rai Orz : Wayne, >Thank you. You say: "Is to make a low cost camera system that is suitiable for independent film production and low end, and prosumer, video production... -->>>
Thanks for pointing out that error; it should have been "low end professional, and prosumer". As for the rest, it is a work in progress. I don't think much thought and research has gone into investigating which disks and setup produce the best performance. The problem is that working this out for real systems was supposed to happen when the software was out in a few months. If you want to do something, talk to the Robs and Obin privately for now. If you are a month away, talk to Steve N; if you are at the end of the year (which it doesn't look like), talk to Sumix. Then do research to find the best motherboard and highest-speed disks. It will require a detailed look at the specs and real-world tests from reviews. Fortunately gamers and overclockers are obsessed about this, so many reviews are on the web. But it also pays to have an assessment done by a server-oriented site. Another unrecognised problem here is that modern hard disks are now using plastic bushings; they will fail quickly compared to models from a few years ago. The fail times will probably be thousands, maybe tens of thousands, of hours, but check the rated fail times. When using this for raw capture, virtual memory does not help, so you will be using the HD hundreds of times more than normal. For a production that should not be a problem, but up to a year may be a different matter (that is why warranties moved down to a year). I think major productions, like yours, don't really need portability, so you can select very fast, convenient hardware and power sources until portable options become available. If you do go portable, for a battery pack can I suggest a vest with batteries built in (easier to balance than a belt), but not acid batteries (for obvious reasons). Maybe there are some people who could volunteer to scientifically research system parts and disks for you and the rest of us. Thanks Wayne. Previously written replies:
From your other post I gather that you want to put this camera project in the credits. Could I suggest that you have a picture of your camera on black with the title/sub-title "Shot with" whatever name and logo we come up with, or "DVinfo.net Alternative Imaging HD Cinema Camera project" with link and DVinfo logo (provided they agree).
http://www.digitalanarchy.com/micro/micro_faq.html This compression algorithm, posted earlier, is claiming: "it's completely lossless, but gives you file sizes that are about 2-4 times smaller than most similar lossless codecs" <<<-- Originally posted by Ben Syverson : Thanks Les! I've read the paper, but I'm going a different route. I'm developing a logic-based (as opposed to mathematical) de-zippering routine. I'll post some results in the next hour or so. - ben -->>> Yes, logic, not sleight-of-hand maths. This is my design approach too. Even though Ting (the paper's author) has done integer based Minimal Instruction Set computer processors (I have one here), he has always had a maths bent. Thanks Wayne.
Compression Idea.
OK, let's call it. Rob, can we use onboard AGP 3D coprocessors to replace the need for an FPGA? Also, can we use an AGP card with DVI input (for a DVI camera)?
I would like to find the lowest cost Cameralink card too. Robs and Steve, I have an idea to easily achieve compression on a 3-chip camera. Steve told us some time ago that colour channels follow each other in real life (meaning the main adjustment is in the luminance), allowing Bayer to achieve good results. What if we save the 3-chip input as a Bayer pattern plus variations from the pattern? So what we end up with is:

GR
BG

Then the variations, which can be any combination of the following selection bits, whichever best suits the individual frame:

- 1 bit: variation exists yes/no
  - 3 bits: variation exists in the R, G or B channel
    - 3 bits x 2 (R, B): which pixel the variation is in
    - 2 bits (G): which pixel the variation is in
      - 1 bit: negative/positive variation, or full/small variation
      - Variations: full = full value (i.e. 10 bit), small = 5 bit (or whatever). (Negative = all bits required to subtract to 0 value, positive = all bits required to get to full value.)

Or whatever system makes more sense (I've been up all night). With this, hopefully we can reduce bandwidth to half easily. What do you think? Thanks.
Re: Compression Idea.
--- edit --- I looked at some of the resources on the GPGPU site, and it appears that a GPU won't do what we want. Transferring data from the GPU into main memory appears to be too slow to be useful for real-time applications.
Sorry, some consumer cards (more in the HD future) have DVI input.
I am talking about saving the three-chip output as a Bayer pattern (as allegedly hue changes less than luminance) for three times less data. So the Bayer pattern is now used to predict the value for each channel, as the blue, red, or green hue component would stay the same across multiple pixels. If there is a variation from the predicted Bayer value, then we record that an individual channel on an individual pixel is different, and by how much. The layers of bits are there to reduce the wastage of having a bit for each channel not addressed at each Bayer pixel to register variations from the predicted value. The layers also allow smaller values to be used to represent the variation value. The indentation represents the inner nesting of the data. Oh... the stupid forum board has taken away my nesting. If, statistically, hue remains the same most of the time, we save storage space. But then again it could all be rot, and what I originally thought was true (that it is rot). A bit complex, but maybe useful. What you end up with is simple lossless compression.
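One way to read this scheme in code -- a simplified sketch of my own, not Wayne's exact bit layout: at each Bayer position, keep the sampled channel as the base, predict the other two channels to equal it, and store only the signed deltas where the prediction misses. Since every delta is kept, the scheme stays lossless.

```python
def encode_pixel(r, g, b, bayer_channel):
    """Encode one 3-chip pixel against its Bayer prediction.

    bayer_channel: the channel ('R', 'G' or 'B') the Bayer mosaic samples
    at this position.  That value is kept as-is; the other two channels
    are predicted to match it, and only nonzero deltas are recorded."""
    base = {'R': r, 'G': g, 'B': b}[bayer_channel]
    deltas = {}
    for name, value in (('R', r), ('G', g), ('B', b)):
        if name != bayer_channel and value != base:
            deltas[name] = value - base  # signed variation from prediction
    return base, deltas

def decode_pixel(base, deltas):
    """Invert encode_pixel: absent delta means 'equal to the base'."""
    return tuple(base + deltas.get(name, 0) for name in ('R', 'G', 'B'))
```

Where hue tracks luminance closely, most `deltas` dicts come out empty and the frame shrinks toward one value per pixel; on saturated detail it degrades toward the full three channels, which matches the "could all be rot" caveat.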
Re: Re: Compression Idea.
I had my programmer implement a bicubic rescale to 1280x720 in the GPU of a late-model Nvidia card. He was only able to get it running at about 4fps, for some reason.
He did, however, learn how to code the GPU, for what it's worth. Programming it was a Beech, he said. -Les <<<-- Originally posted by Rob Scott : I looked at some of the resources on the GPGPU site, and it appears that a GPU won't do what we want. Transferring data from GPU into main memory appears to be too slow to be useful for real-time applications. -->>>
Bicubic is a pretty processor-intensive operation. You need a neighborhood of at least 4x4 pixels to compute each pixel, and the weighting function is pretty complex.
A simple linear interpolation doesn't need to see a neighborhood. Even with my edge logic, I'm positive it can be done in real-time, especially if you leverage the GPU, either via pixel shaders or CoreVideo. But the real question is: why bother to do it in real-time? I'd rather capture footage 100% raw, so that the CPU can focus on channeling data to the hard drive. Then afterwards you can do your post processing... If you spend all your time optimizing code to be realtime, you don't have any time to make films.
BTW, how come you never see any mention of bicubic interpolation for Bayer images? I've heard of and seen bilinear (doesn't look good), but nothing about bicubic (like Photoshop, etc.).
Is this something too hard to do (slow), or is it simply not good looking/impossible because of the Bayer pattern?
Bicubic is crap. That's probably why. :) Bicubic sharpens (edge-enhances) the image as it interpolates. I don't know about you, but nothing says "video" to me like a sharpened image.
Much better is spline interpolation. There's also Mitchell, Catmull-Rom, Sinc, etc. Check out this shootout of the Cubic, Spline and Sinc algorithms for a visual idea of the differences between them. The green channel really doesn't need anything but linear (not even bilinear) interpolation, since there's twice as much information as in R or B. Spline interpolation would be nice on the R and B channels, but personally it's not worth the processing time to me unless it's a greenscreen shot...
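For concreteness, here is roughly what the cheap options look like -- my own sketch, not linBayer's actual code: a linear fill from two neighbours for the dense green channel, a bilinear fill from four for R/B, and a Catmull-Rom cubic as one example of the spline-family kernels mentioned.

```python
def interp_linear(left, right):
    """Fill a missing green sample from its two nearest green neighbours."""
    return (left + right) / 2.0

def interp_bilinear(up, down, left, right):
    """Fill a missing R or B sample from its four nearest same-colour samples."""
    return (up + down + left + right) / 4.0

def catmull_rom(p0, p1, p2, p3, t):
    """Catmull-Rom cubic evaluated between p1 and p2 (t in [0, 1]),
    one of the smoother kernels in the spline/Mitchell family."""
    return 0.5 * (2 * p1
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t * t * t)
```

The cubic needs a 4-sample neighbourhood per output sample, which is where the extra processing time over the linear fills comes from.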
Hmm . . .
that spline interpolation does look nice :-) I'm already doing your linear filter on my G5 in less than a second -- how much longer do you think it would take if you made an optional interpolation plug-in that did spline interpolation on all three channels for green-screen type applications? For the highest-quality work, even if I'm at 4 seconds per frame, I don't think that's too high a price to pay. A good-quality algorithm like you have right now for quickie stuff, and then a good spline-based interpolation algorithm for the stuff that we either plan to blow up big or to use for special effects -- and I was hoping that was something these cameras could be used for, since they are uncompressed -- or simply for the highest quality, processing all three channels: red, green (I know you said the green doesn't need it, but if we're going to take the computational hit, we might as well go all-out), and blue, so that there are no compromises. Does that sound like a good idea?
Ben, I think you're right on CAPTURE NOW and do your compression later... for now anyway. This will allow a 1.5GHz 7-watt VIA CPU on an ITX 5" x 5" mainboard to be all we need for speed, plus a plugin/background process that does compression. This is the first approach I think I will take, anyway.
We now have raw color images showing on screen with our capture software and full camera control working... next up is disk writing... and doing multithreading. I think I will use the ITX mainboard with the fastest VIA chip we can get and 2 SATA disks for capture of (I hope) 60fps 1280x720 8bit and 48fps 1280x720 10bit. I placed the order for a 1024x768 touch screen and am waiting for it to arrive. BTW, we have named the capture ware CineLink... what do you guys think? Can we get 60fps from 8bit? What will the datarate be?
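The datarate question is a one-line calculation, assuming the samples are packed tight (as noted earlier in the thread, some capture hardware pads 10-bit samples out to 16 bits, which inflates these numbers):

```python
def data_rate_mb_s(width, height, bits_per_pixel, fps):
    """Uncompressed single-sensor (Bayer) data rate in MB/sec (1MB = 2**20 bytes)."""
    return width * height * bits_per_pixel / 8.0 * fps / 2**20

print(round(data_rate_mb_s(1280, 720, 10, 24), 1))  # ~26.4 -- close to the 26.3MB/sec quoted earlier
print(round(data_rate_mb_s(1280, 720, 8, 60), 1))   # ~52.7 -- Obin's 60fps 8bit case
```

So 60fps at 8bit needs roughly twice the bandwidth of 24fps at 10bit, which is where the two SATA disks come in.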
Yeah -- that's what I'm planning -- within a couple versions of linBayer I'll build in a popup menu for the R&B interpolation with Linear and Spline.
The reason why it doesn't make sense to do spline interpolation on the green is that we're doing all this logic-based stuff on top of the interpolation. So the quality of the initial interpolation doesn't really matter. But a 16 (4x4) pixel spline interpolation will be super nice on the R&B channels...
how 'bout "GorillaCam" ;-) Seriously, CineLink does sound nice, and has some sort of professional ring to it.
Do a google search for CineLink -- ten bucks says it's a registered trademark. Why not come up with something unique?
Like GorillaCam/GuerrillaCam? :)
For good green-screening, you'll want the best possible algorithm on the green channel, without cutting corners. Looks like CineLink is some sort of Bosnian film festival.
Good name, but it is NOT going to LOOK as good as WinAmp... I don't have money for SEXY UI stuff!!! LOL... well, maybe I can feed the programmer lots of Oreo's and weed in exchange for a SEXY UI!? ;)
hahhah: http://sff.ba/10SFF/program/eng/cinelink.htm oh well... I guess we will just share names!
Jason,
A 0.5 pixel (or even 0.25) blur is all you need to erase those artifacts -- just apply it after linBayer and before the keyer. The reason why I don't build any softening into linBayer itself is that it's designed to deliver the sharpest possible image. Unfortunately, if your image is noisy, the logic has a hard time putting the image back together. The options we built in should kill 90% of the "gridding," but the remaining 10% is the price you pay for superior sharpness... - ben
DV Info Net -- Real Names, Real People, Real Info!
1998-2025 The Digital Video Information Network