View Full Version : 4:4:4 10bit single CMOS HD project
Steve Nordhauser July 20th, 2004, 07:34 AM On the SI-3300:
Yes, you will get a small imaging area - probably only useful with C-mount lenses or ground glass - it would be 1920 x 3.2 microns in length. This is not meant to be a great solution, just a cheap one - this camera is only $300 more than the SI-1300. That would be good if the smearing goes away and you can do 1920x1080 @ 24fps @ 10 bit. At the high res, the imaging area is about the same as the SI-1300, so *I think* the DOF should be the same.
Juan: Yes we will have something better - the Altasens SI-1920HD. This is 1920x1080 up to 60fps, full 12 bit. 5 micron pixels.
On storage:
Someone with good system sense, armed with a compression processing benchmark, needs to review:
- how much CPU is needed to do lossless and visually lossless compression
- How much power, space and $$ that represents
- How much power and $$ that saves on the disk drive/array
- How much space that saves
I'm thinking that the CPU might have to be 2+ GHz for real-time compression, requiring a larger mobo. You might lose the extra hard drive, so there is a space, power and $$ savings. At the least, a few systems (not individual parts) need to be compared. Maybe an Eden with a FireWire RAID, a mini-ITX with a powerful CPU and a single drive, and a Shuttle/Epox.
The answer may be different for different people, but it could spell things out a bit.
Obin Olson July 20th, 2004, 08:10 AM Altasens? At the FG company? Sounds like that is moving along at a good pace.
Also, I see an ITX mobo for the P4! This would take care of the size issue and have enough power to do compression in real time... hmm.
What you're saying, Steve, is that you can't do some sort of pixel binning on that 3300?
Rob Scott July 20th, 2004, 08:20 AM Obin Olson wrote:
What you're saying, Steve, is that you can't do some sort of pixel binning on that 3300? IIRC, to do pixel binning you need an integer ratio between the resolutions -- unfortunately, 1280 doesn't go into 2048 evenly (2048/1280 = 1.6). To use the full chip area you'd have to interpolate. (Again, IIRC.)
Wayne Morellini July 20th, 2004, 08:21 AM I've been noticing questions and issues previously covered, but forgotten, here is a summary of some answers:
History:
Originally Steve I, in the Viper thread, got Sumix interested in making a camera for us. In the meantime Obin got his own version of this project going with the Silicon Imaging camera in the Russian film camera case, and gratefully got Steve N's support.
The main project boils down to this:
To make a low-cost camera system that is suitable for independent film production and low end professional, and prosumer, video production.
The aim is a system that consists of any Camera Link box-type HD camera connected to a portable computer system, preferably in shoulder ENG and handheld casings. Rob was writing software to make the capture, compression, and storage transparent, professional and simple, with universal codec support for transparent file formats, transmission and standard NLE video editing. By using this glue software, and working out the best parts, we hope to make a well-integrated, simple-to-put-together and easy-to-use system, not a hack. At the moment we are all focusing on prototyping with specific cameras interfaced to normal computers with specific codecs, compression and NLEs.
Sumix is planning a compression based camera, as well as a 3 chip.
Silicon Imaging would like to do a compression-based camera, if somebody else provides the finished FPGA design. I have another manufacturer looking at the compression issue, and Obin (I think) has also approached somebody.
Sumix and SI currently think that Altasens chips are the best.
Silicon Imaging, Sumix and many others have non-Altasens Camera Link cameras.
Our own compression, codecs and FPGA are future projects for after the software is set up. Many alternatives have been discussed and suggested, and there is a separate Cinema camera FPGA thread.
Gigabit Ethernet is what we are looking at instead of Firewire. It is also forwards-compatible with 10 Gigabit Ethernet, which is way above Firewire's 3.2 Gbit/s optical. I have also suggested the cheap consumer HDMI standard (5 Gbit/s, DVI signalling in a compact plug), USB 3, and desktop PCI-E.
With USB2 and Gigabit Ethernet, standard drivers are inefficient and won't get near the max data rate; you need custom drivers to get close to the bus bandwidth.
USB2 has been discussed extensively with Steve N of SI. The problem is that you get lost frames because the USB hardware requires a lot of extra processing power, pixels are packed 8 or 16 bits at a time, and the burst frame bandwidth is controlled by the shutter speed. When you read 10 bits, each pixel is sent across as a 16 bit value (I know, really poor efficiency). Using a 1/48 sec shutter requires double the burst bandwidth, and with overheads that gets close to saturation. Altogether unreliable.
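A quick back-of-the-envelope sketch (purely illustrative figures, assuming a 1280x720 sensor at 24 fps and a practical USB2 throughput of roughly 35 MB/s) shows why the 16-bit padding and the shutter-driven burst rate push USB2 toward saturation:

```python
# Rough check of the USB2 problem described above. All numbers are
# assumptions for illustration, not measured figures.

def stream_mbytes_per_sec(width, height, fps, bytes_per_pixel):
    return width * height * fps * bytes_per_pixel / 1e6

padded = stream_mbytes_per_sec(1280, 720, 24, 2)     # 10 bits sent as 16
packed = stream_mbytes_per_sec(1280, 720, 24, 1.25)  # 10 bits packed tightly

print(f"padded 16-bit transfer: {padded:.1f} MB/s")  # ~44 MB/s average
print(f"tightly packed 10-bit:  {packed:.1f} MB/s")  # ~28 MB/s average

# A 1/48 s shutter at 24 fps means each frame is read out in half the
# frame period, so the burst rate is double the average -- well past
# what USB2 can sustain in practice:
print(f"burst rate with 1/48 shutter: {2 * padded:.1f} MB/s")
```

Even the average padded rate is uncomfortably close to a realistic USB2 ceiling, and the burst rate is roughly double it.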
We are looking at ITX because of the cheap consumer mini-ITX and nano-ITX form factors and their very low power requirements. There are faster processors and extra processing capabilities being developed by VIA that might negate the need for a P4.
Or something like that. Sound right, guys?
You will find more information about components and configurations here:
3 channel 36 bit 1280 X 720 low $ camera - Viper? (www.dvinfo.net/conf/showthread.php?s=&threadid=25296)
4:4:4 10bit single CMOS HD project (http://www.dvinfo.net/conf/showthread.php?s=&threadid=25808)
Home made camera designs? (http://www.dvinfo.net/conf/showthread.php?s=&threadid=25705)
The detailed guide to this project is presently at:
www.obscuracam.com
I have setup some additional threads if anybody wants to use them in future:
Home Made HD Cinema Cameras - General Discussion (http://www.dvinfo.net/conf/showthread.php?s=&threadid=28799)
Home Made HD Cinema Cameras - Problems and Performance (http://www.dvinfo.net/conf/showthread.php?s=&threadid=28780)
Home Made HD Cinema Cameras - Technical Discussion (http://www.dvinfo.net/conf/showthread.php?s=&threadid=28781)
Steve Nordhauser July 20th, 2004, 08:42 AM Obin:
Yes, we have running cameras for the SI-1920HD.
The SI-3300 does have subsampling but only in integer steps -
2048 x 1536
1024 x 768
682 x 512 and so on.
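Those integer steps can be sketched trivially (illustrative only) -- and they also show why there is no direct path from the full 2048-wide frame down to a 1280- or 1920-wide one:

```python
# Integer-step subsampling as Steve describes for the SI-3300: the full
# 2048x1536 frame can only be decimated by whole factors.

full_w, full_h = 2048, 1536
for step in (1, 2, 3):
    print(f"step {step}: {full_w // step} x {full_h // step}")
# step 1: 2048 x 1536
# step 2: 1024 x 768
# step 3: 682 x 512

# 2048/1280 is not an integer, hence no direct binning to 1280 wide:
print(2048 / 1280)  # 1.6
```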
The Altasens is pretty unique. It has an interline mixing mode to get down to 1280x720 at the full FOV.
Juan M. M. Fiebelkorn July 20th, 2004, 09:01 AM So the SI1920HD is a camera based on the Altasens???
I'm getting lost....
Anyway, the SI-3300 looks quite good to me; I'm looking for a 1920x1080 camera at 24 fps. If it is cheap enough and can have a 1/24 exposure, it is right for me. I like the mechanical shutter solution.
So, what is the problem with the SI-3300?
Low sensitivity?
I'm not getting it, sorry, got lost with your other posting....
If the problem is sensitivity, can't any of these things be used to improve the camera for our requirements?
http://www.o-eland.com/faceplate.htm
Rai Orz July 20th, 2004, 09:40 AM Wayne, thank you. You say: "Is to make a low cost camera system that is suitiable for independent film production and low end, and prosumer, video production..."
As you know, we have NOW the chance to make a movie with this camera. My company has made a real working 35mm DOF solution for HDTV, we have a focus-follow system for all still camera lenses, and we can also make a movie-camera-like case. This is our vision of a low-cost camera design:
The HD camera head is part one. The second part is the controller+computer+HDD+power unit. These two parts can be connected together into a single unit, but used on a Steadicam it makes a very low-weight steady system, if the controller+computer+HDD+power unit is used as the counterweight.
This two-unit design brings some advantages. The head (with the SI-1300 inside), with all the optical and mechanical parts, can be made NOW. It will have a 35mm movie-camera look. This is the unit the cameraman will work with. This design will not change. The second unit is, at the beginning, a PC on the end of a 10m Camera Link cable. This unit can be changed later, but with this system they can start making the movie NOW.
We will work together with all the people here. But my problem is, we need a working system now. As I say, it can be a PC, but what hardware?
Who has tested an industry camera (the SI-1300), except Obin?
Silicon Imaging sells the SI-1300 camera with the Epix PIXCI-CL1 grabber card. This card has no memory. They also sell a card from Matrox, with 32MB. What about lost frames? What hardware will work? I need answers.
And I need short videos, not still frames, to see what picture quality the software can write to HDD (24fps/10bit).
Ben Syverson July 20th, 2004, 10:15 AM Juan,
The processor itself consumes less than (for example) a Pentium M, but I think the motherboard as a whole consumes more than a laptop motherboard. Laptop components are lower-power and higher-price....
- ben
Rob Scott July 20th, 2004, 10:22 AM Rai Orz wrote:
Who has tested an industry camera (the SI-1300), except Obin?
And I need short videos, not still frames, to see what picture quality the software can write to HDD (24fps/10bit). I have been getting live video capture with my own software, but I haven't concentrated (yet) on putting it into a standard format.
I believe Obin has posted some footage.
Silicon Imaging sells the SI-1300 camera with the Epix PIXCI-CL1 grabber card. This card has no memory ... What about lost frames? It uses system memory to buffer the frames, not on-card memory. As long as the software is efficient enough, there should be no lost frames. The hard drive is the most critical piece there (for captures over a few seconds, anyway).
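A minimal sketch of that buffering scheme (all names here are hypothetical, not from the real capture software): frames queue up in system memory while a writer thread drains them to disk, and frames are only lost if the disk falls behind long enough for the queue to fill.

```python
# Illustrative producer/consumer capture path: grabber -> RAM queue ->
# writer thread -> disk. The hard drive is the bottleneck; the RAM
# queue only smooths out short stalls.

import queue
import threading

frame_queue = queue.Queue(maxsize=64)   # ~64 frames of headroom in RAM
dropped = 0

def writer(path):
    with open(path, "wb") as f:
        while True:
            frame = frame_queue.get()
            if frame is None:           # sentinel: capture finished
                return
            f.write(frame)              # sustained disk rate must keep up

def on_frame(frame_bytes):
    """Called by the grabber for each incoming frame."""
    global dropped
    try:
        frame_queue.put_nowait(frame_bytes)
    except queue.Full:                  # disk fell behind: frame is lost
        dropped += 1

t = threading.Thread(target=writer, args=("capture.raw",))
t.start()
for _ in range(10):                     # simulate ten incoming frames
    on_frame(b"\x00" * 1024)
frame_queue.put(None)
t.join()
print("dropped:", dropped)
```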
Rob Scott July 20th, 2004, 10:29 AM Ben Syverson wrote:
Here's how it would work for a 1280x720 image. I'm already doing all four of those steps, Ben, but I don't currently have a suitable lossless compression algorithm, and I'm not experienced at writing them myself. I've looked at LZO (http://www.oberhumer.com/opensource/lzo/) but I'm not sure if it will be usable for this application. Any ideas?
Wayne Morellini July 20th, 2004, 10:41 AM Lots of non FPGA discussion happening here, a fair bit covered in the other threads. So I'll have to drop in here as well. The Cinema threads are available if you want to keep non FPGA discussion out of this and the 10bit thread.
Have a good day.
Thanks
Wayne.
Wayne Morellini July 20th, 2004, 11:28 AM <<<-- Originally posted by Rai Orz : Wayne, >Thank you. You say: "Is to make a low cost camera system that is suitiable for independent film production and low end, and prosumer, video production... -->>>
Thanks for pointing out that error; it should have been "low end professional, and prosumer".
As for the rest, it is a work in progress. I don't think much thought and research has gone into investigating which disks and setup produce the best performance. The problem is that working this out for real systems was supposed to happen when the software was out in a few months.
If you want to do something, talk to the Robs and Obin privately for now. If you are a month away, talk to Steve N; if you are at the end of the year (which it doesn't look like), talk to Sumix. Then do research to find the best motherboard and highest-speed disks. It will require a detailed look at the specs and real-world tests from reviews. Fortunately, gamers and overclockers are obsessed about this, so many reviews are on the web. But it also pays to have an assessment done by a server-oriented site.
Another unrecognised problem here is that modern hard disks now use plastic bushings; they will fail quickly compared to models from a few years ago. The fail times will probably be thousands, maybe tens of thousands, of hours, but check the rated fail times. When using a disk for raw capture, virtual memory does not help, so you will be using the HD hundreds of times more than normal. For a production that should not be a problem, but up to a year of use may be a different matter (that is why warranties moved down to one year).
I think major productions, like yours, don't really need portability, so you can select very fast, convenient hardware and power sources until portable options become available.
If you do go portable, for a battery pack can I suggest a vest with built-in batteries (easier to balance than a belt), but not acid batteries (for obvious reasons).
Maybe there are some people who could volunteer to scientifically research system parts and disks for you and the rest of us.
Thanks
Wayne.
Previously written replies:
Originally posted by Rai Orz : Rob, Obin, Joshua and all others,
It was my idea to shoot a movie with the Silicon Imaging cam. I know the work time for the movie project that is just now beginning. And so, it was also my idea to shoot now, at the beginning, with the unsuitable Silicon Imaging software and a PC in a car. I know/hope better software will come soon, and also smaller hardware. I spoke with the producer and director, and they say okay, it can be good for movie marketing (and I think also for CAM marketing), but it must work. And now, they want to see pictures.
Yes, this is why I am eager to see you here. We haven't figured out a name or logo yet, but I have suggested privately to the Robs and Steve I the names Computer Cam or Custom Cam. People here might want to suggest Cinema Cam instead.
From your other post I gather that you want to put this camera project in the credits. Could I suggest that you have a picture of your camera on black with the title/sub-title "Shot with" whatever name and logo we come up with, or "DVinfo.net Alternative Imaging HD Cinema Camera project" with a link and the DVinfo logo (provided they agree).
Originally posted by Steve Nordhauser : Wayne:
Norpix is getting a facelift fairly soon to make the GUI more intuitive. We shall see - I suspect this group will be way beyond there by the time it is ready.
Here is a suggestion. Robs, Steve: why not approach these capture software companies with a sample pro video capture front-end interface graphic, and ask them to implement it? Point out that we are trying to make a new market, and maybe they will do it.
Wayne Morellini July 20th, 2004, 11:32 AM Originally posted by Eric Gorski : does anyone know if recording to ram is a good idea? this would allow a laptop to be an all in one capture device?
With Rob's idea of processing during pauses it is very good; saves can be done during filming and pauses. RAM is also much cheaper and faster than normal flash.
Originally posted by Ben Syverson : Les wrote:
People willing to play with non standard and somewhat self made hacked together hardware are more likely PC people.
Yes, most definitely. I'm talking directly to the vendors: if you wanted to, you could develop an HD system that didn't need to be "hacked together," one that could be a much more mainstream solution. Think of all the FCP jockeys out there begging for a better format than DV or HDV. The filmmaking market is at least as big as the scientific market -- trust me. And a lot of those people are Mac users.
- ben
I've been trying to discuss with the Robs and Steve I about approaching vendors and sponsors, but have not heard back anything yet.
Originally posted by Obin Olson : are all using $100 agp cards that could do EVERYTHING in realtime even HD ..yet not a program uses them but games...shame...it would be Apple that changes that ;) and I am a windows person
I have suggested this to Rob. The current generation of DirectX cards' shaders can accelerate compression, but future generations of DirectX will enable complete compression capabilities. Some of these cards also have DVI input, which is HDMI-compatible, which would be a good alternative to Camera Link and GbE.
Originally posted by Obin Olson : jason I am working on design for a standalone camera very "Kinetta" like but simple and easy to use ...do you have any idea what that new microboard chipset is? i need hardware that will run with Linux for the UI of this stand-alone Rig ;)
If you're doing a Linux program, why not approach the Cinelerra NLE? I'm sure they do capture; they might even do all the Camera Link modifications for you free, just to support us.
http://www.digitalanarchy.com/micro/micro_faq.html
This compression algorithm, posted earlier, is claiming: "it's completely lossless, but gives you file sizes that are about 2-4 times smaller than most similar lossless codecs"
<<<-- Originally posted by Ben Syverson : Thanks Les!
I've read the paper, but I'm going a different route. I'm developing a logic-based (as opposed to mathematic) de-zippering routine. I'll posts some results in the next hour or so.
- ben -->>>
Yes, logic, not sleight-of-hand maths. This is my design approach too.
Even though Ting (the paper's author) has done integer-based Minimal Instruction Set Computer processors (I have one here), he has always had a maths bent.
Thanks
Wayne.
Rob Scott July 20th, 2004, 12:09 PM Wayne Morellini wrote:
...The current generation of Direct X cards shaders can accelerate compression, but future generations of Direct X will result in complete compression capabilities.This project looks interesting -- http://www.gpgpu.org/
Wayne Morellini July 20th, 2004, 01:11 PM OK, let's call it. Rob, can we use onboard AGP 3D coprocessors to replace the need for an FPGA? Also, can we use an AGP card with DVI input (for a DVI camera)?
I would like to find the lowest-cost Camera Link card too.
Robs and Steve, I have an idea to easily achieve compression on a 3-chip camera.
Steve told us some time ago that colour channels follow each other in real life (meaning the main adjustment is in the luminance), allowing Bayer to achieve good results.
What if we save the 3-chip input as a Bayer pattern plus variations from the pattern?
So what we end up with is:
GR
BG
Then the variations, which can be any combination of the following selection bits, whichever best suits the individual frame:
- 1 bit: variation exists yes/no
  - 3 bits: variation exists in R, G or B channel
    - 3 bits x2 (R, B): which pixel the variation is in
    - 2 bits (G): which pixel the variation is in
      then
      - the variations:
        - 1 bit: negative/positive variation, or full/small variation
        - the variation value: full = full value (i.e. 10 bit), small = 5 bit (or whatever). (Negative = all the bits required to subtract to a 0 value; positive = all the bits required to get to the full value.)
Or whatever system makes more sense (I've been up all night). With this, hopefully, we can easily reduce the bandwidth by half.
What do you think?
Thanks.
Rob Scott July 20th, 2004, 01:28 PM Wayne Morellini wrote:
Rob can we use onboard AGP 3D coprocessors to replace the need for FPGA? Possibly. I'll investigate it when I'm done with this phase of the project.
can we use an AGP card with DVI input? DVI input? I thought most cards had output only.
3chip input as a bayer pattern? Sorry, Wayne, I didn't follow at all. Are you talking about a 3-chip system or a 1-chip Bayer system? Or are you talking about saving the 3-chip data as a Bayer pattern? Why would you want to do that?
--- edit ---
I looked at some of the resources on the GPGPU site, and it appears that a GPU won't do what we want. Transferring data from GPU into main memory appears to be too slow to be useful for real-time applications.
Wayne Morellini July 20th, 2004, 01:56 PM Sorry, some consumer cards (more in the HD future) have DVI input.
I am talking about saving the three-chip output as a Bayer pattern (as hue allegedly changes less than luminance) for three times less data. The Bayer pattern is then used to predict the value of each channel, as the blue, red or green hue component would stay the same across multiple pixels.
If there is a variation from the predicted Bayer value, then we record that an individual channel on an individual pixel is different, and by how much. The layering of the bits is to reduce the waste of having a bit for each channel not addressed at each Bayer pixel just to register variations from the predicted value. The layers also allow smaller values to be used to represent the variation value.
The indentation represents the inner nesting of the data. Oh... the stupid forum board has taken away my nesting.
If, statistically, hue remains the same most of the time, we save storage space. But then again it could all be rot, and what I originally thought was true (that it is rot).
A bit complex, but maybe useful. What you end up with is simple lossless compression.
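For what it's worth, the idea can be sketched in a much-simplified form (the bit-level packing is omitted, and a naive left-neighbour predictor stands in for the Bayer prediction; everything here is illustrative, not a real implementation):

```python
# Simplified sketch: for each pixel of a 3-chip RGB frame, store only the
# channel the Bayer pattern would sample there, plus explicit "variation"
# records for the other channels when they differ from a prediction.
# When hue is locally constant, almost no variation records are needed.

def bayer_channel(x, y):
    # GRBG pattern: row 0 = G R, row 1 = B G
    return ("GR", "BG")[y % 2][x % 2]

def encode(rgb_frame):
    """rgb_frame: rows of (r, g, b) tuples. Returns (base, variations)."""
    base, variations = [], []
    channel_index = {"R": 0, "G": 1, "B": 2}
    for y, row in enumerate(rgb_frame):
        for x, px in enumerate(row):
            c = bayer_channel(x, y)
            base.append(px[channel_index[c]])
            # Predict the two unsampled channels from the left neighbour
            # (a stand-in for the Bayer-based prediction described above).
            for other in "RGB".replace(c, ""):
                i = channel_index[other]
                predicted = rgb_frame[y][x - 1][i] if x else px[i]
                if px[i] != predicted:
                    variations.append((x, y, other, px[i] - predicted))
    return base, variations

# Flat grey frame: every channel is predicted perfectly, so the encoded
# size is one sample per pixel with zero variation records.
flat = [[(100, 100, 100)] * 4 for _ in range(2)]
base, var = encode(flat)
print(len(base), len(var))  # 8 base samples, 0 variation records
```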
Les Dit July 20th, 2004, 03:42 PM I had my programmer implement a bicubic rescale to 1280x720 on the GPU of a late-model Nvidia card. He was only able to get it running at about 4 fps, for some reason.
He did however learn how to code the GPU, for what it's worth.
Programming it was a Beech, he said.
-Les
<<<-- Originally posted by Rob Scott :
I looked at some of the resources on the GPGPU site, and it appears that a GPU won't do what we want. Transferring data from GPU into main memory appears to be too slow to be useful for real-time applications. -->>>
Ben Syverson July 20th, 2004, 03:55 PM Bicubic is a pretty processor-intensive operation. You need a neighborhood of at least 4x4 pixels to compute each pixel, and the weighting function is pretty complex.
A simple linear interpolation doesn't need to see a neighborhood. Even with my edge logic, I'm positive it can be done in real-time, especially if you leverage the GPU, either via pixel shaders or CoreVideo.
But the real question is: why bother to do it in real-time? I'd rather capture footage 100% raw, so that the CPU can focus on channeling data to the hard drive. Then afterwards you can do your post processing...
If you spend all your time optimizing code to be realtime, you don't have any time to make films.
Jason Rodriguez July 20th, 2004, 04:09 PM BTW, how come you never see any mention of bicubic interpolation for Bayer images? I've heard of and seen bilinear (doesn't look good), but nothing about bicubic (like Photoshop, etc.).
Is this something too hard to do (slow), or is it simply not good looking/impossible because of the bayer pattern?
Ben Syverson July 20th, 2004, 04:22 PM Bicubic is crap. That's probably why. :) Bicubic sharpens (edge-enhances) the image as it interpolates. I don't know about you, but nothing says "video" to me like a sharpened image.
Much better is spline interpolation. There's also Mitchell, Catmull-Rom, Sinc, etc. Check out this shootout of the Cubic, Spline and Sinc algorithms (http://www.path.unimelb.edu.au/~dersch/interpolator/interpolator.html) for a visual idea of the differences between them.
The green channel really doesn't need anything but linear (not even bilinear) interpolation, since there's twice as much information as in R or B. Spline interpolation would be nice on the R and B channels, but personally it's not worth the processing time to me unless it's a greenscreen shot...
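To illustrate the point about green: in a Bayer mosaic the green sites form a checkerboard, so each missing green can be filled by simply averaging its green neighbours. A minimal sketch (assuming a GRBG layout; this is illustrative code, not linBayer's actual algorithm):

```python
# Fill in missing green samples of a GRBG Bayer mosaic by averaging the
# up/down/left/right neighbours, which are always green sites.

def interpolate_green(mosaic):
    h, w = len(mosaic), len(mosaic[0])
    green = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if (x + y) % 2 == 0:            # green site in GRBG
                green[y][x] = mosaic[y][x]
            else:                           # average green neighbours
                nbrs = [mosaic[y + dy][x + dx]
                        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                        if 0 <= y + dy < h and 0 <= x + dx < w]
                green[y][x] = sum(nbrs) / len(nbrs)
    return green

mosaic = [[10, 20, 10, 20],
          [20, 10, 20, 10],
          [10, 20, 10, 20]]
print(interpolate_green(mosaic)[1][0])   # neighbours are 10, 10, 10 -> 10.0
```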
Jason Rodriguez July 20th, 2004, 06:30 PM Hmm . . .
that spline interpolation does look nice :-)
I'm already doing your linear filter on my G5 in less than a second. How much longer do you think it would take if you made an optional interpolation plug-in that did spline interpolation on all three channels, for green-screen-type applications? For the highest-quality work, even if I'm at 4 seconds per frame, I don't think that's too high a price to pay. A good, quick algorithm like you have right now for quickie stuff; and then, for the stuff that we plan to blow up big or use for special effects (something I was hoping these cameras could be used for, since they are uncompressed), or simply for the highest quality, a good spline-based interpolation algorithm that processes all three channels: red, green (I know you said the green doesn't need it, but if we're going to take the computational hit, we might as well go all-out), and blue, so that there are no compromises.
Does that sound like a good idea?
Obin Olson July 20th, 2004, 06:34 PM Ben, I think you're right on CAPTURE NOW and do your compression later... for now anyway. This will allow a 1.5 GHz 7-watt VIA CPU on a 5" x 5" ITX mainboard to be all we need for speed, with a plugin/background process that does compression. This is the first approach I think I will take anyway.
We now have raw colour images showing on screen with our capture software and full camera control working. Next up is disk writing, and doing multi-threading.
I think I will use the ITX mainboard with the fastest VIA chip we can get and 2 SATA disks, for capture of (I hope) 60fps 1280x720 8-bit and 48fps 1280x720 10-bit.
I placed the order for a 1024x768 touch screen and am waiting for it to arrive.
BTW, we have named the capture ware CineLink. What do you guys think?
Can we get 60fps from 8-bit? What will the data rate be?
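The data rate is easy to work out for uncompressed Bayer data (one sample per pixel). A quick sketch, using the resolutions mentioned above:

```python
# Uncompressed Bayer data rates for the capture modes discussed above.

def mb_per_sec(w, h, fps, bits_per_pixel):
    return w * h * fps * bits_per_pixel / 8 / 1e6

print(f"{mb_per_sec(1280, 720, 60, 8):.1f} MB/s")    # 8-bit @ 60 fps
print(f"{mb_per_sec(1280, 720, 48, 16):.1f} MB/s")   # 10-bit padded to 16
print(f"{mb_per_sec(1280, 720, 48, 10):.1f} MB/s")   # 10-bit packed
```

That comes out to roughly 55 MB/s for 8-bit at 60 fps, and 55-90 MB/s for 10-bit at 48 fps depending on packing -- in the rough neighbourhood of what a striped pair of SATA drives might sustain, so the plan looks plausible on paper.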
Ben Syverson July 20th, 2004, 06:35 PM Yeah -- that's what I'm planning -- within a couple versions of linBayer I'll build in a popup menu for the R&B interpolation with Linear and Spline.
The reason why it doesn't make sense to do spline interpolation on the green is that we're doing all this logic-based stuff on top of the interpolation. So the quality of the initial interpolation doesn't really matter.
But a 16 (4x4) pixel spline interpolation will be super nice on the R&B channels...
Jason Rodriguez July 20th, 2004, 06:36 PM BTW we have named the capture ware CineLink..what do you guys think?
Naw,
how 'bout "GorillaCam" ;-)
Seriously, CineLink does sound nice, and has some sort of professional ring to it.
Ben Syverson July 20th, 2004, 06:38 PM Do a Google search for CineLink -- ten bucks says it's a registered trademark. Why not come up with something unique?
Like GorillaCam/GuerrillaCam? :)
Jason Rodriguez July 20th, 2004, 06:39 PM The reason why it doesn't make sense to do spline interpolation on the green is that we're doing all this logic-based stuff on top of the interpolation. So the quality of the initial interpolation doesn't really matter.
Are you sure? As of right now, I don't think I could pull a nice green-screen key from the linBayer plug-in -- not without doing a "blur" filter of some kind first to smooth out that faint gridding pattern. I know you've incorporated the other logic-based stuff to reduce the gridding, but greenscreening has a nasty habit of pulling out all the invisible artifacts in a frame.
For good green-screening, you'll want the best possible algorithm on the green channel, without cutting corners.
Looks like CineLink is some sort of Bosnian film festival.
Obin Olson July 20th, 2004, 06:41 PM good name but it is NOT going to LOOK as good as WinAmp...I don't have money for SEXY UI stuff!!! LOL....well maybe i can feed the programmer lots of Oreo's and weed in exchange for a SEXY UI!? ;)
hahhah:
http://sff.ba/10SFF/program/eng/cinelink.htm
Oh well... I guess we will just share names!
Ben Syverson July 20th, 2004, 06:46 PM Jason,
A 0.5-pixel (or even 0.25) blur is all you need to erase those artifacts -- just apply it after linBayer and before the keyer.
The reason why I don't build any softening into linBayer itself is that it's designed to deliver the sharpest possible image. Unfortunately, if your image is noisy, the logic has a hard time putting the image back together.
The options we built in should kill 90% of the "gridding," but the remaining 10% is the price you pay for superior sharpness...
- ben
Jason Rodriguez July 20th, 2004, 06:53 PM A 0.5-pixel (or even 0.25) blur is all you need to erase those artifacts -- just apply it after linBayer and before the keyer.
I know, but I was just thinking that it wouldn't be that much harder to incorporate spline interpolation (if it's that much better than linear) into the green channel while you're also offering that option for red and blue. Like I said, just so there are no compromises, added blur filters, etc., we can know that each channel is getting the best possible interpolation algorithm applied to it.
Ben Syverson July 20th, 2004, 06:53 PM CineLink(tm) is security software for DLP digital projection. The technology was created by TI. Check out this product page (search in the page for CineLink):
http://www.dlp.com/about_dlp/about_dlp_images_dlpcinema.asp
Obin Olson July 20th, 2004, 06:57 PM Ben, do you need more RAW footage to test with? I am not sure what I have posted on this site for download. Let me know if you need it. Do you think we could use your Bayer filter in our capture software? Maybe for some sort of trade ;) or??
It will work on RAW files, right? So we could use it as we convert from RAW data on disk into a format like AVI or others?
Ben Syverson July 20th, 2004, 07:03 PM Obin, I'm using some of the images you've posted here, and I'm getting some of your images via Jason. :) I've also been working with footage from other cameras to be sure that I'm not over-customizing the software.
I've tested it on both RGB and RAW images, and once my camera gets here (tomorrow? Thursday?) I'll finish up the last remaining things I want to do with it.
I'd be happy to pass along my algorithms as long as your software winds up being 100% free. If you charge for it, we'll have to work something out.
Actually, if you could shoot a test in front of a greenscreen (fluorescent green posterboard from the grocery store is fine -- don't worry about the lighting too much) we could see how my software fares with keying... all I need are a couple RAW 16-bit B&W Bayer images.
Jason,
What I'm saying is that even if we do spline interpolation for the G channel, we still have the logic-based de-zippering going on. So we'll have a beautiful spline interpolated G channel (with zippering) which we'll then etch over with linear patterns like we do now. The result will be the exact same amount of gridding... Having written a few keying algorithms, I don't think a post-blur will be absolutely necessary, but it will depend on how poorly the shooter has calibrated the G1 and G2 gain. Getting the green gains lined up is key with linBayer...
Jason Rodriguez July 20th, 2004, 07:08 PM Yah, the gains on the camera,
that brings up a good point.
Once you have the G1 and G2 gains balanced, do you have to adjust them for each scene, or is it a set-once and forget type thing?
Ben Syverson July 20th, 2004, 07:12 PM That I don't know yet. Steve? Obin?
Wayne Morellini July 20th, 2004, 10:04 PM Originally posted by Wayne Morellini :
Yes, this is why I am eager to see you here. We haven't figured out a name or logo yet, but I have suggested privately to the Robs and Steve I the names Computer Cam or Custom Cam. People here might want to suggest Cinema Cam instead.
What about it?
Originally posted by Les Dit : I had my programmer implement a bicubic rescale to 1280x720 on the GPU of a late-model Nvidia card. He was only able to get it running at about 4 fps, for some reason.
He did however learn how to code the GPU, for what it's worth.
Programming it was a Beech, he said.
Originally posted by Rob Scott :
Transferring data from GPU into main memory appears to be too slow to be useful for real-time applications.
Yes, programming it can be. In future implementations of DirectX, the on-chip co-processors are to be combined and upgraded into a more general-purpose CPU for graphics, with logic branching.
Rob, about your slow GPU-to-main-memory transfer rate: AGP works at extreme transfer rates with block transfers. How is this transfer being done? I can understand that some ways might be slow, but there should be a fast way to write to card memory and then transfer out in large blocks to main memory (individual word access by the mainboard CPU would be slow).
I had a look at your development blog, about the machine code; that seems to be a classic example of how much C can slow you down, but it seems a little bit large.
Nice assembler link, thanks.
Juan M. M. Fiebelkorn July 20th, 2004, 11:15 PM The problem with AGP is that it is very fast but only on one way, from system memory to Graphic card, not from Graphic card to the motherboard's memory.It isn't a symmetrical bus.
This has been solved with the arrival of PCI-E.
If someone here is interested in what I'm saying: the new graphics cards from Nvidia require two Molex power connectors and draw around 100 watts at full speed. So a P4 3.2 GHz with a GeForce 6800 sums to about 200 watts; add a RAID 3 or 0 or 1+0 or whatever you want, and you may end up with a 300-watt power requirement.
http://support.intel.com/support/processors/mobile/pm/sb/CS-007983.htm
A Pentium-M at 2 GHz needs 21 watts, and it doesn't merely match the horsepower of a P4 at 2 GHz; it exceeds it...
So I still can't understand why people keep saying a laptop needs more energy than an Eden system, unless you were going to use a 1 GHz Pentium-M (which isn't much more powerful than an Eden).
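The wattage arithmetic above can be laid out explicitly. The per-component figures below mix the numbers quoted in the thread with assumptions, as marked in the comments:

```python
# Rough power-budget comparison. Wattages are the thread's figures plus
# assumptions; real draw varies with load.

desktop = {
    "P4 3.2 GHz + board": 100,   # approximate system draw under load
    "GeForce 6800": 100,          # per the thread: ~100 W at full speed
    "RAID drives": 60,            # assumed: several 3.5-inch drives
    "other (RAM, fans)": 40,      # assumed
}

laptop = {
    "Pentium M 2 GHz": 21,        # Intel TDP figure cited above
    "chipset + RAM + drive": 25,  # assumed
}

print("desktop:", sum(desktop.values()), "W")
print("laptop: ", sum(laptop.values()), "W")
```

Even with generous assumptions for the laptop side, the desktop build lands near the 300 W figure quoted above while the Pentium-M system stays under 50 W.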
Ben Syverson July 20th, 2004, 11:40 PM Juan: thank you. Finally someone with some common sense. Pentium M is the way to go. It has the horsepower, and it doesn't draw too much power. The GPU is an interesting post-production possibility, but in no way should we rely on it for an in-camera system.
Just use a laptop. If you absolutely have to put everything in one box, take the laptop out of its case, ditch the LCD, and cram it all into a box.
Kristof Indeherberge July 21st, 2004, 02:15 AM Hi there peeps,
Kinda lost track of this thread, and just recently picked it up again. Exciting to see how things are going! Def learned a lot too, by reading Les' and the two Robs' posts (just to name a few).
Now, noticed the wavelet section over at the Wiki page and that reminded me of an old link:
http://www.maven.de/code/wavelet.zip
I'm not the person to judge if this is something that might jumpstart you guys into implementing something similar. But just maybe it does.
Keep it up!
Xtof
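For anyone who wants a feel for what code like the linked archive does, here is a minimal one-level 1-D Haar transform, the simplest wavelet. This is an illustrative sketch, not the code from the linked zip:

```python
# One level of the Haar wavelet transform splits a signal into averages
# (low-pass) and differences (high-pass). Smooth image rows produce many
# near-zero differences, which is what makes wavelet compression effective.

def haar_step(signal):
    """One forward Haar level; len(signal) must be even."""
    avgs = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    diffs = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avgs, diffs

def haar_inverse(avgs, diffs):
    """Reconstruct the original signal exactly from one level."""
    out = []
    for a, d in zip(avgs, diffs):
        out += [a + d, a - d]
    return out

row = [10, 12, 14, 14, 200, 202, 50, 48]
avgs, diffs = haar_step(row)
print("avgs: ", avgs)   # a half-resolution copy of the row
print("diffs:", diffs)  # mostly small values on smooth data
assert haar_inverse(avgs, diffs) == row
```

In a real codec this step is repeated on the averages (and in 2-D, on columns as well), then the small difference coefficients are quantized or entropy-coded.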
Wayne Morellini July 21st, 2004, 03:31 AM Juan, thanks for the AGP information; that's good. We can always target cheap, low-end, low-powered PCI Express 3D cards and integrated chipsets; they should have more than enough power. That cloud does have a silver lining.
Originally posted by Juan M. M. Fiebelkorn: So I still can't understand why people keep saying a laptop needs more energy than an Eden system, unless you were going to use a 1 GHz Pentium-M (which isn't much more powerful than an Eden).
At the moment the VIA processors are the cheapest, lowest-powered raw-720p alternative -- the base level. When they are upgraded (maybe by the time we get 1080 cameras) they might do 1080. Also, if you want to put it into a handheld case, or a small shoulder case, you will have to get a small laptop, break it apart, recable it (to fit the new shape), maybe replace the laptop drive (if it is slow), and either replace the power-hungry LCD (shortly this should not be a problem) or recable it as an external monitor.
Re-edit
--------------------
OK, I'm sold on the laptop idea (except for battery life). It is expensive, but if you factor in the display as an external monitor it is not so expensive. Basically you could even extend the LCD cabling (risky; get an expert to do it) to use it as an external monitor in a case. To make the smallest 720p system we could use the small VIA Antaur-based laptops, but none would have GigE yet.
The Windows tablets are expensive partly because (as of a year or so ago) MS charges something like $100-200 each for the OS.
Ben, if you can get that USB camera working smoothly, let us know, I know where I want to use it.
----------------------
But people can use whatever they want on their systems to suit their needs. For compression, 3-chip, 1080+ etc. you are going to need even more than a Pentium M. If we can find a way to wring performance out of the integrated 3D GPU, then we could still use the Pentium M at no extra cost.
Thanks.
Wayne Morellini July 21st, 2004, 04:06 AM <<<-- Originally posted by Ben Syverson : The smallest tablet is the Sony VAIO U70 (http://www.dynamism.com/u70/). But the micro HD inside will never keep up with the data.
-->>>
I think Transmeta might have done a smaller one that is being sold, called the qbit or something.
Actually, the VIA handheld gaming platform (they also did tablets) has a screen and buttons, and support for a 1 GHz processor (when somebody uses it). That could be broken down, using the buttons for recording.
If you want a better drive, attach an external box to the back with the drive and an extra battery/camera. You could use it Sharp-cam style, or lengthways with a small wide-angle mirror angled to view the screen.
Jason Rodriguez July 21st, 2004, 05:04 AM A 2 GHz Pentium M is very fast when you turn off all the low-power settings. A 1.6 GHz Pentium M was outdoing many 2.4 GHz P4 desktop systems on AE benchmarks, Cinebench, etc., so at 2 GHz I'd expect even better performance.
Obin Olson July 21st, 2004, 06:49 AM can you put a P M on a standard micro atx or itx board?
FedEx came with the 8-inch touch screen! It looks very nice! I think it will work very well for this camera and the capture software.
Ben Syverson July 21st, 2004, 07:43 AM I have serious doubts as to whether a 1 GHz processor from VIA (whose processors don't perform quite as well as Intel's at the same MHz) can handle the data being shoved at it.
Maybe there really isn't that much processing going on, and it's 90% what chipset and hard drive setup you're using. Rob, do we have any hope of running your software on a slower-than-normal 1ghz machine?
I have no experience with VIA's I/O chipsets and their real-world performance. Anybody?
Rob Scott July 21st, 2004, 07:54 AM Ben Syverson wrote:
Rob, do we have any hope of running your software on a slower-than-normal 1 GHz machine?
I doubt it. My software runs at about 85-90% CPU utilization on my laptop, a ~2 GHz Athlon XP-M. I think the Eden CPU could be suitable for handling a pre-compressed signal, but I'm not sure about this much data handling. On the other hand, a lot of the performance issues may have to do with memory bus speed. To really tell, we'd need someone to build one and try it out.
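For reference, the raw data rate a capture box has to sustain can be worked out directly. The disk figure below is an assumed sustained rate for a single drive of the period, not a benchmark:

```python
# What the capture machine (Eden or otherwise) has to move, per second,
# for raw Bayer 720p at 24 fps.

def raw_mb_s(width, height, fps, bytes_per_pixel):
    """Raw capture data rate in MB/s."""
    return width * height * fps * bytes_per_pixel / 1e6

bayer_720p = raw_mb_s(1280, 720, 24, 2)  # 10/12-bit samples in 16-bit words
disk_sustained = 40.0                    # assumed MB/s for one IDE/SATA drive

print(f"capture stream: {bayer_720p:.1f} MB/s")
print("single drive OK" if bayer_720p < disk_sustained
      else "needs RAID or compression")
```

Under these assumptions the stream is about 44 MB/s, slightly above what one drive can reliably sustain, which is why the thread keeps circling back to RAID or real-time compression.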
Ben Syverson July 21st, 2004, 08:15 AM @Obin: "can you put a P M on a standard micro atx or itx board?"
Probably, but that defeats the whole purpose of the Pentium M architecture. The whole idea is that you have the Pentium M, which was designed to talk to specific chipsets such as the ICH4-M.
Also, why put a Pentium M in a micro-ATX board when a laptop motherboard has it built in -- and is already slimmer, uses lower-power components, and has a battery system pre-built? I guess if you simply must have the camera be all-in-one, you're going to have to find some way to cram a motherboard into your camera housing. It's going to be about equally difficult with a micro/nano-ITX board and a laptop's innards.
At the risk of repeating myself ad nauseam, I say keep it simple -- a camera connected to a laptop. If you want to upgrade the sensor, just get a new camera. Want to upgrade the processor or laptop? Go for it. Putting it all in one box discourages these kinds of changes and upgrades.
@Wayne: "At the moment the via processors are for the cheapest, lowest powered raw 720p alternative."
Wayne, as we were discussing in other threads, I don't think the VIA systems can handle raw 720p @ 24 fps. At least not yet. Maybe they'll come out with a kickin' 2 GHz system at some point, and we can take another look. But for now, their 1 GHz fanless nano boards aren't even out yet, and they will be underpowered for our purposes.
Anhar Miah July 21st, 2004, 08:16 AM To Wayne Morellini:
Have any of you guys considered a miniature PC?
These would be way better than a laptop, take a look:
http://www.littlepc.com/
"Rugged LCDs
Industrial Grade NEMA Rated LCD Monitors with Touch Screen Options. Designs include Panel, Rack, Open Frame & High Bright Sized from 6.4” to 21” with options and custom designs available."
AND
"LittlePCs
Small pc computers with the power of a desktop PC in the palm of your hand.
Available in P4, P3, Fanless and expansion slot configurations. PCMCIA, Multi-LAN & Compact Flash Custom options available."
Thanks, hope this helps!
Ben Syverson July 21st, 2004, 08:24 AM @Anhar: "These would be way better than a laptop, take a look"
They say they have a "Pentium 4 Fanless" model, but it just seems to be a 2 GHz Celeron. I have no idea whether 2 GHz of Celeron is enough -- maybe it is.
I guess I missed how these would be better than a laptop -- you still have to provide 12VDC current, and you still need to add a display that can handle DC power.
These things aren't light, either -- around 6 pounds. Add in a battery and monitor, and you're at around 8 or 9 pounds.
Obin Olson July 21st, 2004, 11:03 AM LittlePC - crap - I can build a PC with the same specs for half the money. They want $1400 for the bottom-line unit with NO PCI slot! I can build a micro-ATX system with a PCI slot, P4 2.8 GHz, 256 MB RAM, 400 GB of RAID SATA disks, a 20 GB system disk, case, power supply, etc. for $760.
seems more and more this is what I need to do...
Jason Rodriguez July 21st, 2004, 12:04 PM Aren't good Pentium M laptops kinda $$$$?