Home made camera designs?



Les Dit
June 9th, 2004, 12:37 PM
I think these cameras are more like computer equipment, really. If you want to resell one, expect a big price drop, since the new stuff will always be cheaper. Not like the old days with film camera gear, which held its value for a considerable time.

On the usability of the image stream from these industrial cams, I put it in the same category as scanned film. Not easy to use with an NLE, but things can be shoehorned in with some tools.

Obin, does your grabber software allow you to snap multiple frames in a sequence? I'm still curious about the noise factor on two frames.
-Les

Obin Olson
June 10th, 2004, 08:30 AM
yes it allows you to snap frames

Obin Olson
June 10th, 2004, 08:34 AM
so Rob and Rob, what is the word? Do I need to throw down big bucks for software (StreamPix) that's not even made for the production industry? Or can I buy your software soon? I really want to wait till you guys have something that's way better than what you can buy today, but I need this thing to work soon for HD projects...timeframe Rob and Rob??

Rob Scott
June 10th, 2004, 08:54 AM
<<<-- Originally posted by Obin Olson :
timeframe Rob and Rob?? -->>>

Sorry for the delay Obin, but I'm still trying to make some decisions. Can you help me with some information ...

How much is the Streampix software?
What is your timeframe?
What are the absolute minimum capabilities you need in version 1.0?

Steve Nordhauser
June 10th, 2004, 04:48 PM
Sorry if I gave people a wrong answer on getting the docs for the Epix framegrabbers. I posted them on my site:
http://www.siliconimaging.com/Specifications/XCLIB.HTM
This is only the docs!

In terms of universal coding, not tied to a frame grabber, you will be more tied to our cameras. Camera commands are sent over a serial port embedded in the Camera Link interface; it's a kind of send string/get string that is common to all frame grabbers. There might be a little difference in how memory buffers are handled, but that part is very common. The camera itself has registers to be written. Some are sensor specific, such as exposure control, and some are camera specific, such as setting clock rates.
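To make the send string/get string idea concrete, here is a minimal Python sketch of the kind of wrapper capture software could put around it. The command format and register numbers are invented for illustration, and serial_write/serial_read stand in for whatever Camera Link UART calls the frame grabber library actually exposes; this is not a real SI or EPIX API.

```python
# Hypothetical register map and command format, purely for illustration.
EXPOSURE_REG = 0x10   # a sensor-specific register (made up)
CLOCK_REG = 0x20      # a camera-specific register (made up)

def write_register(serial_write, serial_read, addr, value):
    """Send a register-write command string and return the camera's reply."""
    serial_write(f"W {addr:02X} {value:08X}\r")
    return serial_read()   # e.g. "OK" or an error string

def set_exposure(serial_write, serial_read, microseconds):
    """A generic control mapped onto a camera-specific register write."""
    return write_register(serial_write, serial_read, EXPOSURE_REG, microseconds)
```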

Obin Olson
June 10th, 2004, 04:57 PM
StreamPix is $1200 maybe? Not sure, Steve should know.

what I need now:

1: Preview on the desktop that can be put on a 2nd monitor/TV for the camera operator, like dual-head graphics cards allow

2: Capture at a given frame rate: 24, 30, 48, 60, etc.

3: Shutter speed in NORMAL production style, 1/60th sec, etc.

4: Capture to disk that WILL NOT DROP FRAMES, writing the RAW black-and-white (Bayer) image to keep the data rate to disk down (a sketch of this follows below)

5: Low-quality COLOR image for preview/monitor out on the dual-head card

6: Plugin that can do high-quality Bayer removal after capture and output to TIFF files

7: Normal VCR-style controls that can play back the just-captured clip in color

Gotta run, be back later.
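As a rough illustration of what requirement 4 implies structurally, here is a minimal Python sketch of a grab loop decoupled from a disk-writer thread by a buffer queue, so a slow disk shows up as a counted drop rather than a stalled grab loop. grab_frame is a hypothetical stand-in for the frame grabber SDK, not any real API; the raw Bayer frames are written to disk unprocessed.

```python
import queue
import threading
import numpy as np

def grab_frame(width=1280, height=720):
    """Hypothetical stand-in for the frame grabber call; returns fake 10-bit Bayer data."""
    return np.random.randint(0, 1024, (height, width), dtype=np.uint16)

def capture(path, n_frames, depth=64):
    buf = queue.Queue(maxsize=depth)      # buffer decouples grabbing from disk I/O
    dropped = 0

    def writer():
        with open(path, "wb") as f:
            while True:
                frame = buf.get()
                if frame is None:         # sentinel: acquisition finished
                    break
                f.write(frame.tobytes())  # raw Bayer data, no compression

    t = threading.Thread(target=writer)
    t.start()
    for _ in range(n_frames):
        frame = grab_frame()
        try:
            buf.put_nowait(frame)         # never block the grab loop
        except queue.Full:
            dropped += 1                  # count (and report) any dropped frames
    buf.put(None)
    t.join()
    return dropped

if __name__ == "__main__":
    print("dropped frames:", capture("clip.raw", 48))
```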

Steve Nordhauser
June 10th, 2004, 05:43 PM
Streampix sells for $1495. We bundle it with our cameras for less. It does one thing - record - and it does it very well. Nice architecture with the ability to do on-the-fly compression (probably not at the rates you want). They really understand system bandwidth issues and tradeoffs.

Laurence Maher
June 10th, 2004, 09:46 PM
check it out,

$65 C-mount to 35mm SLR adapters for Nikon, universal, Olympus, Canon, etc.

http://www.edmundoptics.com/onlinecatalog/displayproduct.cfm?productID=1459&search=1

Obin Olson
June 10th, 2004, 10:15 PM
That is cool, but if you're using a C-mount adaptor with a 1/2-inch chip, or even a 2/3-inch chip, the FOV without a reducer will be very narrow.

Rob Lohman
June 11th, 2004, 02:28 AM
Steve: I've merged your separate thread with this one and removed your post regarding that. All should be "ok" now.

Rob Scott
June 11th, 2004, 07:34 AM
<<<-- Originally posted by Steve Nordhauser : Streampix sells for $1495. -->>>

Steve -- did you get my e-mail? I'd like to try to make a decision about the software by Monday, so please let me know.

(Apologies to everyone for posting this here. I'm not sure if Steve is getting my e-mail messages; you never know when a spam filter could be tossing real stuff into a bit bucket.)

Rob Lohman
June 11th, 2004, 12:45 PM
Obin: Rob and I do not have a camera yet (whether or not we are going to get one depends...). Then we both, or one of us, need to start writing software. So I don't see how you can have something like you want anytime soon (it needs to be programmed, tested, fine-tuned, etc.). It is a completely new system that needs to be learned and programmed.

To what format does Streampix record?

Rob Scott
June 11th, 2004, 01:01 PM
<<<-- Originally posted by Rob Lohman : So I don't see how you can have
something like you want anytime soon (it needs to be programmed, tested, fine-tuned, etc.). -->>>

I hope to start on something next week, and should have a better idea about a time frame then.

Laurence Maher
June 14th, 2004, 03:56 AM
Just thought I'd mention that Juan, on his thread, is talking about ext2 or ext3 for recording on his uncompressed DVX-100 mod. Maybe this should be considered for our cameras?



COPIED MESSAGE:


Yes, you are correct; I believe that with this application the Mac mounts ext3 as ext2 (no journaling). Perhaps someone is working on this though.

It's up to Juan what file system he wants to use. I think there is a strong case for ext3/ext2. It's the logical choice.

However, as we saw in the case of VHS vs. Beta, the market will sometimes choose a system because it's ubiquitous and "open" rather than the best solution available.

The system chosen by the market or by market leaders is not always technologically superior.

The bottom line is that Juan wants to sell his mod. If Juan goes the logical route, people who buy the mod may not understand why ext3/ext2 was chosen.

Users might prefer ext3/ext2 when they actually go to use the mod; however, in choosing to buy they might not understand why a Linux file system was chosen when they want to download the files on Windows and Mac only.

However, a file system that is likely to fail (FAT32) cannot be good for long-term sales. So perhaps the use of ext3/ext2 has to be sold as a feature and not an unnecessary complication. One also has to worry whether the application that allows Mac OS X to read ext2 volumes will be maintained in future versions of Mac OS X.

The rumour is that Apple is prepared to make Linux a part of their business plan if that ever makes sense for them. OS X is partly an effort to position themselves to be able to port Linux applications easily. It's not so easy to port Linux applications to Windows.



DETAILS:


http://www.versiontracker.com/dyn/moreinfo/macosx/%2018619


Maybe this can help us?

Wayne Morellini
June 14th, 2004, 07:47 AM
OK Guys

9th

There have been a number of questions for me over the break, and things I can answer, so here we go. Forgive the overlap in answers, as I am writing this as I read through the backlog of posts.

Richard, from the 9th: you're not anti-business, you're pro business competition; thanks for revealing this to us. The price of tapes is too much. I imagine every major camera company is already monitoring this movement. When we turn up on set with cameras that do RAW 4:4:4, people will start to question the value of overpriced tape systems like HDV (it will take years though). The Panasonic solid-state pro cameras are really a big load of nonsense. They raise the price of storage so much that low-compression tape still looks good, and RAW looks impossible. I aim to look into a tape alternative sometime (using backup tapes). One of the biggest benefits of disk is variable compression, but by using a computer to buffer content we should be able to packet the contents out to tape evenly enough. In any case we still need tape backup to store content (rather than buying HDD after HDD). It should also be more robust than HDD.

Steve, your post of the 9th about the mbit/byte mixup, that wasn't me, I was quoting somebody else.

Laurence, about Matrix and vectorscopes etc. (I think ENG cameras use them too): some of these issues are over our heads, but we can suggest them and leave it to people like Sumix and SI to decide whether they are really necessary and how to do them (as there may be a cheaper, better way to do it). I am interested in throwing all these things back at the engineering professionals who know what to do with them.

The Linux suggestion is just to cater for a cheap entry point for those who can use it (it is being used more and more in high-end production). Also, a number of open-source projects are done for both Linux and Windows, so there is a possibility, if they want to, that Linux open-source projects could be converted over. But apart from that I'm also advocating Windows (and maybe Mac) versions for what others are already used to.

Blackmagic is also supposed to have a pretty good free codec; I forget whether it is lossless or wavelet or something. For their capture cards we would need one HD-SDI link (two for 1080); a single HD-SDI is not far in front of USB 2.0 (which is free). Two HD-SDI links are still behind Camera Link. But for a live feed into another system (or live broadcast, except for component out) it is the way to go. I'm not worrying about it myself; I intend to do file transfer and component instead.

Jason, It was Laurence that was talking about Matrix, not me.

Rob, great comment; if we want a true-blue pro camera with pro interfacing, there are already $10K+ systems coming to market.

10th

Laurence and Rob, I'll say this: in six months we could have a very friendly camera (with optional HD-SDI capture cards). Have a look at my previous posts; if you follow that sort of plan you have a universal, easy-to-get-into system. Using the cheap Camera Link, Gigabit Ethernet and USB2 does not stop anybody adding pro interfaces (like HD-SDI) afterwards. It would also be very costly to put dual HD-SDI in the head and in the system (according to previous posts), and probably no cheaper than having Camera Link at the head and adding HD-SDI in the system afterwards. Now if you want an expensive version, hook the head to whatever you may want; I think we can have the best of both worlds. But we must start with something that has good optical properties for the next 5 years (most pro companies won't want the low-end quality to get too much better). Hey, once we work out a universal system in the open-source domain we could be looking at replacing the system, software and camera head at will. Within a couple of years we might only have to worry about replacing the camera head, as the software will be standard and the hardware will be quick enough to handle all HD/SHD resolutions. Eventually the cost of ownership, upgrades and long-term value will be good. We will not need to upgrade the capture software for anything except a new camera interface or computer platform, or the hardware until it breaks down, and eventually even the camera head will be good enough to keep.

What Obin is doing at this moment might also be possible for feature work right now. If we got special prices on editor and capture software, all we need is some simplified live control software (maybe with external custom buttons, FireWire controls or a touch panel) and software converters to the format we want. Plugins to do live conversion/colour correction/editing could also be done. What the Robs are doing could incorporate these features into the capture software and add support in editors etc. So maybe it is there really quickly (but a lot of people don't want to spend extra thousands to make it happen). What do you think, Steve and Obin?


Steve, I don't know about indie sales (though I think they can make you rich, as long as the rest of the system is worked out simply enough), but I am also promoting the prosumer and low-end local production markets (and docos), and I think a number of companies can get rich out of that.


11th

So basically, Steve, the differences between cameras from different companies come down to commands, registers and memory handling (not to mention the format of the data). So all the Robs have to do is make plugin profile (data) files that tell the software how to handle a new camera, and change the software to understand and execute on those values? Anybody could then make a new profile file by reading the specs and testing out appropriate values, which would help future-proof the system for future cameras?

To Obin Olson's list of requirements, can I add: functionality to take 3-chip RGB or Bayer, do Bayer removal, then plug in a desired standard codec (so you don't have to write it), plus transcoders to convert to the desired format and compress with the desired compression scheme (of the targeted editor). Built-in format conversions would help too. This should suit all the different users here. Only straight TIFFs would be a bit stiff.

I have another suggestion. I only really need RAW for theatrical work, and maybe bush scenes. Otherwise I would go for something with the detail equivalent of 50Mb/s DVCPRO, or 100Mb/s DVCPRO HD (400Mb/s would be good too). So could I suggest a software control to switch between codecs, scene to scene and on the fly (using either a container file with an index, or an index and separate files), so I can film outside in high quality and inside with higher compression. This will save heaps of HDD space, and for at least half the people here compression would be the normal mode.

In a year or two, small computers will probably have enough power to handle the above compression features and we will want them, so we should plan ahead now.

Obin, those C-mount adaptors are what I was talking about. To get over the FOV problem I think people have suggested using an ultrawide 35mm lens, but if you put an intermediary reducer in between you can get the full FOV of the 35mm, but not the DOF, on a standard prime (test for image aberrations though).

12th

Robs, about the software development: can you develop it remotely by sending Obin test software for his camera setup (I understand you're rebuilding existing frame-grabber software)?

14th

Laurence's post ties in with what I wanted to say. I saw an advertisement for Apple's notebooks last week, saying that the OS (which is now based on Unix) will run many of the old Unix programs from a variety of systems (including SGI). It took me aback, and led me to realise that maybe the new Mac OS is a real choice. After I had wondered whether it supported Linux, now it looks like there may be something to it. A guy offered me a G3 and FCP for $800; maybe I should have taken him up on it, but I did not know what I was going to get (he thought I was strange to turn it down).

Now back to the other Viper thread where I'm going to be talking about some of my research plans.

Rob Lohman
June 14th, 2004, 08:43 AM
I might be able to develop remotely, but that will take a long, long time. It's far easier to test yourself etc. Anyway, I'm sure Steve and I can work something out. At this moment I'm weighing options and talking about design before going ahead with everything.

Rob Scott
June 14th, 2004, 08:57 AM
<<<-- Originally posted by Wayne Morellini : Robs, about the software development: can you develop it remotely by sending Obin test software for his camera setup? -->>>

I will be ordering a camera system today to begin development. Actually, to continue development, since there is plenty I can do until it comes -- Bayer filters, architecture, etc.

Steve Nordhauser
June 14th, 2004, 05:13 PM
Wayne,
Every frame grabber company seems to work with some kind of camera configuration file that defines the camera interface to more generic software. It will be up to Rob if he wants to go that far (quite a bit of initial overhead) in isolating the camera or just do a set of wrapped routines (adjust exposure, gain, etc) for each camera. If I play nicely (don't get greedy) the only reason you would want to go to other cameras is if someone did a camera we didn't want to - like the micron 10 tap. And we are talking about that.

On the bits/bytes thing - I'm very aware that there are both lurkers and people going back and reading our gems here. That was more for posterity and clarity.
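For what such a camera configuration file might contain, here is a small hypothetical Python sketch: a per-camera profile that maps generic controls onto camera-specific registers so the rest of the capture software stays generic. Every name, register number and value here is made up for illustration, not a real SI spec; a real system would keep this in a plain data file (INI, XML, etc.) rather than in code.

```python
# Hypothetical per-camera profile: only this data changes per camera head.
ALTASENS_PROFILE = {
    "name": "hypothetical Altasens head",
    "width": 1920,
    "height": 1080,
    "bit_depth": 12,
    "taps": 2,
    "registers": {          # generic control name -> camera-specific register
        "exposure": 0x10,
        "gain": 0x11,
        "clock": 0x20,
    },
    "serial": {"baud": 9600, "terminator": "\r"},
}

def set_control(profile, send, control, value):
    """Translate a generic control into this camera's register-write string."""
    reg = profile["registers"][control]
    send(f"W {reg:02X} {value:X}" + profile["serial"]["terminator"])
```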

Obin Olson
June 14th, 2004, 07:45 PM
Wayne, if you get FOV you lose DOF, as you said, sooo I think I will stick with C-mount HD-resolution lenses. I think I will get a set of f1.3-f1.6 C-mount primes, maybe a 12mm, 25mm, 50mm and 75mm, and shoot with that...it will be like shooting on a 16mm Bolex! I see no reason for a zoom; if you like the style of a zoom, then get an adaptor for a 35mm zoom and lose your DOF...what we really need is a LARGE CHIP!!! Steve, are you reading this?? That should be the next thing to start thinking about, even BEFORE thinking 1080. We all should start thinking bigger chip for a more cinematic DOF feel; that is the last tell-tale sign you're not shooting film..it makes a HUGE impact if you get 35mm-level DOF..I made a 35mm adaptor with ground-glass rear projection for our DVX100 and the images are VERY VERY filmlike..it's amazing even on the little DV camera.

Valeriu Campan
June 14th, 2004, 09:04 PM
A larger chip is a larger chip!
Even a 2/3 one is less than 16mm. It would be nice to have one closer to the 35mm film frame (~22mm wide) with larger photosites and the chance of achieving higher ISO equivs with less noise.
From my stills experience there is a HUGE difference between compact point & shoot cameras and DSLRs. A 4-megapixel DSLR gives better image quality than a 5-6 megapixel compact. Not all pixels are equal, and their numbers are sometimes irrelevant in this game. Are there any affordable larger chips able to sustain the fps we want? How hard is it to manufacture them?

Steve Nordhauser
June 15th, 2004, 05:18 AM
The question is, what are you willing to trade off? I know of a full frame shutter chip coming out in a few months that can do 2Kx1K @30fps with 12 micron pixels. HUGE. Unfortunately, it is only being released in mono. That means a 3 chip camera. The noise is speced at 45 electrons. The Micron is 15 and the Altasens at 3. This is more like the IBIS5. It is going to be expensive - probably more than the Altasens so a 3 chip camera will be over $10K. It doesn't have TrueSnap so the exposure time doesn't overlap readout. This means even at 24fps, the exposure times will be short - no natural lighting for this unless you are on the beach.

I will look for others. One problem is that the DSC companies can afford to make a proprietary sensor that never sees the sensor market.

Obin Olson
June 15th, 2004, 09:01 AM
even 2/3 would be a big step up from 1/2

Les Dit
June 15th, 2004, 12:26 PM
Steve, 45 electrons may be pretty good, but what's the well capacity?
For 12 micron, it is probably over 100K, right?

-Les



<<<-- Originally posted by Steve Nordhauser : The question is, what are you willing to trade off? I know of a full frame shutter chip coming out in a few months that can do 2Kx1K @30fps with 12 micron pixels. HUGE. Unfortunately, it is only being released in mono. That means a 3 chip camera. The noise is speced at 45 electrons. The Micron is 15 and the Altasens at 3. This is more like the IBIS5. It is going to be expensive - probably more than the Altasens so a 3 chip camera will be over $10K. It doesn't have TrueSnap so the exposure time doesn't overlap readout. This means even at 24fps, the exposure times will be short - no natural lighting for this unless you are on the beach.

I will look for others. One problem is that the DSC companies can afford to make a proprietary sensor that never sees the sensor market. -->>>
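For a rough sense of what those numbers mean, here is a small Python sketch of the usual dynamic-range arithmetic; the 100K electron full well is Les's guess, not a published spec for this sensor.

```python
import math

def dynamic_range(full_well_e, read_noise_e):
    """Dynamic range from full-well capacity and read noise (both in electrons)."""
    ratio = full_well_e / read_noise_e
    return ratio, 20 * math.log10(ratio), math.log2(ratio)   # ratio, dB, stops

# Assuming ~100,000 e- full well (Les's guess) and the 45 e- read noise quoted above:
print(dynamic_range(100_000, 45))   # ~2222:1, ~67 dB, ~11.1 stops
```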

Wayne Morellini
June 15th, 2004, 12:51 PM
Thanks Steve for your comments. My hope is that the Robs' software will become truly open so any camera can be used in future. I would have thought most of the routines would be pretty generic, just requiring some different values for each camera.

Obin, with that method you can also gather more light and have the same DOF as the C-mounts. I am not sure, but I think the f-stop rating of a lens indicates how bright it will be over a particular area, and as the 35mm film target area is so much larger than the C-mount target area, when you reduce the image down an f1.6 SLR lens might act like an f1.0 C-mount lens, or something like that ;). Anybody, am I right, or have I got that wrong?
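Wayne's guess is roughly right: a focal reducer multiplies the f-number by its reduction factor, because the same collected light is concentrated onto a smaller image circle. A small Python sketch of the arithmetic, with the 0.625x reduction chosen purely for illustration:

```python
import math

def reduced_f_number(f_number, reduction):
    """Effective f-number behind a focal reducer of magnification `reduction` (< 1)."""
    return f_number * reduction

def light_gain_stops(reduction):
    """Extra light per unit sensor area, in stops, from shrinking the image circle."""
    return math.log2(1.0 / reduction**2)

# A hypothetical 0.625x reducer in front of a 1/2" C-mount sensor:
print(reduced_f_number(1.6, 0.625))   # ~f/1.0, as Wayne guesses
print(light_gain_stops(0.625))        # ~1.36 stops brighter
```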

To the Robs: because Apple is doing something with Linux?? and already has Unix support, a Linux version might be very convertible to the Apple platform.

Well guys, I'm only going to be here weekly for a while, as last month's internet bill was $118 (ouch), so I'll probably see you next week. Also check out Obin's thread (if there is anybody left that hasn't); the new people are getting answers to a lot of good questions.

Rob Scott
June 15th, 2004, 01:05 PM
<<<-- Originally posted by Wayne Morellini : My hope is that the Robs' software will become truly open so any camera can be used in future. -->>>

That is certainly a goal. The software will probably be more "married" to the line of frame grabber cards than to the specific camera. Though if the communications with the different camera modules are significantly different, that could make it more difficult.

<<<-- Originally posted by Wayne Morellini : ... Apple ... already has Unix support, then a Linux version might be very convertible to the Apple platform. -->>>

To be pedantic, Apple uses BSD (similar to Linux in some respects) and they have their own graphics/windowing engine on top. There are several cross-platform UI toolkits available that make it much easier to support Windows, Linux and Mac OS with a single code base. That is going to be the goal, at least with the "Convert" part of the software.

Wayne Morellini
June 16th, 2004, 09:22 AM
Go to the Viper thread that the Sumix info is in; it is all good (except for the initial single-chip thing).

Laurence Maher
June 18th, 2004, 02:18 PM
Say guys, I copied this from another thread:

Not sure if this will help or not, but I found a free codec that supports 16 bits per channel as well as an alpha channel, making a 64-bit video codec. It works on Mac and PC with just QuickTime 5. Best of all, it is free. They even have a lossless codec that can get 6:1 compression with no loss, but that codec is $99.00. I know it isn't 12 bits per channel, but it might be an easier way for people to manage files as opposed to a series of stills. Besides, right now the TIFF files will need to be 16-bit anyway.

http://www.digitalanarchy.com/micro/micro_none16.html

Wayne Morellini
June 19th, 2004, 10:50 AM
Have a look over at the Viper thread; I've summarised some new technologies and potential camera configurations. Steve, I also found some good fast, cheap interface information and a camera network compression idea that may help with your camera line. I also found references to big tape backup, and to low-powered processing arrays that can be used for camera-head compression and can be reprogrammed in C.

Steve Nordhauser
June 19th, 2004, 09:39 PM
Wayne:
That was an interesting post. I forwarded a copy around to a few other people at SI for future discussion. We are watching 10Gbit right now, always tied to camera link and looking for others. We don't intend to be a sitting target on either the sensors used or interfaces.

Wayne Morellini
June 20th, 2004, 07:02 AM
Agreed, that is what I thought, good business sense.

Wayne Morellini
June 20th, 2004, 11:40 AM
I have an interesting idea: why not eliminate Camera Link and send the Camera Link commands and data over Ethernet? I think some of your customers would prefer an Ethernet interface to USB, Camera Link with an Ethernet adaptor, or plain Camera Link. Would that make a cheaper camera (it would certainly help get rid of the capture card)? Am I right that the throughput of 10Gb Ethernet will be around 1 GByte/s on its own line?

Thanks

Wayne.

Steve Nordhauser
June 20th, 2004, 09:14 PM
Right now, gigabit ethernet does about 250Mbps with the windows driver and about 800Mbps (100MB/sec) with custom drivers. Making a rash assumption or two, 10GigE should give about 1GB/sec of transfer rate, yes. 10GigE is not quite ready for the embedded mainstream but will be in a year or two. We are watching it.

The external GigE boxes we do right now are a temporary solution. We are in layout for an integrated GigE camera design.

Steve Ipp
June 21st, 2004, 04:45 AM
Wayne, Les, Robs, Obin, people...
I've sent Steve the questions below, but taking into consideration the amount of time he has to spend responding to all of our posts, I don't really expect him to elaborate on the points. Could you possibly clear things up?

Software for the new SI proposed Altasens again.

1. According to EPIX (SI's frame grabber supplier), they ship a low-grade version of XCAP with every FG they sell. They also mention that every FG is intended for use with specific cameras. Does this mean that if I choose to buy a 64-bit FG from another manufacturer, it will not capture images from the cam?

2. If this is the case, could you possibly give me info on the signal output from the camera head?

3. Another thing is the camera's firmware. Is it TWAIN compatible?

4. All the image-capturing software I have dealt with before provided easy access to camera settings determined by the drivers. Let's say I have software like VirtualDub or a 3rd-party package mentioned on the EPIX website under imaging software (like Image-Pro Plus, VisionGauge, etc.).
Suppose I use a Camera Link FG from another manufacturer.
Will this software see your camera?

Really sorry for making you duplicate some info posted here previously.

Rob Scott
June 21st, 2004, 07:17 AM
<<<-- Originally posted by Steve Ipp : Does it mean that if I choose to buy a 64 bit FG from another manufacturer, it will not capture images from the cam? -->>>

Another card would be able to communicate electronically with the camera module, but since each card has its own software interface, the software would have to be modified to support the card.

<<<-- Originally posted by Steve Ipp : VirtualDub ... Suppose I use a camera link FG from another manufacturer. Will this soft see your camera? -->>>

No. Since CameraLink is not a standard used widely outside the industrial imaging market, standard software would not be able to capture from the card/camera without driver software (which, AFAIK, is not available).

Even if driver software was available, it's doubtful it would support the full capability of the camera -- nearly all video capture software supports only 8 bits per channel, so you'd be throwing away a great deal of the signal. Plus you'd still need Bayer filtering.
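For anyone wondering what the Bayer filtering step actually involves, here is a minimal bilinear demosaic sketch in Python (using NumPy and SciPy). It is only the simplest possible approach, not the high-quality algorithm a real plugin would use, and it assumes an RGGB mosaic.

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw, pattern="RGGB"):
    """Minimal bilinear demosaic: 2-D Bayer mosaic -> H x W x 3 RGB.
    `pattern` is the 2x2 CFA layout read left-to-right, top-to-bottom."""
    raw = raw.astype(np.float64)
    h, w = raw.shape
    masks = {c: np.zeros((h, w), bool) for c in "RGB"}
    for i, c in enumerate(pattern.upper()):
        masks[c][i // 2::2, i % 2::2] = True          # which pixels carry this colour

    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0   # green: checkerboard sites
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0   # red/blue: quarter-grid sites

    rgb = np.zeros((h, w, 3))
    for ch, (c, k) in enumerate([("R", k_rb), ("G", k_g), ("B", k_rb)]):
        rgb[..., ch] = convolve(np.where(masks[c], raw, 0.0), k, mode="mirror")
    return rgb

# Example: fake 12-bit RGGB mosaic -> RGB
mosaic = np.random.randint(0, 4096, (720, 1280)).astype(np.uint16)
rgb = demosaic_bilinear(mosaic)
```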

Rob Lohman
June 21st, 2004, 07:28 AM
TWAIN is for still pictures (like scanners and digital still cameras). It does not apply to things like video cameras. VirtualDub and all other standard capture applications will NOT see this camera, as Rob S. explained.

Steve Ipp
June 21st, 2004, 08:46 AM
Thanks a lot, guys.

Wayne Morellini
June 25th, 2004, 07:32 PM
<<<-- Originally posted by Steve Nordhauser : Right now, gigabit ethernet does about 250Mbps with the windows driver and about 800Mbps (100MB/sec) with custom drivers. Making a rash assumption or two, 10GigE should give about 1GB/sec of transfer rate, yes. 10GigE is not quite ready for the embedded mainstream but will be in a year or two. We are watching it.

The external GigE boxes we do right now are a temporary solution. We are in layout for an integrated GigE camera design. -->>>

Good. Let us know when it is ready, as the extra price of the camera head is offset in the short term by not having to buy a capture card.

Thanks

Wayne.

Laurence Maher
June 26th, 2004, 12:06 AM
This may be silly to ask, but Wayne, did you just post what I think you did? Are we talking about an HD camera being created here that will eliminate the need for a video capture card? Please give details if so. (Not sure how this works.) What type of files would it create? How exactly does it work?

Rob Lohman
June 26th, 2004, 04:35 AM
Laurence: well, yes and no. "Capture card" is really the wrong term for anything digital (including DV, the Camera Link we are using now, and the GigE mentioned above).

It is all INTERFACES, just different ones. For DV we are using FireWire. For webcams (and a low-resolution camera) we are using USB2.

Now, Silicon Imaging is using Camera Link as the native connection for their cameras. You will need a "capture card" or frame grabber in your PC, since a PC does not natively have a Camera Link interface the way most have USB2 and FireWire.

The problem is bandwidth. USB2 and FireWire are not fast enough to handle the data coming off these cameras at full speed.

Now, SI also has a Camera Link -> Gigabit Ethernet (GigE) converter. Paired with a special driver, it allows any PC (no Mac) equipped with a gigabit Ethernet port (which is just another INTERFACE) to capture from the camera with THE RIGHT software!

Gigabit Ethernet ports are much more common and therefore more easily used. However, this STILL does NOT support the full bandwidth needed at 60 fps, which is 105 MB/s. Gigabit will probably bail out at around 80 MB/s or so, which gives you a max of 40 - 48 fps (at 1280 x 720, 10 - 16 bits).

Now the most interesting development is 10 GigE, or 10 gigabit Ethernet. This theoretically allows for 1 GB/s (real-world throughput is way below that for now). Hopefully an integrated mainboard with 10 GigE and a special driver could easily do 105 MB/s, and maybe even 237 MB/s for a future 1920 x 1080 @ 60 fps. But that's a long way off for now.
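The arithmetic behind those figures, as a small Python sketch; the assumptions are that the 10-16 bit Bayer data is stored as 16 bits per pixel and that 1 MB means 2^20 bytes:

```python
def bandwidth_mb_s(width, height, fps, bits_per_pixel):
    """Sustained data rate in MB/s (1 MB = 2**20 bytes) for uncompressed capture."""
    return width * height * fps * bits_per_pixel / 8 / 2**20

# The figures quoted above, assuming 16 bits stored per Bayer pixel:
print(bandwidth_mb_s(1280, 720, 60, 16))    # ~105 MB/s
print(bandwidth_mb_s(1920, 1080, 60, 16))   # ~237 MB/s
```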

So if Silicon Imaging could integrate 10 GigE on the camera head, that would definitely be very interesting: you would still need specialized software, but no funky hardware. Just get a mainboard with 10 GigE, or a PCI-66 mainboard with any 10 GigE adaptor.

All of these solutions still require "custom" / specialized software, since there is no industry standard on normal PCs for this kind of equipment.

Laurence Maher
June 28th, 2004, 03:39 AM
Well, what does FireWire 800 run at? Is it 800 megabytes or megabits per second? If megabytes, couldn't we just use FireWire 800 or something?

Juan M. M. Fiebelkorn
June 28th, 2004, 04:16 AM
Megabits, sorry. But it transfers at least 80 megabytes per second, more or less :)
It has lower transfer rates than Gigabit Ethernet; think about the possibilities of 10G Ethernet, around 800 MBytes/s directly into your machine!!!! (although I don't know of a normal computer which can handle this data rate)

Wayne Morellini
June 28th, 2004, 08:26 AM
<<<-- Originally posted by Laurence Maher : This may be silly to ask, but Wayne, did you just post what I think you did? Are we talking about an HD camera being created here that will eliminate the need for a video capture card? Please give details if so. (Not sure how this works.) What type of files would it create? How exactly does it work? -->>>

Well, it has been mentioned before, but mostly it is as Rob said. Firstly, you can do away with the Camera Link PCI card (most small boards don't support the PCI-66 format, and some not even the more primitive Mini PCI) and use the slot for something more productive, like good sound (I will be posting an update over at the Viper thread sometime). It has cost and convenience benefits too. As the Ethernet specs are standard, the support is standard; the Camera Link data can then be packed and sent down the gigabit stream to be read at the other end by Camera Link-compatible software (modified, or fed by a driver transcoder), as is done already. I think the 80MB/s is a big restriction, but for a 2Mp * 24fps 8-bit Bayer pattern that is 48+ MB/s. The advantage I see is that even if we can't get 10Gb/s Ethernet mainboards today, we can use it at 1Gb/s (as long as pixel combining is used) until the 10Gb/s mainboards are available.

Still, maybe I am missing something here: how much are the SI Camera Link to Gigabit Ethernet adaptors anyway? They look a bit big for a small case, but maybe there will be a smaller version coming out? I still think the HDMI format is a very good alternative, but life is so much politics.

Alex Monita
June 28th, 2004, 03:59 PM
I have an idea that I will be testing out this week; let me know what you think. It's based on using two 3CCD cameras mounted side by side to produce a wide-angle image. One camera would capture half of the frame and the other cam would capture the other half. Here is the idea: http://img20.photobucket.com/albums/v60/anigma/split_image_cam_copy.jpg

I realize that recording two images on two separate tapes and joining them later is very inconvenient, and the risk of the camera setup being off by one or two pixels would be visible in the composite image, so it's tedious. But I'm going to try it out.

My question is, would it be possible to make a program that records these two video streams, joins them together and dumps them onto a hard drive?
Or is there a program out there that could do this in HD, as maybe a split-image transition in real time?

Also... if this were to work, I want to use a 35mm or cine lens as the main lens, BUT... would the DOF be preserved after being deflected through the mirrors? Can this be worked with?

I'll be doing a really rough test at work today and will hopefully post my results soon.


Alex

Jay Silver
June 28th, 2004, 04:22 PM
Alex,

Just make the two overlap a bit and shoot an easily alignable image (like a focus star or something) in the overlap space. A focus star clapboard would be even better.

Assuming you've got the two rigidly connected, you should only have to line up the star at the beginning of the shot in, say, After Effects and the rest of the shot should align fine.

I have my doubts about how perfect the colour match would be, though.


-j
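If you would rather measure the offset than eyeball it, the overlap strip can also be aligned automatically with phase correlation. A minimal NumPy sketch, assuming the two overlap strips are grayscale arrays of the same size that differ only by a small translation:

```python
import numpy as np

def estimate_offset(a, b):
    """Estimate the integer (dy, dx) shift that aligns image b to image a,
    using phase correlation on the overlapping region (grayscale arrays)."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    R = A * np.conj(B)
    R /= np.maximum(np.abs(R), 1e-12)        # keep phase only
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the frame back to negative offsets.
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx

# For a pure integer translation, np.roll(b, (dy, dx), axis=(0, 1)) then lines up with a.
```

Shooting a focus star or clapboard in the overlap, as Jay suggests, gives this method (or After Effects) a strong feature to lock onto.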

Rob LaPoint
June 28th, 2004, 08:36 PM
In regards to strapping two cameras together for an HD frame: I actually had the same idea about a year ago. I am not going to say that it is impossible, it certainly is possible, but it is impractical. In order to do this right, what I found was that an image would have to be projected onto ground glass and then either filmed directly or split with a beam splitter or two mirrors at exactly a 90-degree angle with a perfectly 'sharp' edge and then reflected back into the camera lens. The beam splitter loses too much light, and the 90-degree mirrors cost about 500 bucks. If you don't project an image you have to be at close to full zoom before the image depth is crushed enough to merge the images. Again, the idea is possible, but considering the best you can get is 'close to 720p at 2.35 aspect' that would still have to be up-rezzed, and color depth is only 4:1:1, it's just not worth it. Also it would only work with a progressive camera because there is no way to match up the interlacing. So good luck; I'm not saying that I've considered everything, but to the best of my knowledge it just isn't worth it. You might want to check out stereoscopic beam splitters; it's basically the same idea except backwards, but again you need to project the image onto ground glass.

Alex Monita
June 28th, 2004, 08:58 PM
Yes, I've had trouble matching white balance on a couple of two-camera shoots, but I don't think it will be a problem to colour-correct because I am joining them in After Effects.

I was thinking that the image a lens projects focuses at a certain distance from it, and if it were to pass through a beam splitter or mirrors, maybe the distance can be adjusted so that the focal point becomes the "END" of the "Y" apparatus and is projected directly onto the CCD head.

In other words, push the lens closer to the "Y" so that the focus would be right where the light hits the CCDs?

I picked up two 35mm mirrors from two old SLR cameras and I'm getting my hands on two VX2000s to try this out on.

Laurence Maher
June 29th, 2004, 03:14 AM
I'm not exactly sure what I'm missing here. I guess I don't understand how using 2 cameras side by side and combining them into 1 image is worth it whatsoever. You can't really move the camera at all. Maybe pan/tilt, but your project would be EXTREMELY LIMITED creativity-wise, wouldn't it? What's the point? Can someone tell me what I'm missing?

Steve Nordhauser
June 29th, 2004, 12:52 PM
1/10 gigabit:
We are certainly watching the 10gig technology but it is still a few years away from practical (low cost) integration. The GigE interface will run up to about 800Mb/sec continuously. The current price is about $1K over the base camera link camera price with all power supplies and cables. We are going to be releasing GigE native cameras (one box, a little longer than the current box) fairly soon.

For the costs involved in going to GigE, I think a 2:1 lossless compression in a cheap FPGA (or integrated into our GigE FPGA) would be the solution. Let me know if someone finds an easy-to-implement lossless compression codec (with VHDL/Verilog source code).
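As a sanity check on whether 2:1 is plausible with something FPGA-friendly, here is a small Python sketch that applies the cheapest possible predictor (previous pixel on the same line) and measures the entropy of the residuals; an ideal entropy coder would achieve roughly this ratio, a real Rice/Huffman stage somewhat less. The synthetic test frame is only a stand-in for real sensor data.

```python
import numpy as np

def estimated_ratio(frame, original_bits=12):
    """Rough upper bound on lossless compression for one frame: predict each
    pixel from its left neighbour and measure the entropy of the residuals."""
    frame = frame.astype(np.int32)
    residual = np.diff(frame, axis=1, prepend=frame[:, :1])
    vals, counts = np.unique(residual, return_counts=True)
    p = counts / counts.sum()
    entropy_bits = -(p * np.log2(p)).sum()     # bits per residual with an ideal coder
    return original_bits / entropy_bits

# Synthetic, mildly noisy gradient frame as a stand-in for 12-bit sensor data:
x = np.linspace(0, 4095, 1280)
frame = np.clip(np.tile(x, (720, 1)) + np.random.normal(0, 8, (720, 1280)),
                0, 4095).astype(np.uint16)
print(estimated_ratio(frame))   # roughly 2:1 for this kind of smooth content
```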

Wayne Morellini
June 30th, 2004, 05:35 AM
If we had two GbE links on each motherboard that would solve a lot of problems, but we will have to go with compression. Still, that ClearSpeed is bloody fast with inline reprogrammable C code; maybe even a ClearSpeed Camera Link to GbE compression "credit card". Pound for pound, couldn't we get 2:1 to 50:1 compression today? If a 10GbE FPGA is years away, we might as well go to mass-market HDMI now (is there a 10Gb/s version?).

Steve Nordhauser
June 30th, 2004, 07:04 AM
Wayne, I think you are mixing metaphors. 10 gigabit is rapidly becoming real. The support structure (switches, cards, etc) will take a little while to become affordable (out of the backbone and into the office network). This is just a transmission medium, like camera link and HD-SDI. Of course you need a PCI bus to keep up with this. 1Gb ethernet can move data at about 100MB/sec. That is the full bus bandwidth of PCI-32. I'm still a camera link fan for cost since it doesn't add too much to the camera and our bundles are $500 for capture at 32 bits and probably $1K at 64/66. HD-SDI requires the video processing up front. Certainly dual GigE would give you twice the bandwidth into a 64 bit machine, but why not camera link - you won't be straining it in the least.

FPGAs are just hardware that is reconfigurable. The difficulty is that although it looks like programming, it is hardware design. As Scott has pointed out, there are some public-domain or licensable solutions. I looked at the ClearSpeed website. They have a 64-processor parallel CPU. The big thing with parallel processors is that you need to parallelize the application to gain any benefit. Since they say you can just program away in C, they must be solving that problem at compile time. The SDK was $25K, with $1K/chip in volume.