April 25th, 2005, 06:16 AM | #31 |
Major Player
Join Date: Jan 2005
Location: St. John's, NL, Canada
Posts: 416
|
Okay, I know exactly where you are going now with the standards idea. GigE would be the best for ease of use, but right now that is beyond my experience :( I do feel confident with the recoding of the Bayer data and possibly debayering, and by the time I do the programming for that maybe I'll be experienced enough to take a look at GigE.
Actually, I spent most of yesterday trying to figure out that stupid timing chip, and now I think I'm warming up to it. The problem I was having was figuring out how to connect it to the CCD (the CCD only has 2 vertical and 2 horizontal registers and this thing outputs 6), but once I realized the programming aspect just needed a deeper read-through I began to figure it out. So it's looking up (again, everything is on paper so far), but unfortunately the timing chip needs to be reprogrammed every time the frame rate changes, which makes changing frame rate a little hard to do. But I'm thinking I can incorporate the timing chip programming with some button interfaces to the FPGA and some programming. I moved the website to indiehdp.0catch.com (the cjb one now has a link to it), but be warned: use Firefox or something that stops popups. I didn't think it was bad until I tested it in IE. At least I can update it via FTP now, unlike with cjb.net. |
April 25th, 2005, 07:31 AM | #32 |
Inner Circle
Join Date: May 2003
Location: Australia
Posts: 2,762
|
Hmm, all this stuff is tricky and requires some hard work and learning.
I have had a look at the Kodak sensor and compared it to the Ibis5a, and tried to compare it to the Micron. Problem is that Ibis uses a lot of basic noise values, I could only find one of them on the Kodak chip, and Micron rolls them up into dB values (but not in the high speed sensor). It looks like the Ibis has over 50% more latitude (well capacity) and less noise, but I don't know how to interpret these noise values to find how they interfere with effective true latitude. In comparison with the Micron it appears better, but again, I couldn't compare overall noise factors. Maybe you can have a better time at it. http://www.fillfactory.com/htm/produ...S5A_1300_5.pdf http://www.fillfactory.com/htm/produ...is5/ibis5a.htm Forgot to mention this: Based on the Kodak sensor: http://www.imperx.com/cameras/megapi...ion/index.html All these cameras have an embedded RISC processor and FPGA. |
April 25th, 2005, 08:53 AM | #33 |
Major Player
Join Date: Jan 2005
Location: St. John's, NL, Canada
Posts: 416
|
There is a lot to consider when trying to compare CMOS and CCD sensors.
The full well capacity may be higher on the CMOS, but you will normally only reach full well in super bright conditions, and for that amount of charge to build up in a photosite you're going to need a lot of light energy. The full well capacity is related to the latitude, but not as much as you might be thinking; it matters more for how well the sensor works in very bright conditions, and as most people know, we don't always have very bright conditions. The dynamic range and the S/N ratio are much more important. If the S/N ratio is higher, the signal is cleaner and the difference between one level of brightness and another will be greater. But even still, that plays second to the actual light gathered.

What you also need to look at is how much light is actually getting to the sensor; this will determine a lot. The Kodak has larger imaging sites, which means they can gather more electrons. So let's do a comparison and say the IBIS5 gathers 10000 electrons @ 30% QE. Because the Kodak has larger pixels it gathers [(7.4x7.4)/(6.7x6.7)]x10000, so about 12200 electrons. But the Kodak has higher QE, so now it gathers (0.36/0.30)x12200, about 14600 electrons. That's in black and white. Now let's make it really bad: at 450nm wavelength (blue) the Kodak has an absolute QE of over 36%, so let's leave it at 14600 electrons. The IBIS has 0.42 relative, so relative to peak it only gathered 4200 electrons. Green (550nm): IBIS = 8000 vs Kodak = 13400. Red (650nm): IBIS = 10000 vs Kodak = 12200. As you can see, the Kodak sensor in the same lighting will gather anywhere from 1.2 to 3.5 times more electrons, so the IBIS will be about 1/2 as bright as the Kodak. Do you really think full well will mean anything if it takes 2 times more light just to get near the Kodak's full well, or 4 times more light just to fill its own well to max? Also, did you look at the IBIS colour filter? The IBIS colour filter response is poor at best.
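Here is that electron comparison redone as a few lines of arithmetic, using the pixel pitches (7.4 µm Kodak, 6.7 µm IBIS) and the QE figures quoted in this thread; none of these numbers are datasheet-verified here, so treat it as a sketch of the method, not a spec.

```python
# Rough electron-count comparison between the IBIS5 and the KAI-2093,
# following the numbers quoted in this thread (not datasheet-checked).

IBIS_PITCH = 6.7   # IBIS5 pixel pitch, microns
KODAK_PITCH = 7.4  # KAI-2093 pixel pitch, microns

def scaled_electrons(base_electrons, base_qe, qe):
    """Scale an electron count by pixel-area ratio and quantum efficiency."""
    area_ratio = (KODAK_PITCH / IBIS_PITCH) ** 2
    return base_electrons * area_ratio * (qe / base_qe)

# Assume the IBIS gathers 10,000 electrons at 30% QE under some light level.
kodak = scaled_electrons(10_000, 0.30, 0.36)
print(round(kodak))  # roughly 14,600 electrons under the same light
```

The same function can be rerun per wavelength with the relative QE values to reproduce the blue/green/red comparisons above.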
When the blue filter on the IBIS picks up a green light wave, it reads at about 1/2 the strength of blue, so green light striking the sensor will affect the blue brightness; with the Kodak it is about 1/9. The Kodak filters are clearly superior, unfortunately for most people. I'm not ragging on the IBIS; it's a good chip that, when used properly, can really give a nice picture, but the colour filters remain poor at best, and this is where the IBIS gets its poor colour. In a 3-CMOS setup the IBIS would be amazing because you could control the filters. I'm not saying the IBIS is no good and should be thrown away. Just that when I was doing my research and checking availability, the numbers I crunched showed the KAI-2093 was a better sensor, though it was going to prove very hard to work with. If I took an IBIS right now I could literally jump to FPGA programming and not have to worry about the issues that I do now, like timing, but when the numbers showed the Kodak would give more accurate colours and a better S/N ratio because it needs less light, that was where I decided to start. I did see that Imperx camera before, along with one from Redlake. I'd love to get my hands on one but I just don't have the money. The Redlake one was about 3500 euros, so that would be about 5k in Canada plus a Camera Link card. I'd assume the same for the Imperx one, and I'm really trying to avoid Camera Link. I'll be honest: a production KAI-2093 is 1200 USD, while an IBIS is somewhere around 300 USD. It might cost just as much to build a 3-CMOS setup, or be cheaper than what I have designed so far. That really makes me sad now that I think on it. Wonder if there are any 3-CMOS IBIS cameras around; that would be cool. |
April 25th, 2005, 01:29 PM | #34 |
Major Player
Join Date: Jun 2004
Location: Buenos Aires , Argentina
Posts: 444
|
Why don't you just make a 3-sensor CMOS camera using the IBIS? Add pixel shift.
So we can end up with a great global shutter CMOS camera with any possible resolution from 1280x1024 up to 2560x1024.....:) |
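Pixel shift is easy to picture with a toy model: offset one sensor by half a pixel and interleave the two sample streams, which is where a 1280-wide sensor can yield up to 2560 horizontal samples. Everything below (the 1-D signal, the function names) is made up for illustration.

```python
# Toy illustration of horizontal pixel shift: two sensors sampling the
# same 1-D signal, the second offset by half a pixel. Interleaving the
# two streams doubles the horizontal sample count.

def sample(signal, n_pixels, offset=0.0):
    """Point-sample a continuous 1-D signal at pixel centres plus a shift."""
    return [signal(i + offset) for i in range(n_pixels)]

def interleave(a, b):
    """Merge two equal-length sample streams into one, alternating."""
    out = []
    for x, y in zip(a, b):
        out.extend([x, y])
    return out

ramp = lambda x: 2 * x        # a simple test signal
s1 = sample(ramp, 4)          # sensor 1: samples at 0, 1, 2, 3
s2 = sample(ramp, 4, 0.5)     # sensor 2: shifted by half a pixel
print(interleave(s1, s2))     # [0, 1.0, 2, 3.0, 4, 5.0, 6, 7.0]
```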
April 25th, 2005, 01:54 PM | #35 |
Trustee
Join Date: Apr 2005
Posts: 1,269
|
Ok, I was thinking. Wouldn't it work if you just hooked one of these HD cameras to a PC and used an NLE like Vegas, or whatever, to capture that to the HDD? I know you would have a 2-piece system, not very practical for location work, but would that work? Would that maybe solve many of the problems people are having trying to make a camera?
|
April 25th, 2005, 02:19 PM | #36 | |
Silicon Imaging, Inc.
Join Date: May 2004
Location: Troy, NY USA
Posts: 325
|
Michael, I think a lot of people here started with this line of thought. If you take one of our gigabit cameras, use a 20m cat-6 cable, power and a buffered VGA for a viewfinder going back to the camera, you have a very small, tethered camera head. With a VGA splitter, the director can see the shot also. Maybe use a Shuttle or Epox (?) mini enclosure and a small RAID and you are there. The rest is just software. Simple. (OK, I ducked when I typed that in case anything was thrown at me).
__________________
Silicon Imaging, Inc. We see the Light! http://www.siliconimaging.com |
|
April 25th, 2005, 03:33 PM | #37 |
Major Player
Join Date: Jan 2005
Location: St. John's, NL, Canada
Posts: 416
|
Juan, that is what finally occurred to me at the end of writing my giant comparison spiel. I've spent a few weeks now working out schematics for the KAI-2093, and it seems almost a waste when I wrote that one sentence.
One problem is the alignment of the sensors; this would be a pain without special equipment, or I could just do a "good enough" job and sort out the offset with software. Focus is the other problem: it's harder to get the sensors onto the right focal plane with the extra glass blocks in the way. But beyond that, well, it shouldn't be so bad, so maybe I'll start considering it. Since I'm doing FPGA anyway, I'm also debating doing the timing for the CCD in that, since the VHDL program only has to define a few timings rather than the infinite timings on the KSC-1000. Michael, you're right in your assumption that connecting it to a computer would work, and Steve is right with his quick summary (minus the software aspect). I'm just going another route, and many will think a hugely stupid and way too complicated route, but then I have a lot of control. What I'm doing is pretty much what SI does, building the camera head, but I'm trying to get to component and HD-SDI output rather than Camera Link. So I'm trying to build a POV camera like the Ikegami HDL-40, just for less than 20 grand, and then deal with the computer aspect |
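For what it's worth, the "few timings" idea can be prototyped in software before writing any VHDL. The sketch below is only a toy model of a 2-phase horizontal clock generator for a CCD shift register, not the KSC-1000's actual waveform set and not real HDL.

```python
# Software model of the simplest thing an FPGA timing block does:
# generate the two complementary horizontal clock phases (H1/H2) that
# shift charge out of a 2-phase CCD register, one step per clock cycle.

def two_phase_clocks(n_cycles):
    """Yield (H1, H2) logic levels for a 2-phase CCD horizontal register."""
    for cycle in range(n_cycles):
        phase = cycle % 2
        yield (1 - phase, phase)  # H1 and H2 are always complementary

states = list(two_phase_clocks(4))
print(states)  # [(1, 0), (0, 1), (1, 0), (0, 1)]
```

In VHDL this would be a small clocked process toggling two outputs; the point is that a fixed readout pattern is far simpler than a fully programmable timing generator.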
April 25th, 2005, 04:31 PM | #38 |
Trustee
Join Date: Apr 2005
Posts: 1,269
|
If it's a computer, you could maybe use Vegas to capture. What type of file do those cameras output?
|
April 25th, 2005, 07:27 PM | #39 |
Major Player
Join Date: Jan 2005
Location: St. John's, NL, Canada
Posts: 416
|
What most are outputting is Camera Link, which is not compatible with NLEs, and that is what Obin and Rob S. are working on: software to use Camera Link cameras.
Even the FireWire cameras will not work with NLEs because they aren't using any variation of MiniDV; they are using DCAM (IIDC), which is a protocol for uncompressed raw data transport. Same goes for USB. And beyond that there are the camera issues, but that is a whole different discussion. Well, my design has had some changes and I'm trying to work out what exactly I'm trying to output. Since I'm programming an FPGA solution for my camera I can almost have it output whatever I want, within my skill (essentially nothing that requires a protocol [this means I can't do USB, FireWire, ATA, SATA, GigE]). So I'm thinking I might try to go for both component and HD-SDI out of the camera, and then work on an FPGA capture solution. (HD-SDI hard drive capture deck? That would be cool.) Right now I don't have any idea how to do debayering in the camera head without some RAM connected to it; there I'm really stretching my abilities. The Sumix Altasens-based camera is going through testing at the end of this month, so maybe in a couple of months they might ship a couple, and then we could have a camera that could be easily put together and rival stuff like the Kinetta. |
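For anyone wondering what "debayer in the camera head" actually involves, here's a deliberately crude sketch. Each photosite records only one colour, and the missing two must be interpolated from neighbours; real demosaicing needs line buffers (hence the RAM problem), while this just collapses one RGGB tile to a single RGB pixel. All names and values are illustrative.

```python
# Crudest possible demosaic: reduce a 2x2 RGGB Bayer tile to one RGB
# pixel. Real debayering (bilinear or better) interpolates across
# neighbouring tiles and needs several buffered sensor lines.

def debayer_rggb_2x2(tile):
    """tile is a 2x2 raw block laid out [[R, G], [G, B]]; returns one RGB pixel."""
    r = tile[0][0]
    g = (tile[0][1] + tile[1][0]) / 2  # average the two green sites
    b = tile[1][1]
    return (r, g, b)

raw = [[200, 120],
       [100, 60]]
print(debayer_rggb_2x2(raw))  # (200, 110.0, 60)
```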
April 25th, 2005, 07:38 PM | #40 |
Major Player
Join Date: Jun 2004
Location: Buenos Aires , Argentina
Posts: 444
|
Well, about the positioning, I don't think it is so difficult.
You can buy instruments/tools for that at Edmund Optics. Just a simple device using ball screws or even a Palmer's screw ;) (BTW, micron accuracy). They cost less than 1K. Then you just need the prism and some adhesive + a clean environment. Focus doesn't need positioning itself because that is given by the prism surfaces themselves. I also guess that monochrome IBIS are probably cheaper than the colour ones. |
April 26th, 2005, 12:22 AM | #41 | |
Trustee
Join Date: Apr 2005
Posts: 1,269
|
Vegas supports scripting; couldn't anybody write a script to capture the needed files?
|
April 26th, 2005, 06:20 AM | #42 |
Major Player
Join Date: Jan 2005
Location: St. John's, NL, Canada
Posts: 416
|
What I was talking about with focus is the reason you don't have PL adapters for B4-bayonet cameras. The nominal focus plane is far behind the lens with B4, but with PL it is pretty close. When a prism is used, the length to the CCDs is increased by maybe an inch. This isn't so bad for zoom lenses, but for primes it can be a little pain. It will still work like you say; I suspect a certain range at one of the ends just won't be able to focus.
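The prism's effect on back focus follows a standard optics rule: a plane glass block of thickness t and refractive index n pushes the focal plane back by t(1 - 1/n), so the glass "costs" less air space than its physical length. The numbers below are illustrative (BK7-like glass), not from any particular prism block.

```python
# Longitudinal focus shift caused by inserting a plane glass block of
# thickness t (mm) and index n into a converging beam: t * (1 - 1/n).

def focus_shift(thickness_mm, n):
    """Focus shift (mm) introduced by a plane-parallel glass block."""
    return thickness_mm * (1 - 1 / n)

t, n = 25.0, 1.52          # roughly an inch of BK7-like glass
shift = focus_shift(t, n)  # how far the focal plane moves back
air_cost = t - shift       # air space the glass effectively consumes
print(round(shift, 2), round(air_cost, 2))
```

So an inch of prism glass only needs about 16-17 mm of extra air path, which is why B4 mounts with their long flange distance tolerate prisms and short-flange mounts like PL struggle.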
Camera issues pertain to getting a specific camera to do what you want it to. Like I explained earlier, the IBIS looks to have bad colour filters, so trying to compensate for this is really hard. Also, many of the cameras used only work in rolling shutter mode, or they just can't be put into global (not sure if this was solved yet; I don't think it was), which is okay for stationary objects, but when something moves in the scene it looks like it becomes skewed. Depending on how powerful the scripting language for Vegas is, it might be able to interface with a Camera Link camera, but I honestly have my doubts. Most scripting languages are designed for doing things in a program, not doing hardware capture. After being compiled, the script will likely be very slow and processor intensive, I suspect
April 26th, 2005, 06:40 AM | #43 |
RED Code Chef
Join Date: Oct 2001
Location: Holland
Posts: 12,514
|
Michael: please take your time to thoroughly read through all the threads etc.;
there has been a lot of talk already about different options, possibilities and what would or would not work. These systems are highly complex, and an NLE (even one like Vegas with its powerful scripting) is of no use capturing from such devices if they do not follow a standard it understands. Camera Link is not something that is used in the "normal" video world. Besides, Vegas scripting does not provide access to capturing, and neither would it see any Camera Link board or any camera that is not connected via DV or an analog SD capture board. If it was as simple as that we would have a lot more finished cameras already.
__________________
Rob Lohman, visuar@iname.com DV Info Wrangler & RED Code Chef Join the DV Challenge | Lady X Search DVinfo.net for quick answers | Buy from the best: DVinfo.net sponsors |
April 26th, 2005, 06:43 AM | #44 |
Trustee
Join Date: Apr 2005
Posts: 1,269
|
It all seems incredibly complex. I wonder how the Drake folks did it. Their camera seems to record to an in-camera HDD and it all runs on batteries. Since it's uncompressed 4:4:4 720p, it seems even a PC could have a hard time capturing it without dropping frames. It would take a very powerful machine. Now, how does the Drake do it all in camera? Just the power consumption for the computer and the HDD, especially if it's a RAID, would be above most batteries. Very intriguing.
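A quick back-of-the-envelope calculation shows why capture is hard. The bit depth and frame rate below are my guesses (the thread doesn't specify the Drake's actual numbers); only the 1280x720, 3-channel 4:4:4 part comes from the post above.

```python
# Sustained data rate for uncompressed 4:4:4 720p video.
# Assumed: 10 bits per sample, 24 fps (not confirmed in the thread).

def data_rate_mb_s(width, height, channels, bits, fps):
    """Sustained throughput in MB/s for uncompressed video."""
    bytes_per_frame = width * height * channels * bits / 8
    return bytes_per_frame * fps / 1e6

rate = data_rate_mb_s(1280, 720, 3, 10, 24)
print(round(rate, 1))  # ~83 MB/s, more than a single 2005-era disk sustains
```

Even at 8 bits the figure only drops to about 66 MB/s, which is why a RAID (and its power draw) keeps coming up in this discussion.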
|
April 26th, 2005, 06:46 AM | #45 | |
Trustee
Join Date: Apr 2005
Posts: 1,269
|