July 7th, 2007, 05:50 AM | #91 |
Major Player
Join Date: Mar 2007
Location: Amsterdam The Netherlands
Posts: 200
|
Jose,
The codec would be extremely simple; it is basically packed (12/14 bit) Bayer pixels. The per-image headers include things like (see the packing sketch after this post):
- width
- height
- bit depth
- color model (Bayer RGGB, Bayer GRBG, RGB)
- LUT (integer to half)
- color conversion matrix
- orientation (horizontal and vertical flip)
- camera EUID (for loading calibration data)
- left and right average dark current (measured from dark frames (shutter = 0.0 sec) just before the recording starts, or from the left and right light-covered zones)
- number of frames since midnight
- frames per second

It needs both RGB and Bayer modes, as Final Cut Pro only works fast when the codec for reading is the same as the codec on the timeline. So the result of internal Final Cut Pro renderings will be stored as lossless RGB with the same codec. It is a simple file format, so it would be easy to implement in a recorder, but as it is a proper QuickTime codec you can also save it as a QuickTime file (which is what would happen inside Final Cut Pro).

My current capture software is Boom Recorder (with a Mirage Recorder license); it handles things like buffering and viewing, and of course it also does all the old stuff like timecode, high quality audio, metadata and automatic file naming. Currently I am only implementing IIDC cameras, but it would not be extremely difficult to add GigE once it is finished.

The Mac Mini may be a little too light for doing this stuff, especially if you want to view the result. Mirage Recorder uses quite a lot of GPU power to draw the images on screen. CPU usage is quite low, 30-40%, but with only one CPU this would be 60-80%. The internal disk is also not fast enough, and there is no good interface for an external one either. I am writing it for the MacBook Pro, using FireWire 800 with the Pike and an eSATA ExpressCard connected to a disk.

One last thing: you cannot configure the Pike to do exactly 24 fps, so I am planning to clock the Pike externally with a frequency divider (2000) on an audio word clock. This will also make the Pike lock to audio.

Cheers, Take
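To make the per-frame header idea above concrete, here is a rough Python sketch of how such a header could be packed. Every field name, order and size is an assumption for illustration, not the actual format of Take's codec, and the integer-to-half LUT (which would be variable length) is left out.

    import struct

    # Hypothetical per-frame header layout; all field names and sizes here are
    # assumptions for illustration, not the real codec format.
    HEADER_FMT = "<IIHH9fBB16sffIf"  # little-endian, fixed-size fields

    def pack_header(width, height, bit_depth, color_model, color_matrix,
                    flip_h, flip_v, camera_euid, dark_left, dark_right,
                    frames_since_midnight, fps):
        """Pack one per-frame header of the kind described in the post."""
        return struct.pack(
            HEADER_FMT,
            width, height,            # image dimensions
            bit_depth,                # 12 or 14 bits per Bayer sample
            color_model,              # e.g. 0 = Bayer RGGB, 1 = Bayer GRBG, 2 = RGB
            *color_matrix,            # 3x3 color conversion matrix, row major (9 floats)
            flip_h, flip_v,           # orientation flags
            camera_euid,              # 16-byte camera id, for calibration lookup
            dark_left, dark_right,    # average dark current, left and right zones
            frames_since_midnight,    # frame counter since midnight
            fps)                      # frames per second

As a side note on the external-clock trick: a standard 48 kHz audio word clock divided by 2000 gives exactly 24 Hz, which is presumably how the Pike ends up locked to both 24 fps and the audio.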
July 7th, 2007, 06:00 AM | #92 |
Inner Circle
Join Date: May 2003
Location: Australia
Posts: 2,762
|
Take.
I had a look at that picture; it seems like a form of column noise. The Sanyo HD1 had it worse. I can't remember the explanation, but there was one over in the Sanyo HD forum, in one of the threads I was in; I remember it was a thread about the problem (a mis-named problem, too). See what the firmware does; it might be the only hope. What you describe is a different response at different levels, which really requires a non-linear fixed-pattern correction to partly solve. That might be as simple as measuring the noise at a few levels and fitting a mathematical function to estimate it at any point, you know what I mean (a rough sketch of that idea is below).

We rejected CCDs and FireWire a long time ago because of various problems. I think the situation might have been that you can get CCDs to perform as well as the best CMOS, but at a price (cooling, power consumption, expense). There are some fill-factor advantages in CCDs, but the more complicated the sensor, the less fill factor, so the more reliance on micro-lensing and the narrower the widest usable aperture. For us, in today's market, there is little reason to look past the CMOS Altasens and some of the other, newer sensors. Though my knowledge of present technology is too limited (in those days, details of the Altasens technology were not even available).
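A minimal sketch of the level-dependent column correction Wayne describes, assuming flat-field calibration frames are available at a few exposure levels (this is illustrative only, not any specific camera's calibration scheme):

    import numpy as np

    def build_column_offsets(flat_frames):
        """For each calibration frame, record the per-column deviation from the
        frame mean, together with the overall level it was measured at."""
        levels, offsets = [], []
        for frame in flat_frames:                 # each frame: 2D array of raw counts
            col_mean = frame.mean(axis=0)         # average each column over the rows
            levels.append(frame.mean())           # overall level of this flat field
            offsets.append(col_mean - frame.mean())
        order = np.argsort(levels)                # np.interp needs increasing levels
        return np.array(levels)[order], np.array(offsets)[order]

    def correct_frame(frame, levels, offsets):
        """Estimate each column's offset at its current level by interpolating
        between the calibration levels, then subtract it from every row."""
        est = np.empty(frame.shape[1])
        for c in range(frame.shape[1]):
            est[c] = np.interp(frame[:, c].mean(), levels, offsets[:, c])
        return frame - est[np.newaxis, :]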
July 7th, 2007, 06:37 AM | #93 |
Major Player
Join Date: May 2007
Location: Sevilla (Spain)
Posts: 439
|
OK, so the Mac Mini is a no-go.
We'll stick with a camera head, a GigE interface and a laptop.
July 7th, 2007, 07:04 AM | #94 | |
Inner Circle
Join Date: May 2003
Location: Australia
Posts: 2,762
|
Jose, I was not referring to you. The Epix suggestion is good. But we have to look at fitting into the cracks left in the market by the SI and Red shake-up (Red is coming out with a smaller camera), with the Canon HV20 and Sony XDCAM HD EX attacking the low end of the market.

As for format, it depends on taste. An extremely fast aperture on 2/3 inch, or 16mm, should give you decent depth of field (as the Drake did), but another lens format (non-cinema) is 4/3rds, which is a bigger-sensor compromise between digital sensors and DOF. Apart from those, we are back to 35mm lens adaptors again. What some people do is use a wider-angle lens to get a decent field of view without a lens image adaptor, but that means using a lens with the best aperture to get the DOF (expensive). You have a 720p system; what is it, if you don't mind me asking?

Reading your next posts: the 720p benefits are 50fps (if available) and marketing; it doesn't challenge the 1080p market as much, so it can afford to be less ambitious. The internal analogue-to-digital converter in the Ibis has been very poor and noisy; some additional external circuit design can also improve performance, as the Drake designer revealed to me. Actually, somewhere I have the reference to the original camera that the Drake was based on. The problem is that many camera companies go the easy way and just use the internal ADC. The other problem is when they are going to replace the Ibis5a; Cypress has additional technology to replace it from its buy-out of FF and Smal, but I think not much has happened over the last three years. Performance may well be improving in the background through technological refinements, though the Ibis has less advantage now that other sensor companies have latitude-extension technologies.

According to macosrumors there is a gaming Mac Mini coming; actually they were talking about a gaming iMac last year, but it makes some sense. The GPU in the standard Mac Mini is not the best, so a newer version would be best. Heroine Warrior does Cinelerra, an open-source edit/capture solution that has been used on major productions to capture raw 4:4:4 1080p for years: http://www.heroinewarrior.com/cinelerra.php3

Jamie, good idea about the gumstix. www.ambarella.com have a camera control/H.264 codec chip that can write to an IDE interface (and many others; it even has USB). Such a chip can be directly connected to the sensor, even to a USB camera, I imagine. The H.264 intra-frame codec is the new professional work codec and would be quite desirable for your project. About the Ambarella chips: the more advanced versions are more powerful, but all work around a large parallel array of SPARC RISC microprocessors, so I imagine it might be possible to reprogram it to do whatever compression/codec you like. There are also now two rival software development tool companies offering tools that take normal code and effectively convert it into parallel code, even FPGA code (I forget the names).

Daniel, I tried to talk Reel-Stream into releasing the board for more general solutions in times past. I think we could get better results than the HVX200 mod. I wonder if they would be more interested now.

Well, I wish you guys the best with this; I am just waiting for an affordable solution nowadays.
July 8th, 2007, 07:37 AM | #95 |
Major Player
Join Date: May 2007
Location: Sevilla (Spain)
Posts: 439
|
Hi Wayne,
About the Ambarella chip: it sounds great, but we can't use it with the USB Micron board; we'd get the same speed I get when capturing to the laptop. The main problem here is finding a camera head with a good 2k sensor and an interface fast enough to deliver 24fps at full resolution, or finding someone who can create a different board for the Micron sensor. Once we have that, we can do whatever we want, like attaching it to the Ambarella chip or connecting it directly to the computer.

Jamie, the full datasheet for the MT9P031 chip is at www.framos.co.uk. Just register and download it.

I'd go for full 2k. I don't care if I have to build the adaptor; in fact I already have all the parts. 2/3" 720p sounds very good, but I prefer 2k.
July 8th, 2007, 06:27 PM | #96 |
Major Player
Join Date: May 2007
Location: Sevilla (Spain)
Posts: 439
|
I've gone through 30 or 40 Google pages trying to get info about CMOS sensors we don't know about. No luck so far. It looks like all the interesting options have already been discussed.
Jamie and Take, what do you think about this solution? Micron camera head -> Ambarella chip -> FPGA board (or similar) with a USB interface to the laptop -> simple recording software and camera control under OS X going directly to Final Cut. I suggest USB because the laptop would be capturing an already encoded H.264 clip, so the bandwidth needed is much smaller. If the Ambarella chip has an interface to connect it to the computer, we wouldn't even need the FPGA; that way we could send 2k H.264 clips to the laptop... Do you think it's possible?
July 8th, 2007, 07:34 PM | #97 |
Regular Crew
Join Date: Jun 2006
Location: Columbus Ohio
Posts: 36
|
The biggest problem I see with this setup is this line from the Ambarella website: "The codec supports both interlaced 1080i format, as well as progressive 720p." found here: http://www.ambarella.com/technology/compression.htm
And personally I would like to avoid H.264, since I plan on doing extensive chroma keying. I am not saying this setup won't work; it's just that it may not give us the quality we are all looking for :-(
July 9th, 2007, 02:51 AM | #98 |
Major Player
Join Date: May 2007
Location: Sevilla (Spain)
Posts: 439
|
I just sent an email to Ambarella asking if it would be possible to have 1920x800 (near 2k, but 1:2.40 ratio) encoded progressively.
I'm also very interested in vfx, and if H.264 can't be used for chroma keying it won't be an interesting solution. We'll see. Maybe we can find a lossless encoding chip.
July 9th, 2007, 05:30 AM | #99 |
Major Player
Join Date: Mar 2007
Location: Amsterdam The Netherlands
Posts: 200
|
Why do you want to do lossless encoding?
A single 7200 rpm SATA hard disk is fast enough to sustain Bayer 1920x800 @ 24fps, 14 bits, uncompressed.
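A quick back-of-the-envelope check of that claim; the sustained disk throughput figure is an assumption, not a measurement:

    # Data rate for uncompressed Bayer 1920x800 @ 24 fps.
    # The assumed sustained disk rate below is a guess for a 2007-era drive.
    width, height, fps, bits = 1920, 800, 24, 14

    packed_mb_s = width * height * bits / 8 * fps / 1e6    # ~64.5 MB/s, 14-bit samples packed tight
    unpacked_mb_s = width * height * 2 * fps / 1e6         # ~73.7 MB/s, samples in 16-bit words

    assumed_disk_mb_s = 70.0   # rough sustained rate assumed for a 7200 rpm SATA disk
    print(packed_mb_s, unpacked_mb_s, packed_mb_s < assumed_disk_mb_s)

Tightly packed 14-bit Bayer comes to roughly 64.5 MB/s, and even unpacked 16-bit words stay around 74 MB/s, so a single fast 7200 rpm SATA drive is indeed in the right ballpark.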
July 9th, 2007, 11:54 AM | #100 |
Inner Circle
Join Date: May 2003
Location: Australia
Posts: 2,762
|
The Ambarella, from memory, has storage (I think some form of ATA), USB, and sensor interfaces. So: sensor -> Ambarella -> storage interface (card/hard disk/USB). Because it is not a PC, it should deliver close enough to full USB (2.0, I assume) speed to enable 50fps 720p, fully buffered. You could do USB camera -> Ambarella, but unless the camera is buffered you will lose some bandwidth because of the peaking during readout. Still, compared to a PC it might be better.

H.264 Intra is the replacement for MPEG Intra in professional workflows; it is available from lossless compression on down. What the Ambarella will handle is another matter, but they do offer professional versions for broadcast flows (from what I can tell). But if it is reprogrammable, even just for H.264 Bayer compression, it would be nice. This should be a very powerful chip compared to the fastest PCs, from memory. It is not something for most of us to hassle Ambarella about; they certainly don't give me answers to all my questions, but as we have an engineer here, it is worth looking at.

Still, we already have Juan or Reel-Stream, and Elphel, with possible engines suitable for a camera recorder. There are a few more sensor chips; refer to the Elphel thread. I post them sometimes, hoping that Andrey will relent and see a better sensor for his cameras (though changing suppliers and designs is a good reason for not changing, as it is a major hassle). There is only one chip I don't mention: not really a higher performer (though probably better than the Micron) but cheap, something for private development.
July 9th, 2007, 12:49 PM | #101 |
Regular Crew
Join Date: Jun 2006
Location: Columbus Ohio
Posts: 36
|
Just in case anyone has missed it, there is a thread over in the AVCHD forum about the Aiptek GO-HD (http://www.dvinfo.net/conf/showthread.php?t=95227). This camera uses the Ambarella chip to do 720p. While the footage is pretty good, you can tell the compression is quite high, but as Wayne mentioned it depends on exactly how programmable the chip is as to whether it is usable or not.
July 9th, 2007, 01:39 PM | #102 |
Major Player
Join Date: Mar 2007
Location: Amsterdam The Netherlands
Posts: 200
|
Going for a chip design could be quite cool.
Say, for example, you take an IIDC or GigE camera like the Pike, which already does all the communication with the sensor. Both IIDC and GigE broadcast their data, which is interesting. Now have a chip with a FireWire 800 or GigE interface and an eSATA interface: simply record all the frames received on the bus and send them to the disk. You don't even need a filesystem if you like; just think of the disk as a single file that you write linearly (a rough sketch of that record loop is below). This chip does not need to be that fast either.

Then a second chip with FireWire 800 or GigE, a DVI/VGA output and a USB interface. Its job would be to do simple debayering and zooming, and to show things like numbers and icons. It would listen to a USB HID device, so that you can control the camera (things like shutter time) and send commands to the recording chip to start and stop recording.

The nice thing about this arrangement is that it would allow a fully distributed system. You could have a second recorder chip to record to multiple disks for safety, or a second preview chip for the director. Once the recording chip works it would not need more development afterward, which will make the system stable and reliable; that is nice in an already expensive production. Maybe at some point you would want to add playback capability. I do expect continued development on the preview chips, adding features and increasing preview quality.

It would be an interesting project, but I think right now I'll stick with the software route.
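A minimal sketch of the "disk as one linear file" recorder loop, written in Python purely for readability (a real recorder chip would be hardware or firmware; the frame source and device path below are hypothetical stand-ins):

    import struct

    DEVICE = "/dev/raw_disk"        # placeholder for a raw block device
    MAGIC = b"FRAM"

    def record(get_next_frame, stop_requested):
        """Append frames back-to-back on the device, each with a tiny length
        header so a reader can walk the stream back later."""
        with open(DEVICE, "wb", buffering=0) as disk:
            index = 0
            while not stop_requested():
                frame = get_next_frame()          # packed Bayer data off the bus
                disk.write(MAGIC)
                disk.write(struct.pack("<IQ", len(frame), index))
                disk.write(frame)                 # payload, written sequentially
                index += 1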
July 9th, 2007, 05:28 PM | #103 |
Major Player
Join Date: May 2007
Location: Sevilla (Spain)
Posts: 439
|
It sounds great! We need to know the price of the chip, though, and also what's needed for the whole system to work. I mean, it wouldn't be as easy as connecting the sensor to the chip and the chip to a SATA disk and that's all, would it? We need to study the hardware/software for the controller/viewfinder part too.
I wanted to ask something else, not Ambarella-chip related. Is there any way to overclock a USB interface? I mean, the bottleneck when capturing directly to the computer is the USB; the sensor actually goes about twice as fast at full HD. If there's a way to make the interface go just a little faster, we wouldn't have any problem capturing 1920x800.

This is really getting better with each new post. I'm seriously thinking about keeping the demo board; I was going to return it because of the USB. We've got a week to find out if the board-chip-disk solution is actually possible.

By the way, Wayne... you say the Ambarella chip has a sensor interface. Does that mean you just need to plug in the actual sensor itself and that's all, or do you have to connect the whole demo board?
July 9th, 2007, 06:59 PM | #104 |
Regular Crew
Join Date: Jan 2006
Location: West Country, UK
Posts: 141
|
I've been busy and haven't had much time to follow the threads lately: good to see this debate still going on with a feeling of real purpose. I feel excited again by some of the possibilities people are talking about here. Finding the low-cost solution has taken longer than I (and most other people) thought, but I still keep the faith in home-made HD, if only because I still can't get the images I want from a commercial camcorder that I can afford to buy.
As usual the main problem is getting HD down onto the HDD, so perhaps GigE will finally solve it. Until then I am still a fan of Bayer RAM recording, though this seems to be going out of fashion at the moment! True, you have the restricted duration of single shots, so if you want to make a movie with lots of long dialogue shots, this recording method is hopeless. But what if you don't want to make a movie like that? The durations I am using are between 30-45 seconds, which is surprisingly long enough to get most shots (modern cinema grammar has speeded up, perhaps thanks to MTV). It's longer than the single takes I got with my spring-driven Bolex H16 in the early 1980s! And it's good discipline to plan out a shot and shoot it within that time span if possible.

Each take is automatically split as a single clip, so at the end of the session you convert them to colour AVI, assess the clips, and delete the takes you don't want (to save HDD space with these large clips). The durations I use are slightly less than I can actually get, because the ultimate durations gave a slight "kick" towards the end of the shot which lost sync with the sound. I assume this is the Sumix software not being able to handle the larger clips from 2 GB of RAM. It seems from earlier posts here that Sumix may be updating their software to accommodate large RAM, so the safe maximum duration might get extended.

Rob Scott coded his own RAM-recording software for the M73 camera which extended the RAW recording duration using RAM buffering (writing data to HDD while still capturing to RAM). See Rob's thread here on Alt Imaging called "Interest in open-source capture software for Sumix M73". I'm not sure of the status of Rob's project right now -- Rob, if you're checking this thread, perhaps you could give us an update?

My own preference at the moment regarding bandwidth problems is to shoot a more modest frame size. I'm currently shooting tests at around 1400 or 1600 width, which is my preferred method of "compression". Using the M72 camera, 1600 is full sensor width, so the field of view of the lens is not narrowed. The post workflow is to take the uncompressed clip, export an uncompressed still sequence (TIFF, TGA, whatever), and then finally use Photoshop to enlarge all the frames in the sequence to 2k width (a rough scripted sketch of that batch enlargement is below). Because you start with an uncompressed frame and use bicubic quality to make the enlargement, the results may be more acceptable than onboard realtime JPEG of a larger frame.

All the best, John.
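John describes doing the batch enlargement in Photoshop; purely as an illustration of that step, here is a rough scripted equivalent using Python and Pillow, with placeholder folder names:

    import glob, os
    from PIL import Image  # Pillow

    # Batch bicubic enlargement of an uncompressed still sequence to 2k width.
    # SRC and DST are placeholder folder names.
    SRC, DST, TARGET_WIDTH = "frames_1600", "frames_2048", 2048

    os.makedirs(DST, exist_ok=True)
    for path in sorted(glob.glob(os.path.join(SRC, "*.tif"))):
        img = Image.open(path)
        w, h = img.size
        new_h = round(h * TARGET_WIDTH / w)          # keep the aspect ratio
        img.resize((TARGET_WIDTH, new_h), Image.BICUBIC).save(
            os.path.join(DST, os.path.basename(path)))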
July 9th, 2007, 11:44 PM | #105 |
Major Player
Join Date: Mar 2007
Location: Amsterdam The Netherlands
Posts: 200
|
Jose,
Maybe you will be able to overclock the USB chips, but you want to double the speed, and that will probably not be possible. Maybe you could run dual USB, or use DVI, or your own form of SDI, or simply GigE. Cheers, Take