July 31st, 2006, 12:15 AM | #226
Regular Crew
Join Date: Jun 2006
Location: Columbus Ohio
Posts: 36
Questions
Hello Andrey. I recently read through this entire thread and I must say I am very impressed, both by the quality of video the 333 is capable of and by your willingness to help out the community. But now that the 353 is on its way I have a few questions about it. So here goes:

1. With the IDE connector built in, I am assuming we are going to see much higher bandwidth than on the 333?
2. If the image data can be stored on the hard drive, I assume we will still need a computer with a network interface to control the camera?
3. I know your priority is to use this as a security camera, so I was wondering how much code development you are going to do for the IDE interface? Are you going to fully implement it, or leave it up to those of us who want to use it for other purposes?

Well, these are all the questions I can think of for now. Thanks for your time.
July 31st, 2006, 01:35 AM | #227
Major Player
Join Date: Apr 2006
Location: Magna, Utah
Posts: 215
Quote:
What will increase is the CPU speed, network performance (Ethernet checksum calculation is done in dedicated hardware on the CPU chip) and the FPGA->system memory transfer rate. Writing to disk is faster too; the biggest difference compared to the 333 will be at low compression ratios/high quality.

Quote:

Another option will be the USB host (unfortunately the CPU has only USB 1.1, not 2.0), which might be connected to a USB WiFi adapter - one of those that has an open-source driver, so it can be compiled to run on the camera's non-x86 CPU.

Quote:

As for the IDE interface itself - I'll definitely make sure it actually works. After that it is just a hard drive connected to the GNU/Linux computer (in the camera): you can use it with any of the existing file systems and other software.
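For example, from user space it would be something like this (the device name, mount point and filesystem here are placeholders, not the camera's actual configuration):

/* Minimal sketch: the IDE drive in the camera is just a Linux block
 * device, so it mounts and behaves like any other disk. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mount.h>

int main(void)
{
    /* Mount the first partition (the mount point must already exist). */
    if (mount("/dev/hda1", "/mnt/hdd", "ext2", 0, NULL) != 0) {
        perror("mount");
        return 1;
    }
    /* From here on it is ordinary file I/O: stream frames to a file. */
    int fd = open("/mnt/hdd/capture.raw", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* ... write() frame buffers here ... */
    close(fd);
    umount("/mnt/hdd");
    return 0;
}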
July 31st, 2006, 05:16 PM | #228
Regular Crew
Join Date: Jun 2006
Location: Columbus Ohio
Posts: 36
Excellent, thanks for the reply. I have not used FPGAs since I was in school, so I may be a little rusty. But I have a friend who works with them regularly, so I may get him to give me a refresher course. Then I will start looking over the great information on your site to get a better idea of how your setup works. I know you have posted the image of the routed 353 board; would you be willing to share the actual schematic at this stage?

Thanks again.
July 31st, 2006, 06:34 PM | #229
Major Player
Join Date: Apr 2006
Location: Magna, Utah
Posts: 215
Quote:
August 1st, 2006, 11:06 AM | #230
Regular Crew
Join Date: Apr 2004
Location: UK
Posts: 74
Quote:
August 3rd, 2006, 08:16 PM | #231
Major Player
Join Date: Apr 2006
Location: Magna, Utah
Posts: 215
Quote:
August 10th, 2006, 05:09 AM | #232
Regular Crew
Join Date: Aug 2006
Location: Hungary
Posts: 59
Compression
Hello all,
In the last few months I've been busy building my own camera based on a Micron sensor. It didn't go really well, as I'm mainly a software guy. Thanks to this thread I found out that Elphel is building nearly the same device that I wanted to build. So I decided to leave the hardware work to the professionals and develop my own software.

I'm interested only in the 1280x720 resolution because: 1) the new sensor will be able to provide that with binning, and 2) that amount of data might be compressed losslessly to fit into the bandwidth of the camera (at 24fps).

What we know:
- The current camera bandwidth is 8.75MB/s over Ethernet. The new camera will have a faster processor, which will raise this number slightly.
- Direct-to-disk recording is not useful for those who want to see and control the picture while recording; that can only be handled with a separate PC.
- Current memory I/O is slow, and although they plan to increase it by a huge factor, there will still be memory I/O during compression using Theora.
- A LUT can be used to drop the bit depth down to 10.
- 1280x720 x 10 bits x 24fps = 26MB/s, so using the 333 we would need a 3:1 compression ratio, but with the 353 a smaller value might also be sufficient.

After I have an encoder I plan to write the decoder part. It will be written in C and realized as a plugin for a popular video editing application running on a PC. This plugin will never modify the actual raw data beneath; it will store changes as metadata. If all goes well I intend to release the result. Andrey told me that he needs a few weeks to complete the camera, so I'm planning to be ready with my things at the same time. I'll release sample images once I have my camera and all my software working with it.

I don't have experience in image compression, so if you have suggestions feel free to submit them here; and Andrey, if you feel that I'm about to do something stupid, please correct me :-) I don't want to run into dead ends. One other thing: I'm only interested in simple algorithms; if the complexity reaches a certain level I'll stop development and rather use the built-in codec of the 353...

Zsolt
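To make the LUT point above concrete, here is a sketch of the bit-depth reduction (the 12-bit sensor output and plain truncation are only my assumptions; a real table could hold a gamma-like curve instead):

/* LUT-based bit-depth reduction: one table entry per 12-bit code.
 * Bandwidth check: 1280 x 720 x 10 bits x 24 fps = 221,184,000 bits/s
 * = 27,648,000 bytes/s, i.e. about the 26MB/s quoted above. */
#include <stdint.h>

static uint16_t lut[4096];

void build_lut(void)
{
    for (int i = 0; i < 4096; i++)
        lut[i] = (uint16_t)(i >> 2);    /* truncate 12 -> 10 bits */
}

void reduce_depth(const uint16_t *in, uint16_t *out, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = lut[in[i] & 0x0fff];   /* mask to 12 bits, then map */
}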
August 12th, 2006, 10:55 PM | #233
Inner Circle
Join Date: May 2003
Location: Australia
Posts: 2,762
Welcome
Quote:
Zsolt, are you doing this compressor in software or hardware? Most compression schemes are difficult, and I don't know if the camera has enough processing power to do it in software. One of the easiest improvements is to buffer the frames (to smooth out the data rate) and to store the difference between frames. Used alongside the internal compressors, this would give you an advantage in disk space and data rate much of the time.

One of the guys in the Digital Cinema threads was doing a simple lossless Bayer codec with frame-difference compression and reported very good results. I don't know if it was one of the Juans, or Jason Rodriguez who is now at Silicon Studios, but it's best to contact them. I think the person mentioned it in my Red Codec Suggestions thread in the Red camera sub-forum. Read my previous suggestion posts here; I believe I posted links to Wikipedia pages listing many open and lossless codecs. It might be easier to drop one in from existing software, if software is what you plan. As you can see, I mentioned that the BBC has one coming along in FPGA; there is also more behind it, so it's worth looking at.
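Roughly, the frame-difference idea looks like this; a sketch only, where the 16-bit samples and fixed 1280x720 frame are assumptions, and a real version would feed the deltas into an entropy or run-length stage:

/* Inter-frame difference coding: keep the previous frame and emit
 * per-pixel deltas. The first frame goes out as-is (deltas from zero).
 * Frame size and sample type are illustrative assumptions. */
#include <stdint.h>
#include <string.h>

#define W 1280
#define H 720

static uint16_t prev[W * H];

void encode_frame(const uint16_t *cur, int16_t *out, int first)
{
    for (int i = 0; i < W * H; i++)
        out[i] = (int16_t)(cur[i] - (first ? 0 : prev[i]));
    memcpy(prev, cur, sizeof(prev));
}

Static scenes then produce mostly zero deltas, which any following compression stage packs very tightly.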
Quote:
You can record direct to disk on the camera and send a feed out through Ethernet to a portable computer (even a handheld) for viewing. That way all the handheld has to do is decode, display, and send control signals back, so a lower-cost device is all that is needed. An uncompressed image could even be saved to disk. With a simple compression algorithm like the one Juan/Jason (whichever it was) was working on, you could save heaps of disk space while sending a JPEG/Theora version to the portable.
I wish you success, Zsolt. Congratulations.
August 12th, 2006, 11:16 PM | #234
Inner Circle
Join Date: May 2003
Location: Australia
Posts: 2,762
I forgot to mention low-cost devices that can be used instead of a computer for viewing and control, if they have Ethernet. Most will have some form of official/unofficial Linux, so Linux development can be ported between systems:

- Intel's UMPC (Ultra Mobile PC) platform, MS Origami (a cheaper version is coming)
- Some PlayStation models (I don't think the Portable has Ethernet??)
- PDAs
- Nintendo Wii (I think it may have Ethernet)
- Embedded industrial micro-controllers: hundreds, if not thousands, to search through
- Future machine: the Nintendo Gamecube Portable (maybe called GBA2), which I expect to have a higher-res screen than the PSP, but its Ethernet status is unknown (and maybe wireless only)

If there is a cheap Ethernet-to-USB/SD/?? card adaptor, then most portable devices without Ethernet can be used. Search for my Technical thread for much information about previous Digital Cinema cameras here: http://www.dvinfo.net/conf/showthread.php?t=28781
August 13th, 2006, 02:19 PM | #235
Inner Circle
Join Date: May 2003
Location: Australia
Posts: 2,762
Hello Zsolt,
I have spent some time posting a thread over at the comp.compression newsgroup requesting advice on lossless Bayer compression, etc. You might like to go over there and see what people say; there should be a lot of expert professionals there. If you don't have a newsgroup reader set up, you can find it through Google's newsgroup reader. The thread title starts with "Lossless/Near lossless, Bayer". There is also a thread there with information on significant JPEG recompression; there were a number of techniques, but the best is covered by a patent. Unless Elphel has an arithmetic coding license, you probably will not be able to use most of them. Don't be discouraged if nobody replies to your post, it happens around here; best to just keep looking into your ideas in the short term.
August 14th, 2006, 02:26 AM | #236
Major Player
Join Date: Apr 2006
Location: Magna, Utah
Posts: 215
Quote:
August 14th, 2006, 04:47 AM | #237
Inner Circle
Join Date: May 2003
Location: Australia
Posts: 2,762
Thanks. I just got a post from a guy over at the newsgroup with some good suggestions for lossless Bayer compression; he worked on them himself and implemented one with 1000 LUTs in an FPGA for his thesis. His suggestion sounds a bit like what was suggested in discussions with CineForm a while ago.

Otherwise I am not really getting the depth or breadth of answers I am looking for over there. I wish I could post a link, but I don't know how to; maybe through Google, a bit of a chore. What do you think of the idea of converting the different-coloured Bayer pixels into one colour, which can be restored after decompression, and then compressing as a grey scale? Would that be simple, and would it help much with compression?
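To show what I mean, something along these lines (the RGGB layout and power-of-two scale factors are just for illustration, chosen so the mapping is exactly reversible):

/* Illustration of the "one colour" idea: map each Bayer channel onto a
 * common brightness scale so the mosaic compresses as plain grey scale,
 * then invert the mapping after decompression. Power-of-two factors
 * keep the transform exactly reversible; layout and shifts are assumed. */
#include <stdint.h>

#define W 1280
#define H 720

/* Hypothetical per-channel scale factors: R<<1, G<<0, B<<2 (RGGB). */
static const int shift[2][2] = {
    { 1, 0 },   /* R,  G1 */
    { 0, 2 },   /* G2, B  */
};

void normalize(uint16_t *img, int inverse)
{
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            int s = shift[y & 1][x & 1];
            uint16_t *p = &img[y * W + x];
            *p = inverse ? (uint16_t)(*p >> s) : (uint16_t)(*p << s);
        }
}

The shifted values just need to still fit the sample container (10-bit data shifted by 2 stays well under 16 bits), and the decoder runs the same routine with inverse set.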
August 14th, 2006, 06:59 AM | #238
Regular Crew
Join Date: Aug 2006
Location: Hungary
Posts: 59
Quote:
The algorithm I'm currently using for intraframe compression will also be sufficient for interframe compression. With this method only the previous frame needs to be stored, not a whole group of frames, so it's easier to implement. The first frame must be stored without compression, though, but that's not a problem as the average bandwidth usage will not increase in the long term.
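For clarity, the decoder side of what I mean would look something like this (the sample type and frame size are illustrative only):

/* Decoder: the first frame arrives uncompressed (as deltas from zero),
 * and each later frame is reconstructed by adding the received
 * per-pixel deltas to the previous reconstructed frame. */
#include <stdint.h>

#define W 1280
#define H 720

static uint16_t frame[W * H];   /* last reconstructed frame */

void decode_frame(const int16_t *deltas)
{
    for (int i = 0; i < W * H; i++)
        frame[i] = (uint16_t)(frame[i] + deltas[i]);
}

Since statics start at zero in C, the very first set of "deltas" is just the raw image and reconstruction falls out of the same loop.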
Zsolt
August 14th, 2006, 07:09 AM | #239
Regular Crew
Join Date: Aug 2006
Location: Hungary
Posts: 59
Quote:
August 15th, 2006, 01:07 AM | #240
Inner Circle
Join Date: May 2003
Location: Australia
Posts: 2,762
Quote:
As I understand it, colour follows brightness, which is why Bayer interpolation works: the brightness contains the high-frequency data, while the colour tends to stay the same over large areas, so each colour channel carries mostly the same frequency information. Converting to one reversible colour eliminates most of this difference between channels, leaving the real underlying frequency differences and the odd colour change. When you convert to one colour you bring the values of the pixels closer to one another and can apply more efficient grey-scale compression. But I see what you mean: retaining backwards compatibility means there will be some extra frequency variations. The interesting thing is that the variation reflects real data differences.

This was only a simple, stop-gap idea that could be very easily implemented on the existing compression architecture to give an extra boost when compressing the Bayer pattern as a grey scale (which Andrey recommended for the existing setup), by reducing the differences between channels.

I had been thinking of a new method some time back. Here you split into channels as you suggest and convert to a normalised colour, compress one channel (this means compressing disjointed details, as the intervening pixels are missing), then record the difference from the other channels. But with my idea you base the difference on the interpolation between the pixels of the first channel; i.e., the interpolated value is the assumed value of the pixels of the other channels. This is not as simple as doing the conversion and using the existing grey-scale compression.

Approaching the above system from another direction, you can leave all channels in their existing colours; the assumed interpolated pixel used for comparison can then be converted into the colour of the channel of the pixel being addressed. I should have stated this earlier: the reason I designed it that way is that each colour has a different response curve, which means exactly the same brightness on an object has a different value in each colour. This is surplus, predictable information, and my methods are designed to eliminate it and increase compression.

Going one step further, into the realm of FAX compression standards (see JBIG for the best), there is area compression, with a lot less processing I believe: the difference across a line is recorded, then the difference between lines, which can then be easily run-length encoded. To get over corruption, there are file and transmission techniques to fully recover data from corrupt files, for maybe a 25% increase in final compressed size, which can be greatly over-compensated for by compression savings - either by using difference compression on the original channel or by longer GOPs in other compression, etc. JBIG tends to get good compression with less noise and detail, but I think that by combining it with the normalised-colour and assumed-value techniques I describe above you can get better compression than CineForm.

Forgive me that my reasoning here is not completely clear; I have trouble picking up the thread of what I worked out when designing this strategy some time ago.
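A sketch of that assumed-value prediction (the RGGB layout and the plain four-neighbour average are my simplifications of the interpolation step):

/* Predict each red/blue sample from the average of its four green
 * neighbours (the "assumed value") and keep only the residual. In an
 * RGGB mosaic the horizontal/vertical neighbours of an R or B site are
 * all green; borders are skipped for brevity. */
#include <stdint.h>

#define W 1280
#define H 720

static int green_avg(const uint16_t *img, int x, int y)
{
    return (img[(y - 1) * W + x] + img[(y + 1) * W + x] +
            img[y * W + (x - 1)] + img[y * W + (x + 1)]) / 4;
}

void predict_rb(const uint16_t *img, int16_t *residual)
{
    for (int y = 1; y < H - 1; y++)
        for (int x = 1; x < W - 1; x++) {
            int r_site = (y % 2 == 0) && (x % 2 == 0);   /* R in RGGB */
            int b_site = (y % 2 == 1) && (x % 2 == 1);   /* B in RGGB */
            if (r_site || b_site)
                residual[y * W + x] =
                    (int16_t)(img[y * W + x] - green_avg(img, x, y));
        }
}

A per-channel response-curve correction, as described above, would be applied before taking the difference so the residuals shrink further.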