August 15th, 2006, 01:12 AM | #241 |
Inner Circle
Join Date: May 2003
Location: Australia
Posts: 2,762
|
I was going to reply to your other message, which I very much appreciate, later, but I'll condense it in here, as I am in a rush, as you can see from my jumbled thoughts:
Lossless techniques
The techniques I am describing are all true lossless: the data is still reversible, because you store the differences that reconstruct the original exactly. On the advantages of using inter-frame for storage and converting to intra-frame for editing: converting to intra-frame lossless for storage will quickly consume a lot of space and cost many hard disks. If done properly, all the techniques with inter compression added might compress marvellously. The original intention with the inter-frame compression idea was to convert to an intra-frame intermediate codec only when you were ready for editing, and save back to the inter-frame version for storage. So you get the best of both worlds, but remember that lossless bayer with 100 hours of filming is a lot of drives, before you even get to the extra footage generated in the editing process. Not completely nice, but a compromise between two problems.

First frame uncompressed
You do not need to leave the first frame uncompressed: you have the original image in the buffer to perform the difference comparison on, and if you compress the first frame losslessly you get the original frame back on decompression. If you are talking lossy, that is a different matter, as there will be a quality difference between the first frame and the others: the others are virtually lossless differences while the first frame is lossy. An intensive way around this is to compress and decompress every frame, so the quality is consistent, and then record the difference between each subsequent frame; there must be a smarter way of doing this without having to fully compress. Maybe you could compress a heap of frames as one extended frame, taking advantage of the repetition across the sub-frames. That takes extra time, but the long buffer could be used to smooth it out. As you can see, there are too many variations ;). (A rough sketch of the first-frame-plus-differences idea is at the end of this post.)

UMPC/Origami
As long as you are not developing a Microsoft Origami/Intel UMPC hardware device, I think free SDK/application development platforms should be available. I am sure there is a Microsoft cross-platform development environment. With Linux, you are in the usual situation: somebody is probably trying to develop a version of Linux for them. At any price under $799 you are getting too close to the cost of an ITX + monitor + battery system. I expect we will see UMPCs below $500 eventually (try VIA Web-pads too).

Disk transfers uncompressed
If the processor is offloaded from the compression task, I think there might be enough processing power, as long as DMA is available and it is not restricted to 16MB/s. Just a very simple/easy option, and it can be compressed in post for storage. Of course I am only talking about 720p25/24 here, not 1080, which I agree would be too much.

Not dual Ethernet/disk formats
You can transmit the same format via Ethernet for viewing and record it to the internal camera disk, and do the intra conversion later. But if the processor is free enough from the FPGA Jpeg process, you could record to disk and use the FPGA to produce Jpeg for Ethernet. If it passes through the processor then maybe not, unless there is DMA to perform the transfers.

Quality
Even if we can get max-quality Jpeg onto a disk, that is an improvement. Does anybody know what compression ratio max quality gives? I think 3:1 or better is what we should be looking at; 2:1 is probably close to Cineform's visually lossless level. But Jpeg is very sloppy and imprecise, and there are ways to get it to compress a much sharper image.
I do not know what quality of Jpeg mechanism the Elphel uses; the images look nice, so maybe it already uses a better, more accurate Jpeg mechanism. Does anybody know? For this difference, refer to the threads in those newsgroups I mentioned, and the compression FAQ related to the newsgroup, under the section on lossless Jpeg implementations, which explains why normal Jpeg lacks the precision for true lossless. Well, the afternoon is gone again; looks like I didn't save any time anyway ;). |
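Here is the rough sketch promised above of the first-frame-plus-differences idea, just to make it concrete. It is only an illustration of the reversibility, not the camera code: the plain int-array frame layout and the tiny test data are my assumptions, and a real version would entropy-code the differences rather than just keep them in memory.

    // Minimal sketch: lossless temporal differencing of frames.
    // The first frame is kept whole (it can still be compressed losslessly on its own);
    // every later frame is stored as the difference to the previous one, which is
    // exactly reversible, so the whole chain stays true lossless.
    public class TemporalDiffSketch {

        static int[][] encode(int[][] frames) {
            int n = frames.length, len = frames[0].length;
            int[][] out = new int[n][];
            out[0] = frames[0].clone();                              // "key" frame
            for (int f = 1; f < n; f++) {
                out[f] = new int[len];
                for (int i = 0; i < len; i++)
                    out[f][i] = frames[f][i] - frames[f - 1][i];     // small values on static scenes
            }
            return out;
        }

        static int[][] decode(int[][] coded) {
            int n = coded.length, len = coded[0].length;
            int[][] out = new int[n][];
            out[0] = coded[0].clone();
            for (int f = 1; f < n; f++) {
                out[f] = new int[len];
                for (int i = 0; i < len; i++)
                    out[f][i] = out[f - 1][i] + coded[f][i];         // bit-exact reconstruction
            }
            return out;
        }

        public static void main(String[] args) {
            int[][] clip = { {10, 12, 11}, {10, 13, 11}, {11, 13, 12} }; // three tiny "frames"
            System.out.println(java.util.Arrays.deepEquals(clip, decode(encode(clip)))); // true
        }
    }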
August 15th, 2006, 05:42 AM | #242 | ||||||
Regular Crew
Join Date: Aug 2006
Location: Hungary
Posts: 59
|
Quote:
Quote:
However, all of the above is obsolete if we use an interframe method, which I'm starting to like more and more. I already have an algorithm which I implemented in Java and have done some tests with. In lossless mode the intraframe results were about 1.5:1 on high-frequency data (a picture of a tree with sunlight coming through its leaves) and 2.8:1 on low-frequency data (asphalt as a background with some vehicles on it). If we consider that the temporal frequency of the same pixel between frames is usually much lower than the spatial frequency of the mentioned low-frequency image, then we'll be able to achieve ratios even larger than that. And we need only 3:1 (or an even smaller ratio is enough if we write to disk, see below), so there'll be some bandwidth remaining for camera panning/object movement, which generates images that are harder to compress in the time domain. If the camera/object movement is really fast then the resulting motion blur will smooth the differences between pixels anyway, so we get the same frequency as if the movement were slower. (A rough sketch of this kind of intra vs. inter test is at the end of this post.) Quote:
Quote:
Quote:
Quote:
Zsolt |
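The sketch mentioned above, of the kind of intra vs. inter comparison I have been running. This is not my actual algorithm; it just uses java.util.zip.Deflater as a stand-in entropy coder and assumes the frames are already available as plain 8-bit bayer byte arrays, to show how the intraframe and interframe ratios can be measured against each other.

    import java.util.zip.Deflater;

    // Rough intra vs. inter test: compress one frame on its own, then compress the
    // difference to the previous frame, and compare the resulting ratios.
    public class RatioTest {

        static int deflatedSize(byte[] data) {
            Deflater d = new Deflater(Deflater.BEST_COMPRESSION);
            d.setInput(data);
            d.finish();
            byte[] buf = new byte[64 * 1024];
            int total = 0;
            while (!d.finished())
                total += d.deflate(buf);
            d.end();
            return total;
        }

        // prev and curr are raw bayer frames of equal length.
        static void compare(byte[] prev, byte[] curr) {
            byte[] diff = new byte[curr.length];
            for (int i = 0; i < curr.length; i++)
                diff[i] = (byte) (curr[i] - prev[i]);   // wraps mod 256, still exactly reversible

            double intra = (double) curr.length / deflatedSize(curr);
            double inter = (double) curr.length / deflatedSize(diff);
            System.out.printf("intraframe %.2f:1, interframe %.2f:1%n", intra, inter);
        }
    }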
||||||
August 15th, 2006, 10:12 PM | #243 | |||||
Inner Circle
Join Date: May 2003
Location: Australia
Posts: 2,762
|
Quote:
Quote:
I think we might be speaking two different languages here: you seem to be talking about spatial-to-frequency transforms like those used in Jpeg and wavelet (and most codecs), while I am talking about whole values and simple integer-based compression and difference schemes (like those used in fax standards), with some mathematical prediction to reduce the difference. These integer-based difference schemes with some prediction are much simpler in software and FPGA than the normal schemes, and I think need less processing. What is best is probably to test all the methods, decide which works best in which circumstances, and include the best; or, for better compression, if small enough on the FPGA, swap between them as context dictates (more advanced). (There is a tiny sketch of the kind of predictor-plus-difference scheme I mean at the end of this post.) Quote:
Quote:
Quote:
Thanks for the accurate specs of the current system's performance. Pity that it can't do uncompressed, but it still puts reasonable compression within reach. I should say, bayer compression is definitely the way to go: you instantly get a 3:1 improvement over 4:4:4, which is very hard to match with 4:4:4 compression. Did you know the Ogg Theora people were developing a lossless wavelet compressor that they put on hold to develop the Theora codec? That should be back up and running. Keep it up Zsolt, I am glad that you have ideas and are examining others. Still, it would be interesting to get Juan's input, as he was doing a difference-based post compression for storage. Once again, sorry about the length; I normally rewrite more to condense, but did not get away yesterday and have to rush again today. |
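The tiny sketch mentioned above of the predictor-plus-difference flavour I mean. It is not any particular fax or lossless Jpeg standard, just the general shape of it: whole integer values, one prediction and one subtract per pixel. Using the same-colour neighbours two steps away keeps the bayer channels separate, and the median/gradient predictor is my own pick from the lossless Jpeg/PNG family, only as an example.

    // Predictor-plus-difference on a raw bayer frame: integer maths only.
    // Same-colour neighbours in a bayer mosaic sit 2 pixels away, hence the offsets of 2.
    public class BayerPredictSketch {

        static int[] residuals(int[] pix, int width, int height) {
            int[] res = new int[pix.length];
            for (int y = 0; y < height; y++) {
                for (int x = 0; x < width; x++) {
                    int left   = x >= 2 ? pix[y * width + (x - 2)] : 0;               // same colour, to the left
                    int up     = y >= 2 ? pix[(y - 2) * width + x] : 0;               // same colour, above
                    int upLeft = (x >= 2 && y >= 2) ? pix[(y - 2) * width + (x - 2)] : 0;
                    int pred   = median(left, up, left + up - upLeft);                // simple gradient predictor
                    res[y * width + x] = pix[y * width + x] - pred;                   // small residual to entropy-code
                }
            }
            return res;   // the decoder repeats the same prediction and adds the residual back
        }

        static int median(int a, int b, int c) {
            return Math.max(Math.min(a, b), Math.min(Math.max(a, b), c));
        }
    }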
|||||
August 16th, 2006, 12:38 AM | #244 | ||
Regular Crew
Join Date: Aug 2006
Location: Hungary
Posts: 59
|
Quote:
Quote:
I did some thinking on using wireless networks. The new 540Mbps devices haven't come out yet, or if they have then they're probably very expensive. So we could only use normal 54Mbps wifi, which is 6.75MB/s, way too thin. So forget wireless: record to disk and transfer a reduced-resolution lossy stream to the display of a handheld attached to the camera directly, through a short ethernet cable. The mentioned 15MB/s disk write is the maximum the new processor will handle. The current ethernet data rate is 8.75MB/s. If we halve the horizontal and vertical resolution and reduce the quality we could get 1-2MB/s, so the disk transfer could still use a 13-14MB/s transfer speed (a rough budget sketch is at the end of this post). Question is, do we have the time to encode to two different formats? The problem with this approach is that setting the lens focus won't be easy if it is based on a poor-quality image. Zsolt Last edited by Zsolt Hegyi; August 17th, 2006 at 12:32 AM. |
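The budget above written out as a back-of-the-envelope calculation, assuming 8-bit bayer samples at 1280x720 and 25fps; the sensor bit depth and the exact preview rate are assumptions here, not specs.

    // Back-of-the-envelope bandwidth budget for 720p25 raw bayer recording.
    public class BandwidthBudget {
        public static void main(String[] args) {
            double raw      = 1280 * 720 * 25 / 1e6; // ~23 MB/s at one byte per bayer pixel (assumed 8-bit)
            double wifi     = 54 / 8.0;              // 54Mbps wifi = 6.75 MB/s, way too thin for this
            double ethernet = 8.75;                  // current ethernet rate, MB/s
            double diskMax  = 15.0;                  // max disk write the new processor should handle, MB/s
            double preview  = 1.5;                   // reduced-resolution lossy preview stream, ~1-2 MB/s
            System.out.printf("raw %.1f MB/s, wifi %.2f MB/s, ethernet %.2f MB/s%n", raw, wifi, ethernet);
            System.out.printf("disk left for recording %.1f MB/s, so a ratio of about %.1f:1 is enough%n",
                              diskMax - preview, raw / (diskMax - preview));
        }
    }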
||
August 16th, 2006, 12:07 PM | #245 | ||
Major Player
Join Date: Apr 2006
Location: Magna, Utah
Posts: 215
|
Quote:
Quote:
Also - the new chip is larger, so you may instantiate some modules twice and have twice the speed. Next - the 353 will have 5x the speed of FPGA->system memory transfers, possible with direct bus control (compared to only the pseudo-DMA of the ETRAX-100LX)
||
August 16th, 2006, 11:24 PM | #246 |
Regular Crew
Join Date: Aug 2006
Location: Hungary
Posts: 59
|
Thanks for the corrections Andrey. As I wrote, my numbers were only predictions, without exactly knowing the parameters of the new 353.
The increases in fpga processing speed (mem/compr clock separation, double instantiation, 5x fpga-mem speed) are good news, but if we reach the limit of the processor before reaching the limits of the fpga then we get no benefit from them. If we record to disk we don't have an ethernet limit, so the main question now is: what is the data throughput of the new processor? Zsolt |
August 17th, 2006, 12:00 AM | #247 | |
Major Player
Join Date: Apr 2006
Location: Magna, Utah
Posts: 215
|
Quote:
|
|
August 17th, 2006, 11:59 AM | #248 | |
Inner Circle
Join Date: May 2003
Location: Australia
Posts: 2,762
|
Zsolt
Mihai Cartoaje has just posted over in comp.compression that he has added bayer support to the wavelet library Libima, though he mentions something about lousy (lossy?). http://geocities.com/repstsb/libima.html Probably worth going over to the newsgroup and seeing their ideas. Have you had a look yet? Quote:
I would not mind testing out some algorithm variations myself. My previous thoughts on the bayer predictive issues are becoming clearer now. It also has to do with establishing the ratio of the colours from analysis of the surrounding pixels, and using that in the predictive value for the difference operation, as well as the previous information I gave (a rough sketch of one way to read this is below). Thanks Wayne. |
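To make that a bit more concrete, here is one possible reading of the colour-ratio idea, again only a sketch and only my own guess at the wiring: predict a red pixel from the previous red on the row, scaled by how the neighbouring green level has changed, so that a brightness ramp does not turn into large differences. The RGGB layout, the 8-bit range and the use of only already-seen pixels (so it stays reversible) are my assumptions.

    // Sketch: use the surrounding green level to scale the prediction of a red pixel.
    // pix is a raw RGGB bayer frame; (x, y) must be a red sample with x >= 3.
    public class ColourRatioPredict {

        static int predictRed(int[] pix, int width, int x, int y) {
            int prevRed = pix[y * width + (x - 2)];        // previous red on this row
            int gHere   = pix[y * width + (x - 1)];        // green just left of the current red
            int gPrev   = pix[y * width + (x - 3)];        // green just left of the previous red
            if (gPrev == 0) return prevRed;                // avoid dividing by zero
            long scaled = (long) prevRed * gHere / gPrev;  // carry the local red/green ratio forward
            return (int) Math.max(0, Math.min(255, scaled));  // clamp to the assumed 8-bit range
        }

        // The residual that would actually be stored or transmitted.
        static int residual(int[] pix, int width, int x, int y) {
            return pix[y * width + x] - predictRed(pix, width, x, y);
        }
    }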
|
August 18th, 2006, 12:21 PM | #249 | ||||
Regular Crew
Join Date: Aug 2006
Location: Hungary
Posts: 59
|
Quote:
Quote:
Quote:
Quote:
Zsolt |
||||
August 20th, 2006, 01:01 AM | #250 |
Regular Crew
Join Date: Apr 2004
Location: UK
Posts: 74
|
http://www.tacx-video.com/images/HD2006/Italy/Rome A few reduced size pictures from the 333 in Rome last week.
|
August 21st, 2006, 05:26 AM | #251 |
New Boot
Join Date: Jun 2006
Location: Germany, near munich
Posts: 14
|
Hi,
One question, Phil: please tell me which lens you used in Rome. I am looking for a wide-angle lens like that one. Robert
August 21st, 2006, 05:54 AM | #252 |
Inner Circle
Join Date: May 2003
Location: Australia
Posts: 2,762
|
Thanks Zsolt.
It might take a lot of time before I am ready; some unexpected things have come up. I'll probably email you when I am freer. The register suggestion (also implying on-chip RAM used as registers) was only on the basis of doing a pixel at a time, needing only a few memory words/registers for the surrounding pixels and intermediate results, not 20*20 blocks of pixels. |
August 21st, 2006, 08:06 AM | #253 |
Inner Circle
Join Date: May 2003
Location: Australia
Posts: 2,762
|
Noise and compression artifact removal.
If anybody is interested: Michael Schoeberl, who has experience with noise removal in medical imaging, has put me onto some very good noise removal software and plugins over at the comp.compression thread; they also work on compression artifacts. There are both still and video versions.
http://www.neatimage.com/ http://www.neatvideo.com/index.html?snim This would lead to cleaner, more compressible files in post. I am also aiming to look for routines suitable for use on camera. |
August 21st, 2006, 10:08 AM | #254 |
Inner Circle
Join Date: May 2003
Location: Australia
Posts: 2,762
|
I have looked over the examples, and the results are pretty amazing. I compared the before and after file sizes on their site: mostly reductions to less than half the original size, usually with less reduction for the stills. I must admit this does not entirely make sense; I think the re-compressor they are using is not doing such a good job, otherwise I would expect more reduction on average than this. There is some minor loss of detail at times, and gain in some other places, as it tries to predict what is what. But still very nice.
http://www.neatimage.com/examples.html http://www.neatvideo.com/examples.html http://www.neatimage.com/reviews.html Reported to be very good too (see conclusions) http://www.michaelalmond.com/Articles/noise.htm |
August 21st, 2006, 02:29 PM | #255 | |
Regular Crew
Join Date: Aug 2006
Location: Hungary
Posts: 59
|
Quote:
You're probably aware that the cmos chips we intend to use contain analog noise removal circuits and they're really good (removing the noise digitally is not nearly as efficient). Well, unless you push up the analog gain in the chip - that seems to be the case with several images posted on the above link. And the other set of images is just poorly jpeg-compressed. By using correctly exposed sensors with normal analog gain levels we should have no significant noise on the raw bayer images. Last edited by Zsolt Hegyi; August 22nd, 2006 at 09:34 AM. |
|