High Definition with Elphel model 333 camera
Old April 14th, 2006, 05:21 PM   #46
Inner Circle
 
Join Date: May 2003
Location: Australia
Posts: 2,762
Quote:
Originally Posted by Jef Bryant
With that mjpeg compression, underexposing might not be a good thing for image manipulation later.

Is there any way around this compression? Has Elphel developed any other options in that area besides MJPEG? I like most of what I'm seeing, but the compression seems pretty heavy.
They talk about doing USB2 on the development site, which is a number of times faster than the Ethernet they use; that could be enough to enable RAW frame transfers in real time (depending on the size and rate of the frame). Otherwise, VP3 is the alternative to MJPEG they also use, and it should hopefully yield much better quality.
Wayne Morellini is offline  
Old April 15th, 2006, 04:32 AM   #47
Major Player
 
Join Date: Jan 2005
Location: (The Netherlands - Belgium)
Posts: 735
Isn't it better to keep some things apart... of course there may be more potential in the Elphel camera, but to me it's a bit strange to talk about a compressed HD camera and then start talking about how to make it uncompressed.
The benefit of compression, and the quality it seems to deliver (especially with the Theora codec), is pretty high.

My camera was shipped about a week ago, so I guess it'll arrive early next week. I have my wax adapter ready, so I'm very curious.
Oscar Spierenburg is offline  
Old April 15th, 2006, 09:38 AM   #48
Major Player
 
Join Date: Apr 2006
Location: Magna, Utah
Posts: 215
Quote:
Originally Posted by Wayne Morellini
They talk about doing USB2 on the development site, which is a number of times faster than the Ethernet they use; that could be enough to enable RAW frame transfers in real time (depending on the size and rate of the frame).
The USB we are working on is USB host, not USB device (the same as on the computer side). So it will be possible to connect WiFi adapters, flash memory or sound cards to the camera, not the camera to the computer. Of course, only devices that are standard and have open source drivers will work - the camera CPU (Axis ETRAX100LX) is not x86, so proprietary binary drivers will not work. And in the current version of the camera it will not be really fast - definitely slower than Ethernet.

Quote:
Originally Posted by Wayne Morellini
Otherwise, VP3 is the alternative to MJPEG they also use, and it should hopefully yield much better quality.
Currently we have two alternative versions of the FPGA code and the software - one with JPEG/MJPEG and one with Ogg Theora - they do not fit in the FPGA at the same time. And as of today we have more features in the MJPEG branch, because it was easier to use on the client side - full-speed camera data is a challenge for most PCs even with the simpler-to-decode MJPEG. But the Theora client-side software has now improved, and we will move the new features that we have with MJPEG only to the Theora branch too.
Andrey Filippov is offline  
Old April 15th, 2006, 02:48 PM   #49
Inner Circle
 
Join Date: May 2003
Location: Australia
Posts: 2,762
Quote:
Originally Posted by Oscar Spier
Isn't it better to keep some things apart... of course there may be more potential in the Elphel camera, but to me it's a bit strange to talk about a compressed HD camera and then start talking about how to make it uncompressed.
The benefit of compression, and the quality it seems to deliver (especially with the Theora codec), is pretty high.
Just that Jef was asking about RAW frames, and the camera simply can't do that across 10/100 Ethernet at any significant frame rate. So it is a bit pointless until a faster interface becomes available (with which, if you know how to do FPGA work, you could do lossless, or visually lossless if you were extremely ambitious). But for now, I agree, we have VP3.
Wayne Morellini is offline  
Old April 15th, 2006, 03:31 PM   #50
Inner Circle
 
Join Date: May 2003
Location: Australia
Posts: 2,762
Quote:
Originally Posted by Andrey Filippov
The USB we are working on is USB host, not USB device (the same as on the computer side)
-
And in the current version of the camera it will not be really fast - definitely slower than Ethernet.

Currently we have two alternative versions of the FPGA code and the software - one with JPEG/MJPEG and one with Ogg Theora - they do not fit in the FPGA at the same time.
Welcome Andrey.

OK, I understand, so you're saying that USB2 will work at less than 100Mb/s on your camera. The idea of having an external sound box, like M-Audio and EMU/Creative, is very helpful.

Something that nobody has answered is the maximum data rate of the compressed stream the camera can send over Ethernet, for VP3 and for MPEG. From this we can calculate the minimum compression achievable for any resolution, as compression heavily affects image quality. I know people think MPEG is good, but for cinema even 100Mb/s MPEG is not high grade for a 4:2:2 1920*1080 frame (though it is not bad at 720p, and lossless Bayer could fit in 100Mb/s at 720p, though not quite at 1080). Once the image is blown up to the field of view used in cinema, compression artifacts could be ten times more evident than on a computer monitor. So, VP3 performance is the deciding factor for this application, I think.

So, with MPEG there is a strong advantage over the HVX200 DVCPROHD prosumer camera at 720p, and the same at the cut-down 1080 resolution of the HVX200, but the compression ratio goes up to around 8:1 for 1920*1080 8-bit, which isn't too bad considering the quality of MPEG-2 HDTV transmissions. But if people want to use the 3Mp sensor instead of the 1.3Mp, they have to consider this. But so far people experiment and play.
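
To put a rough number on that 8:1 figure (back-of-envelope only; I'm assuming 8-bit 4:2:2 at 25fps squeezed into a 100Mb/s link, so treat the exact values as illustrative):

Code:
# Rough compression-ratio estimate behind the "around 8:1" figure.
# Assumptions (mine): 8-bit 4:2:2 (2 bytes/pixel), 25 fps, 100 Mb/s link.
width, height, fps = 1920, 1080, 25
bytes_per_pixel = 2.0                                    # 4:2:2 = 1 luma + 0.5 Cb + 0.5 Cr
raw_MBps = width * height * bytes_per_pixel * fps / 1e6  # ~103.7 MB/s uncompressed
link_MBps = 100 / 8.0                                    # ~12.5 MB/s through the link
print(f"compression needed: {raw_MBps / link_MBps:.1f}:1")  # ~8.3:1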

The Micron 1.3Mp we experimented with a year or so ago had problems with blooming etc., which the 3Mp solved with a new circuit structure. Do any of the newer Micron 1.3Mp sensors solve these problems?

(1.3Mp sensors, with larger sensor pads, are an important consideration, because of larger well capacity and lower noise, giving larger latitude and sensitivity.)

Anyway, this on-camera Axis ETRAX100LX processor - would it be fast enough to stream/control the current compressed stream to an Ethernet/USB caddy?


Thanks

Wayne.
Wayne Morellini is offline  
Old April 15th, 2006, 04:49 PM   #51
Major Player
 
Join Date: Apr 2006
Location: Magna, Utah
Posts: 215
Quote:
Originally Posted by Wayne Morellini
Welcome Andrey.
OK, I understand, so you're saying that USB2 will work at less than 100Mb/s on your camera. The idea of having an external sound box, like M-Audio and EMU/Creative, is very helpful.
It will be more like USB 1 - maybe a little faster. There is no dedicated circuitry/DMA access for it in the current design. But for sound it is an easy solution.

Quote:
Originally Posted by Wayne Morellini
Welcome Andrey.
Something that nobody has answered is the maximum data rate of the compressed stream the camera can send over Ethernet, for VP3 and for MPEG. From this we can calculate the minimum compression achievable for any resolution, as compression heavily affects image quality.
The ETRAX100LX does not have hardware checksum calculation for the Ethernet, so TCP speed is limited to approximately 30Mbps. For streaming we now use UDP (no checksums) and we can get to about 70Mbps.


That will change, as I'm planning an upgrade to a newer CPU (FX) that is both faster and has hardware checksum calculation. That camera will also have a somewhat bigger FPGA and twice the memory (64MB system, 64MB video, 32MB system flash) - it will likely have faster USB, but it will still be host, not device.

The current Ogg Theora implementation is not really good for general filming - it was intended for fixed-view network camera applications, so there are only two types of frames - INTRA (key) and INTER_NOMV (inter, no motion vectors). It gives a lot of extra volume savings only if the background is not moving. Motion vectors will wait for the next, bigger FPGA :-)

The precise bandwidth for the current Ogg Theora depends on multiple factors; I would say about 1-2 MB/sec is usually enough.


Quote:
Originally Posted by Wayne Morellini
I know people think MPEG is good, but for cinema even 100Mb/s MPEG is not high grade for a 4:2:2 1920*1080 frame (though it is not bad at 720p, and lossless Bayer could fit in 100Mb/s at 720p, though not quite at 1080). Once the image is blown up to the field of view used in cinema, compression artifacts could be ten times more evident than on a computer monitor. So, VP3 performance is the deciding factor for this application, I think.
We do not have 4:2:2 - only 4:2:0. Anyway, the sensor has a Bayer pattern (so only one, not 3, color components in each physical pixel), and 4:2:2 will require 3 times the amount of bits to compress compared to raw sensor data (4:2:0 - 1.5 times). The additional data is interpolated, so I believe it is a waste to calculate it in the camera and increase bandwidth and storage - you can do the same by post-processing the records.

Quote:
Originally Posted by Wayne Morellini
Welcome Andrey.

But if people want to use the 3Mp sensor instead of the 1.3Mp, they have to consider this. But so far people experiment and play.

The Micron 1.3Mp we experimented with a year or so ago had problems with blooming etc., which the 3Mp solved with a new circuit structure. Do any of the newer Micron 1.3Mp sensors solve these problems?

(1.3Mp sensors, with larger sensor pads, are an important consideration, because of larger well capacity and lower noise, giving larger latitude and sensitivity.)
The 1.3 MPix sensors are out of production, the 3.0 will be discontinued soon, and we will try the new, faster 5MPix sensors. It seems to me that the quality of Micron CMOS sensors is now the best, but they are mostly interested in the high-volume mobile phone market.

On the other hand, each of their new sensors so far has been better than the previous one, so the 3MPix with binning is better than the 1.3, and the 5MPix with binning will have approximately the same resolution as the 1.3MPix one.


Quote:
Originally Posted by Wayne Morellini
Anyway, this on-camera Axis ETRAX100LX processor - would it be fast enough to stream/control the current compressed stream to an Ethernet/USB caddy?
Hope to work on the new ETRAX100FX soon
Andrey Filippov is offline  
Old April 15th, 2006, 09:44 PM   #52
Inner Circle
 
Join Date: May 2003
Location: Australia
Posts: 2,762
Quote:
Originally Posted by Andrey Filippov
- it will likely have faster USB, but it will still be host, not device.
I have seen devices that allow slaves to act as hosts, and I imagine there are also ways for a master to act as a slave. But if it is controlling a drive caddy it would be master anyway, wouldn't it?

Quote:
The current Ogg Theora implementation is not really good for general filming - it was intended for fixed-view network camera applications, so there are only two types of frames - INTRA (key) and INTER_NOMV (inter, no motion vectors). It gives a lot of extra volume savings only if the background is not moving. Motion vectors will wait for the next, bigger FPGA :-)
That is not such a problem; some sacrifice on movement should still make it better than MJPEG. If there is converter/transcoding software out there in the Linux domain, or some editing support, then it is workable for a workflow. On the Cinema project there was no direct software, but transcoding, or using a RAW format, was all that was needed.

Have you considered raising the data rate (GigE or USB2) and implementing Bayer-based lossless compression routines in the FPGA?

Quote:
The precise bandwidth for the current Ogg Theora depends on multiple factors; I would say about 1-2 MB/sec is usually enough.
That is not so good for cinema. I am still waiting to see what happens with RED, and the various HDV and H.264 cameras this year, before I make decisions on my personal path. I will have to wait for this camera then; 36Mb/s+ is preferable for MPEG-2 video work (I don't know about VP3), 100Mb/s+ for MPEG-1, and double those rates are ideal for cinema work. Pity VP4 to VP7 are not available in the public domain.

Quote:
We do not have 4:2:2 - only 4:2:0. Anyway, the sensor has a Bayer pattern (so only one, not 3, color components in each physical pixel), and 4:2:2 will require 3 times the amount of bits to compress compared to raw sensor data (4:2:0 - 1.5 times). The additional data is interpolated, so I believe it is a waste to calculate it in the camera and increase bandwidth and storage - you can do the same by post-processing the records.
I thought it was compressed 4:2:0; are you saying it is MJPEG/Ogg-compressed Bayer output rather than 4:2:0?

Quote:
The 1.3 MPix sensors are out of production, the 3.0 will be discontinued soon, and we will try the new, faster 5MPix sensors. It seems to me that the quality of Micron CMOS sensors is now the best, but they are mostly interested in the high-volume mobile phone market.
We noted a drop in latitude and sensitivity with the move from 1.3Mp to 3Mp; the 5Mp might find it hard to keep up with the optical picture quality of the 3Mp. The Microns did not impress me too much; a good IBIS5a has much more potential (potential suitable for film). A camera based on it was developed (the Drake camera), but there is a problem with poor implementations of the IBIS5a on cameras that use the internal ADCs and poor support circuits, which really destroyed the performance of a sensor that should trample the Micron. The specs of the Micron are great for a phone, but consumer/prosumer grade for video work.

Quote:
so the 3MPix with binning is better than the 1.3, and the 5MPix with binning will have approximately the same resolution as the 1.3MPix one.
Binning doesn't regain the fill factor lost to circuits around the sensor pad. Binning makes it around 1000 pixels across, doesn't it? Maybe binning on the 5Mp+ would get a true 1280*720; that would be a good compromise.
Wayne Morellini is offline  
Old April 16th, 2006, 12:37 AM   #53
Major Player
 
Join Date: Apr 2006
Location: Magna, Utah
Posts: 215
Quote:
Originally Posted by Wayne Morellini
I have seen devices that allow slaves to act as hosts, and I imagine there are also ways for a master to act as a slave. But if it is controlling a drive caddy it would be master anyway, wouldn't it?
We are making it only master; slave USB functions are not planned (we use Ethernet to communicate with the camera).

Quote:
Originally Posted by Wayne Morellini
Have you considered raising the data rate (GigE or USB2) and implementing Bayer-based lossless compression routines in the FPGA?
GigE - yes, but there was no good PHY with documentation available without signing an NDA. Now there is, so I'm considering it as one of the projects. USB - no, we are making network cameras.

Quote:
Originally Posted by Wayne Morellini
That is not so good for cinema.
What exactly do you mean by "not good"? You want higher bandwidth or lower?

Quote:
Originally Posted by Wayne Morellini
36Mb/s+ is preferable for MPEG-2 video work (I don't know about VP3), 100Mb/s+ for MPEG-1, and double those rates are ideal for cinema work.
It seems we are using different units. b=bit, B=byte.

So what I meant was that at full speed (like 1280x1024x30fps) I need 1-2MB (megabytes)/s for "good" quality, when most of the background stays the same. There are ways to decrease it even more.
With the camera moving, the current implementation will give very little advantage over plain motion JPEG - I do not have real measurements, but would estimate it as under a 50% difference.
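
For scale, a rough sketch of what 1-2 MB/s means against the raw sensor rate (my illustration only, assuming 8-bit raw Bayer, i.e. 1 byte per pixel):

Code:
# What 1-2 MB/s means against the raw sensor rate.
# Assumption (illustrative): 8-bit raw Bayer, 1 byte per pixel.
width, height, fps = 1280, 1024, 30
raw_MBps = width * height * 1 * fps / 1e6        # ~39.3 MB/s off the sensor
for target_MBps in (1.0, 2.0):
    print(f"{target_MBps:.0f} MB/s -> roughly {raw_MBps / target_MBps:.0f}:1 compression")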

Quote:
Originally Posted by Wayne Morellini
Pity VP4 to VP7 are not available in the public domain.
We do not have a VP3 implementation, only Ogg Theora, and they are not exactly the same. And Ogg Theora is licensed software (not public domain) - it comes with a BSD-style license.


Quote:
Originally Posted by Wayne Morellini
I thought it was compressed 4:2:0; are you saying it is MJPEG/Ogg-compressed Bayer output rather than 4:2:0?
No, it is 4:2:0, and we send 50% more components than are actually available from the sensor (raw Bayer), making the additional ones by interpolation. If the camera were to interpolate to 4:2:2 it would need 3 times more color components compared to raw Bayer, so in that case it would be better to compress just the raw Bayer (with possible re-arrangement of blocks) and do the color conversion as post-processing.

Quote:
Originally Posted by Wayne Morellini
We noted a drop in latitude and sensitivity with the move from 1.3Mp to 3Mp; the 5Mp might find it hard to keep up with the optical picture quality of the 3Mp. The Microns did not impress me too much; a good IBIS5a has much more potential (potential suitable for film). A camera based on it was developed (the Drake camera), but there is a problem with poor implementations of the IBIS5a on cameras that use the internal ADCs and poor support circuits, which really destroyed the performance of a sensor that should trample the Micron. The specs of the Micron are great for a phone, but consumer/prosumer grade for video work.
I never tried IBIS5 with external ADC, but I believe CMOS sensors should work with internal ADCs - they are not CCDs (the CCDs I prefer are from Kodak - like the one in our model 323 camera). It is one of the advantages of CMOS technology that the ADC can be on-chip (you can even have an individual slow ADC for each row or column of the sensor). And the IBIS5 did not perform as well as the Micron does, and, as I wrote, the 3MPix is better than the 1.3MPix ones. BTW, it has many undocumented features that, as we found experimentally, do work - such as flip-X and flip-Y, or binning not only by 2 and 3, but by any number up to 8 in each direction.

As for the "grade for phones" - this technology really benefits from higher volume and one of the best IC manufacturers (we all trust their memory, don't we?)

Quote:
Originally Posted by Wayne Morellini
Binning doesn't regain the fill factor lost to circuits around the sensor pad.
That is wrong. When they move to smaller elements it applies to the multiplexer transistors as well as the photo-diodes, so the fill factor stays about the same. And the dark current in the 3MPix is lower (saturation by thermal current takes longer in the 3MPix than in the 1.3 ones). And even the same 3MPix was made in several chip release versions - each new one had some bugs fixed.

Quote:
Originally Posted by Wayne Morellini
Binning makes it around 1000 pixels across, doesn't it?
I did not understand about 1000 pixels.
Andrey Filippov is offline  
Old April 16th, 2006, 01:23 PM   #54
Major Player
 
Join Date: Apr 2006
Location: Barca Spain
Posts: 384
How big is the image plane of this thing? I guess it's standardized somehow with a C-mount?
Does it have a fiber optic taper in front of the CCD?
Frank Hool is offline  
Old April 16th, 2006, 10:35 PM   #55
Inner Circle
 
Join Date: May 2003
Location: Australia
Posts: 2,762
Quote:
Originally Posted by Andrey Filippov
We are making it only master, slave USB functions are not planned (we use Ethernet to communicate to the camera)
I think we are also talking about different things here. When I say master, I mean it acts as a master controlling slaves etc. I am no longer suggesting that you change it; I was just noting that people can buy external converters to do it if they wanted to reprogram it. But at USB1 speeds there is little need.

As far as sound goes, there are many modules, USB2.0 as well. It should be able to control/sync external sound recording modules, or be good for minimum cinema sound - 48kHz, 2-8 channels, or stereo at 96kHz uncompressed.

Quote:
What exactly do you mean by "not good"? You want higher bandwidth or lower?
Yes, 2-3MBytes/s is good for security video (unless you want high quality identification). For consumer video it needs to be another grade again (3MB/s VP3 with motion vectors would do it). For high quality professional video it needs to be roughly double again, and for quality cinema double yet again. The highest quality cinema is lossless - double again - but I don't think low-end productions necessarily need to go that far; between pro-quality video and quality cinema should be enough. (Please note that a few major film releases have been transferred from consumer video to film, but even though they generally go through very heavy computer picture processing, in film transfer labs by professionals, to make them look a lot better, they still look low quality.) The problem is that the larger the screen, the more of the field of vision it covers, making the resolution look smaller. So a cinema screen can take up many times more of the field of vision than a security screen, making quality differences many times more noticeable.

Quote:
With the camera moving, current implementation will give very little advantage over plain motion JPEG - I do not have real measurements, but would estimate it as under 50% difference
50% is preferable to no improvement.

Bayer compression - yes, good. If you can get the next camera up to 12.5-25MBytes/s with RAW Bayer compression, that would be really good for this market. We did find a number of lossless routines, some open source - I don't know about Bayer ones - and even visually lossless is good. But I think you are more oriented to purely security video and don't really need anything more than visually lossless 99% of the time, and most of the time only up to consumer-grade video.

Quote:
I never tried IBIS5 with external ADC, but I believe CMOS sensors should work with internal ADCs
That is the problem: the signal-to-noise ratio from the on-chip ADCs is lower. Steve, from Silicon Imaging, showed us samples from their camera (and the Drake camera did even better), and the difference between that and what we got from the Sumix one, with its internal ADC, was day and night (well, it looked like dusk a lot of the time actually ;).

The Kodak CD's, are they better than the Micron, are they still available?

Quote:
As for the "grade for phones" - this technology really benefits from higher volume and one of the best IC manufacturers (we all trust their memory, don't we?)
Yes, volume helps pricing; I think the pricing is under half of FillFactory's, but they are not very good grade compared to good film and video sensors, and even FillFactory (which is being used by a top cinema camera company, and has been used by Kodak for their top digital camera, and probably many more under NDA, though the internal ADC is the "low cost" option). The Altasens, which was a high-grade sensor from the previous year, is reported to achieve up to 96dB S/N during testing; the IBIS5a can achieve something roughly in between, at the level of the previous best professional video cameras or just ahead. 37dB-43dB is not good for low light (I still have a documentary filming interest). The IBIS5a also had other distinct advantages because of its 100% fill factor scheme (whereas the Micron is much lower than 50% with microlens, I believe) and global shutter. Because of this, it could get a much more even image (no "fly screening" effect that requires interpolation and filtering to cover up), and you could use a super-wide lens (under 1.0 aperture), a stop or two ahead of what HD microlens sensors could achieve, and still get a good quality image. The larger pad and well capacity also help with range, apart from the multi-slope feature. This made the IBIS5a a good compromise for cinema cameras, over more costly, higher-performing sensors. I know - I was in contact with the engineer of the Drake camera from the very beginning, before it became the Drake.

I still think the Micron is good for a cheap cinema/doco camera, as it is as good as or better than some prosumer HDV cameras. I would be surprised if it could match a mid-end camera like the Sony XDCAM HD 1/2 inch, though.

Quote:
That is wrong. When they move to smaller elements it applies to the multiplexer transistors as well as the photo-diodes, so the fill factor stays about the same. And the dark current in the 3MPix is lower (saturation by thermal current takes longer in the 3MPix than in the 1.3 ones). And even the same 3MPix was made in several chip release versions - each new one had some bugs fixed.
I agree with you from that perspective, but if they used the same smaller process to make a 1.3Mp, its circuits would shrink, allowing for even more pad space - and there are other issues that I won't get into here. But concentrating on the mobile market, I suspect they have little reason to keep older resolutions in the same sensor format size.

Quote:
I did not understand about 1000 pixels.
The sensor is around 2 thousand pixels wide, and a binning of two halves that, which is why I am holding out hope that the 5Mpixel chip will have a binning of two that is closer to 720p's 1280 pixels. Maybe the situation will go much better for the Micron chips; this now interests me. If they could only raise latitude and S/N, and add multi-sampling, it would turn the situation a lot around.
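
The arithmetic I'm going on (hedged - I'm assuming the usual 2048-wide raster for the 3Mp part and 2592 for a 5Mp part):

Code:
# Binning arithmetic. Assumed active widths (illustrative): 2048 for the 3Mp part, 2592 for a 5Mp part.
target_720p = 1280
for name, width in [("3Mp", 2048), ("5Mp", 2592)]:
    binned = width // 2                           # 2x binning halves the horizontal pixel count
    enough = "covers" if binned >= target_720p else "falls short of"
    print(f"{name}: {width} -> {binned} px across, {enough} 720p's {target_720p}")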

Well Andrey, thanks for clearing these things up for me, I had been wondering about them for a while, I can stop now and wait to see the next camera and 5Mp sensor. I have had a little voice in me for a while telling me not to buy the present model, and now I understand why: it can process the frame rate but not the data rate I desire.


Thanks

Wayne.
Wayne Morellini is offline  
Old April 17th, 2006, 12:22 AM   #56
Major Player
 
Join Date: Apr 2006
Location: Magna, Utah
Posts: 215
Quote:
Originally Posted by Wayne Morellini
But at USB1 speeds there is little need.
As far as sound goes, there are many modules, USB2.0 as well. It should be able to control/sync external sound recording modules, or be good for minimum cinema sound - 48kHz, 2-8 channels, or stereo at 96kHz uncompressed.
USB1 will easily handle 96kHz audio.
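
Back-of-envelope (my illustration, assuming 24-bit samples against the nominal 12 Mb/s full-speed bus rate):

Code:
# Rough audio data rates vs the nominal USB 1.1 full-speed rate of 12 Mb/s.
# Assumption (illustrative): 24-bit samples.
usb1_Mbps = 12.0
for name, rate_hz, channels in [("48 kHz stereo", 48_000, 2), ("96 kHz stereo", 96_000, 2)]:
    mbps = rate_hz * channels * 24 / 1e6
    print(f"{name}: {mbps:.2f} Mb/s of {usb1_Mbps:.0f} Mb/s nominal")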

Quote:
Originally Posted by Wayne Morellini
Yes, 2-3MBytes/s is good for security video (unless you want high quality identification).
I'm still confused by your question. I think I wrote both - the bandwidth required by Ogg Theora with the settings I consider good, and the data rate we can send from the current (ETRAX100LX-based) camera (70Mbps).

Quote:
Originally Posted by Wayne Morellini
That is the problem: the signal-to-noise ratio from the on-chip ADCs is lower...
You mean lower in the FillFactory sensors, or in any sensor? If you mean the first - yes, probably. If the second - I would not agree. The CMOS technology is the same in the sensors as in the ADC, so if the company is good in both areas (or licenses the ADC design) it should be better from the S/N point of view. To say nothing of the fact that (as I wrote last time) you can put a thousand slow ADCs on-chip (one for each column) - something completely impossible for an off-chip solution.

Quote:
Originally Posted by Wayne Morellini
The Kodak CD's, are they better than the Micron, are they still available?
You mean CCDs? Yes, they are - and we use some of them in our model 323 cameras (http://www.elphel.com/3fhlo/), but I could not find one that combines resolution and speed of the Micron CMOS imagers.


Quote:
Originally Posted by Wayne Morellini
Yes, volume helps pricing; I think the pricing is under half of FillFactory's,
It might change now, but when I was buying them the price difference was more like 10x, not 2x :-)

Quote:
Originally Posted by Wayne Morellini
... and has been used by Kodak for their top digital camera, and probably many more under NDA, though the internal ADC is the "low cost" option). ...
That camera was not really "the top" from the performance point of view, but I do agree that FillFactory has interesting sensors and nice features like multi-slope. And it is not about FillFactory (now Cypress) vs. Micron - it is that a high-performance ADC should be part of the sensor I believe. And all the CMOS imagers will have it sooner or later.

Quote:
Originally Posted by Wayne Morellini
If they could only raise latitude and S/N, and add multi-sampling, it would turn the situation a lot around.
Or Cypress will have a decent ADC on-chip :-)

Quote:
Originally Posted by Wayne Morellini
Well Andrey, thanks for clearing these things up for me, I had been wondering about them for a while, I can stop now and wait to see the next camera and 5Mp sensor.
The design I'm working on right now has 12x of that resolution, but frame rate is way smaller. Hope to get my hands on the 12-bit, 96MHz, 5MPix Micron sensor soon too.

Quote:
Originally Posted by Wayne Morellini
it can process the frame rate but not the data rate I desire.
That will stay the same in the next camera too - 100Mbps network connection.
Andrey Filippov is offline  
Old April 17th, 2006, 02:43 AM   #57
Inner Circle
 
Join Date: May 2003
Location: Australia
Posts: 2,762
Quote:
I'm still confused by your question. I think I wrote both - the bandwidth required by Ogg Theora with the settings I consider good, and the data rate we can send from the current (ETRAX100LX-based) camera (70Mbps).
I'm sorry, I confused what you said; I thought you meant that the throughput of the Ethernet was a maximum of 70Mb/s (rather than 100Mb/s) and that the codec was limited to 3Mbytes per second. So, are you saying that the Ogg codec can do 70Mbit per second (around 9MB/s)? That definitely helps. I was actually aiming to look at the length-to-size ratio of some of your sample footage to verify this anyway.

ADC quality:
In particular it is lower on the IBIS5a, but also on many chips, because of thermal noise considerations etc., and the quality of high-end ADCs. There is more to silicon sensor quality (and ADC quality) than normal silicon circuits (on good ADCs they go beyond silicon); just because a company is good at one does not mean they are good at another. But, seriously, I don't think Micron aims to make costly top-quality sensors to put in mobile phones and security cameras; I think they aim for cheap top-quality mobile and security sensors instead.

Kodak:
Quote:
You mean CCDs? Yes, they are - and we use some of them in our model 323 cameras (http://www.elphel.com/3fhlo/), but I could not find one that combines resolution and speed of the Micron CMOS imagers.
But will it do a 720p or 1080p frame at 25fps?

Ibis5a price:
Quote:
It might change now, but when I was buying them the price difference was more like 10x, not 2x :-)
I am speaking of price drops last year for mass quantity on the Ibis, maybe the Micron price was older, so maybe it was not the best comparison.

Quote:
That camera was not really "the top" from the performance point of view
I thought it was the top of the Kodak range for a 35mm sensor when released, but now things, of course, have moved on.

Quote:
it is that a high-performance ADC should be part of the sensor I believe. And all the CMOS imagers will have it sooner or later.
I agree. I was shocked at the quality coming out (I would imagine that some of the external circuitry issues I heard about might have something to do with it too). Even for a "me too" on-chip ADC for lower-cost applications, I was not impressed; I think the on-chip ADC should be much better. I don't have S/N figures for it, but I would not be surprised if it was not too far away from the Micron's 37dB (6-bit accuracy). But it doesn't matter - the FF chip is just too expensive for you to put in your cameras at their price point. Though if you wanted a really cheap chip with a multislope-like feature (apart from decent S/N - 48dB minimum, 60dB+ preferable - multislope is worth looking at because of its latitude-extending properties), then SMaL Camera is now owned by Cypress as well. Their feature is called Autobrite, and I think it adjusts the gain on a pixel-by-pixel basis instead, but I am not sure. Though I expect the quality might be just a bit low for non-security applications. There is another company with sensors for security cameras with a multislope-like feature that sounds like the SMaL Camera ones, but I can't locate the web link at the moment.

Quote:
The design I'm working on right now has 12x of that resolution, but frame rate is way smaller.
That doesn't really worry me, as long as it can bin down to close to at least the horizontal size of a 720p or 1080p frame.

There has been talk of upcoming 5GHz programmable gate array technology - is that anywhere close to a commercial product?

Quote:
That will stay the same in the next camera too - 100Mbps network connection.
Well, with a 100Mb/s full implementation of Ogg with motion etc., then at least the quality should be good for cinema. I don't know whether you can get good lossless Bayer results in 100Mb/s for 1080p Bayer, though. I have just realised I have some compression ideas that might help this situation.

Before I get to those techniques I will share another one that I had previously, which it turns out people are also using in film restoration. Noise reduction should improve existing performance, and the performance of lossless compression, immensely. Most compression performance is lost in the last 2 bits of an image, because they contain the most noise. If you eliminate this noise you ramp up the compression that should be achievable at the same quality. Basically, rather than just finding a pixel of noise and interpolating it out from the surrounding pixels, the pixel itself might still contain some information (in 3-chip, the other channels might contain the information), and the preceding and succeeding frames contain information about the piece of image that should be in that pixel. By using this extra information you can restore the pixel with great accuracy, producing a cleaner image to compress. This would be of great performance benefit to the techniques below.
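
As a minimal sketch of the idea (my own illustration, not anything in the camera - it assumes 8-bit greyscale frames as numpy arrays, and the function name and threshold are made up):

Code:
import numpy as np

def restore_noisy_pixels(prev_frame, frame, next_frame, threshold=24):
    """Replace pixels that disagree strongly with the same pixel in the
    neighbouring frames, instead of only interpolating spatially."""
    f = frame.astype(np.int16)
    temporal_avg = (prev_frame.astype(np.int16) + next_frame.astype(np.int16)) // 2
    noisy = np.abs(f - temporal_avg) > threshold   # crude per-pixel noise flag
    return np.where(noisy, temporal_avg, f).astype(np.uint8)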


Thanks

Wayne.
Wayne Morellini is offline  
Old April 17th, 2006, 02:47 AM   #58
Inner Circle
 
Join Date: May 2003
Location: Australia
Posts: 2,762
Compression of Bayer:--------------------------------

While I wish to keep the most efficient ones for commercial reasons, I have also been talking about some lesser, but helpful, ideas around here that might help. I will attempt to come back later with links to previous discussions that outline it. But the basic idea is to store succeeding pixel values as the difference from preceding ones, and to store the differential between succeeding frames. Now, all this data is compressed with run-length encoding and the usual sorting/mnemonic representation compression techniques used in regular video compression, and in fax-like compression techniques.

Now, the beauty is the next method, to reduce the differential even more. We know that in an image luminance generally changes more often than chrominance, so colours are less variable pixel to pixel than luminance. This generally helps a debayer algorithm predict the other primaries for each pixel position. But with this scheme, what I propose is that the preceding/surrounding pixel values be used to establish what the pixel should be, in that primary colour (using the previous/surrounding proportion of that colour present), as the base value for the differential, thus reducing the amount of data needed drastically. We also use the surrounding pixels to estimate an interpolated prediction to modify the base value. The whole basis is to use estimation/prediction (which does not have to be recorded, as the decompression software makes the same prediction) to reduce the data size before final compression, in a format hopefully more compressible by the final compression. There are more sophisticated things that could be done than this - some of that commercial stuff I mentioned - but as you see, the work done would mostly be simple comparative circuits, plus the run-length/mnemonic coding you already use.

I'll just summarise so I can remember: prediction based on the previous pixel, interpolation of surrounding pixels, and the previous proportion of that primary colour, modified for the primary colour at the present pixel. Once the Bayer data is reconstructed, it is then debayered for display.

Of course, the interpolation of surrounding pixels that have not yet been calculated would, in the decompression algorithm, require some fancy maths, but a simpler, effective form of it can be done without interpolation of unprocessed pixels.
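
To make the simplest form of this concrete, a toy sketch (my own illustration only - the two-columns-back same-colour predictor and the function names are assumptions, not the commercial scheme I mentioned):

Code:
import numpy as np

def bayer_row_residuals(bayer):
    """Predict each sample from the previous same-colour sample in the row
    (two columns back in a Bayer mosaic) and keep only the residual."""
    b = bayer.astype(np.int16)
    residuals = b.copy()
    residuals[:, 2:] = b[:, 2:] - b[:, :-2]       # same-colour neighbour is 2 columns away
    return residuals                              # columns 0-1 stay as absolute values

def run_length_encode(values):
    """Simple run-length pass over the residual stream (zero runs compress well)."""
    out, run_val, run_len = [], None, 0
    for v in values.ravel().tolist():
        if v == run_val:
            run_len += 1
        else:
            if run_val is not None:
                out.append((run_val, run_len))
            run_val, run_len = v, 1
    if run_val is not None:
        out.append((run_val, run_len))
    return out

# The decoder repeats the same prediction, so only the residual stream needs to be stored.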

I think I have posted somewhere a 3chip version of this scheme as well.

This is just one of several different areas of high-performance compression techniques I would like to use. It is also one of the most expendable, and potentially one of the least effective in compression performance.


Thanks

Wayne.
Wayne Morellini is offline  
Old April 17th, 2006, 06:08 AM   #59
Major Player
 
Join Date: Apr 2006
Location: Barca Spain
Posts: 384
Quote:
Originally Posted by Frank Hool
How big is the image plane of this thing? I guess it's standardized somehow with a C-mount?
Does it have a fiber optic taper in front of the CCD?
Answers to my own questions:
image plane = 6.55mm*4.92mm
registration distance = 17.52mm
FO taper = no
Am I right?
Frank Hool is offline  
Old April 17th, 2006, 11:39 AM   #60
Major Player
 
Join Date: Jan 2005
Location: (The Netherlands - Belgium)
Posts: 735
Quote:
Originally Posted by Forrest Schultz
Jef, what compression artifacts can you see? Do you mean the image looks soft, or do you see JPEG squares (like the grid-looking thing)?
To come back to this discussion... don't forget that After Effects plug-ins like 'Re:Vision SmoothKit - Staircase Suppress' can reduce those artifacts greatly. I made a quick(!) test on one of Forrest's framegrabs. I did a big contrast and color saturation boost to show those blocks. On the right is the one with Staircase Suppress.
http://s03.picshome.com/d29/staircasesuppress.jpg
Oscar Spierenburg is offline  