New DIY HD Cinema Camera Project - Page 8 at DVinfo.net
Old July 10th, 2007, 12:53 AM   #106
Regular Crew
 
Join Date: Aug 2004
Posts: 91
Quote:
Originally Posted by John Wyatt View Post
The post workflow is to take the uncompressed clip, export an uncompressed still sequence (tif, tga, whatever), and then use Photoshop to enlarge all the frames in the sequence to 2K width. Because you start with an uncompressed frame and use bicubic resampling for the enlargement, the results may be more acceptable than the camera's onboard real-time JPEG of a larger frame.
You really should search Google for "avisynth"... You could use a better upscaling algorithm than bicubic (Lanczos, B-spline...) and you wouldn't have the hassle of processing each frame separately (even with a batch process it's cumbersome).
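If you want to script the same batch enlargement without opening Photoshop at all, here is a minimal Python/PIL sketch of the idea; the folder names and the 2K output width are placeholders, and AviSynth's LanczosResize would do the same job inside the filter chain without writing intermediate files:

```python
# Hypothetical batch upscale of an exported still sequence, using a Lanczos
# kernel instead of bicubic.  Folder names are placeholders.
import glob, os
from PIL import Image

os.makedirs("frames_2k", exist_ok=True)
for path in sorted(glob.glob("frames/*.tif")):            # exported still sequence
    im = Image.open(path)
    h = round(im.height * 2048 / im.width)                # keep aspect at 2K width
    im.resize((2048, h), Image.LANCZOS).save(os.path.join("frames_2k", os.path.basename(path)))
```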
Steven Mingam is offline   Reply With Quote
Old July 10th, 2007, 01:53 AM   #107
Inner Circle
 
Join Date: May 2003
Location: Australia
Posts: 2,762
Quote:
Originally Posted by Jamie Varney View Post
Just in case anyone has missed it, there is a thread over in the AVCHD forum about the Aiptek GO-HD (http://www.dvinfo.net/conf/showthread.php?t=95227). This camera uses the Ambarella chip to do 720p. The footage is pretty good, but you can tell the compression is quite high; as Wayne mentioned, whether it is usable or not depends on exactly how programmable the chip is.
That is probably their simplest version. It would be interesting to find out how much PC processing a top render-quality codec would take at that data rate.


Jose,

It most likely will not be as simple as that; you would have to program the Ambarella interface to suit, and maybe add external circuits to connect the two.

Overclocking USB would be interesting, but in reality, why not use GigE and a GigE caddy? The problem with USB is the peaks and the processor drain, but with an embedded design you can have a section handling the USB and buffering the data peaks before and after the transfer. That equals nearly 50MB/s max throughput (8-bit 1080p24/25, or 720p50 without compression).
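For reference, the back-of-envelope arithmetic behind that figure, assuming one byte per pixel and ignoring blanking and protocol overhead:

```python
# Uncompressed 8-bit data rates for the frame sizes mentioned above.
for w, h, fps in [(1920, 1080, 24), (1920, 1080, 25), (1280, 720, 50)]:
    print(f"{w}x{h} @ {fps}fps -> {w * h * fps / 1e6:.1f} MB/s")
# 1920x1080 @ 24fps -> 49.8 MB/s
# 1920x1080 @ 25fps -> 51.8 MB/s
# 1280x720 @ 50fps -> 46.1 MB/s
```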


Take,
Interesting solution. I did find an HDMI-to-wavelet chip solution that included a USB-like interface, and I tried to talk them into making a USB capture solution, but nothing came of it. Somebody else could still do it, though.
Wayne Morellini is offline   Reply With Quote
Old July 10th, 2007, 03:21 AM   #108
Regular Crew
 
Join Date: Jan 2006
Location: West Country, UK
Posts: 141
Steve -- thanks for reminding me about avisynth. I remember looking at that a while ago but didn't go into the upscaling quality because I thought Photoshop would be superior. There are also Photoshop plug-ins like Genuine Fractals and S-Spline (called something else now, can't remember) which can also be used as an action in Photoshop -- I just walk away and do something more interesting while that's going on. I'll check out the latest version of avisynth...
Thanks.
John Wyatt is offline   Reply With Quote
Old July 10th, 2007, 03:47 AM   #109
Major Player
 
Join Date: Mar 2007
Location: Amsterdam The Netherlands
Posts: 200
Quote:
Originally Posted by Steven Mingam View Post
You really should search Google for "avisynth"... You could use a better upscaling algorithm than bicubic (Lanczos, B-spline...) and you wouldn't have the hassle of processing each frame separately (even with a batch process it's cumbersome).
Actually, you should replace the interpolation part of the debayer algorithm with this upscaler. Since the debayer algorithm is already interpolating (scaling is interpolation), you will get a better quality result. Scaling once to the correct size is better than doing it in two steps.

I think it would be quite easy to change the Adaptive Homogeneity-Directed (AHD) algorithm to do this.
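To make the one-step idea concrete, here is a minimal sketch of combining the two operations; this is a deliberate simplification rather than a modified AHD (each Bayer colour plane is resampled straight to the delivery size in a single interpolation pass, and the two green sites are averaged into one plane):

```python
# Single-pass "debayer while upscaling" sketch for an RGGB mosaic.
# The spline resampler here stands in for whatever kernel you prefer.
import numpy as np
from scipy.ndimage import zoom

def debayer_and_scale(bayer, out_w, out_h):
    """bayer: 2D RGGB mosaic; returns an (out_h, out_w, 3) float RGB image."""
    m = bayer.astype(np.float32)
    r = m[0::2, 0::2]                                   # red samples
    g = (m[0::2, 1::2] + m[1::2, 0::2]) / 2.0           # both green sites, averaged
    b = m[1::2, 1::2]                                   # blue samples
    planes = [zoom(p, (out_h / p.shape[0], out_w / p.shape[1]), order=3)
              for p in (r, g, b)]                       # one interpolation per plane
    return np.dstack(planes)

# e.g. a 1600x666 Bayer frame straight to a 2048x858 RGB frame in one step
rgb = debayer_and_scale(np.random.rand(666, 1600), 2048, 858)
```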

Cheers,
Take
__________________
VOSGAMES, http://www.vosgames.nl/
developer of Boom Recorder and Mirage Recorder
Take Vos is offline   Reply With Quote
Old July 10th, 2007, 06:39 AM   #110
Major Player
 
Join Date: May 2007
Location: Sevilla (Spain)
Posts: 439
Take, I don't want to double the speed of the USB interface. I just want to speed it up a bit. I already get around 18-20 fps at 1920x800. It would just be a matter of 40% or 50% more to get to 25-26fps, and then I'd lock it to 24fps.

The question is how can we do all that?

I mean, we're talking about different solutions, but almost all of them require hardware/software engineering. For example, I've got my demo board. How can we modify that board so it has a GigE interface instead of USB? If we do that, we also need new software to capture from the new interface.

That's why I was talking about overclocking the USB: using parts that we already have is much easier than building new parts, putting them all together and developing new software.

If we want to stay with a camera-to-laptop solution, which IMHO is the best and easiest way, I see two different options:

I can capture 24fps bayer at 1600x666 over USB without any problem, and it's direct-to-HDD capture, so we have no recording-time issues. I still prefer capturing to RAM though, because with these things faster is always better. We'd have to develop a piece of software that can read the bayer output and convert it to 2048x858 RGB in just one step while debayering. That would be Take's job if he wants to do it. Upscaling is very common even in top pro cameras, so if we know this upscaling process would give us much better quality than any pro camera, it's OK with me. We could use any of the great interpolation algorithms out there.

The other option would be to overclock the USB interface to get 24fps at 2048x858 or at 1920x800. The first case needs something like 75% more bandwidth; the second would be easier and more stable, as it only needs about 50% more.

This way we would have 2K or near-2K bayer (full HD in any case), and we would just need to debayer in post. The good part of this option is that we have more resolution to start with.
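As a rough sanity check on those percentages, pixel-rate arithmetic for the modes mentioned so far, assuming 8-bit Bayer samples, ignoring USB overhead, and taking the 1600x666 @ 24fps capture as the baseline (that choice of baseline is my assumption):

```python
# Raw Bayer data rates (1 byte per pixel) for the capture modes under discussion.
modes = {
    "1600x666 @ 24fps": 1600 * 666 * 24,
    "1920x800 @ 20fps": 1920 * 800 * 20,
    "1920x800 @ 24fps": 1920 * 800 * 24,
    "2048x858 @ 24fps": 2048 * 858 * 24,
}
base = modes["1600x666 @ 24fps"]
for name, rate in modes.items():
    print(f"{name}: {rate / 1e6:.1f} MB/s ({rate / base:.2f}x baseline)")
# 2048x858 @ 24fps works out to about 1.65x the baseline, 1920x800 @ 24fps to about 1.44x.
```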

There's in fact another option that's a little more difficult, but a good one nonetheless:

If we connect the demo board to the Ambarella chip and then stream to the computer over USB, we could get lossless 2K directly to the laptop. We would need a way to connect the board to the chip, a USB interface from the chip to the computer, and software to control the camera and capture. If we can do this one, we could easily turn it into a portable solution using a mini PC and a small LCD.

Whichever option we choose, we have to work together. The best part is that we can start with the simplest solution and then upgrade it until we have a standalone camera.

So which do you think is the simplest option?
Jose A. Garcia is offline   Reply With Quote
Old July 10th, 2007, 09:14 AM   #111
Major Player
 
Join Date: May 2007
Location: Sevilla (Spain)
Posts: 439
I did it.

I changed the USB clock from 125Hz to 500Hz. The camera works, and it now captures 2048x858 at about 25fps, already debayered, to uncompressed AVI.

The question is... is it bad for the camera? I mean, I already spent 900 euros on the board. I used a patch that pro gamers use to increase mouse polling performance, and the guy who programmed the tool says it's completely safe. In fact, the program has an option to set the clock to 1000Hz, and that's the only setting the programmer says is not safe.

I'll post a full res clip soon.
Jose A. Garcia is offline   Reply With Quote
Old July 10th, 2007, 11:47 AM   #112
Inner Circle
 
Join Date: May 2003
Location: Australia
Posts: 2,762
Is that related to the actual transfer speed, or to a switching/polling rate, etc.?
Wayne Morellini is offline   Reply With Quote
Old July 10th, 2007, 12:12 PM   #113
Major Player
 
Join Date: May 2007
Location: Sevilla (Spain)
Posts: 439
What do you mean?
Jose A. Garcia is offline   Reply With Quote
Old July 10th, 2007, 02:38 PM   #114
Regular Crew
 
Join Date: Jun 2006
Location: Columbus Ohio
Posts: 36
Quote:
Originally Posted by Jose A. Garcia View Post
I did it.

I changed the USB clock from 125Hz to 500Hz. The camera works, and it now captures 2048x858 at about 25fps, already debayered, to uncompressed AVI.

The question is... is it bad for the camera? I mean, I already spent 900 euros on the board. I used a patch that pro gamers use to increase mouse polling performance, and the guy who programmed the tool says it's completely safe. In fact, the program has an option to set the clock to 1000Hz, and that's the only setting the programmer says is not safe.

I'll post a full res clip soon.
I was just thinking about this patch because a gamer friend of mine used to use it. Every electronic part has tolerances for how fast it can actually perform, and typically they are set low 'to be safe.' As long as nothing is getting hot on either your computer or your camera, I think you are probably okay. My bigger concern is the integrity of the data you are receiving; by overclocking it you may be introducing noise or corrupt data into the stream. Or maybe not.

After talking to one of my friends I have given up on the gumstix idea. We are now talking about going purely with a CMOS-FPGA-HDD solution. Neither of us has ever attempted anything like this before, but that is kind of what makes it fun. Once I have some money to spare I think I am going to buy a Xilinx Spartan-3E dev board, but once again that could be a while.

Oh, and Jose, that website you gave me didn't have the full datasheet for the Micron MT9P031, only the product overview. It seems you have to sign an NDA with Micron to get the full datasheet :-(
Jamie Varney is offline   Reply With Quote
Old July 11th, 2007, 01:46 AM   #115
Regular Crew
 
Join Date: Aug 2004
Posts: 91
Quote:
Originally Posted by Take Vos View Post
Actually, you should replace the interpolation part of the debayer algorithm with this upscaler. Since the debayer algorithm is already interpolating (scaling is interpolation), you will get a better quality result. Scaling once to the correct size is better than doing it in two steps.

I think it would be quite easy to change the Adaptive Homogeneity-Directed (AHD) algorithm to do this.

Cheers,
Take
Well, I have a working Directional Filtering with a Posteriori Decision implementation; the avisynth filter is also written and needs testing.
It's faster than AHD because you don't have to convert the data to a CIE colour space (a very slow operation involving cube roots) to calculate the homogeneity map, but the results should be quite similar.
(And I've written it in a way that lets me easily add MMX code where it matters.)
Creating an avisynth filter using the internal resizers shouldn't be too hard, but I wanted to see if I could write something from a scientific paper.
Steven Mingam is offline   Reply With Quote
Old July 11th, 2007, 02:36 AM   #116
Major Player
 
Join Date: Mar 2007
Location: Amsterdam The Netherlands
Posts: 200
Hello Steven,

According to the AHD paper they also tried YUV with good results, so I am using YUV. I am doing camera_RGB->rec709_RGB->rec709_YUV using 4x4 matrices.
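As a rough illustration of that pipeline (the camera matrix below is just an identity placeholder, since the real one comes from calibration, and the U/V rows are unscaled B-Y and R-Y differences built from the Rec. 709 luma weights):

```python
# Sketch of folding camera_RGB -> Rec.709 RGB -> YUV into one 4x4 matrix,
# so every pixel is only multiplied once.  CAM_TO_709 is a placeholder.
import numpy as np

CAM_TO_709 = np.eye(4)                      # stand-in for the calibrated camera matrix

RGB709_TO_YUV = np.array([                  # Rec.709 luma; U/V left unscaled
    [ 0.2126,  0.7152,  0.0722, 0.0],       # Y
    [-0.2126, -0.7152,  0.9278, 0.0],       # U ~ B - Y
    [ 0.7874, -0.7152, -0.0722, 0.0],       # V ~ R - Y
    [ 0.0,     0.0,     0.0,    1.0],
])

COMBINED = RGB709_TO_YUV @ CAM_TO_709       # compose the two steps once

def rgb_to_yuv(rgb):
    """rgb: (h, w, 3) float image -> (h, w, 3) YUV with a single matrix pass."""
    h, w, _ = rgb.shape
    hom = np.concatenate([rgb.reshape(-1, 3), np.ones((h * w, 1), rgb.dtype)], axis=1)
    return (hom @ COMBINED.T)[:, :3].reshape(h, w, 3)
```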

Also, the conversion of RGB to YUV or RGB to CIELab is not the limiting factor; it is the number of convolutions that have to be performed, including running the image three times through a median filter.

BTW, I am running the complete AHD on the graphics card. It runs at about 25% of real time (i.e. slower than real time) on a MacBook Pro.

Your directional filtering algorithm also looks very much like AHD, except that the decision seems to be made before interpolating red and blue.

It took me quite a long time to figure out what the AHD paper was trying to explain, even though the actual algorithm is quite easy to explain in detail. I wish these math people would write papers with engineers in mind.

Cheers,
Take
__________________
VOSGAMES, http://www.vosgames.nl/
developer of Boom Recorder and Mirage Recorder
Take Vos is offline   Reply With Quote
Old July 11th, 2007, 03:08 AM   #117
Regular Crew
 
Join Date: Aug 2004
Posts: 91
Quote:
Originally Posted by Take Vos View Post
Hello Steven,

According to the AHD paper they also tried YUV with good results, so I am using YUV. I am doing camera_RGB->rec709_RGB->rec709_YUV using 4x4 matrices.

Also, the conversion of RGB to YUV or RGB to CIELab is not the limiting factor; it is the number of convolutions that have to be performed, including running the image three times through a median filter.
Well, convolution is only integer math, which is very fast on current CPUs and vectorizable. CIELab needs cube roots and divisions, which are still a lot slower (17 cycles for a div vs 0.33 cycles for a mov/add on a Core 2 Duo, and I'm not even talking about the memory penalty of the lookup table access).

Quote:
Your directional filtering algorithm also looks very much like AHD, except that the decision seems to be made before interpolating red and blue.
That's the point of the algorithm: reduce the number of convolutions needed by choosing the direction before interpolating, instead of after, like AHD does.
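For anyone following along, a toy sketch of the "decide the direction first" idea; this is plain edge-directed green interpolation, not Steven's actual DFAPD code:

```python
# Pick the interpolation direction from local gradients before interpolating,
# instead of interpolating both ways and choosing afterwards like AHD does.
import numpy as np

def green_direction_first(bayer):
    """Fill in green at the red/blue sites of an RGGB mosaic; returns the green plane."""
    b = bayer.astype(np.float32)
    g = b.copy()
    h, w = b.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if (y % 2) == (x % 2):                       # red or blue site in RGGB
                dh = abs(b[y, x - 1] - b[y, x + 1])      # horizontal gradient
                dv = abs(b[y - 1, x] - b[y + 1, x])      # vertical gradient
                if dh <= dv:                             # smoother horizontally
                    g[y, x] = (b[y, x - 1] + b[y, x + 1]) / 2.0
                else:
                    g[y, x] = (b[y - 1, x] + b[y + 1, x]) / 2.0
    return g
```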

Quote:
It took me quite a long time to figure out what the AHD paper was trying to explain, even though the actual algorithm is quite easy to explain in detail. I wish these math people would write papers with engineers in mind.
Same here :D
I lost a lot of hair before figuring it out, and then I was like "what? Is that all???"

Edit: BTW, if anybody has some raw bayer footage, I would gladly download it so I can test the algorithm on different video material.
Steven Mingam is offline   Reply With Quote
Old July 11th, 2007, 07:16 AM   #118
Major Player
 
Join Date: Mar 2007
Location: Amsterdam The Netherlands
Posts: 200
Steve,

When I get the camera running correctly, I will be able to put some footage up. However, the reseller of the Pike doubts that the artifacts are abnormal.
He basically says: "you shouldn't do gamma correction".

So to prove him wrong I will have to make a light-sensitivity graph (x: exposure time, y: A/D output value) for a couple of pixels. It looks to me like 50% of the pixels are not linear at mid light levels.
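A minimal sketch of how that per-pixel check could be scripted; the exposure sweep, the straight-line fit and the 2% tolerance are my own assumptions, not anything from the Pike documentation:

```python
# For each probed pixel, fit a straight line to (exposure, A/D value) pairs
# and flag pixels whose samples stray too far from the fit.
import numpy as np

def find_nonlinear_pixels(frames, exposures, pixels, tol=0.02):
    """frames: one 2D array per exposure; pixels: list of (y, x) sites to probe."""
    exposures = np.asarray(exposures, dtype=np.float64)
    suspects = []
    for (y, x) in pixels:
        values = np.array([f[y, x] for f in frames], dtype=np.float64)
        slope, offset = np.polyfit(exposures, values, 1)     # least-squares line
        residual = values - (slope * exposures + offset)
        if np.max(np.abs(residual)) > tol * values.max():    # relative deviation
            suspects.append((y, x))
    return suspects
```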

Cheers,
Take
__________________
VOSGAMES, http://www.vosgames.nl/
developer of Boom Recorder and Mirage Recorder
Take Vos is offline   Reply With Quote
Old July 11th, 2007, 09:20 AM   #119
Trustee
 
Join Date: Nov 2005
Location: Sauk Rapids, MN, USA
Posts: 1,675
Any overclocking is safe provided you can cool the parts enough... the heat goes up quickly though; you'll be liquid-cooling the thing before too long ;)

Luddite question: is there a reason FireWire (IEEE 1394) isn't being considered (or was it considered, then dismissed)?
__________________
Web Youtube Facebook
Cole McDonald is offline   Reply With Quote
Old July 11th, 2007, 09:38 AM   #120
Major Player
 
Join Date: May 2007
Location: Sevilla (Spain)
Posts: 439
Steve, I can provide you with raw bayer material from my cam. Do you want anything in particular? Resolution? Colors? Lighting?

Just tell me where I can upload it.
Jose A. Garcia is offline   Reply With Quote