Genius Tech. - Shoot First and Ask What to Focus Later
November 23rd, 2005, 01:23 PM   #1 - Brett Erskine
Genius Tech. - Shoot First and Ask What to Focus Later

Early prototype of a camera that can selectively focus long AFTER the image has been shot. Impossible? Nope.

http://graphics.stanford.edu/papers/...era-150dpi.pdf

With ever-increasing megapixel counts, this technology seems very likely to show up in digital cameras in the future. Imagine never having an image that is slightly out of focus again, or making multiple prints of the same moment in time with a different subject in focus in each. Now extend the idea to moving pictures and video.
November 23rd, 2005, 01:31 PM   #2 - Ben Winter
Haha--looks like Valeriu beat you to it.

http://www.dvinfo.net/conf/showthread.php?t=54839
__________________
BenWinter.com
November 23rd, 2005, 02:20 PM   #3 - Brett Erskine
And Ren Ng, the inventor, beat us all to it. Rather than improving something that already exists, he went in a totally new direction. Genius.
November 24th, 2005, 09:34 AM   #4 - Wayne Morellini
Too late. I came up with a scheme to do this ages ago; I might have posted about it here as well. It is pretty simple. Another idea that seemed to hit the street before I got to it was converting 2D pictures into 3D, which Intel did a few years back. All the info is there in the image to do these things. Patenting is an ass.
November 24th, 2005, 09:33 PM   #5 - Juan M. M. Fiebelkorn
So great!!!
Show us how to make it our own!!!!
November 24th, 2005, 09:45 PM   #6 - Kyle Edwards
Quote:
Originally Posted by Wayne Morellini
Too late, I came up with a scheme to do this ages ago, I might have posted about it here as well. It is pretty simple. Another idea that seemed to hit the street before I got to it, was converting 2D pictures into 3D, which Intel did a few years back. All the info is there in the image to do these things. Patenting is an ass.
Post it now.
November 25th, 2005, 01:38 AM   #7 - Wayne Morellini
I say a lot of truthful things at 1:34 AM that I should keep to myself. But it is irritating to see things crop up elsewhere.

You have Intel and Stanford, so be happy with them. But I'll blurt out my theory anyway, and you can look up my posts and notice I have given away a number of things, though you have to read them to notice them.

The thing with depth of field is that as you move away from the perfect point of focus, the defocus grows at a mathematically predictable rate. So once you have the lens specs and settings, you can calculate where all the fuzzy light came from and backtrack to make focused pixels. Accuracy of the light values may suffer a bit.
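
A minimal Python sketch of that kind of back-calculation, assuming the defocus can be modelled as a single known disk-shaped kernel and inverted with a basic Wiener-style filter. This only illustrates the general idea; it is not Wayne's actual scheme or the method in the Stanford paper.

Code:
import numpy as np

def disk_psf(radius, size):
    # Disk-shaped kernel approximating the circle of confusion for a
    # given amount of defocus (radius and size in pixels).
    y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    psf = ((x * x + y * y) <= radius * radius).astype(float)
    return psf / psf.sum()

def deblur(blurred, psf, k=0.01):
    # Invert the known blur in the frequency domain; k is a small
    # constant that limits how much the noise gets amplified.
    pad = np.zeros_like(blurred, dtype=float)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    # Move the kernel centre to the origin so the result is not shifted.
    pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    H = np.fft.fft2(pad)
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + k)))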

More mechanics:
As a point of detail defocuses, it forms a shape of defocus around itself that predictably declines over distance (a gradient effect). With a perfect aperture that shape is round; with a multi-bladed iris it has sides (the reason defocused background lights in a shot have sides). Objects in the real world may also distort this shape, since light from a point may not reach part of the lens because an object in between is blocking its path. The sides of the frame will also cut off defocus data, so the loss of data increases with depth distance and with proximity to the edges of the frame (another reason camera adaptors have dark circular edges around the frame unless they compensate with optics).

The point of maximum intensity of the information yields the location of the detail, from which you can work out its colour/intensity; you then work outwards from there in a circular/iris shape, subtracting the appropriate light value according to distance. The values over distance can also yield more info (and higher-res detail, as they are larger pictures of smaller details within the pixel area ;). Useful for hi-res upscaling. By re-ramping these gradients of values you adjust focus. The problem of having all the light values mixed up is what reduces the accuracy for further-away objects.

I was intending to use that in image compression and codec routines. You store the image with the defocus completely removed, along with the lens specs, settings and environmental settings, plus a depth indicator (maybe as little as 4 bits with computer prediction); a routine then recreates it at the other end at any DOF placement, and in 3D. Alternatively, you could send it defocused with the lens specs, settings and environmental settings, and the routine recreates the desired effect.
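
A rough Python sketch of what such a refocusable payload and playback routine could look like. The field names, the coarse depth index and the per-level Gaussian blur are made-up illustrations for a single-channel image, not an actual codec design.

Code:
import numpy as np
from dataclasses import dataclass
from scipy.ndimage import gaussian_filter

@dataclass
class RefocusableFrame:
    sharp: np.ndarray        # image stored with defocus removed (all in focus)
    depth_index: np.ndarray  # coarse per-pixel depth, e.g. 4 bits -> 16 levels
    focal_length_mm: float   # lens/environment metadata carried with the frame
    f_number: float

def render_dof(frame, focus_level, blur_per_level=1.5):
    # Recreate a chosen focus setting at playback: each depth layer is
    # blurred in proportion to its distance from the focused level.
    out = np.zeros_like(frame.sharp, dtype=float)
    for level in np.unique(frame.depth_index):
        sigma = abs(int(level) - focus_level) * blur_per_level
        layer = frame.sharp.astype(float)
        if sigma > 0:
            layer = gaussian_filter(layer, sigma)
        mask = frame.depth_index == level
        out[mask] = layer[mask]
    return out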

This also yields perfect 3D information, so the shallowest DOF possible is probably best for 3D; with no defocus at all it becomes impossible (no defocus data).

There we go, you also get a cheap 3D tool.


Simple, and that's nothing -- it is also just the simple version. But you see how you could get several patents simply and quickly (assuming none of it has been patented or used anywhere before) by following different lines of reasoning to their natural conclusion. I can offer no guarantee that any of these things haven't been tried before, as I intended to do in-depth prior-art checking only once I got down to them on my list of things to do.

Now I'll have to read the PDF to find out how they did it (I said I had a scheme, I did not say the same scheme).

Thanks

Wayne.
November 25th, 2005, 02:12 AM   #8 - Wayne Morellini
Additional note:

I should point out that objects in between shift the location of maximum intensity away from the location of the point causing it (much as the design of a mirror lens causes defocus to form a more intense ring around the actual point). A computer should also be able to do the calculation without the lens specs and settings, by sampling the image to determine what they are. At first (before good routines are developed) you could film a sample shape to determine the values.

I forgot to mention that because part of the lens might see a point and another part not, you can get 3D information and see a little way around objects in the line of sight. Theoretically, if you had a 10-foot lens, you could see a long way around objects (though the precision of the pixel sensor needs to increase to compensate for the inaccuracy). A mirror lens is one of the cheapest ways of doing this at the moment; depending on the nature of its defocus ring, the ring should be able to be calculated out, and on a large lens the obstruction causing these rings might be small enough to be less noticeable. I'm only using the ten-footer as an exaggerated example: for normal fixed stereo 3D, several centimetres is all that would be needed, but a ten-footer would allow genuinely broad movement in viewing angle, especially when fed under user control (so the viewer doesn't have to walk around) or for advertising signs. But I expect the accuracy to decrease the further off-axis you go, and the plane of focus might look flat.
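
The amount of "seeing around" available inside one lens is bounded by the aperture diameter acting as a stereo baseline. A back-of-the-envelope Python sketch using the plain two-camera disparity relation and made-up numbers, just to show the order of magnitude.

Code:
def subaperture_disparity_px(aperture_mm, focal_mm, depth_mm, pixel_um):
    # Parallax, in pixels, between views through opposite edges of the
    # aperture, using the simple stereo relation d = f * B / Z. It ignores
    # where the lens is focused, so treat it as an order-of-magnitude figure.
    disparity_mm = focal_mm * aperture_mm / depth_mm
    return disparity_mm * 1000.0 / pixel_um

# A 50 mm f/2 lens (25 mm aperture), subject 2 m away, 5 micron pixels:
print(subaperture_disparity_px(25.0, 50.0, 2000.0, 5.0))   # about 125 pixels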
November 26th, 2005, 01:24 AM   #9 - Brett Erskine
Hey Wayne, nice theory. I see it potentially working fairly well on single points of light (astrophotography) to bring them back into focus, but I wonder about its ability in the real world, where the everyday scene being captured has many, many points of light. I understand your idea, but where it may run into a hang-up is when these out-of-focus points of light overlap (as they almost always will) and add to each other's intensity, which will sometimes result in an overexposure in that part of the image; any out-of-focus data that you planned to use to reconstruct the image is then lost and unretrievable. It seems that this technique will only work if none of the out-of-focus image goes beyond the recordable latitude of the sensor (or film) being used. Also, as you mentioned, the light that is out of focus and spills off the side of the frame will be lost and will potentially affect the accuracy of other light data throughout the rest of the image. I'm curious whether you have had a chance to try out your theory, or if you could point us towards more info on the subject.

This is totally different from how Ren got his camera to work, but still very interesting, Wayne. Now just patent the next one and make yourself more than just a genius -- make yourself a wise man with millions. Good luck.
November 26th, 2005, 09:06 AM   #10 - Wayne Morellini
Quote:
Originally Posted by Brett Erskine
Hey Wayne nice theory. I'm curious if you had a chance to try out your theory or if you could point us in the direction of more info on the subject.
Thanks, but as I said, it is my scheme for achieving it; there is nothing else I based it on, I just came up with it myself (though I mention below that it now occurs to me there might have been similar examples with 2D pictures at a set defocus -- well, blur, as there is no variation in focus).

It now occurs to me that there must have been some technique they used on spy cameras to focus on Skylab's rivets in the early '80s. But to answer your question, things are not as bad as they look. Even though the light overlaps, it is not as much of a problem as it seems, because the overlap comes in a shape: you can work on the points of maximum colour intensity, but also from the edges working your way in; once you have the gradient and you sample along it to the point, you get a truer interpretation of what the original point was.

You can also work on the colour of all points, using the variations they cause as you move away from them and as they overlap with all the other points (producing many patches of sample effects for the same point, and so a more accurate interpretation of how much of, say, the yellow belongs to the point next to it and how much to itself), subtract all that, then if needed use the new values as part of a routine to re-interpret the original image again (as many times as needed). If you look at how the calculations in things like single-chip complementary/hybrid colour sensors separate the red, green and blue channels out of overlapping colour data, you realise there is already a bit of this kind of colour calculation going on in the industry (though that has little to do with this idea).

I see there might be limits to sharpness the more diffuse the focus is (but as I read last night, the hardware solution has a limit too); accuracy can compensate for it, but how accurate you can make the sensor is what limits that (sampling over subsequent frames will vary the noise floor and likely reveal some detail below it, as the original data will statistically show up more often -- don't tell anybody I said that ;). The more diffuse something is, the coarser and flatter it will likely come out. But the deciding factor is the accuracy you can see with. 2D will be a lot more accurate than around-the-edge 3D, because there you are losing quantity of information. Most of the problems I mentioned about objects and screen edges can be compensated for (the extent to which the edge effect varies across the screen also tells you something about an object's 3D position ;).
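
That "estimate the point, subtract its overlap from the neighbours, then re-interpret as many times as needed" loop is essentially iterative deconvolution. A bare-bones Python sketch in the Van Cittert style, assuming a single known blur kernel; a real routine would need damping and a noise model, and this is not how the Stanford camera works.

Code:
import numpy as np
from scipy.signal import fftconvolve

def iterative_deblur(blurred, psf, n_iter=25, step=1.0):
    # Re-blur the current guess, compare it with what was recorded, and
    # feed the residual back into the estimate -- one "subtract the
    # overlap and re-interpret" pass per iteration.
    estimate = blurred.astype(float).copy()
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        estimate = estimate + step * (blurred - reblurred)
    return estimate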

As I said, the one problem is accuracy; latitude is not the problem, as long as you are prepared to lose something at the top or bottom end. Accuracy is more important: the more accurate the sensor is, the more values in the bit range it can reliably define, so the more levels and better edges you can get, and the extent of a point's edge and its value can lead to more accurate values. The edge of a weak point of light will go beneath the noise floor sooner, and a stronger one will end stronger; this distance/value can give you a reasonable estimate of the original point. As I said, sampling all the interference of a point on the overlapping patches of light gives a clearer picture of how the point of maximum intensity should be modified to get the truer value.

So, back to bits: having 14 bits is not such an advantage if the noise wipes out the lower 6 bits -- that's only 8 bits of accuracy, not much room for anything without affecting accuracy. If we had 16 bits of accuracy on a 16-bit camera (I don't know whether a camera that can do that is commercially available) you would have 8 bits to play around with; go one step further and hypothetically have 48 bits, and that's 40 bits to play with, so you are bound to get outstanding images (if the optics could ever live up to the job). I haven't sat down and worked out how many bits you need to get 8-bit-accurate images over a large range of focus (and we would prefer at least 10 bits for colour correction, even 16). You could, however, use a lens arrangement that keeps the maximum defocus under control to guarantee results. It just occurred to me that they have sharpened Voyager's and Hubble's photos, but those have a flat, fixed blur across the whole image; this is over a varying focus range and 3D depth. Ultimately, there are things at the ends of the defocus range that you don't care about being in focus, so you adjust your camera to give results over a chosen range of defocus.

I must mention that the processing to do this is going to range from moderate to very, very heavy, depending on the quality you want. Intel has some new routine that requires supercomputer power to up-res images; I imagine it 3Ds and defocuses them too. But realise that a PS3 has around 2 teraflops of processing power (the Xbox 360 has around 1 teraflop, but supposedly more usable for this sort of thing), which should be able to do a lot of this stuff easily.
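
The bit-depth arithmetic above, as a quick Python check; the noise figures are made up for illustration.

Code:
import math

def usable_bits(adc_bits, noise_floor_counts):
    # Bits of the ADC range that sit above the noise floor
    # (noise_floor_counts is the noise amplitude in ADC counts).
    return adc_bits - max(0.0, math.log2(noise_floor_counts))

print(usable_bits(14, 64))   # noise swamps the bottom 6 bits -> ~8 usable bits
print(usable_bits(16, 1))    # ideal 16-bit capture -> 16 usable bits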

Quote:
This is totally different to how Ren got his camera to work but still very interesting Wayne. Now just patent the next one and make yourself more than just a genius make yourself a wise man with millions. Good luck.
Well, as far as I know, by publishing this in a technical forum I am invalidating all rights to patents, as the previous posters who demanded I post it should know. If I can still make money out of it, I am willing to work with somebody on it. But this is not much compared to the many other things I have around but lack the money to exploit (big bucks, mostly hundreds of thousands+ for hardware, sometimes tens of thousands with the right team). I have an optics system and a game/vos system scheduled to work on at the moment.
November 26th, 2005, 09:23 AM   #11 - Wayne Morellini
Something I wrote later last night that you might be interested in:

I'm reading the PDF, and some of the stuff at the end of section 2 sounds like a technique I came up with in the '80s, not for camera lenses, but for 3D rendering of an image for incorporation into computer graphics. I had an idea for a 3D display and an idea for scanning/storing pictures of 3D objects (I don't know which came first). In the picture-scanning idea, it occurred to me that you could store pictures from all directions and play back the one that matched the viewpoint (which they do along a horizontal line in the matrix), but that consumes too much storage space, so I figured you could get away with a subset of pictures and interpolate (re-edit in 3D positioning) between them to get most details from any viewpoint (some indentations on an object the camera will never see to the bottom of because of the angle). So you start with 4 or more pictures instead of millions.

For the 3D display I first had a shaped 3D surface giving a different view in different directions, which is a bit impractical and prone to issues, so I settled on a flat 2D display under a micro-lens array, where each lens has an area of pixels under it and shows a different sub-pixel depending on the angle it is viewed from, forming a true 3D image -- which is sort of the reverse of how this light field camera works.

I must say, years later I met a guy who came up with the multi-camera surround thing, but I didn't bother to tell him, because he hadn't figured out the next step of interpolation; I kept that to myself. He ran off for talks with SGI about it, and sometime after that I noticed they started using this technique. So a number of people must come up with these things. Imagine if we had a big database for people to freely enter their real solutions and ideas; many conventional technologies could be advanced ten times faster. A wiki of information. I think such a thing should replace patenting, which really does bottleneck technology innovation and the public good.
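
Both the micro-lens display idea and the plenoptic camera boil down to the same 4D data structure. A toy Python sketch, assuming the light field is stored as an array L[y, x, v, u] with one (v, u) block of directional samples per lens; the refocus uses whole-pixel shift-and-add, where a real implementation would interpolate fractional shifts.

Code:
import numpy as np

def extract_view(lightfield, u, v):
    # Pull out the single image seen from angular position (u, v) --
    # what one sub-pixel per lens shows on the display, or what one
    # part of the aperture saw in the camera.
    return lightfield[:, :, v, u]

def refocus(lightfield, shift):
    # Crude synthetic refocus: shift each directional view in proportion
    # to its offset from the centre of the aperture, then average.
    ny, nx, nv, nu = lightfield.shape
    out = np.zeros((ny, nx), dtype=float)
    for v in range(nv):
        for u in range(nu):
            dy = int(round((v - nv // 2) * shift))
            dx = int(round((u - nu // 2) * shift))
            out += np.roll(lightfield[:, :, v, u], (dy, dx), axis=(0, 1))
    return out / (nu * nv)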

I see how they got over the inherent problem of a compound eye -- the inefficient sharing of light from a single point over the compound's multiple images -- by using a main lens to focus it. A nice solution. I must say it is sophisticated and quite good.

The claimed increase in low-light ability and signal-to-noise ratio is a little misleading at first. You can get more light through a bigger aperture, and a higher SNR, but you also lose some because the light is being distributed over many more, smaller sensor pads that spend a higher proportion of their area on support circuitry. There would still be a gain in SNR because each point is being sampled a number of times, and the averaging would reduce the effect of noise.
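
The averaging gain is the usual noise-over-square-root-of-N effect. A quick Python simulation with made-up numbers.

Code:
import numpy as np

rng = np.random.default_rng(0)
true_value = 100.0
n_views = 16                                    # e.g. 16 directional samples of the same point
readings = true_value + rng.normal(0.0, 8.0, size=(100000, n_views))

print(readings[:, 0].std())          # about 8.0: noise of a single sample
print(readings.mean(axis=1).std())   # about 2.0: averaging 16 samples -> 8 / sqrt(16)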

----------------------------


I apologise for my long-windedness, but I was a bit sick yesterday and am less so now (which is why I couldn't remember the word for blur and made up the term de-focus instead ;). Unfortunately it has affected my ability to keep things short (and muddled my reading ability), but fortunately internal logical reasoning on my own designs is usually strong and the last to go.
November 26th, 2005, 06:50 PM   #12 - Larry Edwards
Quote:
Originally Posted by Brett Erskine
And Ren Ng, the inventor, beat us all to it. Rather than making something better that currently exists he went in a totally new direction. Genius.
Hello everyone, I just came across this forum.

Actually, it seems that Ed Adelson at MIT invented the camera (he was awarded a patent in 1991); Ren developed some techniques for more efficiently processing the image data to simulate depth of field. Adelson's goals were apparently somewhat different from Ren's -- he actually recovered 3D depth information. See:

http://www-bcs.mit.edu/people/jyawan...plenoptic.html
November 26th, 2005, 07:06 PM   #13 - Larry Edwards
Quote:
Originally Posted by Brett Erskine
Hey Wayne nice theory. I see it potentially working fairly well on single points of light (astro-photography) to bring it back in focus but wonder about its ability in the real world when it comes to the every day scene being capture that has many many points of light. I understand your idea but where it may run into a hangup is when these out of focus points of light data overlap (as they almost always will) and add to each others intensity which will sometimes result in a over exposure in that part of the image and thus any out of focus data that you planned to use to reconstruct the image is lost and unretrievable. It seems that this technique will only work if none of the out of focus image goes beyond the recordable latitude of the sensor (or film) being used. Also, as you mentioned, the light thats out of focus and spills off the side of the frame will also be lost and potentially will effect the accuracy other light data thru out the rest of the image. I'm curious if you had a chance to try out your theory or if you could point us in the direction of more info on the subject.

This is totally different to how Ren got his camera to work but still very interesting Wayne. Now just patent the next one and make yourself more than just a genius make yourself a wise man with millions. Good luck.
I'm not totally clear on what Wayne is proposing, but depth/shape from focus/defocus has been a subject of research and development since at least the late '80s. The best-known early paper is perhaps Alex Pentland's in '87.

For references see:

http://homepages.inf.ed.ac.uk/rbf/CV...dtutorial.html
November 26th, 2005, 07:10 PM   #14 - Larry Edwards
Quote:
Originally Posted by Wayne Morellini
[...]
Well, as far as I know, by publishing this in this technical forum, I am invalidating all rights to patents, as the previous posters who demanded I post it should know. [...]
Actually in the US you still have a year after publication... but in most other countries as soon as you publish, you're out of luck patent-wise...
November 26th, 2005, 07:29 PM   #15 - Larry Edwards
Quote:
Originally Posted by Wayne Morellini
[...] I must say, years latter I met a guy that came up with the multi surround camera thing, but I didn't bother to tell him, because he hadn't figured out the next step of interpolation, I kept that to myself. But he ran off for talks with SGI about it, and sometime after that I noticed they started using this technique. So a number of people must come up with these things. Imagine if we had a big database for people to freely enter their real solutions and ideas, many conventional technologies could be advanced ten times faster. [...]
Actually there is a big database of such info: old SIGGRAPH proceedings ;)

The first hint of this kind of idea probably dates back to Andrew Lippman's movie-map work in the late '70s and early '80s, but he used a brute-force videodisc approach. Viewpoint interpolation seems to have really captured people's attention in the early to mid '90s... perhaps as a result of Apple's QuickTime VR and image morphing. To find out what others have done, google image-based rendering, light field rendering, Lumigraph, or plenoptic modeling. The lab that Ren came out of had a bunch of papers on this in the mid '90s.