If you look at the depth of field equations, they come down to whether the circle of confusion stays smaller than some acceptable limit, and when you pixel peep that limit is roughly the size of a pixel. When you look at an image on the web (generally about 2 MP or below) the effective DOF is a lot greater than if you pixel peep the raw file (16-24 MP or more), because the downsized file can tolerate a much larger circle of confusion than the raw image.
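
Back of the envelope in Python (thin lens formulas; the sensor width, focal length, aperture and focus distance are made-up example numbers for APS-C at 50mm f/5.6 focused at 3m):

import math

def dof_limits(f_mm, N, s_mm, coc_mm):
    # near/far limits of acceptable sharpness for a given circle of confusion
    H = f_mm**2 / (N * coc_mm) + f_mm                  # hyperfocal distance
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)
    far = s_mm * (H - f_mm) / (H - s_mm) if s_mm < H else math.inf
    return near, far

sensor_w = 23.5                         # APS-C sensor width, mm
coc_24mp = sensor_w / 6000              # about one pixel on a 24 MP file
coc_2mp  = sensor_w / 1730              # about one pixel on a 2 MP web copy
for label, coc in (("24 MP raw", coc_24mp), ("2 MP web", coc_2mp)):
    near, far = dof_limits(50.0, 5.6, 3000.0, coc)
    print("%s: CoC %.4f mm, DoF %.0f-%.0f mm" % (label, coc, near, far))

The web-sized version comes out roughly three and a half times deeper, since the per-pixel circle of confusion scales with the square root of the pixel count.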

It seems to me that with sensors of such high resolution that they are diffraction limited at f/5.6 or so, it should be possible for image processing software to detect the growing circle of confusion (at 2 or 3 pixels) with a lot more accuracy than our eyes can, and therefore to enhance the effect of shallow depth of field more accurately than just applying a blur to the background of an image.
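
Something like this crude sketch is the sort of thing I have in mind (plain numpy/scipy; the window size and the synthetic test image are just illustrative). It maps the local energy of the Laplacian, which falls off as the circle of confusion spreads detail over more pixels:

import numpy as np
from scipy import ndimage

def sharpness_map(img, win=15):
    # local energy of the Laplacian: high where the image is in focus,
    # low where the CoC has smeared detail over several pixels
    lap = ndimage.laplace(img.astype(np.float64))
    energy = ndimage.uniform_filter(lap * lap, size=win)
    return energy / (energy.max() + 1e-12)

# toy test: random texture with the right half defocused by a Gaussian blur
img = np.random.rand(256, 256)
img[:, 128:] = ndimage.gaussian_filter(img[:, 128:], sigma=3)
s = sharpness_map(img)
print(s[:, :128].mean(), s[:, 128:].mean())   # the sharp half scores much higher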

Likewise, given a good model of the lens (its bokeh), it might also be possible to mathematically increase the depth of field by deconvolving the known blur.
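
The textbook trick for that is Wiener deconvolution against the known blur kernel. A toy version (it assumes the defocus blur is a Gaussian; a real lens model would plug in the measured bokeh kernel instead):

import numpy as np

def gaussian_psf(shape, sigma):
    # stand-in defocus kernel, centred on the grid and normalised to sum to 1
    y, x = np.indices(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    g = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def wiener_deconvolve(blurred, psf, noise_ratio=1e-2):
    # classic Wiener filter: invert the known blur in the frequency domain;
    # noise_ratio keeps the division from blowing up where the response is tiny
    H = np.fft.fft2(np.fft.ifftshift(psf))
    W = np.conj(H) / (np.abs(H) ** 2 + noise_ratio)
    return np.real(np.fft.ifft2(W * np.fft.fft2(blurred)))

# toy usage: blur a test card with the known PSF, then undo it
img = np.zeros((128, 128))
img[32:96, 32:96] = 1.0
psf = gaussian_psf(img.shape, sigma=2.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)
print(abs(blurred - img).mean(), abs(restored - img).mean())  # restored should be closer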

A corollary to this is that it would in theory be possible to extract depth/distance information from the image (though it might be hard to tell whether a given amount of blur means the subject is in front of or behind the plane of focus).
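
The thin lens relation inverts directly, and the inversion makes that ambiguity explicit: one measured blur always gives two candidate distances, one on each side of the plane of focus (again, made-up numbers):

def depth_from_coc(f_mm, N, s_focus_mm, c_mm):
    # thin lens: CoC diameter c = k * |s - s_focus| / s with k = f^2 / (N * (s_focus - f)),
    # so a measured c solves to two possible subject distances s
    k = f_mm**2 / (N * (s_focus_mm - f_mm))
    near = s_focus_mm / (1.0 + c_mm / k)       # in front of the plane of focus
    far = s_focus_mm / (1.0 - c_mm / k) if c_mm < k else float("inf")
    return near, far

print(depth_from_coc(50.0, 5.6, 3000.0, 0.01))   # roughly (2814, 3212) mm for the same blur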

Are there any hard-core signal/image processing nerds on the list who know anything about work being done on this? It wasn't too long ago that this would have taken the sort of processing power only Los Alamos or the NSA had, but desktop computers, particularly with GPUs, are probably running at something like 2005 "Craymarks".

--
Larry Colen  l...@red4est.com (postbox on min4est) http://red4est.com/lrc

