Hello everybody,

Recently I was talking to a colleague about creating images with
"unlimited" depth of field from several single images focused at
different distances. In the meantime I have been playing around a bit
and came up with the following solution:

# First, create an average of all input images

convert test1.png test2.png test3.png test4.png -average average.png

# Second, analyze the distance of a pixel in each image to the averaged pixel and pick the most extreme one

convert average.png test1.png test2.png test3.png test4.png \
  -fx "xx=max(max(abs(u[1]-u[0]),abs(u[2]-u[0])),max(abs(u[3]-u[0]),abs(u[4]-u[0]))); yy=max(max(u[1]-u[0],u[2]-u[0]),max(u[3]-u[0],u[4]-u[0])); xx>yy?u[0]-xx:u[0]+xx" \
  out.png

The thought behind this is that in blurred areas a pixel's color stays
very close to the average (a blurred pixel is essentially the mean of
its surroundings), while in the image where the pixel is sharp its
color deviates from the average the most.
While this basically works (at least with the simple test images I made
up), there is still room for improvement:
Doing this with the -fx operator takes ages on big images. Somehow
there must be a way to do this with a more basic kind of operation, but
I still lack the right idea (and probably sufficient knowledge of what
IM can do).
If there is no alternative and I have to stay with -fx: the max()
operator accepts more than two arguments (e.g. max(a,b,c)), but seems
to ignore every argument between the first and the last one. For this
reason I had to work with nested max() expressions, which feels a bit
clumsy to me. I bet there is a more elegant way to do it.
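
One (completely untested) idea I had for speeding this up: the
abs(u[k]-u[0]) parts could probably be precomputed once per image with
the built-in Difference compose instead of being evaluated inside -fx
for every pixel. Something like the following, where the diffN.png and
maxdiff.png names are just placeholders (and I am not sure every IM
version already has -evaluate-sequence):

# absolute per-channel difference of each input to the average
convert average.png test1.png -compose Difference -composite diff1.png
convert average.png test2.png -compose Difference -composite diff2.png
convert average.png test3.png -compose Difference -composite diff3.png
convert average.png test4.png -compose Difference -composite diff4.png

# per-pixel maximum of all the difference maps
convert diff1.png diff2.png diff3.png diff4.png -evaluate-sequence Max maxdiff.png

Selecting the source pixel that produced that maximum would still need
-fx (or some masking trick I have not found yet), but at least the
expensive part of the expression would be gone. Regarding the nested
max(): chaining through intermediate variables keeps the two-argument
max() but reads a little less clumsily, e.g.
"m1=abs(u[1]-u[0]); m2=max(m1,abs(u[2]-u[0])); m3=max(m2,abs(u[3]-u[0])); xx=max(m3,abs(u[4]-u[0]))".
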
Any comments and suggestions are greatly appreciated.
Best regards,

Jan


PS: If you really want to read on...
I also thought about some alternatives to the above approach, which
failed for various reasons:
1) The "gradient" approach, which does not need an averaged image but
relies on the images being in the right order. The idea here is that
when a pixel becomes sharp, it shows the largest difference to the same
pixel in the previous or the next image.

convert test1.png test2.png test3.png \
  -fx "gradientuv=u[1]-u[0]; gradientvw=u[2]-u[1]; abs(gradientuv)>=3*abs(gradientvw)?u[0]:(abs(gradientvw)>=3*abs(gradientuv)?u[2]:u[1])" \
  out2.png

The factor 3 is just an arbitrary number to prevent the expression from
picking an intermediate maximum/minimum. For example, if a pixel has
the values u[0]=0.1, u[1]=0.8, u[2]=0.79, then the right value is
probably u[0], not u[1], even though u[1] is the largest of the three.
The problem with this is that so far I have not been able to figure out
how to do this with more than three images, or how to include the
result out2.png in a further convert command that adds more images to
the result. Maybe this is easy to fix...
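The only (completely untested) idea I have had so far is to feed the
partial result back in as the "previous" image of the next triple,
reusing test3.png as the middle frame:

convert out2.png test3.png test4.png \
  -fx "gradientuv=u[1]-u[0]; gradientvw=u[2]-u[1]; abs(gradientuv)>=3*abs(gradientvw)?u[0]:(abs(gradientvw)>=3*abs(gradientuv)?u[2]:u[1])" \
  out4.png

Since out2.png is already a composite and no longer a real frame of the
focus series, I am not at all sure the ordering assumption behind the
gradient logic still holds here.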

2) The "sharpness" approach, which could even work with just two input
images (if only it worked at all...). Here I look at the pixels
surrounding a given pixel in each input and pick the input whose pixel
has the largest distance to one of its four neighbours.

convert test1.png test2.png \
  -fx "udist=max(max(abs(u[0]-u[0].p[0,-1]),abs(u[0]-u[0].p[0,1])),max(abs(u[0]-u[0].p[-1,0]),abs(u[0]-u[0].p[1,0]))); vdist=max(max(abs(u[1]-u[1].p[0,-1]),abs(u[1]-u[1].p[0,1])),max(abs(u[1]-u[1].p[-1,0]),abs(u[1]-u[1].p[1,0]))); udist>vdist?u[0]:u[1]" \
  out3.png

Here the problem is more fundamental: if, for example, a black area
borders a white area, the border becomes a grey gradient in the unsharp
image. In areas that should be purely white (or black), the expression
then picks the grey pixels from the blurred image instead, because
light grey differs from dark grey by more than white differs from white ;-(
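
One half-baked idea to at least speed this variant up (untested, and it
assumes an IM version that already has the -statistic operator):
compute a local-contrast map for every input first, e.g. a 3x3 standard
deviation, and only compare those maps in -fx. The sharp1.png and
sharp2.png names are again just placeholders:

# local contrast of each input as a separate "sharpness map"
convert test1.png -statistic StandardDeviation 3x3 sharp1.png
convert test2.png -statistic StandardDeviation 3x3 sharp2.png

# per pixel, take the input whose sharpness map is larger
convert sharp1.png sharp2.png test1.png test2.png -fx "u[0]>u[1]?u[2]:u[3]" out3.png

This does not solve the grey-border problem by itself, of course; maybe
blurring the sharpness maps a little before comparing them would help,
but I have not tried that either.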
