Jan Harnisch wrote...
| Hello everybody,
|
| recently I was talking to a colleague about creating images with
| "unlimited" depth of field out of several single images with different
| focal points. In the meantime I have been playing around a bit and
| found the following solution:
|
| # First, create an average of all input images
|
| convert test1.png test2.png test3.png test4.png -average average.png
|
| # Second, analyze the distance of a pixel in each image to the
| # averaged pixel and pick the most extreme one
|
| convert average.png test1.png test2.png test3.png test4.png -fx \
|   "xx=max(max(abs(u[1]-u[0]),abs(u[2]-u[0])),max(abs(u[3]-u[0]),abs(u[4]-u[0])));
|    yy=max(max(u[1]-u[0],u[2]-u[0]),max(u[3]-u[0],u[4]-u[0]));
|    xx>yy?u[0]-xx:u[0]+xx" out.png

Yes, I have seen this before, using a picture of a gear and a champagne
bottle. But I can't seem to find any bookmark or other reference back to
that page. It is lost in the web.
| The thought behind this is that pixels in blurred areas will always have
| a very similar color (i.e. the average of all surrounding pixels), while
| the color of a sharp pixel will differ from the average by the greatest
| amount.
|
| While this basically works (at least with the simple test images I made
| up), there is still room for improvement:
|
| Doing this with the -fx operator takes ages with big images. Somehow
| there must be a way to do this with a more basic kind of operation, but
| I still lack the right idea (and probably sufficient knowledge of
| what IM can do).
|
| If there is no alternative and I need to stay with -fx: The max()
| operator accepts more arguments than two (e.g. max(a,b,c)), but seems to
| ignore any argument between the first and the last one. For this reason,
| I had to work with nested max() expressions, which seems a bit clumsy to
| me. I bet there is a more elegant way to do it.
|
| Any comments and suggestions are greatly appreciated.
|
| Best regards,

There are alternatives. -fx is inherently slow, as it currently
interprets the expression once for every VALUE processed. That is
generally 3 times the number of pixels, which is a LOT.

You can convert the command line into a pixel-by-pixel API script, using
the PerlMagick demonstration script "pixel_fx.pl" as a starting point,
and you will get a BIG speed improvement.

However, as you are using it as a 'selection' method, and -fx works by
value, you may get color distortions caused by some channel values being
selected from different images. Though as the images should be of the
same object, that would not be very visible.

The BETTER solution is to re-implement the -fx as global image
transforms, so as to produce a selection mask for each image. This will
work with ANY NUMBER of images, and while it has a lot of steps, all
they are really doing is the individual operations you were doing in
-fx, but using whole images rather than pixel by pixel.
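As an aside, the per-pixel logic of that -fx expression can be sketched
in plain Python (a hypothetical illustration, not ImageMagick code),
treating each image as a flat list of grayscale values: for each pixel,
keep the value farthest from the per-pixel average. The -fx version
reconstructs the pixel as u[0]-xx or u[0]+xx, but per channel that
amounts to the same thing as picking the extreme value directly.

```python
def focus_stack_by_average(images):
    """For each pixel, return the value farthest from the per-pixel average.

    `images` is a list of equally sized flat lists of grayscale values.
    """
    n = len(images)
    stacked = []
    for values in zip(*images):            # one tuple of values per pixel
        avg = sum(values) / n
        # The pixel with the largest |value - average| is assumed sharpest.
        stacked.append(max(values, key=lambda v: abs(v - avg)))
    return stacked

# Three "images" of two pixels each: pixel 0 is most extreme in image 2,
# pixel 1 in image 1.
print(focus_stack_by_average([[120, 50], [230, 128], [120, 200]]))  # -> [230, 50]
```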
Note that as I have no images to test this, it is untested... Please
send me some test images for debugging...

First, start convert and read in ALL the original images. Any order,
though later ones will be given a higher preference than earlier ones
when the difference is the same (see later).

   convert image?.png \

Now create the average image, in a separate working image sequence.
See Parenthesis
   http://www.imagemagick.org/Usage/basics/#parenthesis

      \( -clone 0--1 -average \

Now do a 'difference' compose of that average against each of the
original images. See -layers composite for compositions with multiple
image pairs...
   http://www.imagemagick.org/Usage/anim_mods/#composite

         null: -clone 0--1 -compose difference -layers composite \

At this point you have two sets of images: the original images, and a
separate working sequence of the differences from the average.

Now we need to convert that into a mask for each image which has the
maximum value. However masks should be black and white, so we need to
gray-scale the difference images in the working sequence. That will
also solve the color distortion problem.

         -colorspace Gray \

A better way may be to add the differences of all three color channels
rather than just grayscaling, but I am not certain how to do this with
multiple images. This should be good enough anyway.

Now to generate the masks of the images with the maximum value, you
first need to know what the maximum difference is. So let's compose all
the images together looking for the maximum difference. See Compose
Lighten
   http://www.imagemagick.org/Usage/compose/#lighten

For that we need another 'working image sequence':

         \( -clone 0--1 -compose Lighten -flatten \

Now we have three image sequences: the originals, the differences, and
a single 'maximum difference'. We now combine the maximum difference
with the individual difference images, and look for ANY difference.
So let's insert the -layers composite marker and terminate the last
working image sequence, combining it with the previous ones.

         null: +insert \) \

Now get a difference from the maximum difference, by making any
difference transparent! See ChangeMask
   http://www.imagemagick.org/Usage/compose/#changemask
This was added to IM in v6.3.4, so make sure your IM is up-to-date.

         -alpha set -compose ChangeMask -layers composite \

Note this could be done using 'difference' to get images representing
the difference from maximum, but ChangeMask converts that into a
'shape' mask for us. The masks could overlap on images which have the
same 'maximum difference', but that should not matter.

OK, home stretch. We have our original images, and their corresponding
shape masks. So let's use the shape masks to make the unwanted parts of
the original images transparent. Again let's insert the -layers
composite 'marker' and merge the two image sequences.

         null: +insert \) \
         -alpha set -compose DstIn -layers Composite \

Each image has now been masked with the 'maximum difference mask', so
let's just flatten them to produce our final image: a depth of field
merge based on maximum difference from the average image.

         -flatten depth_field_merge.png

If the masks do overlap on some pixel (same maximum difference from the
average) then the later pixel will take preference.

DONE! Here are all the steps together:

   convert image?.png \
      \( -clone 0--1 -average \
         null: -clone 0--1 -compose difference -layers composite \
         -colorspace Gray \
         \( -clone 0--1 -compose Lighten -flatten \
            null: +insert \) \
         -alpha set -compose ChangeMask -layers composite \
         null: +insert \) \
      -alpha set -compose DstIn -layers Composite \
      -flatten depth_field_merge.png

Now if you would like to give me a set of examples, I can debug the
above and write it up in IM examples.

ADDENDUM... Note that the reverse of this, 'minimum distance from
average', can also be useful for removing moving objects from a large
number of images of the same scene.
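For anyone who finds the convert syntax hard to follow, the same
pipeline can be sketched in plain Python (a hypothetical helper, not
part of ImageMagick), on grayscale images stored as flat lists. Each
step mirrors one stage of the command above.

```python
def depth_field_merge(images):
    """Keep, per pixel, the value from the image that differs most
    from the per-pixel average; on ties the later image wins."""
    n_pixels = len(images[0])
    # Step 1: "-average" -> per-pixel mean of all images.
    average = [sum(img[i] for img in images) / len(images)
               for i in range(n_pixels)]
    # Step 2: "-compose difference" -> per-image |image - average|.
    diffs = [[abs(img[i] - average[i]) for i in range(n_pixels)]
             for img in images]
    # Step 3: "-compose Lighten -flatten" -> per-pixel maximum difference.
    max_diff = [max(d[i] for d in diffs) for i in range(n_pixels)]
    # Step 4: ChangeMask + DstIn + "-flatten" -> keep the pixel from an
    # image whose difference equals the maximum; scanning in order means
    # a later image with the same maximum overwrites an earlier one,
    # just as -flatten gives later layers preference.
    merged = []
    for i in range(n_pixels):
        for img, d in zip(images, diffs):
            if d[i] == max_diff[i]:
                value = img[i]
        merged.append(value)
    return merged
```

The second assertion below shows the tie-breaking: when two images
differ from the average by the same amount, the later one is kept.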
That is, removing people from the scene! See the discussion in IM
Examples, Double Exposure!
   http://www.imagemagick.org/Usage/photos/#double

| 1) The "gradient" approach, which does not need an averaged image, but
| relies on the images being in the right order. Here the idea is that
| when a pixel becomes sharp, it has the highest difference to the same
| pixel in the previous or next image.

That does not work well for scenery images. However the reverse of this,
blurring by gradient, is often used to make a real-life image look like
a photo of a desktop model. See Fake Model Photography
   http://recedinghairline.co.uk/tutorials/fakemodel/
The method is actually called 'tilt-shift' photography
   http://en.wikipedia.org/wiki/Tilt-shift_photography
For an extreme example try this video!!!
   http://elvery.net/drzax/2008/10/23/tilt-shifting-in-the-extreme/

| 2) The sharpness approach, which could even work with two input images
| (if it just worked at all...). Here I take a look at the pixels
| surrounding a specific pixel and pick the pixel that has the highest
| distance to one of its neighbors.
|
| convert test1.png test2.png -fx \
|   "udist=max(max(abs(u[0]-u[0].p[0,-1]),abs(u[0]-u[0].p[0,1])),max(abs(u[0]-u[0].p[-1,0]),abs(u[0]-u[0].p[1,0])));
|    vdist=max(max(abs(u[1]-u[1].p[0,-1]),abs(u[1]-u[1].p[0,1])),max(abs(u[1]-u[1].p[-1,0]),abs(u[1]-u[1].p[1,0])));
|    udist>vdist?u[0]:u[1]" out3.png
|
| Here the problem is a more basic one: If, for example, there is a black
| area adjacent to a white area, the border is a grey gradient when it is
| unsharp. In areas that should be purely white (or black), the expression
| picks the grey pixels instead, because light grey differs from dark grey
| by a higher amount than white differs from white ;-(

This is known as 'unsharp' for a single image.
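The sharpness approach in 2) can also be sketched in plain Python (a
hypothetical illustration, not ImageMagick code): measure each pixel's
local contrast as the largest absolute difference to any of its four
neighbours, then keep the pixel from whichever image is locally
sharpest. Images here are lists of rows of grayscale values.

```python
def local_contrast(img, x, y):
    """Largest absolute difference between (x, y) and its 4-neighbours."""
    h, w = len(img), len(img[0])
    best = 0
    for dx, dy in ((0, -1), (0, 1), (-1, 0), (1, 0)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < w and 0 <= ny < h:
            best = max(best, abs(img[y][x] - img[ny][nx]))
    return best

def focus_stack_by_sharpness(images):
    """Per pixel, keep the value from the image with highest local contrast."""
    h, w = len(images[0]), len(images[0][0])
    return [[max(images, key=lambda im: local_contrast(im, x, y))[y][x]
             for x in range(w)]
            for y in range(h)]
```

Note this sketch shares the quoted failure mode: pixels bordering a
sharp edge also show high local contrast, so a blurry grey gradient next
to a sharp edge can win over a flat, correctly focused area.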
  Anthony Thyssen ( System Programmer )    <[EMAIL PROTECTED]>
 -----------------------------------------------------------------------------
   "The avalanche has already started. It is too late for the pebbles
    to vote."                                       -- Ambassador Kosh
 -----------------------------------------------------------------------------
   Anthony's Home is his Castle      http://www.cit.gu.edu.au/~anthony/

_______________________________________________
Magick-users mailing list
[email protected]
http://studio.imagemagick.org/mailman/listinfo/magick-users
