[hugin-ptx] Re: Fwd: Re: Re: possible memory leak in enblend enfuse?

2009-10-29 Thread Pablo d'Angelo

Rogier Wolff schrieb:
 On Wed, Oct 28, 2009 at 10:56:08PM +0100, Pablo d'Angelo wrote:
 Implementations of Minimum cut algorithms are widely available, for 
 example from the boost graph library or 
 http://www.cs.ucl.ac.uk/staff/V.Kolmogorov/software.html (maxflow v2.2 
 is GPL licensed).

 I think an implementation in enblend wouldn't be too hard to do, and 
 might be simpler than trying to fix all special cases with the current 
 approach.
 
 The current approach, I think, is very good at hiding some differences
 between adjacent images. Getting a good initial cut-line might help
 a bit. But just switching to a good cut line found by the algorithm
 you described is not good enough on its own.
 
 I have been thinking about THIS algorithm for the past few weeks. 
 
 I think your edge cost function is too simple. I would add in the
 costs (i.e. absolute pixel differences) of a circle around the current
 edge. 

Actually, I believe that enblend currently uses a very similar cost 
function, i.e. absolute gray value differences, but with a much more 
restrictive state space. So I am not convinced that the simple 
absolute-difference cost function would perform any worse than the 
current approach.

 Suppose we have a person who is only half inside one of the images.
 The intention is to cut completely around this person (cutting through
 the person should incur high costs, because the pixels differ a lot).
 
 Now suppose a small black line is in front of the person. Now both
 images have this line, so the absolute difference between pixels there
 is quite low (both black), but nearby pixels DO differ.

Of course it is possible to fool this simple cost function, as it is 
possible with the current enblend.

The pragmatic approach would be to first implement the graph cut 
optimization and then see where the simple absolute-difference cost 
runs into problems.

Then the cost function can be tuned further. I suspect that adding a 
soft constraint that keeps the seam line closer to the center of the 
overlap region might also help.
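A minimal sketch of what such a tuned cost might look like; the linear distance penalty and the `alpha` weight are illustrative assumptions, not anything enblend implements:

```python
def edge_cost(abs_diff, dist_to_center, overlap_halfwidth, alpha=0.1):
    """Absolute pixel difference plus a soft center-of-overlap prior.

    `alpha` trades off color mismatch against the (hypothetical)
    preference for seams near the middle of the overlap region.
    """
    color_term = abs_diff
    center_term = alpha * dist_to_center / overlap_halfwidth
    return color_term + center_term
```

Because the penalty is only additive, a strong color mismatch would still dominate and push the seam away from problematic areas.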

ciao
   Pablo

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
hugin and other free panoramic software group.
A list of frequently asked questions is available at: 
http://wiki.panotools.org/Hugin_FAQ
To post to this group, send email to hugin-ptx@googlegroups.com
To unsubscribe from this group, send email to 
hugin-ptx+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/hugin-ptx
-~--~~~~--~~--~--~---



[hugin-ptx] Re: Fwd: Re: Re: possible memory leak in enblend enfuse?

2009-10-28 Thread Pablo d'Angelo

Hi Andrew,

Andrew Mihal schrieb:
 Hi Pablo,
 Can you elaborate a little on your criticism of snakes?
 
 Is the problem:
 - That the polyline formulation cannot describe a good solution at all?

A polyline can describe any possible solution, provided it is fine-grained enough.

 - That the size of the state space in the current implementation is
 too restrictive?

While I don't have a strict proof, I feel that the state space of the
snake model is too restrictive and too powerful at the same time: it
allows self-crossings of the seam line and similar degenerate shapes,
yet only a small part of the image content (profiles on lines
perpendicular to the initial seam) is used to determine the overall
location of the seam line.

I believe that a state space that includes the whole overlap area and
not only some profiles of it would lead to better results.

The second step (Dijkstra shortest path) does try to remove these
limitations, but it can't undo the decisions already made by the
polyline optimisation.

A straightforward state space is an image that holds 0 for pixels taken 
from the first image and 1 for pixels taken from the second image. This 
state space has the advantage that it directly models the problem. 
A solution can then be found by modelling the problem as a graph and 
computing the minimum cut. In this approach, each pixel in the overlap 
region becomes a node in the graph. Edges connect each pixel to its 4 
nearest neighbours, and each edge is weighted with the sum of the 
absolute color differences between the two pixels it connects.
The source node is connected to all pixels that are on the boundary of 
the overlap region and are valid only in the first image.
The sink node is connected to all pixels that are on the boundary of 
the overlap region and are valid only in the second image.

Finding the minimum cut in this graph yields the seam line with the 
smallest total color difference (note that this is the global minimum), 
and hence a good position for the seam. It also means that only the 
cost function, and not the parameterization of the optimization 
algorithm, influences the result.

This should be relatively efficient, and it should naturally handle all 
the special cases with strange overlap situations (holes, small 
overlaps, etc.).
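To make the construction concrete, here is a toy Python sketch. It uses a plain Edmonds-Karp max-flow instead of the specialised maxflow library, and it fakes the boundary conditions by seeding the whole left column from the first image and the whole right column from the second; in reality the overlap masks would determine those seeds.

```python
from collections import deque

INF = 10**9  # stands in for "infinite" capacity on source/sink edges

def max_flow_min_cut(n, edges, s, t):
    """Edmonds-Karp max flow; returns the source side of a minimum cut."""
    cap = [dict() for _ in range(n)]
    for u, v, c in edges:
        cap[u][v] = cap[u].get(v, 0) + c
        cap[v][u] = cap[v].get(u, 0) + c  # pixel edges are undirected
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            break  # no augmenting path left: flow is maximal
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
    # The source side of the min cut = nodes still reachable from s
    # in the residual graph.
    reachable, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        for v, c in cap[u].items():
            if c > 0 and v not in reachable:
                reachable.add(v)
                queue.append(v)
    return reachable

def seam_labels(img_a, img_b):
    """Label each overlap pixel 0 (first image) or 1 (second image)."""
    h, w = len(img_a), len(img_a[0])
    node = lambda y, x: y * w + x
    source, sink = h * w, h * w + 1
    diff = [[abs(img_a[y][x] - img_b[y][x]) for x in range(w)]
            for y in range(h)]
    edges = []
    for y in range(h):
        for x in range(w):
            if y + 1 < h:   # edge to the neighbour below
                edges.append((node(y, x), node(y + 1, x),
                              diff[y][x] + diff[y + 1][x]))
            if x + 1 < w:   # edge to the neighbour to the right
                edges.append((node(y, x), node(y, x + 1),
                              diff[y][x] + diff[y][x + 1]))
            # Toy boundary conditions: left column = "valid only in
            # the first image", right column = "valid only in the second".
            if x == 0:
                edges.append((source, node(y, x), INF))
            if x == w - 1:
                edges.append((node(y, x), sink, INF))
    side_a = max_flow_min_cut(h * w + 2, edges, source, sink)
    return [[0 if node(y, x) in side_a else 1 for x in range(w)]
            for y in range(h)]
```

On real images one would of course use an optimised max-flow implementation; this is only meant to make the graph construction above concrete.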

There are some illustrations of the above in:
http://www.cs.huji.ac.il/course/2006/impr/lectures2006/Tirgul9b_Panoramas_blendStitch.pdf
slides 29ff

Some more ideas on how to refine the cost function so that other 
constraints can also be added:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.129.8181

Implementations of minimum cut algorithms are widely available, for 
example in the Boost Graph Library or at 
http://www.cs.ucl.ac.uk/staff/V.Kolmogorov/software.html (maxflow v2.2 
is GPL licensed).

I think an implementation in enblend wouldn't be too hard to do, and 
might be simpler than trying to fix all special cases with the current 
approach. Additionally, I believe that it would find better seams, so it 
is worth trying. Unfortunately, I currently don't have the time to do 
so... :-(

 - That the current implementation's cost functions do not correlate
 with a good solution?

No, I think these are good enough.

 - That the current implementation's GDA solver is insufficiently
 powerful to find a good answer to the optimization problem?

Yes, combined with the problems of the state space indicated above.

 - something else?

The main source of bugs seems to be the bitmap -> vector -> bitmap
round trip required by the polyline optimisation.

 It seems to me that the best seamline can be modeled as a polyline,
 and the search space can be defined in such a way as to include the
 best seam as a possibility. I admit that the current search space
 definition is a little rough, and that the GDA is buggy and needs
 parameter tuning. I think the hard part is defining what is best and
 turning that into equations. But surely the graph cut approach has
 this same requirement?

The main reason why I advocate trying the graph cut approach is that
it allows a different state space, which I think will sidestep most of
the painful problems related to the many special cases with the vector
seams. It might also be possible to optimize the pixel labeling
directly with simulated annealing, but that would probably take
forever to find a good solution due to the higher dimensionality of
the state space.

ciao
   Pablo




[hugin-ptx] Re: Fwd: Re: Re: possible memory leak in enblend enfuse?

2009-10-28 Thread Rogier Wolff

On Wed, Oct 28, 2009 at 10:56:08PM +0100, Pablo d'Angelo wrote:
 Implementations of Minimum cut algorithms are widely available, for 
 example from the boost graph library or 
 http://www.cs.ucl.ac.uk/staff/V.Kolmogorov/software.html (maxflow v2.2 
 is GPL licensed).
 
 I think an implementation in enblend wouldn't be too hard to do, and 
 might be simpler than trying to fix all special cases with the current 
 approach.

The current approach, I think, is very good at hiding some differences
between adjacent images. Getting a good initial cut-line might help
a bit. But just switching to a good cut line found by the algorithm
you described is not good enough on its own.

I have been thinking about THIS algorithm for the past few weeks. 

I think your edge cost function is too simple. I would add in the
costs (i.e. absolute pixel differences) of a circle around the current
edge. 

Suppose we have a person who is only half inside one of the images.
The intention is to cut completely around this person (cutting through
the person should incur high costs, because the pixels differ a lot).

Now suppose a small black line is in front of the person. Now both
images have this line, so the absolute difference between pixels there
is quite low (both black), but nearby pixels DO differ.
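A minimal sketch of that idea, assuming a simple disc-shaped neighbourhood (the radius and the plain sum are illustrative choices):

```python
def neighbourhood_cost(img_a, img_b, y, x, radius=2):
    """Sum of absolute pixel differences in a disc around (y, x).

    Scoring a seam position by a whole neighbourhood, rather than a
    single pixel pair, makes a thin dark line shared by both images
    look expensive to cut along when the surrounding pixels differ.
    """
    h, w = len(img_a), len(img_a[0])
    total = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy * dy + dx * dx > radius * radius:
                continue  # keep only offsets inside the disc
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                total += abs(img_a[ny][nx] - img_b[ny][nx])
    return total
```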

Roger. 

-- 
** r.e.wo...@bitwizard.nl ** http://www.BitWizard.nl/ ** +31-15-2600998 **
**Delftechpark 26 2628 XH  Delft, The Netherlands. KVK: 27239233**
*-- BitWizard writes Linux device drivers for any device you may have! --*
Q: It doesn't work. A: Look buddy, doesn't work is an ambiguous statement. 
Does it sit on the couch all day? Is it unemployed? Please be specific! 
Define 'it' and what it isn't doing. - Adapted from lxrbot FAQ




[hugin-ptx] Re: Fwd: Re: Re: possible memory leak in enblend enfuse?

2009-10-20 Thread Andrew Mihal

Hi Pablo,
Can you elaborate a little on your criticism of snakes?

Is the problem:
- That the polyline formulation cannot describe a good solution at all?
- That the size of the state space in the current implementation is
too restrictive?
- That the current implementation's cost functions do not correlate
with a good solution?
- That the current implementation's GDA solver is insufficiently
powerful to find a good answer to the optimization problem?
- something else?

It seems to me that the best seamline can be modeled as a polyline,
and the search space can be defined in such a way as to include the
best seam as a possibility. I admit that the current search space
definition is a little rough, and that the GDA is buggy and needs
parameter tuning. I think the hard part is defining what is best and
turning that into equations. But surely the graph cut approach has
this same requirement?

Andrew

On Sat, Oct 17, 2009 at 10:18 AM, Pablo d'Angelo pablo.dang...@web.de wrote:

 Hi Andrew,

 Andrew Mihal schrieb:
 Hi,
 I suspect a problem in the vectorization of the seam lines.

 Actually, the approach of using vectorized seam lines is a relatively
 complicated process. Additionally, snakes are not particularly known
 for finding good global solutions. I think a different approach to
 seam line finding would avoid these problems and also lead to better
 seams. One possibility is the family of graph-cut based segmentation
 algorithms. Here the masks are generated directly, so there is no need
 to go from masks to vectors and back to masks. I'm also quite sure that
 the generated seams would be better than with the current snake algorithm.

 ciao
  Pablo

 





[hugin-ptx] Re: Fwd: Re: Re: possible memory leak in enblend enfuse?

2009-10-17 Thread Pablo d'Angelo

Hi Andrew,

Andrew Mihal schrieb:
 Hi,
 I suspect a problem in the vectorization of the seam lines.

Actually, the approach of using vectorized seam lines is a relatively 
complicated process. Additionally, snakes are not particularly known 
for finding good global solutions. I think a different approach to 
seam line finding would avoid these problems and also lead to better 
seams. One possibility is the family of graph-cut based segmentation 
algorithms. Here the masks are generated directly, so there is no need 
to go from masks to vectors and back to masks. I'm also quite sure that 
the generated seams would be better than with the current snake algorithm.

ciao
  Pablo
