On 6/26/15 10:15 , David Bosschaert wrote:
Looks like a great piece of work, Guillaume. I left some minor
thoughts in a few places on github.

One potential concern is that the change to depth-first search will
produce different results than the previous approach.

I wouldn't worry too much about producing different results than previous versions of the resolver (although admittedly, this would cause some pain for some people). The more important issue, from my point of view, is that a given version of the resolver should produce a consistent result given the same input...

-> richard

Does it still
aim to minimise the number of class-spaces?

This might cause different results for bundles that import packages
with underspecified version ranges. On the other hand, that might then
be a good reason for people to ensure correct import ranges.

Cheers,

David

On 26 June 2015 at 14:41, Jean-Baptiste Onofré <j...@nanthrax.net> wrote:
Hi Guillaume,

What a piece of work you did!!

It looks really great!

I cloned your repo to test and take a deeper look.

Thanks!
Regards
JB


On 06/26/2015 03:06 PM, Guillaume Nodet wrote:
I've spent the last two weeks working on optimising the resolver.
The results are available in the following github fork
     https://github.com/gnodet/felix/commits/FELIX-4942

The commits are split so that they can be reviewed more easily.
Most of them do not really change the algorithm in any way, so I won't
comment on them, but I want to highlight the ones that introduce more
important changes.

* Avoid throwing exceptions during resolution


https://github.com/gnodet/felix/commit/fa012fe9d3760f6b80c03eecf0b7f03218ceae6f
    This commit contains two major ideas: try/catch is slow, and the
computation of the resolution messages is usually expensive and useless.
    So instead of throwing exceptions, the resolver creates a small object
containing all the information needed to create the exception later if
needed (this usually happens only once, at the end).
When the resolver is doing some recursive work, it usually
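To make the idea concrete, here is a minimal sketch (not the actual Felix API; the class and message below are hypothetical) of treating a resolution failure as a value whose expensive message is only built if the failure is actually reported:

```java
// Hypothetical sketch: instead of throwing inside the hot resolution loop,
// record a lightweight error object; the expensive, human-readable message
// is only computed if the error is actually surfaced at the end.
import java.util.function.Supplier;

class ResolutionError {
    private final Supplier<String> messageBuilder; // deferred, expensive work

    ResolutionError(Supplier<String> messageBuilder) {
        this.messageBuilder = messageBuilder;
    }

    // Build the full exception only when resolution has finally failed.
    RuntimeException toException() {
        return new RuntimeException(messageBuilder.get());
    }
}

public class DeferredErrorDemo {
    // Returns null on success, an error object on failure -- no throw.
    static ResolutionError resolve(boolean ok) {
        if (ok) {
            return null;
        }
        return new ResolutionError(() -> "Unable to resolve: missing requirement");
    }

    public static void main(String[] args) {
        ResolutionError err = resolve(false);
        System.out.println(err == null ? "resolved" : err.toException().getMessage());
    }
}
```

The hot path never pays for string concatenation or stack-trace capture; only the single final failure does.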

* Compute all package sources for a given resource at the same time


https://github.com/gnodet/felix/commit/a602f43ccd76983026d6c0c76958bc3445c0f4c0
    Time can be saved by computing all the package sources for a given
resource at the same time, especially when a resource exports the same
package with multiple versions (the computation is still recursive, but
this will be removed in a later commit).
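The grouping idea can be sketched as follows (a simplified illustration, not the Felix data model; the package names and the string-array "capability" shape are hypothetical): one pass over a resource's capabilities builds the sources for every package at once, instead of recomputing per package.

```java
// Hypothetical sketch: gather every package source of a resource in a
// single pass, keyed by package name, rather than one query per package.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PackageSourcesDemo {
    public static void main(String[] args) {
        // Each "capability" is {package name, version}; note the same
        // package exported with two versions.
        List<String[]> capabilities = List.of(
            new String[]{"org.example.api", "1.0"},
            new String[]{"org.example.api", "2.0"},
            new String[]{"org.example.impl", "1.0"});

        // One pass builds sources for ALL packages of the resource.
        Map<String, List<String>> sources = new HashMap<>();
        for (String[] cap : capabilities) {
            sources.computeIfAbsent(cap[0], k -> new ArrayList<>()).add(cap[1]);
        }

        System.out.println(sources.get("org.example.api")); // both versions found together
    }
}
```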

* Compute package spaces by stages


https://github.com/gnodet/felix/commit/71cd667e89d41d580c8f2b88605694cc361bf134
   Instead of doing things lazily, most of the computation can be done in
stages.
So we first compute the list of wire candidates, then the exported
packages, then the imported and required package lists, then the package
sources and, last, the uses constraints.
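The staged structure can be sketched like this (a simplified illustration; the stage names follow the list above, but the resource names and log shape are invented): each stage runs to completion over all resources before the next starts, so a stage only ever reads results frozen by the previous one.

```java
// Hypothetical sketch of the staged pipeline: run each stage over every
// resource before moving on, instead of computing everything lazily.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class StagedResolveDemo {
    public static void main(String[] args) {
        List<String> resources = List.of("bundleA", "bundleB");
        List<String> log = new ArrayList<>();

        // The five stages named in the text, in order.
        List<Consumer<String>> stages = List.of(
            r -> log.add(r + ":wires"),
            r -> log.add(r + ":exports"),
            r -> log.add(r + ":imports"),
            r -> log.add(r + ":sources"),
            r -> log.add(r + ":uses"));

        for (Consumer<String> stage : stages) {
            for (String r : resources) {
                stage.accept(r);       // a stage finishes for all resources
            }                          // before the next stage starts
        }
        System.out.println(log);
    }
}
```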

* Parallelism


https://github.com/gnodet/felix/commit/b1e88923fb9a2f056f3b6b4bc96682d620652fd5


https://github.com/gnodet/felix/commit/68650153aa6bdde96b4c357ee8faa88fa4e90f6e
Following the previous point, things can now be computed in parallel,
provided they only use read-only objects from the previous stage and the
computation for a given resource involves no recursion.  This allows
using multiple threads: start the work for each resource, gather
everything at the end of a stage, then start the next stage.
I haven't really investigated parallelism in checkPackageSpaceConsistency,
which is always done sequentially; however, most of the work is in
computing the package spaces (and creating Candidates copies).
I've also experimented a bit with resolving several Candidates in
parallel, but with really poor success (usually, I end up with OOM
errors).
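The per-stage fan-out/barrier pattern can be sketched as follows (a simplified illustration, not Felix code; the resource names and the "exports" map are invented): each resource gets its own task within a stage, and invokeAll() serves as the barrier before the next stage reads the results.

```java
// Hypothetical sketch: within one stage, each resource is processed by its
// own task; invokeAll() blocks until the whole stage is done ("gather
// everything at the end of a stage") before the next stage begins.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelStagesDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<String> resources = List.of("bundleA", "bundleB", "bundleC");
        Map<String, String> exports = new ConcurrentHashMap<>();

        // Stage 1: one task per resource, no shared mutable state between
        // tasks beyond the concurrent result map.
        List<Callable<Void>> stage1 = new ArrayList<>();
        for (String r : resources) {
            stage1.add(() -> { exports.put(r, r + "-exports"); return null; });
        }
        pool.invokeAll(stage1);   // barrier: stage 1 fully finished here

        // Stage 2 may now treat stage 1's results as read-only input.
        for (String r : resources) {
            System.out.println(exports.get(r));
        }
        pool.shutdown();
    }
}
```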

* Remove recursion in Candidates.populate()


https://github.com/gnodet/felix/commit/3a82b2342c87450edcd1aa6fa5c514c2b04ae346
The algorithm is exactly the same, but we use a list instead of
recursion.  This is faster and does not impose stack-depth constraints
(some platforms have a small stack by default).
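The recursion-to-worklist rewrite can be sketched like this (a simplified illustration, not the actual Candidates.populate() code; the dependency graph is invented): the same traversal, but driven by an explicit deque instead of the call stack, so deep dependency chains cannot overflow it.

```java
// Hypothetical sketch: replace recursive population with an explicit
// worklist -- identical visit logic, no call-stack depth limit.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class PopulateDemo {
    public static void main(String[] args) {
        // resource -> resources it requires
        Map<String, List<String>> deps = Map.of(
            "A", List.of("B", "C"),
            "B", List.of("C"),
            "C", List.of());

        Set<String> populated = new LinkedHashSet<>();
        Deque<String> work = new ArrayDeque<>();
        work.push("A");
        while (!work.isEmpty()) {
            String r = work.pop();
            if (!populated.add(r)) {
                continue;              // already populated, skip
            }
            for (String d : deps.get(r)) {
                work.push(d);          // was: a recursive call
            }
        }
        System.out.println(populated); // [A, C, B]
    }
}
```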

* Use depth-first algorithm for permutations


https://github.com/gnodet/felix/commit/e4f479b5ec2bf4351f755909aa33ea5f0c6af2dc
When new permutations were computed, they were always added at the end of
one of the lists.  This led to a breadth-first algorithm.  However,
the algorithm is smart enough to find the cause of an error and create a
substitution based on it, which should go one step in the right
direction.  By adding the new substitutions at the beginning of the lists,
we always go in the right direction, leading to far fewer iterations.
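The ordering change amounts to a one-line difference, sketched below (a simplified illustration; the permutation names are invented, and the real resolver manages several permutation lists, not one deque):

```java
// Hypothetical sketch: queuing the new permutation at the head of the
// list turns breadth-first exploration into depth-first, so the fix
// derived from the latest failure is tried immediately.
import java.util.ArrayDeque;
import java.util.Deque;

public class PermutationOrderDemo {
    public static void main(String[] args) {
        Deque<String> permutations = new ArrayDeque<>();
        permutations.add("perm-1");
        permutations.add("perm-2");

        // Old behaviour (breadth-first): the targeted fix waits in line.
        // permutations.addLast("fix-for-last-error");

        // New behaviour (depth-first): the targeted fix is tried next.
        permutations.addFirst("fix-for-last-error");

        System.out.println(permutations.peekFirst());
    }
}
```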


Overall results
===============
On my computer, the BigResolutionTest takes roughly 11 s (after the removal
of the GenericCapability#equals method).
With the head of this branch, the time is down to 100 ms.

I'm planning to test the new resolver a bit more in the following days,
making sure I don't see any weird behaviour, but the tests look really
good so far.

All comments welcome !

Cheers,
Guillaume Nodet

--
Jean-Baptiste Onofré
jbono...@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com
