https://gcc.gnu.org/bugzilla/show_bug.cgi?id=68331

--- Comment #8 from rguenther at suse dot de <rguenther at suse dot de> ---
On Tue, 1 Dec 2015, vries at gcc dot gnu.org wrote:

> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=68331
> 
> --- Comment #5 from vries at gcc dot gnu.org ---
> Created attachment 36885
>   --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=36885&action=edit
> Abort solve_graph after num_edge passes threshold
> 
> This patch aborts solve_graph after stats.num_edges exceeds 4,000,000.
> 
> Using this patch, in combination with a -fipa-pta on-by-default patch, I'm
> able to do a no-bootstrap build and reg-test.
> 
> Test failures:
> ...
> libstdc++.sum:FAIL: experimental/filesystem/path/concat/strings.cc execution
> test
> libstdc++.sum:FAIL: experimental/filesystem/path/construct/locale.cc execution
> test
> ...

I think limiting the number of edges is somewhat odd.  I'd rather
limit the number of varinfos created initially and maybe the number
of solver iterations.

And add some statistics to the whole solving process.
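
To make that concrete, here's a self-contained toy sketch (made-up
code, not GCC's tree-ssa-structalias.c; MAX_ITERATIONS stands in for
a hypothetical --param) of a worklist solver that caps iterations and
falls back to a sound but maximally imprecise solution instead of
aborting:

  #include <stdio.h>

  #define NVARS 4
  #define MAX_ITERATIONS 8   /* hypothetical cap, think of a --param */

  /* Points-to sets as bitmasks; edge[i][j] != 0 is a copy edge
     i -> j, i.e. the constraint sol(j) >= sol(i).  */
  static unsigned sol[NVARS] = { 1u << 0, 1u << 1, 0, 0 };
  static int edge[NVARS][NVARS] = {
    { 0, 0, 1, 0 },  /* 0 -> 2 */
    { 0, 0, 1, 0 },  /* 1 -> 2 */
    { 0, 0, 0, 1 },  /* 2 -> 3 */
    { 0, 0, 1, 0 },  /* 3 -> 2: a direct cycle with node 2 */
  };

  int
  main (void)
  {
    int changed = 1, iterations = 0;

    while (changed)
      {
        if (++iterations > MAX_ITERATIONS)
          {
            /* Give up on precision, not on the analysis: every
               solution becomes "points to everything", which is
               sound but maximally imprecise.  */
            for (int i = 0; i < NVARS; i++)
              sol[i] = ~0u;
            break;
          }
        changed = 0;
        for (int i = 0; i < NVARS; i++)
          for (int j = 0; j < NVARS; j++)
            if (edge[i][j] && (sol[j] | sol[i]) != sol[j])
              {
                sol[j] |= sol[i];
                changed = 1;
              }
      }

    for (int i = 0; i < NVARS; i++)
      printf ("sol(%d) = %#x\n", i, sol[i]);
    return 0;
  }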

Note that in theory the solver can simply coalesce more vars (and thus
edges) to make the solution quicker (and more imprecise), so giving up
completely (and wasting the work already done) isn't necessary.  Of
course the result from (IPA) PTA will then be a lot less "reliable",
and reducing a testcase might end up "fixing" wrong-code bugs just
because the solution becomes more precise (or the other way around).
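
As a toy illustration of that coalescing (again made-up code, not the
GCC solver): unifying two variables means both share one solution set
from then on, so the computed points-to sets can only grow -- sound,
but less precise, and with fewer nodes and edges left to iterate over:

  #define NVARS 8

  static unsigned sol[NVARS];
  static int repr[NVARS];   /* assume initialized to repr[i] = i */

  static int
  find (int i)
  {
    while (repr[i] != i)
      i = repr[i] = repr[repr[i]];  /* path halving */
    return i;
  }

  /* Make REP the representative of OTHER.  Every later query for
     OTHER sees REP's solution, i.e. the union of both sets.  */
  static void
  unify (int rep, int other)
  {
    rep = find (rep);
    other = find (other);
    if (rep == other)
      return;
    sol[rep] |= sol[other];
    repr[other] = rep;
  }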

One would have to visualize the solving process for some moderately
complex testcase; I suspect the cycle elimination during solving
doesn't work reliably once the solver turns complex constraints into
edges (well, not sure).  Having unmerged direct cycles just slows
down the solving and increases the memory use for the solutions.
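
Concretely (continuing the toy sketches above, which define edge[]
and unify()): a direct cycle x -> y -> x forces sol(x) == sol(y) at
the fixpoint anyway, so leaving it unmerged only stores the same
bitmap twice and keeps the worklist ping-ponging between the two
nodes.  Collapsing such cycles is pure gain:

  /* Collapse trivial direct cycles i <-> j: at the fixpoint both
     nodes must have identical solutions, so merging them loses no
     precision while halving the stored bitmaps and deleting two
     edges the solver would otherwise revisit every iteration.  */
  static void
  collapse_direct_cycles (void)
  {
    for (int i = 0; i < NVARS; i++)
      for (int j = i + 1; j < NVARS; j++)
        if (edge[i][j] && edge[j][i])
          {
            unify (i, j);
            edge[i][j] = edge[j][i] = 0;
          }
  }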
