https://gcc.gnu.org/bugzilla/show_bug.cgi?id=64058

--- Comment #17 from Jeffrey A. Law <law at redhat dot com> ---
The nice thing about looking at the conflict set sizes is that it doesn't change
the computational complexity.  After we've built the conflict graph we can walk
the coalesce pairs once and, for each pair, compute the size of the conflict set
that would result if we were able to coalesce that pair.

We can then trivially use that for tie-breaking.
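A rough sketch of the idea, using hypothetical types rather than the actual
tree-ssa-coalesce.c structures:

  #include <set>
  #include <vector>

  struct coalesce_pair
  {
    int first, second;           /* partition representatives */
    long cost;                   /* primary key: edge-frequency based cost */
    unsigned merged_conflicts;   /* secondary key computed below */
  };

  /* Size of the union of two conflict sets, i.e. the conflict set the
     merged partition would end up with if the pair were coalesced.  */
  static unsigned
  merged_conflict_count (const std::set<int> &a, const std::set<int> &b)
  {
    std::set<int> u (a);
    u.insert (b.begin (), b.end ());
    return u.size ();
  }

  /* One walk over the pairs after the conflict graph is built, so the
     overall complexity of the pass is unchanged.  */
  void
  compute_secondary_keys (std::vector<coalesce_pair> &pairs,
                          const std::vector<std::set<int> > &conflicts)
  {
    for (coalesce_pair &p : pairs)
      p.merged_conflicts = merged_conflict_count (conflicts[p.first],
                                                  conflicts[p.second]);
  }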

Note that this doesn't capture the effect fully -- doing so would probably
increase the computational complexity, since each time we successfully coalesce
a pair we'd have to propagate the conflict set size of the union to any
still-to-be-coalesced pairs that use the representative partition.  It's not
clear whether that's really useful.
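That per-coalesce refresh is roughly what would add the extra cost.  A sketch,
reusing the hypothetical types from above and assuming conflicts[repr] already
holds the union after the merge:

  /* After successfully coalescing a pair into representative REPR, keeping
     the secondary key exact means refreshing every remaining pair that
     mentions REPR -- roughly O(pending pairs) extra work per coalesce.  */
  void
  refresh_pending_pairs (std::vector<coalesce_pair> &pending,
                         const std::vector<std::set<int> > &conflicts,
                         int repr)
  {
    for (coalesce_pair &p : pending)
      if (p.first == repr || p.second == repr)
        p.merged_conflicts = merged_conflict_count (conflicts[p.first],
                                                    conflicts[p.second]);
  }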

Note that with the primary costing model based on edge frequency we already have
a global costing metric.  Using the size of the conflict set as a secondary key
captures at least some of the "how does coalescing this pair potentially affect
other (lower-priority) pairs we might want to coalesce later".
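As a secondary key it would slot into the candidate ordering roughly like this
(again with the hypothetical fields from the earlier sketch):

  /* Candidate ordering: the edge-frequency cost stays the primary key;
     the merged conflict set size only breaks ties, preferring the pair
     that leaves the smallest conflict set and thus the most room for
     later, lower-priority coalesces.  */
  bool
  better_pair (const coalesce_pair &a, const coalesce_pair &b)
  {
    if (a.cost != b.cost)
      return a.cost > b.cost;   /* larger frequency-based savings first */
    return a.merged_conflicts < b.merged_conflicts;
  }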

The literature in this space isn't particularly helpful except to note that
conflicts are used to reject certain coalescing opportunities because they would
make the graph uncolorable.  Some approaches use a fairly conservative algorithm
which they iterate (ugh) to achieve better results.
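For reference, the conservative test those papers typically use (Briggs-style)
looks roughly like the following, with K standing in for the number of available
colors/registers and the same hypothetical conflict representation as above:

  /* Briggs-style conservative coalescing test: permit the merge only if the
     combined node would have fewer than K neighbors of significant degree
     (>= K).  This over-counts slightly for neighbors common to both nodes,
     which just makes the test more conservative.  */
  bool
  briggs_conservative_ok (const std::set<int> &a, const std::set<int> &b,
                          const std::vector<std::set<int> > &conflicts,
                          unsigned k)
  {
    std::set<int> neighbors (a);
    neighbors.insert (b.begin (), b.end ());
    unsigned significant = 0;
    for (int n : neighbors)
      if (conflicts[n].size () >= k)
        significant++;
    return significant < k;
  }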
