https://gcc.gnu.org/bugzilla/show_bug.cgi?id=90273

--- Comment #27 from rguenther at suse dot de <rguenther at suse dot de> ---
On April 30, 2019 4:27:25 PM GMT+02:00, "aoliva at gcc dot gnu.org"
<gcc-bugzi...@gcc.gnu.org> wrote:
>https://gcc.gnu.org/bugzilla/show_bug.cgi?id=90273
>
>--- Comment #26 from Alexandre Oliva <aoliva at gcc dot gnu.org> ---
>I saw the #c11 patch in gcc-patches, and it seemed to have been posted
>FTR and installed.  It looked good, so I didn't comment on it.
>
>I agree about the effects of #c16, though I'm beginning to get the
>feeling that it's working too hard for too little benefit.  Ditto for
>trying to optimize debug temps: you will get some savings, sure, but
>how much benefit for such global analyses?
>
>Perhaps we'd get a much bigger bang for the buck by introducing vector
>resets, in which a single gimple bind stmt would reset several decls at
>once.  If resets have become as common as they are made out to be, this
>could save a significant amount of memory.
>
>Though from Jan's comments on compile times, it doesn't look like we've
>gotten much slower, which makes me wonder what the new problem really
>is...  Could it be that debug binds have always been there, plentiful
>but under the radar, and that the real recent regression (assuming
>there really is one) lies elsewhere?  (Sorry, I haven't really dug into
>it myself.)

The recent regression is that we no longer throw them away in large
numbers during CFG cleanup, so they now pile up during inlining.

I agree that full DCE with liveness will be expensive for what is
usually little gain.  I'm not sure vector resets would improve things
much.
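
For concreteness: in GIMPLE dumps a debug bind prints as
"# DEBUG x => <expr>" and a reset as "# DEBUG x => NULL", one statement
per decl.  The vector reset proposed in #c26 would collapse a run of
resets into a single statement; the braced form below is a hypothetical
sketch, not existing GIMPLE syntax:

  # DEBUG x => NULL            today: one reset stmt per dead decl
  # DEBUG y => NULL
  # DEBUG z => NULL

  # DEBUG {x, y, z} => NULL    hypothetical single vector-reset stmt

That would save the allocation overhead of N-1 gimple statements at
each reset point, but only where resets actually cluster.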
