On Sat, Mar 15, 2014 at 5:09 AM, Mike Stump <mikest...@comcast.net> wrote:
> On Mar 14, 2014, at 7:45 PM, Alexandre Oliva <aol...@redhat.com> wrote:
>> In some cases, the resulting executable code is none, but the debug stmts
>> add up to millions.
>
> I'd like to think there is a better theoretic answer to the specific 
> problem...  trimming random debug info I think just invites a bad experience 
> where people want to know what is going on and to them it just feels like a 
> bad compiler that just randomly messed up debug info.  A user that wants 
> faster compilation can refrain from using -g, or use -g1?
>
> For example, if there truly is no code, removing all scopes that have no 
> instruction between the start and the end along with all the debug info that 
> goes with those scopes.  If there is one instruction, seems to me that it 
> should be hard to have more than a few debug statements per instruction.  If 
> there are more than 5, it would be curious to review each one and ask the 
> question, is this useful and interesting?  I'd like to think there are entire 
> classes of useless things that can be removed with no loss to the debug 
> experience.

I agree, this doesn't seem to be a good solution (though the ability
to disable VTA per function looks good to me).  If we want to limit
something, then we should limit the number of debug stmts in between
two real stmts (or, more precisely, the ratio of debug vs. real
stmts).  But then the question is which debug stmts we retain.
IMHO, generating debug stmts in the first place for each initializer
in an unrolled

int a[10000];
for (i = 0; i < 10000; ++i)
  a[i] = 0;

is bad.  That is, I question the usefulness of the fancy debug stmts
we create from a dead

 a[12345] = 0;

stmt.  Can we add a -fextra-verbose-var-tracking-assignments for
those?  Or disable it for arrays?

Richard.
