On Dec 16, 2007, "Daniel Berlin" <[EMAIL PROTECTED]> wrote:

> There is no portion of the DWARF3 spec which requires you output
> information that is correct or useful. The same way the C standard
> does not require you to write correct programs, only valid ones, the
> DWARF3 spec does not require you to output correct information, only
> information that is encoded properly.

But if a C compiler translated programs to garbage, that would be
wrong.  By the same reasoning, if a Dwarf producer created garbage,
that would be wrong.

It's true that most Dwarf 3 attributes are optional.  But when the
spec says "if you output this attribute, its operand must be such and
such", and you output the attribute with operands that don't match the
specification, that's a bug.

> It is certainly a goal of DWARF3 to allow producers to provide correct
> info

Exactly.  And where's the permission to provide incorrect info, rather
than merely leaving it out?

>> I've heard this "intrusiveness" argument be pointed out so many times,
>> by so many people that claim to not have been able to keep up with the
>> thread, and who claim to have not looked at the patches at all, that
>> I'm more and more convinced it's just fear of the unknown than any
>> actual rational evaluation of the impact of the changes.

> Well, no.
> You yourself have shown it to be intrusive in the extreme, in the
> very next paragraphs!

> "
> At some point you have to face reality and see that such information
> isn't kept around by magic, it takes some effort, and this effort is
> needed at every location where there are changes that might affect
> debug information.  And that's pretty much everywhere. "

> So, everywhere needs to change. That's pretty intrusive, no?

No.  That looks like selective attention, because you're leaving out
the part in which I discussed using the strength of the optimizers
against the problem, by letting them do on the debug information what
they are already used to doing everywhere else.

If we add a new RTL code or a new TREE code, is that intrusive because
now every optimization pass will deal with the new node types in very
much the same way they've dealt with other similar node types forever?
Of course not.

And if we have to add a few exceptions here and there to deal with the
specifics of this new node type, does that become too intrusive then?
I don't think so.

Then what's the fuss about the new node types?  Do you want to count
the number of places in which INSN_P remains there, lexically
unchanged, and compare with the number of places in which I've added a
!DEBUG_INSN_P after it?

> Having to stop and think at every point in an optimization about the
> debug info,

Well, sorry, writing compilers is hard.  You have to think about
several things at the same time.  Shall we just go shopping instead?

I'm trying to make it as simple as possible.  The fact that nearly
100% of the code is unchanged suggests to me that it's not such a bad
approach, but if you want something that just magically works, you're
in for much disappointment.

> (having to stop and think about debug info at every single point of
> every single optimization).

Information doesn't come out of thin air, and thin air doesn't keep
information accurate just because we wish it would.  We have to work
to create and update the information throughout compilation, at every
transformation, and my reasoning is precisely that optimizers already
do this all the time, so why not use them for what we need?

> You don't need to be this intrusive to stop outputting the
> incorrect info we do.

What do you have to back your statement up?

Let me help you: sure we don't.  We can just refrain from outputting
any debug information whatsoever.  Then, it will be compliant with the
standard.  But it won't be useful.

>> I've never seen this documented as such, and we've never worked toward
>> these stated goals.

> Who is we?
> I certainly have worked exactly towards these goals.
> As have almost all the authors of the current debugging info
> framework.

Oh, wow, I guess I just wasn't welcome into the club, because I didn't
get the guidelines book.  How unfortunate, now I have to give up my
plan of doing better and abide by the unpublished and undocumented
goals of some small cabal.  Or do I?

> If you look in the mailing list archives, you will even discover Diego
> is not the first to have exactly this viewpoint about what should and
> should not be debuggable, and that the community has consistently
> worked towards exactly the viewpoint Diego describes.

I've seen several different viewpoints from "the community".

> Anyway, I give up on reading this thread.  It has turned into a mess.
> You really need to step back

Oh, do I?  Why is that?

> and see that you have not achieved any sort of consensus of what
> levels of optimization should be how debuggable,

Why would I expect to get any consensus on that?  I haven't even
tried, and I won't.  This is not what the issue is about.  The issue
is about not emitting incorrect information.  Better debuggability for
all levels of optimization will be a side effect of achieving that,
and it will be achievable incrementally once we have an actual
framework that enables us to take steps in this direction without
introducing further regressions.

> I certainly wouldn't agree that we should take such intrusive steps to
> make -O2 -g as debuggable as you want,

It is obvious that you misunderstood what I want, and how intrusive
the approach is.

> I'd much rather see us do what we can easily, and drop any info that
> ends up being incorrect.

So what's your plan to find out what's incorrect?

-- 
Alexandre Oliva         http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member         http://www.fsfla.org/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}