Re: gdb 8.x - g++ 7.x compatibility

2018-03-03 Thread Daniel Berlin
Again, please don't do this.
As you can see (see Tom Tromey's email), others need to go between
vtable types and the types they are attached to.
We should be getting away from linkage names, not going further towards
them.
There are a bunch of gdb bugs this won't solve, but adding an extension
(like tom did for rust) to go between vtable types and concrete types will
solve *all* of them, be *much faster* than what gdb does now, and have
basically *no* space increase at all.

Meanwhile, i can hand you binaries where the size increase is in the
hundreds of megabytes to gigabytes for adding linkage names.



On Fri, Mar 2, 2018 at 3:06 PM, Roman Popov  wrote:

> Ok, sounds reasonable. In case of debugger we are indeed "linking" RTTI
> name with name in debuginfo.
>
> I've checked LLVM docs, they generate Debuginfo from LLVM "Metadata", and
> metadata for types already contains mangled names in "identifier" field:
> https://llvm.org/docs/LangRef.html#dicompositetype . So it should not be
> hard to propagate it to object file.
>
> I will ask on LLVM maillist if they can emit it.
>
>
> 2018-03-01 13:03 GMT-08:00 Jason Merrill :
>
> > On Thu, Mar 1, 2018 at 3:26 PM, Andrew Pinski  wrote:
> > > On Thu, Mar 1, 2018 at 12:18 PM, Roman Popov 
> wrote:
> > >> Is there any progress on this problem?
> > >>
> > >> I'm not familiar with G++, but I have a little experience with LLVM.
> > >> I can try to make LLVM emit mangled names in DW_AT_name, instead of
> > >> demangled ones.
> > >> This way GDB can match DW_AT_name against RTTI. And for display it can
> > >> call abi::__cxa_demangle(name, NULL, NULL, &status), from #include
> > >> <cxxabi.h>.
> > >>
> > >> Will it work?
> > >
> > >
> > > Reading http://wiki.dwarfstd.org/index.php?title=Best_Practices:
> > > the DW_AT_name attribute should contain the name of the corresponding
> > > program object as it appears in the source code, without any
> > > qualifiers such as namespaces, containing classes, or modules (see
> > > Section 2.15). A consumer can easily reconstruct the fully-qualified
> > > name from the DIE hierarchy. In general, the value of DW_AT_name
> > > should be such that a fully-qualified name constructed from the
> > > DW_AT_name attributes of the object and its containing objects will
> > > uniquely represent that object in a form natural to the source
> > > language.
> > >
> > >
> > > So having the mangled symbol in DW_AT_name seems backwards and not the
> > > point of it.
> >
> > If we add the mangled name, which seems reasonable, it should be in
> > DW_AT_linkage_name.
> >
> > Jason
> >
>
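Roman's proposal above — keep DW_AT_name mangled for matching against RTTI, and demangle only for display — can be sketched as follows. This is a minimal illustration, not GDB or LLVM code; only the abi::__cxa_demangle call itself comes from the thread.

```cpp
#include <cxxabi.h>   // abi::__cxa_demangle (Itanium C++ ABI support library)
#include <cstdlib>
#include <string>

// Demangle an Itanium-ABI mangled name; fall back to the input on failure.
std::string demangle(const char *mangled) {
  int status = 0;
  // __cxa_demangle malloc()s the result buffer; the caller must free it.
  char *out = abi::__cxa_demangle(mangled, nullptr, nullptr, &status);
  if (status != 0 || out == nullptr)
    return mangled;
  std::string result(out);
  std::free(out);
  return result;
}
```

A consumer could do its type lookups on the raw mangled string and only pay for a call like this when a name actually needs to be shown to the user.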


Re: gdb 8.x - g++ 7.x compatibility

2018-02-07 Thread Daniel Berlin
>
>
> This avoids the problem of the demangler gdb is using getting a different
> name than the producer used. It also should always give you the right one.
> If the producer calls the type "vtable for Foo<2u>" here and "Foo<2>"
> elsewhere, yes, that's a bug. It should be consistent.
>
>
This should be Foo<2u> vs Foo<2>


> If there are multiple types named Foo<2u>, DWARF needs to be extended to
> allow a pointer from the vtable debug info to the class type debug info
> (unless they already added one).
> Then you would do *no* symbol lookups, you'd follow that pointer (gdb
> would add it to the symbol_info structure)
>

Note that the ABI is explicitly designed so that type identity can be done
by address comparison.
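That property can be sketched in a few lines (illustrative only; Base and Derived are invented for the example, and the single-type_info-per-type guarantee assumes everything lives in one loaded image):

```cpp
#include <typeinfo>

struct Base    { virtual ~Base() = default; };
struct Derived : Base {};

// Under the Itanium C++ ABI, within a single image each type has one
// type_info object, so dynamic-type identity reduces to comparing addresses.
bool same_dynamic_type(const Base &a, const Base &b) {
  return &typeid(a) == &typeid(b);
}
```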

Also note that adding alternative names for symbols is probably a "not
great" idea, though it would work. The *vast* majority of debug info is in
those names, and adding long names will often triple or quadruple the size
of debug info.
Google has binaries where 90% of the size is in gigabytes of linkage
names.  People have worked hard to need the names *less*.

So you want to get *away* from going by name, especially when the compiler
knows "this is the vtable that goes with this type". It should just tell
you.
Right now, that is what you are missing: "given a vtable for a type, how do
i get the type?"

Trying to do that by name is a hack. A hack that has lasted 15+ years, mind
you, but still a hack.

I would just kill that hack.


Re: gdb 8.x - g++ 7.x compatibility

2018-02-07 Thread Daniel Berlin
On Wed, Feb 7, 2018 at 5:44 AM, Simon Marchi <simon.mar...@polymtl.ca>
wrote:

> On 2018-02-07 02:21, Daniel Berlin wrote:
>
>> As the person who, eons ago, wrote a bunch of the GDB code for this
>> C++
>> ABI support, and as someone who helped with DWARF support in both GDB and
>> GCC, let me try to propose a useful path forward (in the hopes that
>> someone
>> will say "that's horrible, do it this  instead")
>>
>> Here are the constraints i believe we are working with.
>>
>> 1. GDB should work with multiple DWARF producers and multiple C++
>> compilers
>> implementing the C++ ABI
>> 2. There is no canonical demangled format for the C++ ABI
>> 3. There is no canonical target demangler you can say everyone should use
>> (and even if there was, you don't want debugging to break because
>> someone chose not to use it)
>> 4. You don't want to slow down GDB if you can avoid it
>> 5. Despite them all implementing the same ABI, it's still possible to
>> distinguish the producers by the producer/compiler in the dwarf info.
>>
>> Given all that:
>>
>> GDB has ABI hooks that tell it what to do for various C++ ABIs. This is
>> how
>> it knows to call the right demangler for gcc v3's ABI vs gcc v2's ABI, and
>> handle various differences between them.
>>
>> See gdb/cp-abi.h
>>
>> The obvious thing to do here, IMHO, is: handle the resulting demangler
>> differences with one or more new C++ ABI hooks.
>> Or, introduce C++ debuginfo producer hooks that the C++ ABI hooks use if
>> folks want it to be separate.
>>
>> Once the producer is detected, fill in the hooks with a set of functions
>> that does the right thing.
>>
>> I imagine this would also clean up a bundle of hacks in various parts of
>> gdb trying to handle these differences anyway (which is where a lot of the
>> multiple symbol lookups/etc that are often slow come from.
>> If we just detected and said "this is gcc 6, it behaves like this", we
>> wouldn't need to do that)
>>
>> In case you are worried, you will discover this is how a bunch of stuff is
>> done and already contains a ball of hacks.
>>
>> Using hooks would be, IMHO, a significant improvement.
>>
>
> Hi Daniel,
>
> Thanks for chiming in.
>
> This addresses the issue of how to do good software design in GDB to
> support different producers cleanly, but I think we have some issues even
> before that, like how to support g++ 7.3 and up.


They are, IMHO, the same.


> I'll try to summarize the issue quickly.  It's now possible to end up with
> two templated classes with the same name that differ only by the signedness
> of their non-type template parameter.


Yup.
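The ambiguity being discussed is easy to reproduce with a C++17 auto non-type parameter (Foo is invented for illustration): Foo<10> and Foo<10u> are distinct specializations whose conventional printed names differ only by the suffix.

```cpp
#include <type_traits>

// With auto, the deduced type of the argument is part of the resulting type:
// Foo<10> deduces int, Foo<10u> deduces unsigned int.
template <auto V> struct Foo {};

static_assert(!std::is_same_v<Foo<10>, Foo<10u>>,
              "same spelling up to the suffix, yet different types");
```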


>   One is Foo<10> and the other is Foo<10> (the 10 is
> unsigned).  Until 7.3, g++ would generate names like Foo<10> for the former
> and names like Foo<10u> for the latter (in the DW_AT_name attribute of the
> classes' DIEs).  Since 7.3, it produces Foo<10> for both.
>
> When GDB wants to know the run time type of an object, it fetches the
> pointer to its vtable, does a symbol lookup to get the linkage name and
> demangles it,


Yes, this is code i wrote :)


> which gives a string like "vtable for Foo<10>" or "vtable for Foo<10u>".
> It strips the "vtable for " and uses the remainder to do a type lookup.
> Since g++ 7.3, you can see that doing a type lookup for Foo<10> may find
> the wrong type,

Certainly if you can't distinguish the types you are screwed, but this is
not the only way to find this type. This was in fact, the first hack i
thought up to make it work because the ABI was not entirely fully formed at
the time, and the debug info did not have fully qualified names.

Here is a different way that should produce more consistent results.

Find the linker symbol, then
look up the symbol in the DWARF info by address.
This will give you the vtable type debug info.
Look at the name attribute of the debug info, which should already be
demangled.

Strip the "vtable for" from that.
Look that up.
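The stripping step in that walk is trivial string work; a sketch (the helper name is invented — in real GDB this logic lives in the gnuv3 C++ ABI code):

```cpp
#include <optional>
#include <string>

// Given the demangled DW_AT_name of a vtable symbol, recover the class name.
// Hypothetical helper for illustration only.
std::optional<std::string> class_name_from_vtable_name(const std::string &name) {
  const std::string prefix = "vtable for ";
  if (name.rfind(prefix, 0) != 0)
    return std::nullopt;          // not a vtable symbol name
  return name.substr(prefix.size());
}
```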

This avoids the problem of the demangler gdb is using getting a different
name than the producer used. It also should always give you the right one.
If the producer calls the type "vtable for Foo<2u>" here and "Foo<2>"
elsewhere, yes, that's a bug. It should be consistent.

If there are multiple types named Foo<2u>, DWARF needs to be extended to
allow a pointer from the vtable debug info to the class type debug info
(unless they already added one).
Then you would do *no* symbol lookups, you'd follow that pointer (gdb would
add it to the symbol_info structure)



> and doing a lookup for Foo<10u> won't find anything.
>

Correct.  This stripping and lookup is done by a hook, in
gnuv3_rtti_type.

That is not the only way to do it.



>
> So the problem here is how to uniquely identify those two classes when we
> are doing this run-time type finding operation (and probably in other cases
> too).



>
>
> Simon
>


Re: gdb 8.x - g++ 7.x compatibility

2018-02-06 Thread Daniel Berlin
As the person who, eons ago, wrote a bunch of the GDB code for this C++
ABI support, and as someone who helped with DWARF support in both GDB and
GCC, let me try to propose a useful path forward (in the hopes that someone
will say "that's horrible, do it this  instead")

Here are the constraints i believe we are working with.

1. GDB should work with multiple DWARF producers and multiple C++ compilers
implementing the C++ ABI
2. There is no canonical demangled format for the C++ ABI
3. There is no canonical target demangler you can say everyone should use
(and even if there was, you don't want debugging to break because
someone chose not to use it)
4. You don't want to slow down GDB if you can avoid it
5. Despite them all implementing the same ABI, it's still possible to
distinguish the producers by the producer/compiler in the dwarf info.

Given all that:

GDB has ABI hooks that tell it what to do for various C++ ABIs. This is how
it knows to call the right demangler for gcc v3's ABI vs gcc v2's ABI, and
handle various differences between them.

See gdb/cp-abi.h

The obvious thing to do here, IMHO, is: handle the resulting demangler
differences with one or more new C++ ABI hooks.
Or, introduce C++ debuginfo producer hooks that the C++ ABI hooks use if
folks want it to be separate.

Once the producer is detected, fill in the hooks with a set of functions
that does the right thing.

I imagine this would also clean up a bundle of hacks in various parts of
gdb trying to handle these differences anyway (which is where a lot of the
multiple symbol lookups/etc that are often slow come from.
If we just detected and said "this is gcc 6, it behaves like this", we
wouldn't need to do that)

In case you are worried, you will discover this is how a bunch of stuff is
done and already contains a ball of hacks.

Using hooks would be, IMHO, a significant improvement.



On Mon, Feb 5, 2018 at 7:52 PM, Martin Sebor  wrote:

> On 02/05/2018 09:59 AM, Simon Marchi wrote:
>
>> On 2018-02-05 11:45, Martin Sebor wrote:
>>
>>> Yes, with auto, the type of the constant does determine the type
>>> of the specialization of the template in the source code.
>>>
>>> In non-type template arguments, and more to the point I was making,
>>> in diagnostics, the suffix shouldn't or doesn't need to be what
>>> distinguishes the type of the template, even with auto.  The part
>>> "with auto IVAL = 10" in the message
>>>
>>>   'void foo::print() [with auto IVAL = 10]':
>>>
>>> would be far clearer if auto were replaced by the deduced type,
>>> say along these lines:
>>>
>>>   'void foo::print() [with int IVAL = 10]':
>>>
>>> rather than relying on the suffix alone to distinguish between
>>> different specializations of the template.  That seems far too
>>> subtle to me.  But I think the diagnostic format is (or should
>>> be) independent of the debug info.
>>>
>>
>> That makes sense.
>>
>> With respect to the suffix, I keep coming back to the reality
>>> that even if GCC were to change to emit a format that GDB can
>>> interpret easily and efficiently, there still are other
>>> compilers that emit a different format.  So the conclusion
>>> that a general solution that handles more than just one format
>>> (at least for non-type template arguments without auto) seems
>>> inescapable.
>>>
>>
>> If there are other compilers we wanted to support for which we can't
>> trust the template format, we could always ignore the template part of
>> DW_AT_name specifically for them.  But since g++ and gdb are part of the
>> same project and are expected to work well and efficiently together, I
>> would have hoped that we could agree on a format so that gdb would not
>> have to do the extra work when parsing a g++-generated file
>> (consequently the same format that libiberty's demangler produces).
>>
>> Given the problem I illustrated in my previous mail, I don't have a
>> general solution to the problem to propose.
>>
>
> Okay, let me talk to Jason to see what he thinks.  I'm open
> to restoring the suffix for the debug info as long as it doesn't
> adversely affect the diagnostics.  I agree that if GCC can help
> make GDB more efficient it's worth putting effort into.  (I do
> still think that GDB should work with other providers besides
> GCC, even if perhaps not necessarily as efficiently.)
>
> Martin
>


Re: Could we start accepting rich-text postings on the gcc lists?

2012-11-24 Thread Daniel Berlin
On Sat, Nov 24, 2012 at 12:13 PM, Ian Lance Taylor i...@airs.com wrote:
 Diego Novillo dnovi...@google.com writes:

 Sure.  First I wanted to find out whether this requirement is just a
 technical limitation with our mailing list software.

 It is not a technical limitation.  We explicitly reject HTML e-mail.  We
 could accept it.

 As Jonathan pointed out, accepting HTML e-mail and then displaying it in
 the web archives will make us even more of a spam target than we already
 are, and will mean that we will need some mechanisms for identifying and
 removing spam and virus links in the web pages.

I'd love to see data on this.  As others have pointed out, almost
every other open source project accepts html email.
I went through, for example, the LLVM email archives, and i don't see
a massive amount of spam.
Do you have reason to believe our existing spam detection solution
will start to fail massively when presented with html email?
After all, if most of the HTML email is spam, something being HTML
email is a great signal for it.

 A possible compromise would be to accept HTML e-mail that has a text
 alternative, and only display the text alternative in the archives.
 That would also work for people who have text-only e-mail readers.  In
 general that would help for people who use e-mail programs that send
 HTML with text alternatives by default.  But it would fail for people
 who actually use HTML formatting in a meaningful way.
I have not seen html formatting used in the other open source
projects, just text/html emails.

  And, of course,
 this would require some administrative work to be done.

 I don't really care one way or the other on this issue.  That said:

 1) People who send HTML e-mail ought to get a bounce message, so I would
 think they would be able to reform.

At that point they probably don't care.
Honestly, any community that actively makes it hard for me to send
mail from a common email program, is a huge turn-off.
Folks can retort that we may not want users who don't want to take the
time to send non-html email.  I doubt this is actually true, since the
majority of folks i've seen are just using clients that default to
html email, and aren't doing anything obnoxious.

Note that *we* are currently rejecting multipart/alternative if it
contains text/html, even if it contains text/plain.
This is fairly obnoxious.

 2) The fact that Android refuses to provide a non-HTML e-mail capability
 is ridiculous but does not seem to me to be a reason for us to change
 our policy.

Expect it to get worse.  Folks can say what they like, but other
communities i'm a part of, and are much larger than GCC, deal with
HTML email with zero problem.  All bouncing HTML email is really doing
is turning away some people.

In the olden days, when html email was some shitty gobbledygook
produced by an old version of exchange, this may have made sense.  In
the days now of relatively sane multipart/alternative emails, it just
seems like folks being annoyed that the rest of the world changed.




 Ian


Re: Could we start accepting rich-text postings on the gcc lists?

2012-11-24 Thread Daniel Berlin
On Sat, Nov 24, 2012 at 12:47 PM, Robert Dewar de...@adacore.com wrote:

 2) The fact that Android refuses to provide a non-HTML e-mail capability
 is ridiculous but does not seem to me to be a reason for us to change
 our policy.


 Surely there are altenrative email client for Android that have plain
 text capability???


Yes, we should expect users to change, instead of keeping up with users.


Re: Could we start accepting rich-text postings on the gcc lists?

2012-11-24 Thread Daniel Berlin
Sorry dude, I don't engage in substantive conversation with abusive trolls.


On Sat, Nov 24, 2012 at 2:43 PM, Ruben Safir ru...@mrbrklyn.com wrote:
 On Sat, Nov 24, 2012 at 12:58:33PM -0500, Daniel Berlin wrote:
 On Sat, Nov 24, 2012 at 12:13 PM, Ian Lance Taylor i...@airs.com wrote:
  Diego Novillo dnovi...@google.com writes:
 
  Sure.  First I wanted to find out whether this requirement is just a
  technical limitation with our mailing list software.
 
  It is not a technical limitation.  We explicitly reject HTML e-mail.  We
  could accept it.
 
  As Jonathan pointed out, accepting HTML e-mail and then displaying it in
  the web archives will make us even more of a spam target than we already
  are, and will mean that we will need some mechanisms for identifying and
  removing spam and virus links in the web pages.

 I'd love to see data on this.

 Go generate it with your own mailing list and let us know.


 As others have pointed out, almost
 every other open source project accepts html email.


 Wrong

 And it is silly to burden everyone else with the bulk and storage of
 that nonsense, let alone the multiple fonts and standards and CSS style
 sheets and missappropriate images and links.

 Its time for users to get with it and not use every stupid thing shoved
 down their throats.  And PLEASE tell me you write your C programming in
 Android using your thumb prints.

 Ruben

 --
 http://www.mrbrklyn.com - Interesting Stuff
 http://www.nylxs.com - Leadership Development in Free Software

 So many immigrant groups have swept through our town that Brooklyn, like 
 Atlantis, reaches mythological proportions in the mind of the world  - RI 
 Safir 1998

 http://fairuse.nylxs.com  DRM is THEFT - We are the STAKEHOLDERS - RI Safir 
 2002

 Yeah - I write Free Software...so SUE ME

 The tremendous problem we face is that we are becoming sharecroppers to our 
 own cultural heritage -- we need the ability to participate in our own 
 society.

  I'm an engineer. I choose the best tool for the job, politics be damned.
 You must be a stupid engineer then, because politics and technology have been 
 attached at the hip since the 1st dynasty in Ancient Egypt.  I guess you 
 missed that one.

 © Copyright for the Digital Millennium


Re: question on points-to analysis

2010-09-11 Thread Daniel Berlin
On Thu, Sep 9, 2010 at 7:24 AM, Richard Guenther
richard.guent...@gmail.com wrote:
 On Thu, Sep 9, 2010 at 1:19 PM, Amker.Cheng amker.ch...@gmail.com wrote:
 Hi,
 I am studying gcc's points-to analysis right now and encountered a question.
 In paper Off-line Variable Substitution for Scaling Points-to
 Analysis, section 3.2
 It says that we should not substitute a variable with another if its
 address is taken.  And how does gcc keep the accuracy of points-to
 information after doing this?
In theory, this is true, but a lot of the optimizations decrease
accuracy at a cost of making the problem solvable in a reasonable
amount of time.
By performing it after building initial points-to sets, the amount of
accuracy loss is incredibly small.
The only type of constraint that will generate inaccuracy at that
point is a complex address taken with offset one, which is pretty
rare.
On the other hand, *not* doing it will make the problem take forever to solve :)

What's better, something that gives correct but slightly conservative
answers in 10s, or something that gives correct and 1% less
conservative answers in 200s?
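A tiny illustration of why the address-taken restriction exists (plain C++ invented for the example, not GCC solver code): once a variable's address escapes into a pointer, indirect stores can reach it, so substituting it away is unsound.

```cpp
// x is address-taken: the solver must keep it as a distinct node, because
// any store through p may modify it.
int demo() {
  int x = 1;
  int *p = &x;   // points-to(p) = {x}
  *p = 5;        // indirect store reaches x
  return x;      // must see 5; merging x away would lose this update
}
```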


Re: Two debug entries for one local variables, is it a bug in GCC or GDB

2010-07-09 Thread Daniel Berlin
Your bug was not a real bug, AFAICT.
At least the debug info you have shown in
http://gcc.gnu.org/ml/gcc/2010-01/msg00054.html is not wrong.
Certainly, two DIEs were unnecessary, but the way it did it was not broken.
Note that one of them is marked as DW_AT_declaration, specifying that
is where the declaration of that variable occurred.
The other is a definition.

They happen to be at the same line, so it's pointless to create two
DIE's, but it's not broken.

In this case, the debug information asmwarrior is showing is clearly broken.
I suspect GCC is splitting the variable somehow, because if you
notice, templateArgument is given different memory locations in both
blocks.


On Fri, Jul 9, 2010 at 1:58 AM, Nenad Vukicevic ne...@intrepid.com wrote:
  I reported something similar back in January:

 http://gcc.gnu.org/ml/gcc/2010-01/msg00054.html

 As I recall, GCC creates duplicates.

 Nenad

 On 7/8/10 7:33 PM, asmwarrior wrote:

  I have post this message to both GCC and GDB, because I'm not sure it is
 a bug in GDB or GCC.
 Hi, I have just find two dwarf debug entries for one local variables.

 For example, the sample code is just like:

 -

 wxString ParserThread::ReadAncestorList()
 {

    wxString ccc;
    wxString templateArgument;
    wxString aaa;
    aaa = m_Tokenizer.GetToken(); // eat :
    templateArgument = aaa;
    while (!TestDestroy())
    {

        //Peek the next token
        wxString next = m_Tokenizer.PeekToken();

        if (next.IsEmpty()
            || next==ParserConsts::opbrace
            || next==ParserConsts::semicolon ) // here, we are at the end
 of ancestor list
        {
            break;
        }
        else if (next==ParserConsts::lt)       // class AAA : BBB<int,
 float>
        {
            wxString arg = SkipAngleBraces();
            if(!arg.IsEmpty())                 // find a matching
            {
                templateArgument << arg;
            }
            else
            {
                TRACE(_T("Not Matching  find. Error!!!"));
            }
        }
 ...
 ---

 But I found that GDB can show the wxString aaa correctly, but wxString
 templateArgument incorrectly.

 I have just checked the debug information in the object file,
 and found that there are two entries for the local variable
 templateArgument, but only one entry for aaa.

 
 <2><40a9f>: Abbrev Number: 182 (DW_TAG_variable)
    <40aa1>   DW_AT_name        : (indirect string, offset: 0x1095): templateArgument
    <40aa5>   DW_AT_decl_file   : 19
    <40aa6>   DW_AT_decl_line   : 2593
    <40aa8>   DW_AT_type        : <0xd168>
    <40aac>   DW_AT_accessibility: 3    (private)
    <40aad>   DW_AT_location    : 2 byte block: 53 6     (DW_OP_reg3; DW_OP_deref)
 <2><40ab0>: Abbrev Number: 164 (DW_TAG_lexical_block)
    <40ab2>   DW_AT_ranges      : 0x168
 <3><40ab6>: Abbrev Number: 165 (DW_TAG_variable)
    <40ab8>   DW_AT_name        : ccc
    <40abc>   DW_AT_decl_file   : 19
    <40abd>   DW_AT_decl_line   : 2592
    <40abf>   DW_AT_type        : <0xd168>
    <40ac3>   DW_AT_location    : 2 byte block: 91 50     (DW_OP_fbreg: -48)
 <3><40ac6>: Abbrev Number: 179 (DW_TAG_variable)
    <40ac8>   DW_AT_name        : (indirect string, offset: 0x1095): templateArgument
    <40acc>   DW_AT_decl_file   : 19
    <40acd>   DW_AT_decl_line   : 2593
    <40acf>   DW_AT_type        : <0xd168>
    <40ad3>   DW_AT_location    : 2 byte block: 91 4c     (DW_OP_fbreg: -52)
 <3><40ad6>: Abbrev Number: 165 (DW_TAG_variable)
    <40ad8>   DW_AT_name        : aaa
    <40adc>   DW_AT_decl_file   : 19
    <40add>   DW_AT_decl_line   : 2594
    <40adf>   DW_AT_type        : <0xd168>
    <40ae3>   DW_AT_location    : 2 byte block: 91 48     (DW_OP_fbreg: -56)
 <3><40ae6>: Abbrev Number: 170 (DW_TAG_lexical_block)


 --
 Also, you can see the screen shot in my Codeblocks forums' post:

 http://forums.codeblocks.org/index.php/topic,12873.msg86906.html#msg86906


 So, my question is:

 Is this a bug in GCC or GDB? ( I have just test the MinGW GCC 4.5 and
 MinGW 4.4.4, they get the same result)


 Thanks

 Asmwarrior (ollydbg from codeblocks' forum)




Re: Unnecessary PRE optimization

2009-12-25 Thread Daniel Berlin
 In general it will be tricky for latter passes to clean up the messes.
 The fundamental problem is that the address computation is exposed to
 PRE prematurely (for a given target  ) at GIMPLE level.


Yeah, i'm not sure PRE can really do anything different here.
I also think you would have a very hard time trying to stop everything
from moving invariant computations around at the tree level.
Might make more sense to finish the tree level with a global code
motion pass that is target sensitive or something.


Re: Caused by unknown alignment, was: Re: On the x86_64, does one have to zero a vector register before filling it completely ?

2009-12-19 Thread Daniel Berlin
On Sat, Dec 19, 2009 at 2:48 PM, Steven Bosscher stevenb@gmail.com wrote:
 On Mon, Nov 30, 2009 at 12:10 PM, Steven Bosscher stevenb@gmail.com 
 wrote:
 I'll see if I can make the intraprocedural version work again before
 Christmass.

 Well, it works, but then again it really does not. For example, the
 original implementation doesn't even look at the alignment of var.

  So
 the pass doesn't do anything useful. Dan, do you have a copy somewhere
 that does more, or was that never implemented?
Yeah, somewhere, but honestly, i don't feel like digging it out.


Re: Caused by unknown alignment, was: Re: On the x86_64, does one have to zero a vector register before filling it completely ?

2009-11-29 Thread Daniel Berlin
On Sun, Nov 29, 2009 at 3:33 PM, Richard Guenther
richard.guent...@gmail.com wrote:
 On Sun, Nov 29, 2009 at 9:18 PM, Daniel Berlin dber...@dberlin.org wrote:

 Such a thing already existed a few years ago (IIRC Haifa had something
 that Dan picked up and passed on to me). But it never brought any
 benefits. I don't have the pass anymore, but perhaps Dan still has a
 copy of it somewhere.

 It was actually posted and reviewed, you can find it in the archives.

 It's probably wildly out of date now ;)

 It's a fairly trivial lattice problem to implement using the sparse 
 propagator.

 Well - you need a place to store the result obviously...

Sure, but when the pass was written, the only thing that could take
advantage was the vectorizer, and it was too simple to be helped by
this at the time.
All the testcases it could easily vectorize were ones where the
alignment was trivially known anyway ;)

Back then, I believe the result was stored in the data reference
structure somewhere
This pass may even be on the lno branch or something.


Re: Worth balancing the tree before scheduling?

2009-11-25 Thread Daniel Berlin
On Mon, Nov 23, 2009 at 10:17 AM, Ian Bolton bol...@icerasemi.com wrote:
 David Edelsohn wrote:
 On Fri, Nov 20, 2009 at 10:05 AM, Ian Bolton bol...@icerasemi.com
 wrote:
  From some simple experiments (see below), it appears as though GCC
 aims
  to
  create a lop-sided tree when there are constants involved (func1
 below),
  but a balanced tree when there aren't (func2 below).
 
  Our assumption is that GCC likes having constants all near to each
 other
  to
  aid with tree-based optimisations, but I'm fairly sure that, when it
  comes
  to scheduling, it would be better to have a balanced tree, so sched
 has
  more
  choices about what to schedule next?

 I think this would depend on the target architecture and instruction
 set: CISC vs RISC, many registers vs few registers, etc.  I do not
 believe that GCC intentionally is trying to optimize for either, but I
 do not think there is a single, right answer.

 Regardless of the architecture, I can't see how an unbalanced tree would
 ever be a good thing.

We actually don't *unbalance* it on purpose, but we do rewrite it on
purpose to maximize the number of subexpressions that are the same.
IE given two sets of calculations

(a+5)+(b+7)+(c+8)+(d+9)+(e+10)
(a+10)+(b+9)+(c+7)+(b+8)+(a+5)

we will attempt to sort and rewrite these into the same tree.

It so happens that it chooses the most trivial way to do so, which
generates an unbalanced tree, in particular, left-linear form.

The code i wrote in tree-ssa-reassoc.c was mainly to eliminate some
missed subexpression elimination opportunities; it was not meant to be
the be-all and end-all of reassociation.
There are much nicer theoretical ways to do tree balancing that still
retain equivalent subexpressions, but nobody as of yet has implemented
it.

It's not even clear that the tree level is where you should be doing
this balancing, probably somewhere at the RTL level there should be
another real instruction-tree rewriting phase.
Patches welcome!
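The scheduling point can be seen in a two-line example (illustrative only): the left-linear form is a serial dependence chain, while a balanced association exposes independent subexpressions the scheduler can issue in parallel.

```cpp
// Same value, different association.  The linear chain serializes the three
// adds; the balanced form makes (a+b) and (c+d) independent operations.
int linear  (int a, int b, int c, int d) { return ((a + b) + c) + d; }
int balanced(int a, int b, int c, int d) { return (a + b) + (c + d); }
```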


Re: delete dead feature branches?

2009-10-13 Thread Daniel Berlin
On Mon, Oct 12, 2009 at 5:20 PM, Jason Merrill ja...@redhat.com wrote:
 On 10/12/2009 05:17 PM, Andrew Pinski wrote:

 That seems like a huge bug in git-svn because we already use multiple
 directory levels under branches.  Hint: ibm and redhat and debian.

 Yep, that's why I said expand.  I've thought about fixing that aspect of
 git-svn, but I'm not sure how it would tell the difference between a branch
 directory and a directory of branches given that SVN basically models a
 filesystem.


Branches always start with a copy of a directory somewhere else, at
least in the gcc repo.
Further, at least in the gcc repo, all branches start with a copy of a
directory from branches or from trunk.
So, it's fairly easy to detect branches.

 Jason



Re: asm goto vs simulate_block

2009-08-27 Thread Daniel Berlin
My guess, without seeing the testcase:
In ccp_initialize we have:

  for (i = gsi_start_bb (bb); !gsi_end_p (i); gsi_next (&i))
{
  gimple stmt = gsi_stmt (i);
  bool is_varying = surely_varying_stmt_p (stmt);

  if (is_varying)
{
  tree def;
  ssa_op_iter iter;

  /* If the statement will not produce a constant, mark
 all its outputs VARYING.  */
  FOR_EACH_SSA_TREE_OPERAND (def, stmt, iter, SSA_OP_ALL_DEFS)
set_value_varying (def);
}
  prop_set_simulate_again (stmt, !is_varying);

This code looks clearly broken if the statement is control altering
(like your asm), since it will cause us to never simulate it and add
the outgoing edges (since the outgoing edges are only added in
simulate_stmt).

Without hacking through the abstraction (since it's ssa propagate that
adds the outgoing edges on simulation), the only thing i can see to do
would be to change the conditional to if (is_varying &&
!stmt_ends_bb_p (bb)) (or the equivalent) so that it simulates them.
Or add a way to tell the propagator about edges it should add
initially to the control worklist.




On Thu, Aug 27, 2009 at 1:23 PM, Richard Hendersonr...@twiddle.net wrote:
 The kernel folk here at Red Hat have given me a test case (which I'll be
 happy to forward, along a complete patch vs mainline) which gets miscompiled
 because we never get around to adding all of the appropriate blocks outgoing
 from an asm-goto to the simulation.

 I can't figure out why the VARYING that we get in simulate_stmt and
 subsequent calls to add_control_edge are not enough to DTRT.  All I know is
 that the attached patch does in fact work around the problem, changing the
 .028t.ccp1 dump:

 ...
  Lattice value changed to VARYING.  Adding SSA edges to worklist.
 +Adding Destination of edge (13 -> 14) to worklist
 +
 +
 +Simulating block 14
 ...

 Can someone give me a good explanation as to why this patch would be needed?


 r~



Re: Work on gc-improv branch

2009-08-21 Thread Daniel Berlin
On Fri, Aug 21, 2009 at 5:37 AM, Steven Bosscher stevenb@gmail.com wrote:
 On Fri, Aug 21, 2009 at 10:52 AM, Laurynas
 Biveinis laurynas.bivei...@gmail.com wrote:
 BTW, it does not deal with types that in some instances have variables
 allocated in proper GC way (with a path from GC root) and in some
 instances not. Fixing these is going to be hard.

 Do you have some examples?

 Trees and rtxes mostly.

 Not to discourage you, but, eh... -- wouldn't it be a much more useful
 project to move RTL out of GC space completely instead of improving GC
 for rtxes?  The life time of RTL is pretty well defined by now and
 much of the unwieldly GC / GTY (and, in fact PCH) code would go away
 if RTL would just live on obstacks again.


One problem with obstacks was that they didn't allow freeing objects in
the middle, so the high-water mark for passes could be quite high
unless you used many obstacks.
It would be trivial to make an arena-style allocator that took care of
this problem, however.


Re: How could I get alias set information from data_reference_p

2009-07-16 Thread Daniel Berlin
On Thu, Jul 16, 2009 at 5:00 AM, Li Feng nemoking...@gmail.com wrote:
 Hi Richard,
 On 7/16/09, Richard Guenther richard.guent...@gmail.com wrote:
 On Thu, Jul 16, 2009 at 1:15 AM, Tobias
 Grosser gros...@fim.uni-passau.de wrote:
 On Wed, 2009-07-15 at 22:48 +0200, Richard Guenther wrote:
 On Wed, Jul 15, 2009 at 10:46 PM, Richard
  Guenther richard.guent...@gmail.com wrote:
  On Wed, Jul 15, 2009 at 9:15 PM, Tobias
   Grosser gros...@fim.uni-passau.de wrote:
   A note on Li's final graph algorithm.  I don't understand why you
  want
  to allow data-references to be part of multiple alias-sets?  (Of
  course
  I don't know how you are going to use the alias-sets ...)
 
  Just to pass more information to Graphite. The easiest example might
  be
  something like
 
  A -- B -- C
 
  if we have
 
  AS1 = {A,B}
  AS2 = {B,C}
 
  we know that A and C do not alias and therefore do not have any
 
  No, from the above you _don't_ know that.  How would you arrive
  at that conclusion?

 What I want to say is that, if  A -- B -- C is supposed to be the alias
 graph
 resulting from querying the alias oracle for the pairs (A, B), (A, C),
 (B, C)
 then this is a result that will never occur.  Because if (A, B) is true
 and (B, C) is true then (A, C) will be true as well.

 What for example for this case:

 void foo (int *b) {
  int *a;
  int *c;

  if (bar())
        a = b;
  else
        c = b;
 }

 I thought this may give us the example above, but it seems I am wrong.
 If the alias oracle is transitive that would simplify the algorithm a
 lot. Can we rely on the transitivity?

 Actually I was too fast (or rather it was too late), an example with
 A -- B -- C would be

 int a, c;
 void foo(int *p)

 with B == (*p).  B may alias a and c but a may not alias c.

 So, back to my first question then, which is still valid.

 Just to pass more information to Graphite. The easiest example might be
 something like

 A -- B -- C

 if we have

 AS1 = {A,B}
 AS2 = {B,C}

 we know that A and C do not alias and therefore do not have any
 dependencies.

 How do you arrive at 'A and C do not alias' from looking at
 the alias set numbers for AS1 and AS2.  How do you still
 figure out that B aliases A and C just from looking at
 the alias set numbers?  Or rather, what single alias set number
 does B get?
 AS1 = {A,B}
 AS2 = {B,C}

 B does not necessarily have only a single alias set number;
 in this situation, B will have both alias numbers 1 and 2 (it
 is in both alias sets),
 A will have alias number 1, and
 C will have alias number 2.
 Since A and C got different alias set numbers, we can conclude
 that they do not alias.
 As for A and B, or B and C: since B has both alias numbers 1 and 2,
 they may alias.

So if I understand you right, it seems all you've done is inverted the
existing alias/points-to sets.
IE instead of saying A has B, C, D in its alias set, you are saying B
is in the alias set of A, C is in the alias set of A, D is in the
alias set of A.

Effectively,

A -> {B, C, D}
B -> {C, D, E}
becomes
B -> A
C -> A, B
D -> A, B
E -> B

Then you are assigning numbers to the sets that appear on the RHS.
You still end up with bitmaps, and you still have to intersect them
(or describe containment some other way and do containment queries).

For a large program, this mapping is actually massive and quite
expensive to compute (In points-to, normally you use location
equivalence and BDD's to compress the sets. I never got around to
finishing location equivalence inside GCC, though it is in the LLVM
implementation I did if you want to look).

--Dan


Re: Internal Representation

2009-07-07 Thread Daniel Berlin
You must be looking at old documentation or something.
Calls are represented by GIMPLE_CALL_STMT (or CALL_EXPR in older GCC'en).
There has been a callgraph for quite a long time (see cgraph*.c and cgraph*.h)

On Tue, Jul 7, 2009 at 7:26 AM, Nicolas
COLLIN nicolas.col...@fr.thalesgroup.com wrote:
 Hello,
 I looked at the part of the documentation about function bodies and I wonder
 something: is there a way to get the function calls from it?  I'd
 like to make a call graph which represents functions and the functions they
 call.
 Thank you.

 Nicolas COLLIN



Re: Phase 1 of gcc-in-cxx now complete

2009-06-27 Thread Daniel Berlin

 All that above said - do you expect us to carry both vec.h (for VEC in
 GC memory) and std::vector (for VECs in heap memory) (and vec.h
 for the alloca trick ...)?  Or do you think we should try to make the GTY
 machinery C++ aware and annotate the standard library (ick...)?

Since the containers have mostly standard iterators, I'm not sure we
have to do much to the standard library. Simply require that a set of
iterators with the right properties exists, and generate code that
depends on this.
If you make your own container, you have to implement the iterators.


Re: (known?) Issue with bitmap iterators

2009-06-25 Thread Daniel Berlin
On Mon, Jun 22, 2009 at 3:19 PM, Dave
Korn dave.korn.cyg...@googlemail.com wrote:
 Joe Buck wrote:

 As a general rule there is a performance cost for making iterators
 on a data structure safe with respect to modifications of that data
 structure.  I'm not in a position to say what the right solution is
 in this case, but passes that iterate over bitmaps without modifying
 those bitmaps shouldn't be penalized.  One solution sometimes used is
 two sets of iterators, with a slower version that's safe under
 modification.

  But then we'll run into the same bug again when someone uses the wrong kind,
 or changes the usage of a bitmap without changing which type it is.

We have this situation with, for example, the immediate use iterators,
and at least as far as anyone knows, it hasn't been broken yet ;)
So I don't think we should be too worried about it happening if we
were to create two types of iterators.


Re: (known?) Issue with bitmap iterators

2009-06-21 Thread Daniel Berlin
On Sat, Jun 20, 2009 at 10:54 AM, Jeff Law l...@redhat.com wrote:

 Imagine a loop like this

 EXECUTE_IF_SET_IN_BITMAP (something, 0, i, bi)
  {
    bitmap_clear_bit (something, i);
   [ ... whatever code we want to process i, ... ]
  }

 This code is unsafe.

 If bit I happens to be the only bit set in the current bitmap word, then
 bitmap_clear_bit will free the current element and return it to the element
 free list where it can be recycled.

 So assume the bitmap element gets recycled for some other bitmap...  We then
 come around to the top of the loop where EXECUTE_IF_SET_IN_BITMAP will call
 bmp_iter_set which can reference the recycled element when it tries to
 advance to the next element via bi->elt1 = bi->elt1->next.  So we start
 iterating on elements of a completely different bitmap.  You can assume this
 is not good.

 Our documentation clearly states that I is to remain unchanged, but ISTM
 that the bitmap we're iterating over needs to remain unchanged as well.
 Is this a known issue, or am I just the lucky bastard who tripped over it
 and now gets to audit all the EXECUTE_IF_SET_IN_BITMAP loops?

No, this is known, and in fact, has been a source of interesting
bugs in the past since it doesn't segfault, but often, as you've
discovered, starts wandering into the free list happily iterating over
elements from bitmaps of dataflows past.

Making it safe is a little tricky: basically, you need to know whether
the element you are currently iterating over disappears.
At the very worst, you could make pre-delete hooks and have the
iterators register for them or something.
At best, you can probably set a bit in the bitmap saying it's being
iterated over, and then add a tombstone bit, which lets you mark
elements as deleted during iteration without actually deleting them
until the iteration is over.



Also, what do you expect the semantics to be?
In particular, are new bits past the current index iterated over, or
do you expect to iterate over the bitmap as it existed at the time you
started iteration?


Re: git mirror at gcc.gnu.org

2009-06-16 Thread Daniel Berlin
On Tue, Jun 16, 2009 at 10:17 AM, Jason Merrill ja...@redhat.com wrote:
 On 06/15/2009 01:22 PM, Bernie Innocenti wrote:

 On 06/15/09 16:28, Rafael Espindola wrote:

 It fails with

 $ git config --add remote.origin.fetch
 '+refs/remotes/*:refs/remotes/origin/*'
 $ git fetch
 fatal: refs/remotes/origin/gcc-4_0-branch tracks both
 refs/remotes/gcc-4_0-branch and refs/heads/gcc-4_0-branch

 Perhaps I should remove those friendly refs pointing at the remote
 branches?  Or can we find a better alternative?  Their use was to make a
 few frequently used branches readily visible in gitweb and with a simple
 clone.

 It makes sense to me to have the friendly refs so that the simple case
 (clone, don't try to use svn directly) works easily.

 What I'm doing doesn't have any problem with them.  Rafael was following
 older instructions written by someone else.  When I started editing that
 page, I put my suggestions at the bottom because I was fairly new to git.
  Now I feel more confident that what I'm doing is right (or at least better)
 so I think I'll remove the old instructions.

 But what are oldmaster/pre-globals-git/restrict-git?

pre-globals-git and restrict-git were test branches from an old mostly
broken version. ;)

They are dead and if someone wants to manually remove the refs, go for it.
No idea what oldmaster is.

  And why have both
 master and trunk as heads?


See broken ;)

 Perhaps git-svn could be configured to map svn branches directly to the
 local namespace instead of remotes/ ?

 It could, but it seems unnecessary.

 Jason



Re: git mirror at infradead?

2009-06-11 Thread Daniel Berlin
On Wed, Jun 10, 2009 at 9:38 PM, Jason Merrill ja...@redhat.com wrote:
 Bernie Innocenti wrote:

 I won't re-create the repository from scratch, then.

 re-creating it from scratch should be fine as long as the metadata uses
 svn+ssh://gcc.gnu.org/svn/gcc.  I'd think that

 git svn clone -s file://path/to/svn/root \
  --rewrite-root=svn+ssh://gcc.gnu.org/svn/gcc

 would be the way to go.

Unless git svn fixed a bug I reported about rewrite roots, this will
crash eventually.
I had to work around it with some hacks :(


 Jason



Re: git mirror at gcc.gnu.org

2009-06-11 Thread Daniel Berlin
On Thu, Jun 11, 2009 at 3:07 AM, Bernie Innocenti ber...@codewiz.org wrote:
 On 06/10/09 02:43, Ian Lance Taylor wrote:
 fche has already installed git 1.6.3.2 in /usr/local/bin on sourceware.
 That is now the one you will get if you connect to port git.  Hope
 nothing breaks.

 Thanks.

 I made a few changes that hopefully won't compromise existing clones:

 0) Since Daniel disabled his cron job to update the repository,
   mine had not actually been running because the crontab line was
   commented out.  I enabled it.

 1) Set UseDeltaBaseOffset=true for slightly smaller packs
   The downside is that we lose compatibility with git versions
   older than 1.4.4.  Moreover, people fetching from dumb protocols
   will probably have to refetch from scratch.

 2) Remove the local checkout and configure the repository as
   bare=true


Yeah, this I had forgotten to do ;)

 3) I stupidly ran a git gc on the repository without specifying
   any parameters, which made the pack jump to a whopping 3.4GB.
   People fetching over git-aware protocols shouldn't notice
   anything unusual except, maybe, longer server-side packing time.
   Those stuck with http:// will have a bad surprise. /me ducks.
   I've now configured the default window and depth both to 100,
   and ran another repack from scratch which will take a long,
   long, long, long time.

It may be faster for me to rsync it to a 32-core machine, pack it,
then rsync it back, now that delta compression is threaded.
Does it get large enough speedups these days to be worth it?


Re: git mirror at infradead?

2009-06-09 Thread Daniel Berlin
On Tue, Jun 9, 2009 at 3:16 PM, Bernie Innocenti ber...@codewiz.org wrote:
 On 06/09/09 16:17, Jason Merrill wrote:
 Bernie Innocenti wrote:
 On 06/07/09 12:40, Ralf Wildenhues wrote:
 Is this mirror an independent conversion from the infradead one (i.e., I
 have to throw away the repo and re-download a full repo?  Or can I reuse
 objects)?

 It's an independent mirror, and I wouldn't recommend switching to it yet.

 There are permissions problems, and I might end up rsyncing the whole
 infradead repository rather than fixing things locally.

 Please please do *not* rsync the infradead repository.  The repository
 on gcc.gnu.org is set up so that I can switch back and forth between
 pulling from git and using git-svn directly; the infradead repository is
 not.

 For one thing, the infradead repository uses svn://gcc.gnu.org/svn/gcc,
 which makes it impossible to use git-svn to check in changes; the
 gcc.gnu.org git repository uses svn+ssh://gcc.gnu.org/svn/gcc, as is
 right and proper.  Also the remotes are in a different place from where
 git-svn puts them, though I suppose that's easy enough to adjust when
 fetching.

 I won't re-create the repository from scratch, then.

 Though I would still need an updated version of git to enable lots of
 branches and tags without wasting too much hard disk space.

 Can a sourceware admin please install (or build) git 1.6.3.x?  If there
 are concerns of breaking other things, I could install a local copy in
 ~/bin.
+overseers

I don't see a problem doing this (we definitely don't want two
versions installed), but there are real live git projects on
sourceware so we should be a bit careful.


Re: increasing the number of GCC reviewers

2009-06-09 Thread Daniel Berlin
On Tue, Jun 9, 2009 at 8:51 PM, Joseph S. Myers jos...@codesourcery.com wrote:
 On Tue, 9 Jun 2009, Ian Lance Taylor wrote:

 I believe that the most useful immediate thing we could do to speed up
 gcc development would be to move to a distributed version control
 system.

 We haven't even finished the last version control system transition
 (wwwdocs is still using CVS), it's not long since we started it and there
 is as yet no one clear winner in the world of DVCSes, so another
 transition would seem rather premature.
There will never be a clear winner here, because none of them is
sufficiently better than the others, nor is it likely any ever will be.
At least hg and git have significant mindshare, and seem to attract
different kinds of communities.

Personally, I think at this point, moving to git would make the most
sense (as much as I love hg). It's much more forgiving than it used to
be, most things support it, and there are some cool possibilities,
like gerrit (http://code.google.com/p/gerrit/).  It is precisely built
to handle the problem of finding the right reviewers, making sure
patches don't fall through the cracks, while still making sure it's
easy to submit patches and have maintainers merge them.
I'm happy to set up a demo if anyone wants to see it.

As for advantages, having used both hg and git, the only thing i ever
use svn for anymore is to occasionally patch into a clean tree to do a
commit.

Lastly, as for wwwdocs, it's more that the migration of wwwdocs was
left to those who care about it, which is a very small set of people,
combined with the fact that it has a bunch of preprocessing/etc
scripts that get run over it, and nobody wants to do the conversion.


Re: git mirror at infradead?

2009-06-07 Thread Daniel Berlin
On Sun, Jun 7, 2009 at 8:08 AM, Bernie Innocentiber...@codewiz.org wrote:
 On 06/07/09 13:38, Bernie Innocenti wrote:
 On 06/07/09 12:40, Ralf Wildenhues wrote:
 Is this mirror an independent conversion from the infradead one (i.e., I
 have to throw away the repo and re-download a full repo?  Or can I reuse
 objects)?

 It's an independent mirror, and I wouldn't recommend switching to it yet.

 There are permissions problems, and I might end up rsyncing the whole
 infradead repository rather than fixing things locally.

 Daniel, could you please execute these commands on sourceware?

  cd /sourceware/projects/gcc-home/gitfiles
  chmod g+w -R .
  find . -type d -print0 | xargs -0 chmod g+s


Done.

 Where is the cron job to update the mirror?  Could you make it writable
 by group gcc, or just disable it so I can start mine from my user crontab?

I can't make it writable, but i have disabled it.

 Thanks!

 --
   // Bernie Innocenti - http://codewiz.org/
  \X/  Sugar Labs       - http://sugarlabs.org/



Re: VTA merge?

2009-06-05 Thread Daniel Berlin

 We can measure some of these things now.  Some can even be measured
 objectively ;-)

Do you have any of them handy (memory use, compile time with release
checking only, etc) so that we can start the public
argument^H^H^H^H^H^discussion?

;)


Re: c++ template conformance: gcc vs MS

2009-05-28 Thread Daniel Berlin
On Wed, May 27, 2009 at 10:33 PM, Mark Tall mtall@gmail.com wrote:
 2009/5/28 Andrew Pinski:

 GCC see http://gcc.gnu.org/bugzilla/show_bug.cgi?id=24314 .


 hmm.. known since 2005.  Is there some difficulty in fixing this ?


More likely it's pretty rare so nobody has gotten itchy enough to
scratch that part of the code :)

I'm sure if you wanted to take a gander nobody would stop you :)


Re: optimization question

2009-05-18 Thread Daniel Berlin
On Sat, May 16, 2009 at 5:49 AM, Richard Guenther
richard.guent...@gmail.com wrote:
 On Sat, May 16, 2009 at 11:41 AM, VandeVondele Joost vond...@pci.uzh.ch 
 wrote:

 I think it is useful to have a bugzilla here.

 will do.

 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=40168


 Btw, complete unrolling is also hindered by the artificial limit of
 maximally
 unrolling 16 iterations.  Your inner loops iterate 27 times.  Also by the
 artificial limit of the maximal unrolled size.

 With --param max-completely-peel-times=27 --param
 max-completely-peeled-insns=666

 (values for trunk) the loops are unrolled at -O3.

 hmmm. but leading to slower code.

 Not for me - but well, the main issue is the memsets which the
 frontend generates.

We should be able to ignore memset for scalarizing/etc, no?


Re: [graphite] Weekly phone call notes

2009-04-29 Thread Daniel Berlin
On Wed, Apr 29, 2009 at 6:19 PM, Steven Bosscher stevenb@gmail.com wrote:
 On Thu, Apr 30, 2009 at 12:15 AM, Richard Guenther
 richard.guent...@gmail.com wrote:
 Well, the challenge is to retain the per SSA name information across
 Graphite.  At some point we need to stop re-computing points-to
 information because we cannot do so with retaining IPA results.

 Not to mention the compile time pains it causes...

Not to mention it's a lot easier for me to hide bugs if we stop
running it so much ;)


Re: Any plans to upgrade svn repository format to 1.5.0+ ?

2009-04-25 Thread Daniel Berlin
Errr, the format is not pre-1.5.0.
It was svnadmin-upgraded a while ago.


On Sat, Apr 25, 2009 at 5:06 AM, Laurynas Biveinis
laurynas.bivei...@gmail.com wrote:
 Hi,

 Apparently the server is already running svn 1.5.5 but the repository
 format is pre-1.5.0. If the repository format was upgraded, we could
 start using proper svn merge support for branch maintenance and get
 rid of manual merges and svnmerge.py. There is even an upgrade path
 from the svnmerge.py to svn 1.5.0 merge support for existing branches.
 And the upgrade would not disturb those who are using pre-1.5.0 svn
 clients.

 Any thoughts?

 --
 Laurynas



Re: update_version_svn (was: Minimum required GNAT version for bootstrap of GNAT on gcc trunk)

2009-04-04 Thread Daniel Berlin
On Sat, Apr 4, 2009 at 2:57 PM, Gerald Pfeifer ger...@pfeifer.com wrote:
 On Tue, 24 Feb 2009, Joseph S. Myers wrote:
 build.html was missing group write permission:

 -rw-r--r--   1 gerald   gcc 18920 Mar 30  2008 build.html

 This probably meant that the nightly onlinedocs update would fail to
 update it.  I've now moved and copied the file so it now has group write
 access.  So hopefully the next build will update it.

 Thanks for addressing this, Joseph!  To make up for this...

 Unfortunately the cron mails to gccadmin from that nightly update ceased
 regularly arriving in January 2008 (a few isolated ones have got through
 to the list since then), so any symptoms of this file not being updated
 that might appear in the output of that cron job will have been missed.
 I don't know if this is a message size limit, a spam check issue or
 something else (but reported it on the overseers list at the time).

 ...I investigated the issue a bit and found the following:

 The output of the update_web_docs_svn script is 1590406 bytes, which
 pretty much explains why the messages have not made it to the list,
 I assume.

 One further observation I made is that this script is invoked via sh -x,
 and 22029 of 31959 lines and 1046189 of 1590406 bytes overall are due to
 that.  In other words, we owe two thirds of the output to sh -x.

 (This has been like this since the first version of this instance of
 the script, revision 105483 by dberlin on 2005-10-17.)

I only made it work with SVN, so my guess is it was like this before,
back in the CVS days as well.


 I propose to address this by the patch below, but will wait a couple of
 days (not the least until you are back) before making this change.
This looks fine to me.


Re: GCC + libJIT instead of LLVM

2009-04-01 Thread Daniel Berlin
On Wed, Apr 1, 2009 at 5:33 AM, Kirill Kononenko
kirill.konone...@gmail.com wrote:
 Hello Dear GCC Developers,



 I would like to ask your opinion about possibility for integration of
 the libJIT Just-In-Time compilation library and GCC. For example, the
 same way as libffi is integrated within gcc source tree. It seems to
 me that LLVM solves many goals that are already complete and solved in
 GCC. So I think libJIT potentially is more useful for GCC and software
 developers.

Highly disagree.


 What is your opinion about this idea? How this should be done and what
 improvements could be made to both libJIT and GCC for this kind of
 integration?

I don't think we should integrate libJIT into GCC. It doesn't solve
any of the *interesting* JIT problems we would have, it only solves
the ones we know how to solve in a fairly short time (given enough
developers).


Re: GCC 4.4 Branch Created

2009-03-31 Thread Daniel Berlin
On Tue, Mar 31, 2009 at 2:46 PM, Rainer Orth
r...@techfak.uni-bielefeld.de wrote:
 Daniel Berlin dber...@dberlin.org writes:

 On Fri, Mar 27, 2009 at 6:34 PM, Joseph S. Myers
 jos...@codesourcery.com wrote:
  On Fri, 27 Mar 2009, Mark Mitchell wrote:
 [...]
  If we want to deprecate gccbug in 4.4 and remove it in 4.5 (and so not
  need 4.5.1 or subsequent versions in this script), there is still time to
  do so (though not to get it in the first deprecated-features-removal patch
  for 4.5 - that has already been approved for 4.5 and I am retesting it
  tonight before committing it to trunk).


 I think we should.
 We haven't received a bug through gccbug in quite a while :)

 No wonder: it didn't work for quite some time (reports didn't make it
 through), and despite several requests from me you couldn't make time to
 find out what was going on.

This is true; I don't have time to maintain an incoming email script
used solely by you.

  I don't blame you at all, but find it highly
 unfortunate to be forced to use a browser for initial submission instead of
 being able to use a proper mailer/editor.
I'm sorry you feel that way, but I simply don't have the extra time to
support a method of submission used by exactly one person.


Re: GCC 4.4 Branch Created

2009-03-31 Thread Daniel Berlin
On Tue, Mar 31, 2009 at 2:51 PM, Rainer Orth
r...@techfak.uni-bielefeld.de wrote:
 Daniel Berlin writes:


   I don't blame you at all, but find it highly
  unfortunate to be forced to use a browser for initial submission instead of
  being able to use a proper mailer/editor.
 I'm sorry you feel that way, but I simply don't have the extra time to
 support a method of submission used by exactly one person.

 Understood, but I wonder how other projects deal with this.  I cannot
 possibly be the only one in the world who wants to submit structured bug
 reports by mail.  Doesn't bugzilla have something native to handle this?

Bugzilla had a few things in the contrib dir at one point to handle
this, but they have all gone unmaintained, AFAIK.

The closest thing that exists in newer bugzilla is xmlrpc server
support, which you could use to make a client that acted however you
want.


Re: GCC 4.4 Branch Created

2009-03-31 Thread Daniel Berlin
On Tue, Mar 31, 2009 at 3:01 PM, Gabriel Dos Reis dosr...@gmail.com wrote:
 On Tue, Mar 31, 2009 at 1:51 PM, Rainer Orth
 r...@techfak.uni-bielefeld.de wrote:
 Daniel Berlin writes:

 Understood, but I wonder how other projects deal with this.  I cannot
 possibly be the only one in the world who wants to submit structured bug
 reports by mail.

 No, you are not the only one.  Unless my memory is
 failing me, I believe last time I was told I was the only one...

But you also believed lynx accounted for more than 0.1% of web traffic, so 
In any case, this is simple:
we have not had an incoming email for gcc-gnats in all of March,
or February,
or January.

If you two want to get together and write some Perl incoming email
handler for bugzilla for whatever format and get it set up, I'm not
going to stop you.


Re: GCC 4.4 Branch Created

2009-03-29 Thread Daniel Berlin
On Sun, Mar 29, 2009 at 11:27 AM, Joseph S. Myers
jos...@codesourcery.com wrote:
 On Fri, 27 Mar 2009, Mark Mitchell wrote:

 The tasks that remain from branching.html are:

 I believe everything needed for starting the new release branch is now
 done apart from this:

 13. Asking Danny Berlin to adjust PRs.

 Daniel, could you change 4.4 to 4.4/4.5 in the summaries of all open
 PRs (4.4 Regression - 4.4/4.5 Regression, etc.) (through database
 access, not manually editing each PR with the web interface)?

Done

 Once this is done I'll deal with closing 4.2 branch (manually - branch
 closing involves interpreting the PR log for each 4.2 regression bug,
 unlike branch opening where anything present at 4.4 branchpoint is
 automatically present when 4.5 started).

 --
 Joseph S. Myers
 jos...@codesourcery.com



Re: GCC 4.4 Branch Created

2009-03-27 Thread Daniel Berlin
On Fri, Mar 27, 2009 at 6:34 PM, Joseph S. Myers
jos...@codesourcery.com wrote:
 On Fri, 27 Mar 2009, Mark Mitchell wrote:

 12. Updating the email parsing script.  AFAICT, this hasn't been done in
 a while, so I wasn't sure if it was considered obsolete.

 I have done this.  I'll deal with the snapshot and .pot files later.
 I'll close 4.2 branch at some point after the PR summaries have been
 updated to mention 4.5 (to avoid conflicts between two sets of bulk
 Bugzilla changes), though probably not until next week.

 If we want to deprecate gccbug in 4.4 and remove it in 4.5 (and so not
 need 4.5.1 or subsequent versions in this script), there is still time to
 do so (though not to get it in the first deprecated-features-removal patch
 for 4.5 - that has already been approved for 4.5 and I am retesting it
 tonight before committing it to trunk).


I think we should.
We haven't received a bug through gccbug in quite a while :)


Re: GCC 4.4.0 Status Report (2009-03-13)

2009-03-22 Thread Daniel Berlin
On Sun, Mar 22, 2009 at 1:00 AM, Joe Buck joe.b...@synopsys.com wrote:
 On Sat, Mar 21, 2009 at 07:37:10PM -0700, Daniel Berlin wrote:
 The steering committee was founded in 1998 with the intent of
 preventing any particular individual, group or organization from
 getting control over the project. Its primary purpose is to make major
 decisions in the best interests of the GCC project and to ensure that
 the project adheres to its fundamental principles found in the
 project's mission statement. [see the original announcement below].

 The purpose of that statement (which dates from egcs days), was to address
 concerns that egcs represented a Cygnus takeover of GCC.  egcs started
 before the Red Hat acquisition of Cygnus, and it started with the Cygnus
 devo tree with a Cygnus employee as RM, and some Cygnus marketing people
 at the time were actually telling customers that it *did* represent a
 Cygnus takeover, so they had to have a Cygnus support contract if they
 wanted any influence over egcs!  Fortunately those people were quickly
 slapped down.  And after the Cygnus/Red Hat merger, the rest of the
 community was worried about the 800 pound gorilla.
All of this is a great statement of history.  If those are no longer
the goals, you need to update the statement, so *the rest of us* know
exactly what it is the steering committee sees itself doing these
days.
If the steering committee is no longer following this mission or
abiding by these guidelines, you really should update the page.
It also sounds a lot like you are saying the Steering Committee does
not care much if the FSF has control over the project, which I know to
be false :)


 1. The FSF, as an organization, clearly now has control over the project.
 You even liken them to the administration of which you are just a 
 subordinate.
 You also believe you must act in accordance with their policy or
 resign from the group supposed to be making the major decisions in the
 best interests of the GCC project.

 Even in the egcs days, every contributor signed over their copyright to
 their contributions to the FSF, so even then the FSF played a special
 role.  Many of the contributors worked (and still work) for organizations
 that compete with each other: if there weren't some nonprofit with legal
 ownership of the code one would have had to be invented.
None of this is a refutation of the above, and in fact, seems to support it.
Again, if it is no longer true that the Steering Committee has a
goal/purpose of keeping organizations from taking over GCC, change the
page and tell us what the SC's current goals are.


 There are checks on FSF control in the sense that the project can be
 forked and developers can leave.

This is not a meaningful check on control.
In the same way, in the US, revolution is not really a check on
control of the government; it is a method of replacement for when
checks and controls are not respected.
(This is why in the US we have a whole constitution full of checks and controls.)
Me, I'd rather see us have meaningful checks and controls.

 But in this particular case, I'm hopeful
 that this holdup is going to be resolved soon; there's new language and
 meetings this weekend which I hope will resolve matters, and the new
 language is designed to fix problems raised on this list by GCC
 developers.  Most of the time, the FSF hasn't interfered with GCC except
 on a couple of matters that they care about; licensing is one such matter.
I *heavily* disagree with this statement.

Let's see, just in the somewhat recent past:
Writing out the IL
Plugins
Changing over the bug system
Hosting on sourceware.org
Moving to subversion

Claiming these are just "a couple of matters they care about" seems a bit much.

All of these were certainly resolved, but *they never should have been
issues that the FSF had any control over in the first place*.


Re: [gcc-in-cxx] bootstrap fails

2009-03-22 Thread Daniel Berlin
On Sun, Mar 22, 2009 at 12:29 AM, Jerry Quinn jlqu...@optonline.net wrote:
 Ian Lance Taylor wrote:

 Jerry Quinn jlqu...@optonline.net writes:



 2009-03-21  Jerry Quinn  jlqu...@optonline.net

   * config/i386/i386.c (ix86_function_specific_save): Don't check
   range of enum values.


 I still don't know why I don't see this, but this is OK for the
 gcc-in-cxx branch.



 Do I need to take any actions before I can commit into gcc's svn repository?

Do you have a gcc.gnu.org account?
If yes, there are no special actions you need to take before committing.
If not, there is a form to fill out, you can list me as the sponsor.


Re: GCC 4.4.0 Status Report (2009-03-13)

2009-03-22 Thread Daniel Berlin
On Sun, Mar 22, 2009 at 10:47 AM, Paolo Bonzini bonz...@gnu.org wrote:
 On Sun, Mar 22, 2009 at 15:41, Richard Kenner
 ken...@vlsi1.ultra.nyu.edu wrote:
 I must admit that this interpretation is quite new to me.
 It certainly wasn't when EGCS reunited with gcc.

 I disagree.  reuniting with GCC means reuniting with the FSF.

 ... but not raising a white flag.

If the SC now has a different mission/etc than they used to,  they
should, you know, tell the rest of us and put it on the page, since
clearly nobody understands exactly what the GCC project's governance
is like?


Re: GCC 4.4.0 Status Report (2009-03-13)

2009-03-22 Thread Daniel Berlin
On Sun, Mar 22, 2009 at 6:47 PM, Jeff Law l...@redhat.com wrote:
 Richard Kenner wrote:

 Of course, just I (and others) don't see why they should do it in this
 case.  Delaying a *branch* is different from, say, using a proprietary
 version control or bug tracking system.


 I don't either.  Requesting a delay of a *release* on a license issue
 is completely and perfectly understandable, but what that has to do
 with making a *branch* makes absolutely no sense to me.


 Agreed.  I'll note nobody has really argued that delaying a branch to deal
 with a license issue makes any sense.  The FSF itself hasn't even stated
 reasons for their stance.  That may simply be because the issue is expected
 to be moot after the weekend meetings.

 What I find most distressing about this whole discussion is the fact that we
 have developers who don't seem to grasp that the FSF owns the copyright to
 GCC and we are effectively volunteering to work in the FSF's sandbox under
 certain rules and guidelines set forth by the FSF.

Maybe this is because every piece of documentation on the GCC project
says otherwise?


Re: GCC 4.4.0 Status Report (2009-03-13)

2009-03-22 Thread Daniel Berlin
On Sun, Mar 22, 2009 at 7:01 PM, Daniel Berlin dber...@dberlin.org wrote:
 On Sun, Mar 22, 2009 at 6:47 PM, Jeff Law l...@redhat.com wrote:
 Richard Kenner wrote:

 Of course, just I (and others) don't see why they should do it in this
 case.  Delaying a *branch* is different from, say, using a proprietary
 version control or bug tracking system.


 I don't either.  Requesting a delay of a *release* on a license issue
 is completely and perfectly understandable, but what that has to do
 with making a *branch* makes absolutely no sense to me.


 Agreed.  I'll note nobody has really argued that delaying a branch to deal
 with a license issue makes any sense.  The FSF itself hasn't even stated
 reasons for their stance.  That may simply be because the issue is expected
 to be moot after the weekend meetings.

 What I find most distressing about this whole discussion is the fact that we
 have developers who don't seem to grasp that the FSF owns the copyright to
 GCC and we are effectively volunteering to work in the FSF's sandbox under
 certain rules and guidelines set forth by the FSF.

 Maybe this is because every piece of documentation on the GCC project
 says otherwise?

Also, do you not realize this is precisely because of the massive lack
of transparency about how the project is governed?
Do you guys realize that governing like this is, in fact, destroying
our community (how fast is a question people disagree about)?


Re: GCC 4.4.0 Status Report (2009-03-13)

2009-03-21 Thread Daniel Berlin
On Fri, Mar 20, 2009 at 7:18 PM, Mark Mitchell m...@codesourcery.com wrote:
 Richard Kenner wrote:

 The matters to which we defer to the FSF are any matters that they *ask*
 us to!  They own the code.  If RMS, for some reason, decides that he doesn't
 like the phrasing of a comment somewhere, we have to either convince RMS
 he's wrong or change the comment.

 Indeed.
Err, sorry, but no.
They are certainly the legal owners of the code. That does not mean
they can force you to do anything.
That is, in fact, the beauty of the GPL.
Nobody is suggesting forking, or anything of the sort, but the idea
that "they own the code, we are just at the mercy of whatever they
tell us!" is wildly wrong, and you all know it. In fact, you've said
as much at public meetings!
About the only thing they could do is kick us out of the GNU project.
I guarantee if you took a poll of developers (and users), and asked
whether they care about this or not, the answer would be a resounding
no.
We run the repository, we run the bug tracking, and the mailing lists.
 The only thing the FSF has really handled so far is copyright
assignments and enforcement. The first is a trivial matter. Google has
processed more copyright agreements using a simple app I wrote in the
past 3 months than the FSF has in its 20-year history (at least if
the copyright files are correct; it's not even a close number).  We've
also never lost one or had a single real complaint.  The SFLC would
certainly be more than happy to take care of any negotiations about
assignments and enforcement for us.
So what exactly are we so afraid of here?
Not to mention the FSF isn't stupid, just slow.  I don't think anyone
believes that, given the choice between "get this done" and "tell us off
and break ties", they would choose the latter.
Can we at least stop pretending that we simply have to do whatever the
FSF says, all the time, and we are just oppressed with no choices?

 I do not understand RMS' resistance to creating the branch.  I have
 explained that branching and releasing are different things, that at
 this time we've made no changes to the age-old exceptions, and so forth.
  I have asked RMS to allow us to go forward.  He hasn't directly
 responded, but he has indicated that there is an FSF meeting this
 weekend in which this will be discussed and seems to be suggesting that
 something will happen soon after that.

Great, i hope something finally happens!
Of course, I see this as just another example of a much larger
problem, but I'm sure if it got done reasonably quickly it would calm
everyone down enough that we could continue this discussion in another
year, when they give us another directive!

 As developers, our leverage is the ability to go play in a different
 sandbox if we do not like the rules the FSF imposes.  As an SC member, I can
 (and do) lobby the FSF, but when given an explicit directive my choices
 are to go along with FSF policy, or resign.  I don't think it's
 appropriate to disobey the FSF's directives in the FSF's official
 repository.
Even when those directives have 0% support from the developer and user
community you are meant to serve?
Because that is the point where i would believe it is more than
appropriate to disobey the FSF's directives and let them make the hard
choices.


Re: GCC 4.4.0 Status Report (2009-03-13)

2009-03-21 Thread Daniel Berlin
On Sat, Mar 21, 2009 at 8:46 PM, Mark Mitchell m...@codesourcery.com wrote:
 David Edelsohn wrote:

 I do not believe that Mark is asserting that he and the other release
 managers have to follow the requests of the FSF.  The question is not
 what the GCC community or the release managers *can* do, the question
 is what we *should* do.  Ignoring a direct, polite request from the
 FSF has implications -- ramifications; it is a form of communication.

 Correct.  It's as if I were a member of the President's cabinet and
 disagreed with the administration's policy.  I'd try to persuade the
 President s/he was wrong, and if I felt strongly enough I'd resign, but
 I'd not act in defiance of that policy while remaining in the cabinet.

Except we aren't supposed to have a president with you as the cabinet;
you are supposed to be preventing any individual, group, or organization
from getting control over the project.

Otherwise,  can you please change the first sentence on
http://gcc.gnu.org/steering.html
which states:

The steering committee was founded in 1998 with the intent of
preventing any particular individual, group or organization from
getting control over the project. Its primary purpose is to make major
decisions in the best interests of the GCC project and to ensure that
the project adheres to its fundamental principles found in the
project's mission statement. [see the original announcement below].

1. The FSF, as an organization, clearly now has control over the project.
You even liken them to the administration of which you are just a subordinate.
You also believe you must act in accordance with their policy or
resign from the group supposed to be making the major decisions in the
best interests of the GCC project.
If this is not giving de facto control over the GCC project to an
organization, I don't know what is.
2. Everyone who has spoken up so far does not believe these decisions
are in the best interest of the GCC project.

(FWIW, I'm not suggesting you all resign. I am suggesting that maybe
letting the FSF have as much control over GCC as it has had, and
continues to have, is not the path GCC should take, as it has done
*nothing* but cause us misery for years now.)

--Dan


Re: GCC 4.4.0 Status Report (2009-03-13)

2009-03-20 Thread Daniel Berlin
On Fri, Mar 13, 2009 at 2:28 PM, Joe Buck joe.b...@synopsys.com wrote:
 On Fri, Mar 13, 2009 at 10:25:34AM -0700, Richard Guenther wrote:
 The topmost sentence should be unambiguous.  Yes, the SC asked us not
 to branch.

 The request came from RMS, the SC just passed it on.

There are two things here that bother me.

1. The occasional defense of the length of time it takes the FSF to
get back to us.  Sorry, but this defense is, honestly, fairly silly.
Even the government agencies I work with aren't this slow.  By the
time we have a response, members of the GCC developer community may
well be living on the moon.  This doesn't mean they aren't good people
trying to do a good job, or that they aren't seriously overworked.  But
at some point, this ceases to be a sane reason for something taking so
long, and clearly, it isn't something we should let affect our
development schedule.

2. Where is the pushback by the SC onto the FSF?
Why haven't we given them a hard deadline, or even any deadline at all?
It's clear when they have no deadlines, they take forever to get
anything done. After all, if they are allowed to not prioritize it and
have no incentive to get their ass in gear and meet a deadline, what
exactly did we expect to happen other than it not getting done in a
reasonable amount of time?

Why hasn't the SC sent something to the FSF like:

"We are grateful for your concern about the issues this licensing
change and the subsequent discussion have brought up.  However, sadly, the
amount of time it is taking to reach consensus on how/what to change
has begun to seriously impede GCC development and its future.
Therefore, we request you resolve this licensing issue by March 28th,
or we will have to branch and prepare the current GCC mainline for
release, and wait until the next version to make any licensing
changes.
We regret this, but it is necessary in order to not further impede
development of GCC and its community."

It's fairly clear what the view of the developer community is on this
issue.  At some point, if the FSF can't be an organization that
responds to problems in a sane length of time, we shouldn't let them
get in the way.

--Dan


Re: GCC 4.4.0 Status Report (2009-03-13)

2009-03-20 Thread Daniel Berlin
On Fri, Mar 20, 2009 at 11:17 AM, David Edelsohn dje@gmail.com wrote:
 On Fri, Mar 20, 2009 at 9:34 AM, Daniel Berlin dber...@dberlin.org wrote:
 On Fri, Mar 13, 2009 at 2:28 PM, Joe Buck joe.b...@synopsys.com wrote:
 On Fri, Mar 13, 2009 at 10:25:34AM -0700, Richard Guenther wrote:
 The topmost sentence should be unambiguous.  Yes, the SC asked us not
 to branch.

 The request came from RMS, the SC just passed it on.

 There are two things here that bother me.

 1. The occasional defense of the length of time it takes the FSF to
 get back to us.  Sorry, but this defense is, honestly, fairly silly.
 Even the government agencies I work with aren't this slow.  By the
 time we have a response, members of the GCC developer community may
 well be living on the moon.  This doesn't mean they aren't good people
 trying to do a good job, or that they aren't seriously overworked.  But
 at some point, this ceases to be a sane reason for something taking so
 long, and clearly, it isn't something we should let affect our
 development schedule.

 2. Where is the pushback by the SC onto the FSF?
 Why haven't we given them a hard deadline, or even any deadline at all?
 It's clear when they have no deadlines, they take forever to get
 anything done. After all, if they are allowed to not prioritize it and
 have no incentive to get their ass in gear and meet a deadline, what
 exactly did we expect to happen other than it not getting done in a
 reasonable amount of time?

 Why do you think that the SC has not pushed back?  Not all diplomacy
 is best done in public.

Okay then, as the leadership body of the GCC community, part of your
responsibility is keeping your constituents (the rest of us!) informed
of the status of things troubling them.
I don't believe saying "we have given the FSF a deadline to meet in
the near future" would at all endanger any diplomacy, and I'd love to
see a counterargument that says otherwise.


 I am sorry that none of us on the GCC SC caught the ambiguity in the
 original runtime license.  Re-opening a document for revisions is
 fraught with hazards because the GCC community is not the only
 party that wanted changes.  That is what we have encountered.
 We need to ensure that we do not fix one problem, but end up
 with a license less acceptable to the community due to other
 changes.

Nobody really blames anyone for ambiguity in the license. Most people
simply want to get on with developing GCC.


 There apparently is a revised version of the license that addresses
 the concerns raised by the community.  We are trying to get FSF to
 approve and release that text so that we may proceed.

 I agree that we cannot wait indefinitely.  The FSF is having a meeting
 this weekend and hopefully they can resolve the license issue.

 The GCC Community has operated with a rather low amount of public
 drama relative to many FOSS projects and I think that has served us
 well.
I'd say our drama level is actually significantly above that of other
major projects, but in the end it does not matter.

 The customers and users of GCC are more than those
 intimately involved in FOSS projects and they appreciate a stable,
 professional project, not forks, fragmenting community, and rash
 decisions.
None of which anyone has suggested. I have yet to see a
non-professional suggestion, in fact.
That said, you realize that doing this all behind the curtain and not
keeping the rest of us informed is, in fact, fragmenting the
community, making people consider rash decisions, etc.
Sunlight is, in fact, the best disinfectant.

 We cannot be held hostage, but more people are watching
 than GCC insiders
Yet most of the others watching take their cues from the feelings of
GCC insiders.
I have yet to see them act particularly independently, anyway, so it
seems silly to assume they will until something makes us think
otherwise.


Re: GCC 4.4.0 Status Report (2009-03-13)

2009-03-20 Thread Daniel Berlin
On Fri, Mar 20, 2009 at 12:09 PM, David Edelsohn dje@gmail.com wrote:
 On Fri, Mar 20, 2009 at 11:42 AM, Daniel Berlin dber...@dberlin.org wrote:

 Okay then, as the leadership body of the GCC community, part of your
 responsibility is keeping your constituents (the rest of us!) informed
 of the status of things troubling them.
 I don't believe saying we have given the FSF a deadline to meet in
 the near future would at all endanger any diplomacy, and i'd love to
 see a counter argument that says otherwise.

 I am sorry that you did not receive the memo.

This is a fairly rude response for something that has been a
consistent problem for GCC developers (lack of status updates from the
SC on issues important to GCC developers).
I've said my piece. It's fairly obvious the SC has no plans to change
(they have no incentive to).

 Yet most of the others watching take their cues from the feelings of
 GCC insiders.
 I have yet to see them act particularly independently, anyway, so it
 seems silly to assume they will until something makes us think
 otherwise.

 Mark Mitchell and I receive different feedback.
This just makes it appear to the rest of the world that you are
greatly concerned with the appearance of GCC to other unnamed
outsiders without even anonymously relaying their thoughts and beliefs
that make you think this.
I guess we get to trust that it is currently more important to listen
to the thoughts and beliefs of outsiders (who are apparently
themselves not in contact with the rest of the community) than it is
to listen to the people actually doing the work on GCC.
This seems a bit strange.


Re: ARM compiler rewriting code to be longer and slower

2009-03-16 Thread Daniel Berlin
On Mon, Mar 16, 2009 at 12:11 PM, Adam Nemet ane...@caviumnetworks.com wrote:
 Ramana Radhakrishnan writes:
 [Resent because of account funnies. Apologies to those who get this twice]

 Hi,

   This problem is reported every once in a while, all targets with
  small
   load-immediate instructions suffer from this, especially since GCC
  4.0
   (i.e. since tree-ssa).  But it seems there is just not enough
  interest
   in having it fixed somehow, or someone would have taken care of it by
   now.
  
   I've summed up before how the problem _could_ be fixed, but I can't
   find where.  So here we go again.
  
   This could be solved in CSE by extending the notion of related
   expressions to constants that can be generated from other constants
   by a shift. Alternatively, you could create a simple, separate pass
   that applies CSE's related expressions thing in dominator tree
  walk.
 
  See http://gcc.gnu.org/ml/gcc-patches/2009-03/msg00158.html for
  handling
  something similar when related expressions differ by a small additive
  constant.  I am planning to finish this and submit it for 4.5.

 Wouldn't doing this in CSE only solve the problem within an extended basic
 block and not necessarily across the program ? Surely you'd want to do it
 globally or am I missing something very basic here ?

 No, you're not.  There are plans moving some of what's in CSE to a new LCM
 (global) pass.  Also note that for a global a pass you clearly need some more
 sophisticated cost model for deciding when CSEing is beneficial.  On a
 multi-scalar architecture, instructions synthesizing consts sometimes appear
 to be free whereas holding a value a in a register for an extended period of
 time is not.


Right. You probably want something closer to Nigel Horspool's
"isothermal speculative PRE", which takes into account (using
heuristics and profiles) where the best place to put things is, based
on costs, instead of LCM, which uses a notion of lifetime optimality.

See http://webhome.cs.uvic.ca/~nigelh/pubs.html for "Fast
Profile-Based Partial Redundancy Elimination".

There was a working implementation of this done for GCC 4.1 that used
profile info and execution counts.
If you are interested, and can hunt down David Pereira (he isn't at
UVic anymore, and I haven't talked to him since, so I don't have his
email), he'd probably give you the code :)
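The "related expressions" idea quoted earlier in the thread — deriving a constant from an already-available one by a shift — can be sketched in a few lines. This is purely illustrative C++ (requires C++17), not GCC code; `shift_relating` is an invented name, and real CSE works on RTL rather than raw integers:

```cpp
#include <cstdint>
#include <optional>

// Before synthesizing a constant from scratch, check whether it can be
// derived from an already-live constant by a single shift.  Returns the
// shift amount (positive = left, negative = right), or nullopt.
std::optional<int> shift_relating(uint32_t have, uint32_t want) {
    if (have == 0) return std::nullopt;
    for (int s = 1; s < 32; ++s) {
        if ((have << s) == want) return s;   // want = have << s
        if ((have >> s) == want) return -s;  // want = have >> s
    }
    return std::nullopt;
}
```

On a target with small load-immediate instructions, a hit here means `want` costs one shift instruction instead of a multi-instruction constant synthesis sequence.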


Re: C++ FE stripping const qualifiers from function prototypes (bug or feature?)

2009-03-12 Thread Daniel Berlin
On Thu, Mar 12, 2009 at 9:32 AM, Paolo Carlini paolo.carl...@oracle.com wrote:
 Hi,
 Notice how the third argument is 'int' instead of 'const int'.  Is
 this the way C++ is supposed to behave or is this a bug in the FE?

 Well, I would say this is a rather well known C++ feature not a bug. It
 took me a little time finding the exact section of the standard where it
 is stated but I think finally I found it: but last item of 13.1/3. Also,
 8.3.5/3.


But if it was following this and removing const qualifiers, shouldn't
it have removed the const from const char * too?
Or am I missing something?


Re: C++ FE stripping const qualifiers from function prototypes (bug or feature?)

2009-03-12 Thread Daniel Berlin
On Thu, Mar 12, 2009 at 11:15 AM, Mark Mitchell m...@codesourcery.com wrote:
 Daniel Berlin wrote:

 But if it was following this and removing const qualifiers, shouldn't
 it have remove the const from const char * too?
 Or am i missing something?

 No, that is not a top-level qualifier.

Ah, it only removes top-level qualifiers; that was the part I was missing :)
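The top-level-qualifier rule from the thread can be demonstrated directly; `f` and `g` below are made-up declarations used only for illustration:

```cpp
#include <type_traits>

// Per C++ [dcl.fct], top-level cv-qualifiers on parameters are deleted
// when determining the function type, so these two declarations have
// exactly the same type.
void f(const int x, const char* s);
void g(int x, const char* s);

static_assert(std::is_same<decltype(f), decltype(g)>::value,
              "top-level const on a parameter is not part of the type");

// The const in 'const char *' qualifies the pointee, not the parameter
// itself, so it is not top-level and is kept.
static_assert(!std::is_same<void(int, const char*), void(int, char*)>::value,
              "pointee const is part of the type");
```

This is why the FE strips `const int` to `int` in the prototype while leaving `const char *` untouched.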


Re: Please block henry2000 from the wiki

2009-02-27 Thread Daniel Berlin
No, there is a list of wiki users considered superusers (i.e., able to
act as other users on the wiki, remove spam, etc.).
It requires no underlying permissions or accounts on sourceware itself.


On Fri, Feb 27, 2009 at 10:08 AM, Christopher Faylor
cgf-use-the-mailinglist-ple...@gnu.org wrote:
 On Thu, Feb 26, 2009 at 04:08:03PM -0500, Daniel Berlin wrote:
If you want to help admin the wiki, I am more than happy to make you a
super user.
That goes for Steven, etc.

 Wait.  Are we talking about giving people root access on sourceware
 just to clean up a wiki?  Hopefully this is not the case.

 cgf



Re: Please block henry2000 from the wiki

2009-02-26 Thread Daniel Berlin
If you want to help admin the wiki, I am more than happy to make you a
super user.
That goes for Steven, etc.



On Thu, Feb 26, 2009 at 12:31 PM, Manuel López-Ibáñez
lopeziba...@gmail.com wrote:
 2009/2/25 Gerald Pfeifer ger...@pfeifer.com:
 On Tue, 24 Feb 2009, Steven Bosscher wrote:
 Can someone *please* ban this nutcase from the wiki?
 There is almost weekly spam added to the wiki from this account.
 Thanks,

 Let me forward this to the overseers team...


 A solution for only this particular case is not useful. We have a lot
 of trusted users that use and take care of the wiki but they do not
 have privileges to mark content as spam or ban spammers. I think this
 is unfortunate because putting the burden of this solely on the admin
 of the wiki is frustrating for everybody.

 If it is technically possible, there should be a list of trusted users
 that are allowed to handle spam.

 Cheers,

 Manuel.



Re: Solve transitive closure issue in modulo scheduling

2009-02-07 Thread Daniel Berlin
On Fri, Jan 30, 2009 at 7:44 AM, Bingfeng Mei b...@broadcom.com wrote:
 Hello,
 I try to make modulo scheduling work more efficiently for our VLIW target. I 
 found one serious issue that prevents current SMS algorithm from achieving 
 high IPC is so-called transitive closure problem, where scheduling window 
 is only calculated using direct predecessors and successors. Because SMS is 
 not an iterative algorithm, this may cause failures in finding a valid 
 schedule. Without splitting rows, some simple loops just cannot be scheduled 
 no matter how big the II is. With splitting rows, a schedule can be found, but 
 only at bigger II. GCC wiki (http://gcc.gnu.org/wiki/SwingModuloScheduling) 
 lists this as a TODO. Is there any work going on about this issue (the last 
 wiki update was one year ago)? If no one is working on it, I plan to do it. 
 My idea is to use the MinDist algorithm described in B. Rau's classic paper 
 iterative modulo scheduling 
 (http://www.hpl.hp.com/techreports/94/HPL-94-115.html). The same algorithm 
 can also be used to compute better RecMII. The biggest concern is complexity 
 of computing MinDist matrix, which is O(N^3). N is number of nodes in the 
 loop. I remember somewhere GCC coding guide says never write quadratic 
 algorithm :-) Is this an absolute requirement?

It's not an absolute requirement, just a general guideline.

We have plenty of quadratic and worse algorithms, and we'd rather see
fewer of them :)
Obviously, when it comes to things requiring transitive closure, you
can't really do better.
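The MinDist computation mentioned above can be sketched as a max-plus Floyd-Warshall over the dependence graph. This is an illustrative sketch of the algorithm in Rau's paper, not GCC code; the `Edge` layout and function names are invented for the example:

```cpp
#include <algorithm>
#include <cstddef>
#include <limits>
#include <vector>

// A dependence edge carries a latency (delay) and an iteration
// distance; for a candidate II its weight is delay - II*distance.
// MinDist[i][j] is the maximum weight over all paths i -> j, computed
// Floyd-Warshall style: the O(N^3) cost discussed in the thread.
struct Edge { int src, dst, delay, distance; };

const int kNegInf = std::numeric_limits<int>::min() / 2;

std::vector<std::vector<int>>
min_dist(int n, const std::vector<Edge>& edges, int II) {
    std::vector<std::vector<int>> d(n, std::vector<int>(n, kNegInf));
    for (const Edge& e : edges)
        d[e.src][e.dst] = std::max(d[e.src][e.dst], e.delay - II * e.distance);
    for (int k = 0; k < n; ++k)
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                if (d[i][k] > kNegInf && d[k][j] > kNegInf)
                    d[i][j] = std::max(d[i][j], d[i][k] + d[k][j]);
    return d;
}

// A candidate II satisfies the recurrence constraints only if no node
// reaches itself with positive accumulated weight; the smallest II
// passing this check is the (tighter) RecMII mentioned above.
bool ii_feasible(const std::vector<std::vector<int>>& d) {
    for (std::size_t i = 0; i < d.size(); ++i)
        if (d[i][i] > 0) return false;
    return true;
}
```

For a two-node recurrence with total delay 3 and iteration distance 1, II = 2 fails the check while II = 3 passes; once feasible, MinDist bounds each node's scheduling window against *all* already-scheduled nodes, not just direct predecessors and successors, which is exactly the transitive-closure property the plain SMS window lacks.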


Re: Plugin API Comments (was Re: GCC Plug-in Framework ready to port)

2009-02-05 Thread Daniel Berlin
On Thu, Feb 5, 2009 at 5:59 AM, Ben Elliston b...@au1.ibm.com wrote:
 On Tue, 2009-02-03 at 01:59 -0500, Sean Callanan wrote:

 Our plugins do not break when switching compiler binaries.  In fact, I
 have had plug-in binaries that perform very simple tasks work fine
 when switching (minor!) compiler releases.

 Thinking about this made me realise that the plugin framework will
 require special consideration for utilities like ccache and distcc.
 ccache checksums (or at least stats) the cc1 binary to decide whether a
 cached object file is valid.  If you change a plugin, the cc1 binary
 won't change, but the generated code probably will.

Why not use the output of cc1 --version, and have the loaded
plugins listed there?
(This also means bug reports would have the plugins listed, assuming
the user uses the same command line with --version tacked on.)
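The ccache concern above boils down to what goes into the cache key: if only the cc1 binary is hashed, changing a plugin silently reuses stale objects. A minimal sketch of the fix, with FNV-1a standing in for whatever hashing a real ccache-like tool uses (all names here are invented):

```cpp
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Hash a file's bytes with FNV-1a, optionally chaining from a prior hash.
uint64_t fnv1a_file(const std::string& path,
                    uint64_t h = 1469598103934665603ULL) {
    std::ifstream in(path, std::ios::binary);
    char c;
    while (in.get(c)) {
        h ^= static_cast<unsigned char>(c);
        h *= 1099511628211ULL;
    }
    return h;
}

// The cache key covers the compiler binary AND every loaded plugin, so
// a plugin change invalidates cached objects even though cc1 is unchanged.
uint64_t compiler_cache_key(const std::string& cc1,
                            const std::vector<std::string>& plugins) {
    uint64_t h = fnv1a_file(cc1);   // hash the compiler binary
    for (const auto& p : plugins)
        h = fnv1a_file(p, h);       // then chain in each plugin
    return h;
}
```

The `--version` suggestion above achieves the same end indirectly: if the plugin list appears in the version output, hashing that output also changes when plugins change.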


Re: Size of the GCC repository

2009-01-21 Thread Daniel Berlin
17,327,572 k
:)


On Wed, Jan 21, 2009 at 8:13 PM, Paolo Carlini paolo.carl...@oracle.com wrote:
 Hi,

 for the record, today I started an rsync to get a local copy of the
 repository and, at variance with the information in:

  http://gcc.gnu.org/rsync.html

 the size I'm seeing is already  17G, and counting... If somebody knows
 the total size and wants to update the web page, I think it would be a
 nice idea, otherwise I will take care of that... when it finishes... ;)

 Paolo.



Re: change to gcc from lcc

2008-11-20 Thread Daniel Berlin
On Thu, Nov 20, 2008 at 9:28 PM, Alexey Salmin [EMAIL PROTECTED] wrote:
 2008/11/20 Michael Matz [EMAIL PROTECTED]:
 Hi,

 On Wed, 19 Nov 2008, H.J. Lu wrote:

 On Wed, Nov 19, 2008 at 7:18 PM, Nicholas Nethercote
 [EMAIL PROTECTED] wrote:
  On Tue, 18 Nov 2008, H.J. Lu wrote:
 
  I used malloc to create my arrays instead of creating them on the stack.
  My program is working now but it is very slow.
 
  I use two-dimensional arrays. The way I access element (i,j) is:
  array_name[i*row_length+j]
 
  The server that I use has 16GB ram. The ulimit -a command gives the
  following output:
  time(seconds)unlimited
  file(blocks) unlimited
  data(kbytes) unlimited
  stack(kbytes)8192
 
  
 
  That limits stack to 8MB. Please change it to 1GB.
 
  Why?
 

 int buffer1[250][100];

 takes close to 1GB on stack.

 Read the lines you quoted carefully again.


 Ciao,
 Michael.


 Can you please talk in a more understandable way? I also think that 4
 * 250 * 100 is close to 1073741824 which is 1 Gb. And automatic
 variables are allocated in stack (which is 8Mb here) instead of data
 segment.



Here, let me help:


  I used malloc to create my arrays instead of creating them on the stack.
  My program is working now but it is very slow.

Do you see why he told HJ to reread the lines he quoted?
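The advice in this thread — heap-allocate large 2D arrays and index them flat as `array[i*row_length + j]`, rather than declaring them as locals that can blow the typical 8 MB stack limit — can be sketched as a small wrapper. The `Matrix` name is invented for the example:

```cpp
#include <cstddef>
#include <vector>

// Heap-backed 2D array: std::vector allocates from the heap, so even a
// multi-gigabyte matrix never touches 'ulimit -s'.  Elements are laid
// out row-major and addressed with the flat index from the thread.
struct Matrix {
    std::size_t rows, cols;
    std::vector<int> data;
    Matrix(std::size_t r, std::size_t c) : rows(r), cols(c), data(r * c) {}
    int& at(std::size_t i, std::size_t j) { return data[i * cols + j]; }
};
```

The equivalent local declaration `int buffer[rows][cols]` would be allocated on the stack at function entry, which is what triggered the original crash until the arrays were moved to malloc.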


Re: Continuous builder up

2008-10-29 Thread Daniel Berlin
On Tue, Oct 28, 2008 at 8:02 PM, Manuel López-Ibáñez
[EMAIL PROTECTED] wrote:
 2008/10/25 Daniel Berlin [EMAIL PROTECTED]:
 I have placed a continuous builder  (IE it does one build per svn
 change) for GCC for x86_64 on an 8 core machine (nicely provided by
 Google), and it has results here:
 http://home.dberlin.org:8010/waterfall

 I think this is great and pretty! Would it be possible to keep a list
 of the revisions that failed to build? That could be very useful for
 reg-hunting. Could the system send an email to the author of the
 revision that failed?

 (I have not made it summarize the warnings yet, and these deliberately
 do not run the testsuite in order to keep up with the repository.  I
 have more servers coming that will run builds that include running the
 testsuite).

 Well, it seems idle right now. And with the new parallel testsuite, it
 shouldn't take so much time, so I think it could keep up with the
 repository. It seems just a waste of resources to build once and then
 build again somewhere else to run the testsuite.

 In the next few days i will add a continuous builder for i686, as well
 as a server that accepts patches for testing on these platforms and
 spits back results in an hour or less, including the testsuite, so
 those who want to test things while continuing work can do so.

 Great. Although this does not seem such an important issue given the
 existing Compile Farm.

The compile farm requires a lot of manual work for people (SSH keys,
etc) who just want to submit a small patch, whereas upload through a
browser or email does not.
I will probably even make it provide a debuggable binary and core file
for crashes.

 On the other hand, I seriously miss the patch tracker and I think I
 was not the only one and we have probably lost a few patches along the
 way. Any plans to bring it back?
No.
The patch tracker was an experiment, to see if it would reduce
the rate of patches falling through the cracks.
It had the secondary effect of getting some other patches reviewed
more quickly in some cases, because of those who paid attention to it.

In reality, it didn't reduce the rate of patch dropping in the areas
where we were dropping patches.  I guess it turns out those people
specifically in charge of those areas didn't care if a long list of
patches on a web page pointed to them :)
It did reduce the rate of patch dropping among those who have limited
time to wade through email, I think, but there are better ways to
present that info (i.e., "I am Diego Novillo; give me the list of
patches on the mailing list I could look at").

Given that its main goal was a failure, I don't see why I would bring
it back, at least in that form.
OTOH, if you want just something to tell you, as an individual
reviewer, what patches sent to the mailing list are still waiting for
your review, or if we want to look at a general code review system
that works with email as well as the web, I may take a gander.


Re: Continuous builder up

2008-10-29 Thread Daniel Berlin
On Wed, Oct 29, 2008 at 9:16 AM, Manuel López-Ibáñez
[EMAIL PROTECTED] wrote:
 2008/10/29 Daniel Berlin [EMAIL PROTECTED]:
 The patch tracker was an experiment in trying to see if it would
 improve the rate of patches falling through the cracks.
 It had the secondary effect of getting some other patches reviewed
 quicker in some cases, because of those who paid attention to it.

 I will add that it was very useful for tracking patches to PRs.

 In reality, it didn't reduce the rate of patch dropping in the areas
 where we were dropping patches.  I guess it turns out those people
 specifically in charge of those areas didn't care if a long list of
 patches on a web page pointed to them :)

 Well, those patches were in the list. With the patch tracker at least
 there is proof of which areas are dropping patches and probably need
 more reviewers. Otherwise the patches get silently lost. One of the
 reasons why sporadic contributors do not stick with us is that they
 feel ignored (or conversely that they do not have enough patience to
 ping 4 or 5 times). While the patch tracker was active, it also
 happened a few times that more veteran contributors sent some patch
 only to forget completely about it and never request a review. But
 such patches do not get lost if they are in the tracker.

 I agree that the patch tracker probably does not get more patches
 reviewed but it definitely gets fewer patches lost.

But in the end, it didn't solve the underlying problem, so it didn't
improve our rate of attrition of smaller contributors.


 It did improve the rate of patch dropping among those who have limited
 time to wade through email, I think, but there are better ways to
 present that info (i.e., "I am Diego Novillo, give me the list of
 patches on the mailing list I could look at")

 Not the same at all. If you have some time to review a patch, you
 probably want to do it right now. Not send an email and wait for
 answers. Moreover, that mail could be also missed by the contributors.
 Finally, I think I have never seen anyone asking for patches to
 review. Never. But some people did wander through the patch tracker.

I think you misunderstood what I meant.
Basically you would enter your email address into the page, and it
would figure out, based on its internal list of maintenance areas and
black magic, what patches are waiting around that you could possibly
review.
It would not require sending an email, etc.

It would effectively be wandering through the patch tracker, except
it would limit its display to those things you could actually help
with, instead of a list of 100 patches, most of which you may not be
able to do anything about.


 A bi-weekly status report of the patch tracker sent to gcc-patches
 would definitively make the list of unreviewed patches more visible. I
 believe this may also be a problem for the continuous builder: If
 there is no visible feedback from it, that is, if one needs to
 actively check its status, then it is likely to be missed/neglected.

I did this for about 2 weeks, and was asked privately by a few to stop
because they saw it as spam.

At this point, I don't know what I can do that actually helps the
problems we face as a community.


Re: Continuous builder up

2008-10-29 Thread Daniel Berlin
On Wed, Oct 29, 2008 at 11:42 AM, Ian Lance Taylor [EMAIL PROTECTED] wrote:
 Daniel Berlin [EMAIL PROTECTED] writes:

 A bi-weekly status report of the patch tracker sent to gcc-patches
 would definitively make the list of unreviewed patches more visible. I
 believe this may also be a problem for the continuous builder: If
 there is no visible feedback from it, that is, if one needs to
 actively check its status, then it is likely to be missed/neglected.

 I did this for about 2 weeks, and was asked privately by a few to stop
 because they saw it as spam.

 Sending those reports to the mailing list was always the main use I
 saw for the patch tracker.  I'd like to open this issue up.  Who would
 consider those reports to be spam?  Who would be opposed to
 reinstating the patch tracker and having it send out notes about
 patches which have not been reviewed?

 Back at Cygnus I wrote a script which sent out a daily report for bugs
 which had not been fixed, and I think it was very helpful.  A daily
 report is not appropriate here, but I think that a weekly report is.

I'm happy to reinstate the tracker for this purpose if I'm not going
to get yelled at :)


Re: Possible optimizer bug?

2008-10-27 Thread Daniel Berlin
On Mon, Oct 27, 2008 at 5:57 PM, Peter A. Felvegi
[EMAIL PROTECTED] wrote:
 Hello all,

 I've run today into an infinite loop when compiled a test suite w/
 optimizations. The original tests were to do some loops with all the
 nonnegative values of some integral types. Instead of hardwiring the max
 values, I thought testing for overflow into the negative domain is ok.

 Here is the source to reproduce the bug:

 8<8<8<8<
 #include <stdint.h>
 #include <stdio.h>

 int f(int(*ff)(int))
 {
    int8_t i = 0; /* the real loop counter */
    int ii = 0; /* loop counter for the test */
    do {
        if (ff(ii)) { /* test ii through a fn ptr call */
            printf("ouch!\n"); /* too many loops */
            return ii;
        }
        ++ii;
    } while (++i > 0);
    /*
     * the loop should stop when i overflows from 0x7f to
     * 0x80 (ie -128) : 128 iterations
     * if optimizations are enabled, it won't stop.
     */
    return 0;
 }

 extern int g_tr;

 int foo(int i)
 {
    return i > g_tr;
 }

 int main(void)
 {
    f(foo);

    return 0;
 }

 int g_tr = 0x200;
 8<8<8<8<

 The call through the function pointer to test the loop counter is only for
 disabling inlining. If I put everything into f(), it just prints "ouch!" and
 returns 0x201; the loop is optimized away completely.

 The expected behaviour is present (stopping after 128 iterations) if
 compiled w/ -O0 or -O1, however, -O2 and above, and -Os result in an
 infinite loop.

 The disassembly has an unconditional jump instruction after incrementing the
 loop counter.

 Tested on: Debian Lenny (i386 and amd64), gcc 4.1, 4.2 and 4.3.

 Compile as:
 $ gcc -g -O2 t.c

 then run as
 $ ./a.out

 Is the above code considered illegal, or is it an issue with the optimizer?

It's illegal.
Signed overflow is undefined.
If you want to guarantee wraparound, use -fwrapv or unsigned math.


Continuous builder up

2008-10-25 Thread Daniel Berlin
I have placed a continuous builder (i.e., it does one build per svn
change) for GCC for x86_64 on an 8-core machine (nicely provided by
Google), and it has results here:
http://home.dberlin.org:8010/waterfall

(I have not made it summarize the warnings yet, and these deliberately
do not run the testsuite in order to keep up with the repository.  I
have more servers coming that will run builds that include running the
testsuite).

In the next few days I will add a continuous builder for i686, as well
as a server that accepts patches for testing on these platforms and
spits back results in an hour or less, including the testsuite, so
those who want to test things while continuing work can do so.

Anybody who wishes to have their favorite platform do continuous
builds and report results on that page, let me know.  The only
requirement is that you be able to contact home.dberlin.org on port
9989 (i.e., you can run a build slave on an internal NAT'd machine if
you like), and have Python.  It takes about 30 seconds to set up a new
build slave on any machine (it requires you to install buildbot and
tell it to connect to the buildmaster, and that's it).

--Dan


Re: Use of compute_data_dependences_for_loop on GIMPLE representation

2008-10-23 Thread Daniel Berlin
Sure, that's why they are there.


On Thu, Oct 23, 2008 at 10:00 AM,
[EMAIL PROTECTED] wrote:
 Hello,

 Can i make use of functions defined in tree-data-ref.h for data dependency
 analysis on GIMPLE  trees ?

 Thanks.




Re: Rewrite of tree level PRE and vect-67.c failure

2008-10-16 Thread Daniel Berlin
I am still looking into this, it's on my stack of PRE weirdness :)


On Thu, Oct 16, 2008 at 11:39 AM, Steve Ellcey [EMAIL PROTECTED] wrote:

 On Thu, 2008-10-16 at 11:29 +0200, Richard Guenther wrote:

 Do we have a bug for these FAILs?  Maybe we should add the analysis that
 happened sofar.

 Richard.

 I have created PR 37853.

 Steve Ellcey
 [EMAIL PROTECTED]




Re: gcc moving memory reference across call

2008-10-13 Thread Daniel Berlin

 It's a field in the class$ structure.  class$ is initialized by creating a
 CONSTRUCTOR tree and calling CONSTRUCTOR_APPEND_ELT for each field.  The
 DECL_INITIAL of class$ points to the CONSTRUCTOR tree.

 _CD_pp is an array of void*.  These are initialized by DECL_INITIAL too.

 InitClass is passed class$$ (not class$) and that has a DECL_INITIAL
 that points to class$.  As far as I can tell all the types are correct.


If class$$ has an initial of class$ which has an initial (in it
somewhere) of CD_ppp it should definitely be noticing.
It used to walk these all and get it right, AFAIK, by disqualifying
all variables with their addresses taken (or escaping the function
unit).
Maybe the walking broke when we moved to tuples or something?


Re: P.S. to: plungins and licensing

2008-09-29 Thread Daniel Berlin
On Mon, Sep 29, 2008 at 10:37 AM, Manuel López-Ibáñez
[EMAIL PROTECTED] wrote:
 You would not want a lawyer designing a compiler, so why...


Oh.
I guess i'll just hang up my hat then ...


Re: C/C++ FEs: Do we really need three char_type_nodes?

2008-09-22 Thread Daniel Berlin
On Mon, Sep 22, 2008 at 8:48 AM, Mark Mitchell [EMAIL PROTECTED] wrote:
 Richard Guenther wrote:

 char and signed char (if char is signed) are the same types for the
 middle-end (but not for the Frontend).

 Is that desirable?  Type-based alias analysis should be able to take
 advantage of the difference between them; a char ** and a signed char
 ** cannot point at the same thing, for example.


Should, but currently can't.
They will both have alias set 0, last time I checked (about 2 months ago).
I imagine if we started actually enforcing strict aliasing between
signed char * and char * we'd break even more code and have even more
complaints.
It's also going to be rare that this is a useful aliasing relationship
to disambiguate between.


Re: Etiquette when pinging patches?

2008-09-20 Thread Daniel Berlin
Honestly?
You should use whatever gets a response.
If you are at the point you have to ping a patch, it obviously has
fallen through the cracks, and you should do whatever is necessary to
make sure it gets attention.

To that end, I would just use new threads, as they make it clear it is
not part of an ongoing discussion, and something that needs attention.

I assume you are already also CC'ing the maintainers you want to
review it (I find this is the #1 best way to get a response).


On Sat, Sep 20, 2008 at 4:50 AM, Richard Sandiford
[EMAIL PROTECTED] wrote:
 I don't want to waste everyone's time with protocol, but I was wondering:
 what's the etiquette when pinging patches?  Should the ping be a reply
 to the original message (i.e. should it be in the same thread), or should
 it be a new thread?  I was once asked to use new threads instead of the
 old one, so that's what I've been doing, but there have been two cases
 recently in which people responded to the original message after the new
 thread developed.

 Richard



Re: Please, do not use the merged revisions log as the commit message when merging

2008-09-06 Thread Daniel Berlin
Feel free to edit the hook scripts to do this.


On Sat, Sep 6, 2008 at 1:26 PM, Joseph S. Myers [EMAIL PROTECTED] wrote:
 On Sat, 6 Sep 2008, Manuel López-Ibáñez wrote:

 Well, that is a property change and it is surprising that the log
 shows the diff of the change. Normally logs only show what has been
 changed but not the diff. Neither John, nor I expected this behaviour.

 Changes to *revision* properties deliberately show the diff because
 revision properties are not versioned so sending the diff to gcc-cvs
 provides the only audit trail for such changes in case mistakes are made.
 However, this case illustrates that such diffs should only go to gcc-cvs
 and not be used to extract PR numbers for gcc-bugzilla.

 --
 Joseph S. Myers
 [EMAIL PROTECTED]


Re: Please, do not use the merged revisions log as the commit message when merging

2008-09-05 Thread Daniel Berlin
I'll commit your patch.

On Fri, Sep 5, 2008 at 12:42 PM, Manuel López-Ibáñez
[EMAIL PROTECTED] wrote:
 2008/9/5 Christopher Faylor [EMAIL PROTECTED]:
 On Sun, Aug 17, 2008 at 03:01:03PM -0500, John Freeman wrote:
 Daniel Berlin wrote:

 It's listed on the wiki that explains how to maintain branches :)

 I had no idea such a wiki even existed.  It would really help future
 contributors, I'm sure, if, perhaps during copyright assignment, there were
 some sort of introduction process that clearly communicated policies.
 Thank you for the heads up.

 But, you did do it again, just a little while ago.

 Please stop doing that!  You're swamping gcc.gnu.org and causing unnecessary
 work for me.

 I will be instituting some safeguards against this kind of mail-bombing going
 forward but, nevertheless, this really should not be a common occurrence.

 Please Christopher, feel free to take my patch at here, modify it,
 test it and ping whoever is responsible to review such changes:
 http://gcc.gnu.org/ml/gcc-patches/2008-08/msg01972.html

 Unfortunately, I cannot work on GCC at the moment, so I cannot do it myself.

 Cheers,

 Manuel.



Re: Please, do not use the merged revisions log as the commit message when merging

2008-08-17 Thread Daniel Berlin
It's listed on the wiki that explains how to maintain branches :)


On Sun, Aug 17, 2008 at 2:32 PM, John Freeman [EMAIL PROTECTED] wrote:
 Christopher Faylor wrote:

 On Sat, Aug 16, 2008 at 02:35:08PM +0200, Manuel L?pez-Ib??ez wrote:


 Dear GCC devs,

 Please do *not* use the full logs of the merged revisions as the
 commit message of a merge. Apart from making the output of svn log
 useless, commit messages are parsed and tracked for PR numbers: the
 commit message is added to the bugzilla page of the PR, and people
 subscribed to the relevant PR in bugzilla are notified of the commit.
 Therefore a single merge of many revisions would result in a flood of
 mails sent for no good reason and they make a mess of the bugzilla
 page.

 I am sure many of you have been hit by this recently. Please, let's
 try to avoid this.


 If that isn't a good enough reason, doing this completely swamps
 gcc.gnu.org as it valiantly attempts to send all of the above email.
 This resulted in a load average of 24 on the system last night and kept
 me awake until 2:30AM trying to stabilize things.

 cgf


 I'm just going to come out and admit that it was probably me who caused all
 this.  I appreciate the anonymity afforded by everyone, and I apologize.  I
 promise I will not make this mistake again.  In my defense, I want to say
 that your reasons are good enough, but I did not know them beforehand.  No
 one informed me of any commit policies when I was given subversion access.
  I thought that since I was working on a branch, I had free rein.
  Education would go a long way in preventing future errors.

 - John



Re: [RFH] PR middle-end/179 gcc -O2 -Wuninitialized missing warning with var

2008-08-15 Thread Daniel Berlin
On Fri, Aug 15, 2008 at 8:06 AM, Manuel López-Ibáñez
[EMAIL PROTECTED] wrote:
 2008/8/14 Daniel Berlin [EMAIL PROTECTED]:
 1. You can't assume VUSE's are must-aliases.  The fact that there is a
 vuse for something does not imply it is must-used, it implies it is
 may-used.

 We do not differentiate may-use from must-use in our alias system. You
 can do some trivial must-use analysis if you like (by computing
 cardinality of points-to set as either single or multiple and
 propagating/meeting it in the right place).

 Must-use is actually quite rare.

 Then, is it impossible to distinguish the following testcase and the
 one from my previous mail with the current infrastructure?

If by current you mean code that already exists, then yes :)
You could write code to do further analysis, but with the existing
code, it will not work.


 2.  if (!gimple_references_memory_p (def))
 +   return;
 +
 Is nonsensical: the SSA_NAME_DEF_STMT of a vuse must contain a vdef,
 and thus must access memory.

 Two things here.

 1) The case I am trying to warn about is:

  # BLOCK 2 freq:1
  # PRED: ENTRY [100.0%]  (fallthru,exec)
  [/home/manuel/src/trunk/gcc/testsuite/gcc.dg/uninit-B.c : 12] # VUSE
 iD.1951_4(D) { iD.1951 }
  i.0D.1952_1 = iD.1951;
  [/home/manuel/src/trunk/gcc/testsuite/gcc.dg/uninit-B.c : 12] if
 (i.0D.1952_1 != 0)

 The def_stmt of i.0 is precisely that one. There is no vdef there.

Sure, but this is a default def, which is special and does nothing anyway.



 2) I use that test to return early if the def_stmt of t does not
 reference memory. t is just a SSA_NAME (like i.0 above), I do not know
 whether its def_stmt has a VUSE like the above or not. I guess the
 test is redundant since SINGLE_SSA_USE_OPERAND will return NULL
 anyway. Is that what you mean?

No, I mean that any SSA_NAME_DEF_STMT for a vuse that is not a
default_def will reference memory.


Re: [RFH] PR middle-end/179 gcc -O2 -Wuninitialized missing warning with var

2008-08-15 Thread Daniel Berlin
On Fri, Aug 15, 2008 at 10:58 AM, Daniel Berlin [EMAIL PROTECTED] wrote:
 On Fri, Aug 15, 2008 at 8:06 AM, Manuel López-Ibáñez
 [EMAIL PROTECTED] wrote:
 2008/8/14 Daniel Berlin [EMAIL PROTECTED]:
 1. You can't assume VUSE's are must-aliases.  The fact that there is a
 vuse for something does not imply it is must-used, it implies it is
 may-used.

 We do not differentiate may-use from must-use in our alias system. You
 can do some trivial must-use analysis if you like (by computing
 cardinality of points-to set as either single or multiple and
 propagating/meeting it in the right place).

 Must-use is actually quite rare.

 Then, is it impossible to distinguish the following testcase and the
 one from my previous mail with the current infrastructure?

 If by current you mean code that already exists, then yes :)
 You could write code to do further analysis, but with the existing
 code, it will not work.

FWIW, it is actually worse than the cases you have posited so far.

Consider the following simple case (which is different from yours in
that the conditionals are not dependent on maybe-uninitialized
variables), where you will miss an obvious warning.

extern int foo(int *);
extern int bar(int);
int main(int argc, char **argv)
{
  int a;

  if (argc)
  foo (&a);
/* VUSE of a will be a phi node, but it still may be used uninitialized.  */
  bar(a);
}


Realistically, to get good results, you would have to track may-use vs
must-use and also propagate where the default def is being used when
the default_def is not from a parameter.

(noticing that the a is a must-use there and comes from a phi node
whose arguments contain the default def would prove it is
uninitialized along some path)
--Dan


Re: [RFH] PR middle-end/179 gcc -O2 -Wuninitialized missing warning with var

2008-08-15 Thread Daniel Berlin
On Fri, Aug 15, 2008 at 11:31 AM, Manuel López-Ibáñez
[EMAIL PROTECTED] wrote:
 2008/8/15 Daniel Berlin [EMAIL PROTECTED]:
 On Fri, Aug 15, 2008 at 10:58 AM, Daniel Berlin [EMAIL PROTECTED] wrote:
 On Fri, Aug 15, 2008 at 8:06 AM, Manuel López-Ibáñez
 [EMAIL PROTECTED] wrote:
 2008/8/14 Daniel Berlin [EMAIL PROTECTED]:
 1. You can't assume VUSE's are must-aliases.  The fact that there is a
 vuse for something does not imply it is must-used, it implies it is
 may-used.

 We do not differentiate may-use from must-use in our alias system. You
 can do some trivial must-use analysis if you like (by computing
 cardinality of points-to set as either single or multiple and
 propagating/meeting it in the right place).

 Must-use is actually quite rare.

 Then, is it impossible to distinguish the following testcase and the
 one from my previous mail with the current infrastructure?

 If by current you mean code that already exists, then yes :)
 You could write code to do further analysis, but with the existing
 code, it will not work.

 FWIW, it is actually worse than the cases you have posited so far.

 Consider the following simple case (which is different from yours in
 that the conditionals are not dependent on maybe-uninitialized
 variables), where you will miss an obvious warning.

 extern int foo(int *);
 extern int bar(int);
 int main(int argc, char **argv)
 {
  int a;

  if (argc)
  foo (&a);
 /* VUSE of a will be a phi node, but it still may be used uninitialized.  */
  bar(a);
 }


 Realistically, to get good results, you would have to track may-use vs
 must-use and also propagate where the default def is being used when
 the default_def is not from a parameter.


 The problem in the original testcase is that the default def of
 variable 'c' is a VUSE in a statement that does not even use c.

It may-uses c, as we've been through.

  # BLOCK 2 freq:1
  # PRED: ENTRY [100.0%]  (fallthru,exec)
  [/home/manuel/src/trunk/gcc/builtins.c : 11095] # VUSE
 cD.68618_34(D) { cD.68618 }
  D.68627_3 = validate_argD.45737 (s1D.68612_2(D), 10);

 Moreover, if you check fold_builtin_strchr in builtins.c, it is clear
 that there is no path along which c is used uninitialized.

This is not a default def.

cD.68618_34(D) is the default def.
If you look at default_def (c), it will be a NOP_EXPR statement.


Bootstrap broken on x86_64-linux

2008-08-14 Thread Daniel Berlin
Failure:

../../../libgfortran/intrinsics/cshift0.c: In function 'cshift0':
../../../libgfortran/intrinsics/cshift0.c:124: warning: passing
argument 1 of 'cshift0_i16' from incompatible pointer type
../../../libgfortran/intrinsics/cshift0.c:236: error: 'GFC_INTGER_16'
undeclared (first use in this function)
../../../libgfortran/intrinsics/cshift0.c:236: error: (Each undeclared
identifier is reported only once
../../../libgfortran/intrinsics/cshift0.c:236: error: for each
function it appears in.)
make[3]: *** [cshift0.lo] Error 1
make[3]: *** Waiting for unfinished jobs

Caused by:

Changed by: tkoenig
Changed at: Thu 14 Aug 2008 14:38:46
Revision: 139111

Changed files:

libgfortran/generated/cshift0_r4.c
libgfortran/ChangeLog
libgfortran/generated/cshift0_c16.c
libgfortran/generated/cshift0_r8.c
libgfortran/generated/cshift0_i16.c
libgfortran/libgfortran.h
libgfortran/m4/cshift0.m4
libgfortran/generated/cshift0_r10.c
gcc/testsuite/ChangeLog
libgfortran/generated/cshift0_c4.c
libgfortran/intrinsics/cshift0.c
libgfortran/generated/cshift0_r16.c
libgfortran/generated/cshift0_i1.c
libgfortran/Makefile.am
libgfortran/generated/cshift0_c8.c
libgfortran/generated/cshift0_i2.c
libgfortran/generated/cshift0_i4.c
libgfortran/generated/cshift0_i8.c
gcc/testsuite/gfortran.dg/cshift_nan_1.f90
libgfortran/generated/cshift0_c10.c
libgfortran/Makefile.in
gcc/testsuite/gfortran.dg/char_cshift_3.f90
Comments:
2008-08-14  Thomas Koenig  [EMAIL PROTECTED]

PR libfortran/36886
* Makefile.am:  Added $(i_cshift0_c).
Added $(i_cshift0_c) to gfor_built_specific_src.
Add rule to build from cshift0.m4.
* Makefile.in:  Regenerated.
* libgfortran.h:  Addedd prototypes for cshift0_i1,
cshift0_i2, cshift0_i4, cshift0_i8, cshift0_i16,
cshift0_r4, cshift0_r8, cshift0_r10, cshift0_r16,
cshift0_c4, cshift0_c8, cshift0_c10, cshift0_c16.
Define Macros GFC_UNALIGNED_C4 and GFC_UNALIGNED_C8.
* intrinsics/cshift0.c:  Remove helper functions for
the inner shift loop.
(cshift0):  Call specific functions depending on type
of array argument.  Only call specific functions for
correct alignment for other types.
* m4/cshift0.m4:  New file.
* generated/cshift0_i1.c:  New file.
* generated/cshift0_i2.c:  New file.
* generated/cshift0_i4.c:  New file.
* generated/cshift0_i8.c:  New file.
* generated/cshift0_i16.c:  New file.
* generated/cshift0_r4.c:  New file.
* generated/cshift0_r8.c:  New file.
* generated/cshift0_r10.c:  New file.
* generated/cshift0_r16.c:  New file.
* generated/cshift0_c4.c:  New file.
* generated/cshift0_c8.c:  New file.
* generated/cshift0_c10.c:  New file.
* generated/cshift0_c16.c:  New file.

2008-08-14  Thomas Koenig  [EMAIL PROTECTED]

PR libfortran/36886
* gfortran.dg/cshift_char_3.f90:  New test case.
* gfortran.dg/cshift_nan_1.f90:  New test case.


Re: [RFH] PR middle-end/179 gcc -O2 -Wuninitialized missing warning with var

2008-08-14 Thread Daniel Berlin
1. You can't assume VUSE's are must-aliases.  The fact that there is a
vuse for something does not imply it is must-used, it implies it is
may-used.

We do not differentiate may-use from must-use in our alias system. You
can do some trivial must-use analysis if you like (by computing
cardinality of points-to set as either single or multiple and
propagating/meeting it in the right place).

Must-use is actually quite rare.

2.  if (!gimple_references_memory_p (def))
+   return;
+
Is nonsensical: the SSA_NAME_DEF_STMT of a vuse must contain a vdef,
and thus must access memory.




On Thu, Aug 14, 2008 at 2:16 PM, Manuel López-Ibáñez
[EMAIL PROTECTED] wrote:
 Dear all,

 In order to fix PR 179, I need help either understanding why exactly
 the warning is triggered or obtaining a small self-contained testcase
 to reproduce it.

 Thanks in advance,

 Manuel.

 The attached patch triggers the warnings:

 /home/manuel/src/trunk/gcc/builtins.c: In function 'fold_builtin_strchr':
 /home/manuel/src/trunk/gcc/builtins.c:11095: error: 'c' is used
 uninitialized in this function
 /home/manuel/src/trunk/gcc/builtins.c: In function 'fold_builtin_memchr':
 /home/manuel/src/trunk/gcc/builtins.c:8963: error: 'c' is used
 uninitialized in this function

 Uncommenting  the following avoids the warning:

 +  /*
 +  if (is_call_clobbered (var))
 +{
 +  var_ann_t va = var_ann (var);
  +  unsigned int escape_mask = va->escape_mask;
  +  if (escape_mask & ESCAPE_TO_ASM)
  +   return false;
  +  if (escape_mask & ESCAPE_IS_GLOBAL)
  +   return false;
  +  if (escape_mask & ESCAPE_IS_PARM)
  +   return false;
 +}
 +  */

 The alias dump is:

 fold_builtin_strchr (union tree_node * s1D.68612, union tree_node *
 s2D.68613, union tree_node * typeD.68614)
 {
  union tree_node * temD.68620;
  const charD.1 * rD.68619;
  charD.1 cD.68618;
  const charD.1 * p1D.68617;
  union tree_node * D.68650;
  union tree_node * D.68649;
  long intD.2 D.68648;
  long intD.2 p1.2917D.68647;
  long intD.2 r.2916D.68646;
  union tree_node * D.68644;
  intD.0 D.68641;
  charD.1 c.2915D.68640;
  intD.0 D.68637;
  short unsigned intD.8 D.68631;
  union tree_node * D.68630;
  unsigned charD.10 D.68629;
  unsigned charD.10 D.68627;

  # BLOCK 2 freq:1
  # PRED: ENTRY [100.0%]  (fallthru,exec)
  [/home/manuel/src/trunk/gcc/builtins.c : 11095] # VUSE
 cD.68618_34(D) { cD.68618 }
  D.68627_3 = validate_argD.45737 (s1D.68612_2(D), 10);
  [/home/manuel/src/trunk/gcc/builtins.c : 11095] if (D.68627_3 == 0)
goto bb 10;
  else
goto bb 3;
  # SUCC: 10 [95.7%]  (true,exec) 3 [4.3%]  (false,exec)

  # BLOCK 3 freq:434
  # PRED: 2 [4.3%]  (false,exec)
  [/home/manuel/src/trunk/gcc/builtins.c : 11095] # VUSE
 cD.68618_34(D) { cD.68618 }
  D.68629_5 = validate_argD.45737 (s2D.68613_4(D), 8);
  [/home/manuel/src/trunk/gcc/builtins.c : 11095] if (D.68629_5 == 0)
goto bb 10;
  else
goto bb 4;
  # SUCC: 10 [90.0%]  (true,exec) 4 [10.0%]  (false,exec)

  # BLOCK 4 freq:43
  # PRED: 3 [10.0%]  (false,exec)
  [/home/manuel/src/trunk/gcc/builtins.c : 11102] # VUSE
 cD.68618_34(D), SMT.3811D.75594_35(D) { cD.68618 SMT.3811D.75594 }
  D.68631_6 = s2D.68613_4(D)->baseD.20795.codeD.19700;
  [/home/manuel/src/trunk/gcc/builtins.c : 11102] if (D.68631_6 != 23)
goto bb 10;
  else
goto bb 5;
  # SUCC: 10 [98.3%]  (true,exec) 5 [1.7%]  (false,exec)

  # BLOCK 5 freq:1
  # PRED: 4 [1.7%]  (false,exec)
  [/home/manuel/src/trunk/gcc/builtins.c : 11105] # cD.68618_37 = VDEF
 cD.68618_34(D)
  # SMT.3811D.75594_38 = VDEF SMT.3811D.75594_35(D)
  # SMT.3812D.75595_39 = VDEF SMT.3812D.75595_36(D) { cD.68618
 SMT.3811D.75594 SMT.3812D.75595 }
  p1D.68617_8 = c_getstrD.45477 (s1D.68612_2(D));
  [/home/manuel/src/trunk/gcc/builtins.c : 11106] if (p1D.68617_8 != 0B)
goto bb 6;
  else
goto bb 10;
  # SUCC: 6 [20.5%]  (true,exec) 10 [79.5%]  (false,exec)

  # BLOCK 6
  # PRED: 5 [20.5%]  (true,exec)
  [/home/manuel/src/trunk/gcc/builtins.c : 2] # cD.68618_40 = VDEF
 cD.68618_37
  # SMT.3811D.75594_41 = VDEF SMT.3811D.75594_38
  # SMT.3812D.75595_42 = VDEF SMT.3812D.75595_39 { cD.68618
 SMT.3811D.75594 SMT.3812D.75595 }
  D.68637_10 = target_char_castD.45483 (s2D.68613_4(D), cD.68618);
  [/home/manuel/src/trunk/gcc/builtins.c : 2] if (D.68637_10 != 0)
goto bb 10;
  else
goto bb 7;
  # SUCC: 10 [39.0%]  (true,exec) 7 [61.0%]  (false,exec)

  # BLOCK 7
  # PRED: 6 [61.0%]  (false,exec)
  [/home/manuel/src/trunk/gcc/builtins.c : 5] # VUSE cD.68618_40
 { cD.68618 }
  c.2915D.68640_12 = cD.68618;
  [/home/manuel/src/trunk/gcc/builtins.c : 5] D.68641_13 =
 (intD.0) c.2915D.68640_12;
  [/home/manuel/src/trunk/gcc/builtins.c : 5] # VUSE cD.68618_40
 { cD.68618 }
  rD.68619_14 = strchrD.689 (p1D.68617_8, D.68641_13);
  [/home/manuel/src/trunk/gcc/builtins.c : 7] if (rD.68619_14 == 0B)
goto bb 8;
  else
goto bb 9;
  # SUCC: 8 [10.1%]  (true,exec) 9 [89.9%]  (false,exec)

  # BLOCK 9
  # PRED: 7 [89.9%]  

Re: Build requirements for the graphite loop optimization passes

2008-08-04 Thread Daniel Berlin
On Mon, Aug 4, 2008 at 6:19 AM, Joseph S. Myers [EMAIL PROTECTED] wrote:
 On Mon, 4 Aug 2008, Ralf Wildenhues wrote:

 * Joseph S. Myers wrote on Sun, Aug 03, 2008 at 10:00:38PM CEST:
 
  (But the configure code also
  shouldn't allow configuring with a GPLv2 version of polylib.)

 Why?  Use is not forbidden by incompatible free software licenses here,
 only redistribution is.

 This is the same principle as config.host giving an error for an attempt
 to build on UWIN host: in both cases, avoid knowingly building something
 undistributable.

If we are doing that, we really shouldn't be.
One of the very explicit freedoms in the GPL is to be able to build
versions for internal use that are not publicly distributed.


Re: lto gimple types and debug info

2008-07-29 Thread Daniel Berlin
On Tue, Jul 29, 2008 at 11:20 AM, Paolo Bonzini [EMAIL PROTECTED] wrote:

 For that matter, print sizeof(X) should print the same value when
 debugging optimized code as when debugging unoptimized code, even if the
 compiler has optimized X away to an empty structure!

  I disagree.  sizeof(X) in the code will return a value as small as
  possible in that case (so that malloc-ing an array of structures does
  not waste memory), and the debugger should do the same.

 I don't think that's a viable option.  The value of sizeof(X) is a
 compile-time constant, specified by a combination of ISO C and platform ABI
 rules.  In C++, sizeof(X) can even be used as a (constant) template
 parameter, way before we get to any optimization.

 Then you are right.  This adds another constraint...
You can't work around this.
If you built an AST that included sizeof before doing template
instantiation (which may not even be possible), you could at least
determine whether sizeof was used on a given type other than in a
malloc/new call (which we replace anyway).

Otherwise, you have to have a flag for the optimization which
basically declares it is safe to do it.


Re: lto gimple types and debug info

2008-07-29 Thread Daniel Berlin
On Tue, Jul 29, 2008 at 8:45 PM, Mark Mitchell [EMAIL PROTECTED] wrote:
 Daniel Berlin wrote:

 If you built an AST that included sizeof before doing template
 instantiation (which may not even be possible), you could at least
 determine whether sizeof was used on a given type other than in a
 malloc/new call (which we replace anyway).

 Even if you did, I am allowed to know the ABI.

 I can know that struct S has size 12 on this ABI, and do things like:

  struct S a[3];
   struct S* p = &a[0];
  /* This points at a[1].  */
  struct S* q = ((char *)p) + 12;


Sure, but anything that reorgs the structure has to be able to handle
this anyway, and already does (in our case, it disallows reorg if you
do things like this).
The problem that sizeof creates is different, in that you have no way
to tell where it's been used.



 So, these kinds of optimizations, where you rip fields out of a structure
 are only safe if you can prove that addresses don't escape -- or if the user
 explicitly tells you that they are.

 --
 Mark Mitchell
 CodeSourcery
 [EMAIL PROTECTED]
 (650) 331-3385 x713



Re: lto gimple types and debug info

2008-07-27 Thread Daniel Berlin
On Sun, Jul 27, 2008 at 1:18 PM, Mark Mitchell [EMAIL PROTECTED] wrote:
 David Edelsohn wrote:

I do not expect LTO (or WHOPR) to work on AIX -- at least not
 without a lot of work on wrappers around the AIX linker.  However, I do
 not understand why enhancing GCC to support LTO -- when GCC is run without
 enabling LTO -- requires locking GCC completely into DWARF debugging.

 I agree that, at least in principle, it should be possible to emit the debug
 info (whether the format is DWARF, Stabs, etc.) once.

No, you can't.
You would at least have to emit the variables separate from the types
(IE emit debug info twice).

  So, I don't see a
 reason that this makes us a DWARF-only compiler either.

 Others have raised the issue of types which are fundamentally transformed by
 the compiler (such as by removing fields).  I think that such opportunities
 are going to be relatively rare; the global struct Window object in a GUI
 library full of functions taking struct Window * parameters probably isn't
 optimizable in this way.  But there will be situations where this is
 possible and profitable of course.

 In that case, I'm not sure that the *type* ought to be modified at all, from the
 debug perspective.  To the extent there's still an object of type struct X
 around, its type is still what it was.

Uh, except that if you only write things out once, and have already
written out the variables, the variable no longer has the correct type
if you've rewritten the type, and if we've already emitted debug info,
it won't display properly anymore (since the locations of data members
the type specifies will now be incorrect).

So are you suggesting we emit debug info at multiple times?


Re: lto gimple types and debug info

2008-07-27 Thread Daniel Berlin
On Sun, Jul 27, 2008 at 3:10 PM, Mark Mitchell [EMAIL PROTECTED] wrote:
 Daniel Berlin wrote:

 I agree that, at least in principle, it should be possible to emit the
 debug
 info (whether the format is DWARF, Stabs, etc.) once.

 No, you can't.
 You would at least have to emit the variables separate from the types
 (IE emit debug info twice).

 Yes, of course; that's what everyone is talking about, I think.  Emit here
 may also mean cache in memory some place, rather than write to a file.
  It could mean, for example, fill in the data structures we already use for
 types in dwarf2out.c early, and then throw away the front-end type
 information
Okay, then let us go through the options, and you tell me which you
are suggesting:

If you assume LTO does not have access to the front ends, your options
look something like this:

When you first compile each file:
  Emit type debug info
  Emit LTO

When you LTO them all together
  Do LTO
  Emit variable debug info

Under this option, "Emit variable debug info" requires being able to
reference the types.  If you've lowered the types, this is quite
problematic.  So you get to store label names for the already-output
type debug info with the variables (so you can still reference the
type you output properly when you read it back in).  This is
fairly fragile, to be honest.
Another downside of this is that you can't eliminate duplicate types
between units, because you don't know which types are really the same
in the debug info. You have to let the linker do it for you.
Another option is:

When you first compile each file:
  Emit type debug info
  Emit partial variable debug info (IE add pointers to outputted types
but not to locations)
  Emit LTO

When you LTO them all together:
  Do LTO
  Parse and update variable debug info to have locations
  Emit variable debug info

This requires parsing the debug info (in some format, be it DWARF or
some generic format we've made up) so that you can update the variable
info's location.
As a plus, you can easily update the types where you need to.
Unlike the first option, because you understand the debug info, you
can now remove all the duplicate types between units without having to
have the linker do it for you.

Unless  you link in every single frontend to LTO1 (Or move a lot to
the middle end), there is no way to do the following:

When you first compile each file:
  Emit LTO

When you LTO them all together:
  Emit type debug info
  Do LTO
  Emit variable debug info

If you don't want to link the frontends, you could also get away with
moving a lot of junk to the middle end (everything from being able to
distinguish between class and struct to namespaces, the context of
lexical blocks) because debug info outputting uses language specific
nodes all over the place right now.

Unless I've missed something, our least fragile and, IMHO, best option
requires parsing debug info back in.
It is certainly *possible* to get debug info without parsing the debug
info back in.
Then again, I also don't see what the big deal about adding a debug
info parser is.

It's not like they are all that large.

[EMAIL PROTECTED]:/home/dannyb/util/debuginfo] wc -l bytereader.* bytereader-inl.h dwarf2enums.h dwarf2reader*
   40 bytereader.cc
  110 bytereader.h
  118 bytereader-inl.h
  465 dwarf2enums.h
  797 dwarf2reader.cc
  373 dwarf2reader.h
 1903 total

(This includes both a callback-style reader that simply hands you the
things you ask for, as well as one that can read back into a format
much like the one we use during debug info output.)


Re: lto gimple types and debug info

2008-07-27 Thread Daniel Berlin
On Sun, Jul 27, 2008 at 7:41 PM, Daniel Berlin [EMAIL PROTECTED] wrote:
 On Sun, Jul 27, 2008 at 3:10 PM, Mark Mitchell [EMAIL PROTECTED] wrote:
 Daniel Berlin wrote:

 I agree that, at least in principle, it should be possible to emit the
 debug
 info (whether the format is DWARF, Stabs, etc.) once.

 No, you can't.
 You would at least have to emit the variables separate from the types
 (IE emit debug info twice).

 Yes, of course; that's what everyone is talking about, I think.  Emit here
 may also mean cache in memory some place, rather than write to a file.
  It could mean, for example, fill in the data structures we already use for
 types in dwarf2out.c early, and then throw away the front-end type
 information
 Okay, then let us go through the options, and you tell me which you
 are suggesting:

 If you assume LTO does not have access to the front ends, your options
 look something like this:

 When you first compile each file:
  Emit type debug info
  Emit LTO

 When you LTO them all together
  Do LTO
  Emit variable debug info

 Under this option, Emit variable info requires being able to
 reference the types.  If you've lowered the types,  this is quite
 problematic.  So either you get to store label names for the already
 output type debug info with the variables (so you can still reference
 the type you output properly when you read it back in).  This is
 fairly fragile, to be honest.
 Another downside of this is that you can't eliminate duplicate types
 between units because you don't know which types are really the same
 in the debug info. You have to let the linker do it for you.

 Another option is:

 When you first compile each file:
  Emit type debug info
  Emit partial variable debug info (IE add pointers to outputted types
 but not to locations)
  Emit LTO

 When you LTO them all together:
  Do LTO
  Parse and update variable debug info to have locations
  Emit variable debug info

 This requires parsing the debug info (in some format, be it DWARF or
 some generic format we've made up) so that you can update the variable
 info's location.
 As a plus, you can easily update the types where you need to.
 Unlike the first option, because you understand the debug info, you
 can now remove all the duplicate types between units without having to
 have the linker do it for you.

 Unless  you link in every single frontend to LTO1 (Or move a lot to
 the middle end), there is no way to do the following:

 When you first compile each file:
  Emit LTO

 When you LTO them all together:
  Emit type debug info
  Do LTO
  Emit variable debug info

 If you don't want to link the frontends, you could also get away with
 moving a lot of junk to the middle end (everything from being able to
 distinguish between class and struct to namespaces, the context of
 lexical blocks) because debug info outputting uses language specific
 nodes all over the place right now.

Sorry, hit send a little too early.

This option also requires being able to serialize language specific
nodes (or again, you move things like namespaces and other language
specific contexts to the middle end), and to stop throwing this stuff
out at the point we do right now.

I'm not sure what most LTO compilers do.
At least when i was at IBM, XLC simply output the debug info in a
generic format (it was part of the definition of wcode), parsed it
back in, updated it, and transformed it into DWARF/etc at the backend.

This is a variant of the second option above.  Again, i'm not saying
it's the best option, and in fact i'm very curious what most compilers
do.


Re: lto gimple types and debug info

2008-07-27 Thread Daniel Berlin
On Sun, Jul 27, 2008 at 7:48 PM, Kenneth Zadeck
[EMAIL PROTECTED] wrote:
 Daniel Berlin wrote:
 you may of course be right and this is what we will end up doing, but the
 implications for whopr are not good.   The parser is going to have to work
 in lockstep with the type merger

Why?

You don't want to merge the types in the debuginfo.

You only have to parse the debuginfo types that correspond to types
you've changed in some fashion
(and if you don't want to do that you only have to parse to update the
variable info, which means you don't even have to parse or follow the
DW_AT_type references)


Re: lto gimple types and debug info

2008-07-27 Thread Daniel Berlin
On Sun, Jul 27, 2008 at 7:50 PM, Mark Mitchell [EMAIL PROTECTED] wrote:
 Daniel Berlin wrote:

 Then again, I also don't see what the big deal about adding a debug
 info parser is.

 OK, yes, we may need to read debug info back in.

 I don't see it as a big deal, either -- and I also don't see it as locking
 us into DWARF2.  We can presumably read in any formats we are about, so if
 we want to add a stabs reader, we can do that to support stabs platforms.
  And, until we have a stabs reader, we can just drop debug info on those
 platforms when doing LTO.  So, we just have to design LTO with some
 abstraction over debug info in mind.

Yes, this is what i would suggest.

I'll also note that GDB already contains such an abstraction, which
was based on STABS, rather than DWARF.


 In fact, we could probably treat DWARF as canonical, and have a STABS->DWARF
 input filter and a DWARF->STABS output filter, if we like.

Sure. Again, this input filter is basically what GDB does, converting
DWARF -> internal debuginfo abstraction.


Re: lto gimple types and debug info

2008-07-26 Thread Daniel Berlin
On Sat, Jul 26, 2008 at 1:55 PM, Richard Guenther
[EMAIL PROTECTED] wrote:
 On Sat, Jul 26, 2008 at 7:48 PM, David Edelsohn [EMAIL PROTECTED] wrote:
 Kenny 2) Generate the debugging for the types early, and then add an
 Kenny interface that would parse and regenerate the debugging info with
 Kenny the changes.  It is quite likely that this would lock gcc
 Kenny completely into dwarf, but that appears to only be a problem for
 Kenny AIX at this point, so that may not be that much of a problem.

 Mark This is the approach I would suggest.  Dump out the debug info for types
 Mark before you throw away all the information, and then leave it aside.

I do not expect LTO (or WHOPR) to work on AIX -- at least not
 without a lot of work on wrappers around the AIX linker.  However, I do
 not understand why enhancing GCC to support LTO -- when GCC is run without
 enabling LTO -- requires locking GCC completely into DWARF debugging.

The emails propose generating the debugging for types early, which
 has no dependency on DWARF.  If no LTO IPA transformations are performed
 (such as on AIX), there is no need to parse and regenerate the debugging
 info.

If GCC needs to emit sections for each function early to define
 the types for the functions, such as using ELF sections, AIX XCOFF has
 similar functionality in CSECTs, which GCC already generates.  This is a
 requirement for multiple sections, not ELF nor DWARF.

I don't think that the original assertion about locking into DWARF
 is correct and I hope that LTO would not be designed and implemented to
 intentionally break AIX.

 I don't see why we should need to parse the debug info again (which would
 lock us in to whatever we want to parse).  Instead I was suggesting simply to
 emit the debug information for types and declarations from the frontends
 and be done with it.

So how do you plan to keep the debug info up to date in the presence
of structure reordering/etc transforms?

The only way to do this is to read the debug info back in and update
it, or to output pieces of the debug info at different times (which is
an even larger mess - see the mess that is debug info now).


Re: Bootstrap failures on ToT, changes with no ChangeLog entry?

2008-07-24 Thread Daniel Berlin
The easiest way to not delete trunk is to not delete trunk.


On Thu, Jul 24, 2008 at 10:06 AM, Peter Bergner [EMAIL PROTECTED] wrote:
 On Thu, 2008-07-24 at 18:48 +0200, Andreas Schwab wrote:
 Definitely something fishy around that time.  svn log says:

 
 r138082 | meissner | 2008-07-23 13:18:03 +0200 (Mi, 23 Jul 2008) | 1 line

 Add missing ChangeLog from 138075
 
 r138078 | meissner | 2008-07-23 13:06:42 +0200 (Mi, 23 Jul 2008) | 1 line

 undo 138077
 
 r138075 | meissner | 2008-07-23 12:28:06 +0200 (Mi, 23 Jul 2008) | 1 line

 Add ability to set target options (ix86 only) and optimization options on a func
 

 And svn diff says:

 $ svn diff -c138078
 svn: Unable to find repository location for '' in revision 138077
 $ svn diff -c138077
 svn: The location for '' for revision 138077 does not exist in the repository or refers to an unrelated object

 Apparently the repository has some issues with revision 138077.

 Maybe it's related to this #gcc comment:

 meissner [snip]
    However, I did accidentily delete the trunk when I was trying to delete
    the branch, and did a copy from the previous version.  Is there anyway on
    the svn pre-commits to prevent somebody deleting the trunk?

 Peter






Re: lto gimple types and debug info

2008-07-24 Thread Daniel Berlin
On Thu, Jul 24, 2008 at 2:13 PM, Chris Lattner [EMAIL PROTECTED] wrote:
 On Jul 24, 2008, at 10:16 AM, Kenneth Zadeck wrote:

 I thought the whole idea of the LTO project was to keep as much language
 specific type information as late as possible.  If you start stripping out
 useful information about types, it becomes harder to do high level
 optimizations like devirtualization and other source-specific
 transformations.  This is one of the major advantages of LTO, no?

 I think that there is a lot of front end information in the types that
 really is not useful to the middle ends.   That can be stripped away.  I
 certainly do not want to strip anything that could be used for something
 like devirtualization.
 As a (possibly flawed example), the private attribute in c++ is completely
 useless for optimization because it is legal for functions that otherwise
 have no access to a private field to gain access by pointer arithmetic.
  However, in a truly strongly typed language, the private attribute can be
 used to limit the scope of a variable to a single compilation unit.

 Ok, but how do you decide whether something is important or not to keep?
  Why go through the work of removing the information if you may need it
 later?  How much will you really be able to take out?  Is this about
 removing a bit here and a bit there, or is there a large volume of the info
 that can be removed?

I dunno, this seems like a thing you could better figure out by trying
it and seeing where the problems are than by trying to anticipate
every single possible problem
(not that there should be no design, but that it would be better to
start with a design and iterate it than try to figure out perfect
ahead of time).

 -Chris



Re: Anyone/anything still using CVS on gcc.gnu.org?

2008-07-22 Thread Daniel Berlin
Patches welcome :)

On Tue, Jul 22, 2008 at 3:55 AM, Andreas Schwab [EMAIL PROTECTED] wrote:
 Dave Korn [EMAIL PROTECTED] writes:

   It's pretty obvious the moment you read the content of any of the posts
 that it can't be cvs and has to be svn, even more so if you follow one of
 the viewvc links... but it couldn't hurt to make it explicit, I'm sure.

 FWIW, the links still use the viewcvs URL, btw.

 Andreas.

 --
 Andreas Schwab, SuSE Labs, [EMAIL PROTECTED]
 SuSE Linux Products GmbH, Maxfeldstraße 5, 90409 Nürnberg, Germany
 PGP key fingerprint = 58CA 54C7 6D53 942B 1756  01D3 44D5 214B 8276 4ED5
 And now for something completely different.



Re: [tuples] Bootstrap failure building libjava on ppc64

2008-07-14 Thread Daniel Berlin
On Mon, Jul 14, 2008 at 5:22 PM, Diego Novillo [EMAIL PROTECTED] wrote:
 We are failing to build libjava on PPC64 because of this:

 /home/dnovillo/perf/sbox/tuples/local.ppc64/bld/./gcc/xgcc -shared-libgcc
 -B/home/dnovillo/perf/sbox/tuples/local.ppc64/bld/./gcc -nostdinc++
 -L/home/dnovillo/perf/sbox/tuples/local.ppc64/bld/powerpc64-unknown-linux-gnu/libstdc++-v3/src
 -L/home/dnovillo/perf/sbox/tuples/local.ppc64/bld/powerpc64-unknown-linux-gnu/libstdc++-v3/src/.libs
 -B/home/dnovillo/perf/sbox/tuples/local.ppc64/inst/powerpc64-unknown-linux-gnu/bin/
 -B/home/dnovillo/perf/sbox/tuples/local.ppc64/inst/powerpc64-unknown-linux-gnu/lib/
 -isystem /home/dnovillo/perf/sbox/tuples/local.ppc64/inst/powerpc64-unknown-linux-gnu/include
 -isystem /home/dnovillo/perf/sbox/tuples/local.ppc64/inst/powerpc64-unknown-linux-gnu/sys-include
 -DHAVE_CONFIG_H -I. -I/home/dnovillo/perf/sbox/tuples/local.ppc64/src/libjava
 -I./include -I./gcj -I/home/dnovillo/perf/sbox/tuples/local.ppc64/src/libjava
 -Iinclude -I/home/dnovillo/perf/sbox/tuples/local.ppc64/src/libjava/include
 -I/home/dnovillo/perf/sbox/tuples/local.ppc64/src/libjava/classpath/include
 -Iclasspath/include
 -I/home/dnovillo/perf/sbox/tuples/local.ppc64/src/libjava/classpath/native/fdlibm
 -I/home/dnovillo/perf/sbox/tuples/local.ppc64/src/libjava/../boehm-gc/include
 -I../boehm-gc/include
 -I/home/dnovillo/perf/sbox/tuples/local.ppc64/src/libjava/libltdl
 -I/home/dnovillo/perf/sbox/tuples/local.ppc64/src/libjava/libltdl
 -I/home/dnovillo/perf/sbox/tuples/local.ppc64/src/libjava/.././libjava/../gcc
 -I/home/dnovillo/perf/sbox/tuples/local.ppc64/src/libjava/../zlib
 -I/home/dnovillo/perf/sbox/tuples/local.ppc64/src/libjava/../libffi/include
 -I../libffi/include -fno-rtti -fnon-call-exceptions -fdollars-in-identifiers
 -Wswitch-enum -D_FILE_OFFSET_BITS=64 -mminimal-toc -Wextra -Wall -D_GNU_SOURCE
 -DPREFIX=\"/home/dnovillo/perf/sbox/tuples/local.ppc64/inst\"
 -DTOOLEXECLIBDIR=\"/home/dnovillo/perf/sbox/tuples/local.ppc64/inst/lib/../lib64\"
 -DJAVA_HOME=\"/home/dnovillo/perf/sbox/tuples/local.ppc64/inst\"
 -DBOOT_CLASS_PATH=\"/home/dnovillo/perf/sbox/tuples/local.ppc64/inst/share/java/libgcj-4.4.0.jar\"
 -DJAVA_EXT_DIRS=\"/home/dnovillo/perf/sbox/tuples/local.ppc64/inst/share/java/ext\"
 -DGCJ_ENDORSED_DIRS=\"/home/dnovillo/perf/sbox/tuples/local.ppc64/inst/share/java/gcj-endorsed\"
 -DGCJ_VERSIONED_LIBDIR=\"/home/dnovillo/perf/sbox/tuples/local.ppc64/inst/lib/../lib64/gcj-4.4.0-10\"
 -DPATH_SEPARATOR=\":\" -DECJ_JAR_FILE=\"\"
 -DLIBGCJ_DEFAULT_DATABASE=\"/home/dnovillo/perf/sbox/tuples/local.ppc64/inst/lib/../lib64/gcj-4.4.0-10/classmap.db\"
 -DLIBGCJ_DEFAULT_DATABASE_PATH_TAIL=\"gcj-4.4.0-10/classmap.db\"
 -g -O2 -D_GNU_SOURCE -MT stacktrace.lo -MD -MP -MF .deps/stacktrace.Tpo
 -c /home/dnovillo/perf/sbox/tuples/local.ppc64/src/libjava/stacktrace.cc -fPIC -DPIC -o .libs/stacktrace.o
 /home/dnovillo/perf/sbox/tuples/local.ppc64/src/libjava/stacktrace.cc: In static member function
 'static _Unwind_Reason_Code _Jv_StackTrace::UnwindTraceFn(_Unwind_Context*, void*)':
 /home/dnovillo/perf/sbox/tuples/local.ppc64/src/libjava/stacktrace.cc:105: internal compiler error:
 in copy_reference_ops_from_ref, at tree-ssa-sccvn.c:615
 Please submit a full bug report,
 with preprocessed source if appropriate.
 See http://gcc.gnu.org/bugs.html for instructions.
 make[3]: *** [stacktrace.lo] Error 1
 make[3]: *** Waiting for unfinished jobs


This error implies you have something in a tcc_reference operation
(IE load or store) that we've never seen before (or we are improperly
classifying something as a reference when it isn't).


Can you go up a frame and print out the first argument to
copy_reference_ops_from_ref using debug_tree (or its equivalent; the
important thing is to know what operations it contains, not that it
look nice)?
The argument is reused locally during the function, hence the request
to go up a frame :P

 Do you think you could take a look at it.  I could probably be able to
 give you a reduced .ii file if that helps.
 Otherwise, it's easy to reproduce by doing a ppc64 bootstrap on tuples.

 Thanks.  Diego.



Re: Byte permutation optimization

2008-07-13 Thread Daniel Berlin
On Sun, Jul 13, 2008 at 6:29 AM, Andi Kleen [EMAIL PROTECTED] wrote:
 Nils Pipenbrinck [EMAIL PROTECTED] writes:

 Since the codebase is huge I have the feeling that I have overlooked
 something. Does some kind of infrastructure to detect patterns within
 a SSA tree already exists somewhere else?

 FWIW some time ago I wanted to do some other arithmetic optimization
 on expressions and didn't find a nice generic misc transformation
 pass or an generic pattern matcher. Probably one would need to be added.

The closest things we have to misc transform passes like this at the
tree level are forwprop and reassoc.
Probably better, as you say, to add a misc pattern matcher.


Re: Recent libstdc++ regressions

2008-07-12 Thread Daniel Berlin
Okay, I isolated the problem (we are folding based on the wrong type
for constants, so we have a case where 1 << 63 becomes 0 instead of a
very large value).
Working on a patch now.


On Fri, Jul 11, 2008 at 1:56 PM, Daniel Berlin [EMAIL PROTECTED] wrote:
 On Fri, Jul 11, 2008 at 1:17 PM, Paolo Carlini [EMAIL PROTECTED] wrote:
 Hi,

 This is likely to have been my patch.
 I'm minimizing the check_construct_destroy failure right now.
 If someone could give me some idea of what is causing the execution
 failures while i do that, i may be able to fix them faster :)


 Thanks for fixing the check_construct_destroy problem.

 I would suggest concentrating next on the vector/bool failure, which is
 about a standard feature, not an extension. Frankly, I don't have any
 special advice, the testcase is pretty straightforward...

 Working on it now :)



Re: Recent libstdc++ regressions

2008-07-11 Thread Daniel Berlin
On Fri, Jul 11, 2008 at 1:17 PM, Paolo Carlini [EMAIL PROTECTED] wrote:
 Hi,

 This is likely to have been my patch.
 I'm minimizing the check_construct_destroy failure right now.
 If someone could give me some idea of what is causing the execution
 failures while i do that, i may be able to fix them faster :)


 Thanks for fixing the check_construct_destroy problem.

 I would suggest concentrating next on the vector/bool failure, which is
 about a standard feature, not an extension. Frankly, I don't have any
 special advice, the testcase is pretty straightforward...

Working on it now :)


Re: Recent libstdc++ regressions

2008-07-09 Thread Daniel Berlin
This is likely to have been my patch.
I'm minimizing the check_construct_destroy failure right now.
If someone could give me some idea of what is causing the execution
failures while i do that, i may be able to fix them faster :)

On Wed, Jul 9, 2008 at 10:31 AM, Paolo Carlini [EMAIL PROTECTED] wrote:
 Hi,

 just to be sure people notice: there are new (1-2 days max) regressions in 
 libstdc++, caused by compiler changes. See, for example:

  http://gcc.gnu.org/ml/gcc-testresults/2008-07/msg00824.html

 I can look a bit more into this in the next days, but again I pretty much 
 exclude this has been caused by library changes, that's why this heads up...

 Paolo.




Re: gcc-in-cxx branch created

2008-07-03 Thread Daniel Berlin
On Wed, Jul 2, 2008 at 2:30 PM, Hendrik Boom [EMAIL PROTECTED] wrote:
 On Wed, 25 Jun 2008 20:11:56 -0700, Ian Lance Taylor wrote:

 Ivan Levashew [EMAIL PROTECTED] writes:

 Your comment makes little sense in context.  Nobody could claim that
 the existing gengtype code is simple.  Not many people understand how
 it works at all.  In order to support STL containers holding GC
 objects, it will need to be modified.

 I'm sure there is a library of GC-managed components in C++.

 I'm sure there is too.  In gcc we use the same data structures to
 support both GC and PCH.  Switching to a set of C++ objects is likely to
 be a complex and ultimately unrewarding task.


 I don't know what you mean by your reference to the Cyclone variant of
 C, unless you are trying to say something about gcc's use of garbage
 collection.


 Cyclone has many options for memory management. I don't know for sure
 if there is a need for GC in GCC at all.

 I would prefer it if gcc didn't use GC, but it does, and undoing that
 decision will be a long hard task which may never get done.

 Cyclone has a roots not only in C, but also ML. Some techniques like
 pattern matching, aggregates, variadic arrays, tuples looks to be more
 appropriate here than their C++'s metaprogrammed template analogues.

 I guess I don't know Cyclone.  If you are suggesting that we use Cyclone
 instead of C++, I think that is a non-starter.  We need to use a
 well-known widely-supported language, and it must be a language which
 gcc itself supports.

 Ian

 There are a number of languages that would probably be better-suited to
 programming gcc than C or C++, on technical grounds alone.


That's great.
We have more than just technical concerns.

   But if it is a requirement for using a language that everyone
 already knows it, we will forever be doomed to C and its insecure
 brethren.

This has never been listed as a requirement.
It is certainly a consideration.
The main requirement for communities like GCC for something like
changing languages is consensus or at least a large set of active
developers willing to do something and the rest of them willing to not
commit suicide if it happens.
There are secondary requirements like not stalling for years while
moving languages, not losing serious performance, etc.

You are free to propose whatever language you like. It is unlikely you
will get support from any of the active contributors simply saying we
should use X because Y.
The best way to show us the advantages of using some other languages
is to convert some part of GCC to use it and show how much better it
is.

This is a big job, of course.  Then again, tree-ssa was started by
diego as a side project, and gained supporters and helpers as others
decided to spend their time on it.
You may find the same thing, in which case you may find it is not hard
to convince people to move to some other language.
You may find nobody agrees with you, even after seeing parts of gcc in
this new language.
I can guarantee you will find nobody agrees with you if you sit on
the sidelines and do nothing but complain.

--Dan


Re: gcc-in-cxx: Garbage Collecting STL Containers

2008-06-25 Thread Daniel Berlin
Maybe at some point then we should just stop using gengtype and just
hand-write the walkers once.

One of the reasons gengtype exists is because you can't easily have an
abstract interface with member functions that you can force people to
implement in C.

In C++, we can.

This is of course, a large change, but i'm not sure how much more work
it really is than trying to understand gengtype and rewrite it to
properly parse C++/support STL containers.


On Wed, Jun 25, 2008 at 10:49 AM, Tom Tromey [EMAIL PROTECTED] wrote:
 Daniel == Daniel Jacobowitz [EMAIL PROTECTED] writes:

 On Wed, Jun 25, 2008 at 08:35:41AM -0600, Tom Tromey wrote:
 I think most of the needed changes will be in gengtype.  If you aren't
 familiar with what this does, read gcc/doc/gty.texi.

 Daniel Also - I may regret saying this but - doesn't gengtype have a
 Daniel simplistic C parser in it?  How upset is it likely to get on C++
 Daniel input?

 Yeah, it does -- see gengtype-parse.c.
 I haven't done extensive hacking there; I don't really know how upset
 it will be.  I assume it won't work the first time :)

 Tom



Re: gcc-in-cxx branch created

2008-06-19 Thread Daniel Berlin
On Thu, Jun 19, 2008 at 1:26 PM, Ian Lance Taylor [EMAIL PROTECTED] wrote:
 Jens-Michael Hoffmann [EMAIL PROTECTED] writes:

 No.  I've flipped the branch to start compiling the source files in
 gcc with C++.  Unfortunately a number of issues will need to be
 addressed before all the code will compile in C++.  Most of this work
 can and will be contributed back to mainline gcc as well.

 I'll send out a note when everything on the branch compiles in C++.

 Is there a todo list? I would like to contribute to this branch, how can I
 help?

 Well, one approach would be to compile code on the branch.  Where it
 fails, fix it so that it compiles.  Then, if appropriate, move the
 patch back to mainline, test the patch there, and submit it for
 mainline.

 The other major TODO is to work out the details of using STL
 containers with GC allocated objects.  This means teaching gengtype
 how to generate code to traverse STL containers, which would then be
 used during GC.  This is not a task for the faint-hearted.


One way to avoid having gengtype generate the walks is to have a
container base class that implements walking using iterators.   Then
we can have gcc::vector instead of std::vector, etc.

Gengtype would then just have to use this interface when walking
container roots, instead of having to generate its own walking
functions for containers.

Then again, it's not clear this is worth it, since at some point you
will probably want to have a base class for GC'd objects and have the
walking function be a member, instead of what gengtype does now, so
gengtype will have to learn some stuff anyway.


Re: gccbug parser?

2008-06-16 Thread Daniel Berlin
I haven't touched it in well over a year.
I'll look what's up.


On Mon, Jun 16, 2008 at 12:40 PM, Rainer Orth
[EMAIL PROTECTED] wrote:
 Daniel,

 I've submitted a bug report via gccbug about an hour ago, but so far have
 neither received a confirmation of the report nor a bounce.  Is the gccbug
 parser at [EMAIL PROTECTED] still operational?

 Regards.
Rainer

 -
 Rainer Orth, Faculty of Technology, Bielefeld University



Re: [lto] function to DECL associations for WPA repackaging

2008-06-12 Thread Daniel Berlin
On Thu, Jun 12, 2008 at 4:39 PM, Diego Novillo [EMAIL PROTECTED] wrote:
 On 2008-06-12, Kenneth Zadeck [EMAIL PROTECTED] wrote:

  I have no idea how to make sure, in whopr, that function x sees foobar if
 you are going to cherry pick the globals also.

 I'm not sure I see the problem that you are pointing to.  In this program:

 int N;
 int foobar;
 int *bar = &foobar;
 int **foo = &bar;
 int x ()
 {
  int **x = foo;
  return **x;
 }

 All of 'foobar', 'bar' and 'foo' will be in the list of symbols
 referenced by x().

Why do you think foobar will be in the list?

(I'm just curious, i'm not saying you are wrong).


Re: [lto] Streaming out language-specific DECL/TYPEs

2008-06-05 Thread Daniel Berlin
On Thu, Jun 5, 2008 at 5:57 AM, Jan Hubicka [EMAIL PROTECTED] wrote:
 Jan Hubicka wrote:

 Sure if it works, we should be lowering the types during gimplification
 so we don't need to store all this in memory...
 But C++ FE still use its local data later in stuff like thunks, but we
 will need to cgraphize them anyway.

 I agree.  The only use of language-specific DECLs and TYPEs after
 gimplification should be for generating debug information.  And if
 that's already been done, then you shouldn't need it at all.

 For LTO with debug info we will probably need some frontend neutral
 debug info representaiton in longer run, since optimization modifying
 the data types and such will need to compensate.

 We can translate stuff to in-memory dwarf and update it but that would
 limit amount of debug info format we will want to support probably.
DWARF is not exactly memory- or space-efficient, sadly.
Then again, what most other compilers have done is bite the bullet
and define their own debug info representation, then transform that to
DWARF2 at the very end.
I'm not sure we want to do that either :(

