Re: svn problems

2006-05-03 Thread Giovanni Bajo
Mike Stump <[EMAIL PROTECTED]> wrote:

>> Also, with svn 1.4 dev (all i have on this machine)
>
> Cool, fixed in 1.4 dev.  Now I'm curious if it is fixed in 1.3.x.  I
> really want to update, but, the fortunes of a large company with lots
> of revenue are predicated on this stuff actually working.  :-)  Can I
> rely, given that, on 1.4 dev if it isn't fixed in 1.3.x?

Be aware: SVN 1.4 silently upgrades your working copy to a new format the
first time it writes to it. This delivers much better performance with large
working copies like GCC's (IIRC, roughly half the stat operations for an
"svn status", for instance), but it makes the working copy totally
incompatible with SVN 1.3 and earlier versions. And there is no official
downgrade script (even if Google might turn up some unofficial script I saw
around).

Giovanni Bajo



Re: Git and GCC

2007-12-07 Thread Giovanni Bajo
On Fri, 2007-12-07 at 14:14 -0800, Jakub Narebski wrote:

> > >> Is SHA a significant portion of the compute during these repacks?
> > >> I should run oprofile...
> > > SHA1 is almost totally insignificant on x86. It hardly shows up. But
> > > we have a good optimized version there.
> > > zlib tends to be a lot more noticeable (especially the
> > > *uncompression*: it may be faster than compression, but it's done _so_
> > > much more that it totally dominates).
> > 
> > Have you considered alternatives, like:
> > http://www.oberhumer.com/opensource/ucl/
> 
> 
>   As compared to LZO, the UCL algorithms achieve a better compression
>   ratio but *decompression* is a little bit slower. See below for some
>   rough timings.
> 
> 
> It is uncompression speed that is more important, because it is used
> much more often.

I know, but the point is not which is the fastest, but whether it's fast
enough to drop off the profiles. I think UCL is fast enough, since it's
still several times faster than zlib. Anyway, LZO is GPL too, so why not
consider it as well. They are good libraries.
-- 
Giovanni Bajo



Re: Git and GCC

2007-12-07 Thread Giovanni Bajo

On 12/7/2007 6:23 PM, Linus Torvalds wrote:


Is SHA a significant portion of the compute during these repacks?
I should run oprofile...


SHA1 is almost totally insignificant on x86. It hardly shows up. But we 
have a good optimized version there.


zlib tends to be a lot more noticeable (especially the uncompression: it 
may be faster than compression, but it's done _so_ much more that it 
totally dominates).


Have you considered alternatives, like:
http://www.oberhumer.com/opensource/ucl/
--
Giovanni Bajo



Re: Inlining and estimate_num_insns

2005-02-27 Thread Giovanni Bajo
Steven Bosscher <[EMAIL PROTECTED]> wrote:

>> In the end we surely want to watch CiSBE and SPEC testers.
>
> Maybe so, but your timings already show this is pretty unacceptable.


I strongly object to this. Benchmarks show that we are doing *much* worse at
inlining in 4.0, and we are seeing bad regressions in code generation because
of that. Richard's patches simply restore the 3.4 behaviour and fix this
*bad* regression. If compile time goes up, then sorry, it's because of other
things. Maybe tree-ssa, who knows. If we got good compile times because an
optimization was accidentally disabled, keeping it disabled is not the right
thing to do.

-O3 means we want to try to inline everything, and that will cost something.
Keeping the mainline broken in this regard because we save some compilation
time at -O3 is nonsense. Otherwise, please give me -O4 and I'll let you use
this broken -O3 which does nothing. And I am only half joking here. If saving
some compilation time at -O3 and losing so much in code generation is that
important to you, then please allow me to have a
flag -finline-functions-non-broken, and -O4.

Personally, I find Richard's patches so clearly correct in fixing this big
regression that I am surprised they meet so much opposition. I kindly ask you
to reconsider your position.

Giovanni Bajo



Re: Inlining and estimate_num_insns

2005-02-27 Thread Giovanni Bajo
Mark Mitchell <[EMAIL PROTECTED]> wrote:

> 1. The C++ community has split into two sub-communities.  One is very
> heavily focused on template metaprogramming.  The other is doing more
> "traditional" object-oriented programming.

I would postulate that, most of the time, the former group are those writing
*libraries*, while the latter group are those using the libraries to produce
applications.

> The kinds of programs
> written by these two communities are drastically different, even
> though, obviously, there is overlap between the two, mixed programs,
> etc.  But, programmers in one camp tend to think that their style is
> "very common" while that of the other camp is "fancy" or
> "complicated".  The truth is that both are now mainstream uses of
> C++, and we need to try to work well with both kinds of code.

Let's remember also that template metaprogramming is used in v3 itself (and
much more so once TR1 is fully merged), in Boost, in uBLAS, and many other
libraries. There are thousands of applications written in the traditional
object-oriented paradigm that still make heavy use of the above libraries.

> 2. I don't think we should make any changes at this point without a
> relatively comprehensive benchmarking effort.  That means more than a
> few programs, compiled on multiple architectures, with measurements of
> both compile-time and runtime performance.  I'd suggest that tramp3d
> is fine, but I'd also suggesting looking at KDE, Boost, and
> libstdc++-v3. I'd try to measure total compilation time, total code
> size, and some measure of performance.  (Since C++ inlining should
> generally -- but not always, of course -- result in smaller code due
> to removal of calls, code size is probably a rough proxy of
> performance.)

Richard has already shown how his patches help on some of the benchmarks you
suggested. They also fix a regression, since 3.4 was so much better in this
regard. Of course, compilation time goes up somewhat because we are doing
more work now, but the compile-time *regression* is probably caused by
something else and is merely uncovered by Richard's patches. We have also
received reports from other people saying that Richard's patches help.

I'm not sure we should hold up such a patch to save a few percent of
compilation time at -O3. -O3 (compared to -O2) means "try to inline harder",
and Richard's patches do exactly that. I'm sure whoever uses -O3 doesn't mind
us doing better at what we are asked to do. And if compilation time is
critical for them, they can still use -O2.

Also, I would like to see detailed reports of where the compilation time goes
after the patch. I suspect we would end up blaming other optimizers (probably
some of the new ones) for the compile-time regression.

Giovanni Bajo



Re: Extension compatibility policy

2005-02-28 Thread Giovanni Bajo
Marc Espie <[EMAIL PROTECTED]> wrote:

> Personally, I would even say that it would be *great* if gcc would
> start warning if you use any extension, unless you explicitly disable
> those warnings... (except for __extension__ in header files, and then
> I've stumbled upon situations where it hurts still).


That is *exactly* what -pedantic is meant to do. If it does not, patches are
welcome.

Giovanni Bajo



Re: Extension compatibility policy

2005-02-28 Thread Giovanni Bajo
Mike Hearn <[EMAIL PROTECTED]> wrote:

> As recent releases have broken more and more code, I would like to
> understand what GCCs policy on source compatibility is. Sometimes the
> code was buggy, in this particular case GCC simply decided to pull an
> extension it has offered for years. Is it documented anywhere? Are
> there any more planned breakages? How do you make the cost:benefit
> judgement call? Are there any guidelines you follow?

First, you must understand the difference between a real extension and
something that has always happened to work with GCC but was never supposed
to. In the former group you have things which are well documented in the GCC
manual, are disabled by the -pedantic option, and are probably widely known
(like the many GCC extensions that made it into C99 in some form, say
designated initializers). In the latter group are all those mostly obscure
cases where a piece of code is illegal by the ISO standards but GCC used to
accept it somehow.

The former group is handled with care, more often than not. Deprecations
usually involve a full release cycle (which is around 12 months), and they are
notified in the release notes (for instance, see
http://gcc.gnu.org/gcc-3.4/changes.html for the release notes of GCC 3.4).
Sometimes, deprecation time can be increased if users complain enough.

The latter group basically consists of bugs in the GCC compiler. "Accepting a
piece of code which is invalid" is the mirror image of "rejecting a piece of
code which is valid", and I am sure everybody agrees that the latter is a
bug. In fact, in Bugzilla we have two keywords to mark these bugs:
"accepts-invalid" and "rejects-valid". We cannot really have a deprecation
cycle for fixing bugs: GCC would be moving even slower than it does (or used
to). Notice that this situation is most common with C++ code, where we used
to have way too many "accepts-invalid" bugs. Every once in a while people
jump in complaining about the lack of a deprecation cycle for having fixed
this or that bug which greatly affects their codebases, but there is really
nothing we can do about that.

In your __FUNCTION__ case, we are basically in the latter group. __FUNCTION__
is a well-documented extension to C90 (it's part of C99 in some form now, as
__func__), and it was never documented to be a macro. The fact that it was
named like a macro and worked like a macro for years is indeed unfortunate.
Notwithstanding that, the GCC maintainers acknowledged its widespread use,
and the bug of it working like a macro was deprecated for around 3 years. We
cannot do more than that.

GCC 2.95 is really old; people *should* expect trouble when upgrading to 3.4
or 4.0. I would be *really* surprised to hear of a transition from 2.95 to
4.0 which did not require any modification (for non-trivial codebases, of
course).

Giovanni Bajo



Re: http://gcc.gnu.org/gcc-3.4/changes.html

2005-02-28 Thread Giovanni Bajo
Johan Bergman (KI/EAB) <[EMAIL PROTECTED]> wrote:

> I propose the following change to
> http://gcc.gnu.org/gcc-3.4/changes.html. (The "alternative solution"
> was proposed by myself a while ago,
> but now I have realized that it is not backwards compatible.)

Please post the patch as a unified diff, and attach it to the mail rather
than copying it inline.

Giovanni Bajo



Re: invoke.texi: reference to web page that does not exist

2005-03-05 Thread Giovanni Bajo
Devang Patel <[EMAIL PROTECTED]> wrote:

> invoke.texi mentions following URL for further info on visibility
> #pragmas.
>http://www.nedprod.com/programs/gccvisibility.html
> but it does not exist.


The page was up and working no longer than two weeks ago. Niall Douglas (whom
I'm CC:ing) is the owner of that page.
Niall, are you planning to get the site back up and running soon? If not, can
we get a copy of gccvisibility.html so that we can extract the text and host
it locally?

Thanks
Giovanni Bajo



Re: [PING] [PATCH]: New Port(MAXQ)

2005-03-05 Thread Giovanni Bajo
Konark Goel, Noida <[EMAIL PROTECTED]> wrote:

>  We submitted a patch for a new port for MAXQ architecture.
>  The original patch is
> (http://gcc.gnu.org/ml/gcc-patches/2004-12/msg02138.html)
>  The revised patch after incorporating few comments is
> (http://gcc.gnu.org/ml/gcc-patches/2005-01/msg00521.html)
>  Please let us know if that patch is acceptable.

Can you also post the results of a testsuite run to the testresults mailing
list, to make sure the port is somewhat stable?
Has the copyright assignment to the FSF been filed, either for the whole
company or for the authors of the port?

Mark, how does this port fit into the 4.1 plan? Since it is totally
self-contained (it does not modify anything in GCC proper beyond the usual
configuration additions), I think it could be merged at any time, assuming a
global maintainer can spend some time on a review. To be fair, this patch was
posted last December with mainline in Stage 3 (and got almost no comments),
so if the patch is approved, it could be added to 4.0 too (given that it
poses absolutely zero risk).

Giovanni Bajo



Using fold() in frontends

2005-03-07 Thread Giovanni Bajo
Mark Mitchell <[EMAIL PROTECTED]> wrote:

>> If the parser can't be used to provide useful context, we could
>> potentially postpone the calling of "fold" for C++ COND_EXPRs during
>> parsing, but call "fold" on COND_EXPRs while lowering to gimple, where
>> we'll always know whether we're expecting lvalues/rvalues.
>
> We should do that for *all* calls to fold.  A patch to do that would be
> most welcome!


It looks like the general consensus is that we should not use fold() anymore
in the frontends (I consider the gimplifier part of the middle-end already,
as it is mostly language-independent). I know the Java people are also trying
to remove usage of fold(), since it produces simplifications which are not
allowed in the Java semantics of constant expressions.

But how do you propose to handle the fact that the C++ FE needs to fold
constant expressions (in the ISO C++ sense of 'constant expression')? For
instance, we need to fold "1+1" into "2" well before gimplification. Should
a part of fold() be extracted and duplicated in the C++ frontend?
-- 
Giovanni Bajo



Re: [Bug c++/19199] [3.3/3.4/4.0/4.1 Regression] Wrong warning about returning a reference to a temporary

2005-03-07 Thread Giovanni Bajo
Steven Bosscher <[EMAIL PROTECTED]> wrote:

>> The way I think about this is that G++ has long supported the GNU
>> min/max expression extension -- and it's long been broken.  Over the
>> years, I've fielded several bug reports about that extension, and we've
>> gradually cleaned it up, but mostly it's just been neglected.
>
> Indeed, not so very long ago I helped pin down a bug in the RTL
> expanders for MIN_EXPR and MAX_EXPR (I think that Roger fixed it) that
> made it impossible for that extension to work reliably.  The reason we
> found it is that the tree optimizers produce a few (and hopefully they
> will produce more of them soon), but the bug in the expanders had been
> there since the dawn of time.  And nobody noticed it before.

Well, that sounds largely implausible. Can you point out exactly which bug
you are talking about? I know for a fact that the extension itself has always
worked for basic rvalue usage with basic types. On the other hand, I would
not be surprised if some more complex usage of it used to be (or still is)
broken: weird lvalue contexts, usage in templates, operator overloading, or
similar.
-- 
Giovanni Bajo



Deprecating min/max extension in C++

2005-03-08 Thread Giovanni Bajo
Andrew Pinski <[EMAIL PROTECTED]> wrote:

>> Well, that sounds largely impossible. Can you point exactly which bug
>> are
>> you talking of? I know for a fact that the extension itself has always
>> worked for basic rvalue usage, with basic types. Instead, I would not
>> be
>> surprised if some more complex usage of it used to be (or still is)
>> broken,
>> like weird lvalue contexts, usage in templates, operator overloading or
>> similar.
>
> Yes this was PR 19068 and bug 18548.


Thanks. Nonetheless, both are regressions, and both show a rather complex
situation involving pointer tricks. My statement that basic usage of the
extension has always worked still holds.
-- 
Giovanni Bajo



Deprecating min/max extension in C++

2005-03-08 Thread Giovanni Bajo
Mark Mitchell <[EMAIL PROTECTED]> wrote:

> IMO, if these are C++-only, it's relatively easy to deprecate these
> extension -- but I'd like to hear from Jason and Nathan, and also the
> user community before we do that.  Of all the extensions we've had, this
> one really hasn't been that problematic.

I would prefer them to stay. My reasons:

1) std::min() and std::max() are not exact replacements. For instance, you
cannot do std::min(3, 4.0f) because the arguments are of different types.
Also, you cannot use references to non-const types as arguments. The min/max
extensions do not suffer from these problems (I consider the former very
problematic, and the latter just annoying).

2) The min/max assignments are very useful. I'm speaking of the
(undocumented?) ">?=" and "<?=" operators. In math code, it is common to
write something like:

for (int i=0;i<100;i++)
 max_computed >?= Compute(i) * factor;

instead of:

for (int i=0;i<100;i++)
{
 float cur = Compute(i) * factor;
 if (max_computed < cur)
max_computed = cur;
}

I find the former more compact, more expressive and much easier to read (of
course, you have to know the syntax). I find it also less error-prone, since
there is no duplication, nor the use of a temporary variable to avoid
repeating side effects. I suppose that if we drop ">?" and "<?", we should
drop ">?=" and "<?=" as well.

Re: Deprecating min/max extension in C++

2005-03-09 Thread Giovanni Bajo
Gabriel Dos Reis <[EMAIL PROTECTED]> wrote:

>>> IMO, if these are C++-only, it's relatively easy to deprecate these
>>> extension -- but I'd like to hear from Jason and Nathan, and also the
>>> user community before we do that.  Of all the extensions we've had, this
>>> one really hasn't been that problematic.
>>
>> I would prefer them to stay. My reasons:
>>
>> 1) std::min() and std::max() are not exact replacements. For instance,
>> you cannot do std::min(3, 4.0f) because the arguments are of different
>> type.
>
> That is a rather weak argument.  What is the type of the argument if
> it were possible?

"float" of course, like we do for 3 + 4.0f.

> If float, why can't you write 3f?  If int, why can't
> you write 4?

Because the example was just an example. In real code, "3" is probably a
variable of integer type, and "4.0f" is probably a variable of floating-point
type.
Going down this road, why don't we propose to deprecate "+" in C++ because
std::plus can be used as a replacement? I'm sure people won't be happy to be
unable to add an integer and a long.

> With the ominpresence of function templates and the
> rather picky template-argument deduction process, how useful is that
> fuzzy-typed constructs with rather dubious semantics and implementation?

It is useful to me. It has been, many times. It saved me from writing casts
which would otherwise be necessary. I don't think I should have to explain to
you the domain of the problem, why that single variable was float or int, and
so on. My statement is that std::min() is *not* an exact replacement for <?.

> I would like to see those extensions deprecated and gone with no return.

I would like to propose them for standardization. It is just too bad I don't
have the time to prepare the papers.
-- 
Giovanni Bajo



Re: Deprecating min/max extension in C++

2005-03-09 Thread Giovanni Bajo
Gabriel Dos Reis <[EMAIL PROTECTED]> wrote:

>> Because the example was just an example. In real code, "3" is probably a
>> variable of integer type, and "4.0f" is probably a variable of floating
>> point type.
>
> Which we have not seen yet, for the purpose of assessing the purpoted
> usefulness in real codes.  Arguments are easily made with xyz examples.


Are you disputing the usefulness of promotion rules with operators? If you
agree that promotion is useful, I cannot see why it should not be for
min/max operators.
-- 
Giovanni Bajo



Re: Deprecating min/max extension in C++

2005-03-09 Thread Giovanni Bajo
Gabriel Dos Reis <[EMAIL PROTECTED]> wrote:

>>>> Because the example was just an example. In real code, "3" is
>>>> probably a variable of integer type, and "4.0f" is probably a
>>>> variable of floating point type.
>>>
>>> Which we have not seen yet, for the purpose of assessing the purpoted
>>> usefulness in real codes.  Arguments are easily made with xyz examples.
>>
>>
>> Are you disputing the usefulness of promotion rules with operators? If
>> you agree that promotion is useful, I cannot see why it should not be
>> for min/max operators.
>
> I'm disputing the extensions <? and >? when we have a standard
> min() and max().

... which do not handle promotions. So you do not consider it useful to have
min/max operators with promotion (so that they would work exactly like any
other operator), just because there is a cheap version without promotion. And
my statement that min() and max() are not exact replacements still stands.

We'll have to agree to disagree, as it happens.

Since there are no exact replacements (especially for the min/max assignment
operators), and the extensions are definitely not so troublesome, I would
like them to stay.
-- 
Giovanni Bajo



Re: Bad link on webpage

2005-03-10 Thread Giovanni Bajo
Marcus <[EMAIL PROTECTED]> wrote:

> On the page, http://gcc.gnu.org/gcc-4.0/changes.html, the link
> http://www.nedprod.com/programs/gccvisibility.html (near the end of the
> document) contains
>
> ``DOMAIN HOLDING PAGE
>
> This is a holding page for a domain registered by Total Registrations
> on behalf of a customer. At this present time the owner of the domain
> name has simply registered the name and has not chosen to host a web
> site or receive email through it.''


I'm taking care of this. If the site does not come back soon, I'll remove the
link.
I would still like to get hold of the information that used to be on that
page, because it was in fact very useful.
-- 
Giovanni Bajo



Re: Bad link on webpage

2005-03-11 Thread Giovanni Bajo
James E Wilson <[EMAIL PROTECTED]> wrote:

>> I would like to still get hold of the information that used to be
>> present at that page because they were in fact very useful.
>
> This is what web search engines are for.  Going to yahoo, typing gcc
> visibility, and then clicking on the "cached" link for the first search
> result, I get a copy of the page.
>
http://216.109.117.135/search/cache?p=gcc+visibility&sm=Yahoo%21+Search&toggle=1&ei=UTF-8&u=www.nedprod.com/programs/gccvisibility.html&w=gcc+visibility&d=C31F2BCCE8&icp=1&.intl=us


Ah, thank you! It's obvious, but for some reason I tried webarchive.com and
Google didn't occur to me! :) I'll put this into the Wiki, and update the
webpage.

(Notice: I use the Wiki because otherwise I'd have to wait weeks for
approval, and I have neither the time nor the will to push a patch for weeks
just for this. I believe we should either be more liberal with the contents
of our website, or get more reviewers. For instance, we could think of a
policy where a www patch can be applied after 48 hours if nobody objects.)
-- 
Giovanni Bajo



Re: Bad link on webpage

2005-03-11 Thread Giovanni Bajo
James E Wilson <[EMAIL PROTECTED]> wrote:

>> I would like to still get hold of the information that used to be
>> present at that page because they were in fact very useful.
>
> This is what web search engines are for.  Going to yahoo, typing gcc
> visibility, and then clicking on the "cached" link for the first search
> result, I get a copy of the page.
>
http://216.109.117.135/search/cache?p=gcc+visibility&sm=Yahoo%21+Search&toggle=1&ei=UTF-8&u=www.nedprod.com/programs/gccvisibility.html&w=gcc+visibility&d=C31F2BCCE8&icp=1&.intl=us

OK, done. Link: http://gcc.gnu.org/wiki/Visibility.

Can someone patch changes.html and restore the link in the documentation
that was deleted a few days ago?
-- 
Giovanni Bajo



Revamp WWW review process?

2005-03-11 Thread Giovanni Bajo
Joseph S. Myers <[EMAIL PROTECTED]> wrote:

>> (notice: I use the Wiki because otherwise I'll have to wait weeks for
>> the approval, and I don't have the time nor the willing of pushing a
>> patch for weeks just for this. I believe we should either be more
>> liberal with the contents of our website, or get more reviewers. For
>> instance, we could think of a policy where a www patch can be applied
>> after 48hrs if nobody says otherwise).
>
> You may not have noticed that Gerald is away until 13 March.  Otherwise
> website patches do get reviewed quickly.


I think they are not reviewed quickly enough anyway. I do not have evidence
(statistics) to bring forward, so feel free to ignore my opinion. I know for
sure that some of my www patches had to be pinged, or went unreviewed for
weeks. I'm not trying to accuse Gerald; I just believe that we should find a
faster path for getting www patches in. I find it telling that Dorit had to
ping a patch describing changes to the vectorizer twice, for instance.

A problem is that a technical www patch is often deferred to a maintainer of
the specific area of the compiler. For instance, if I want to write something
for the site which speaks about C++, I need a buy-in from both Gerald and a
C++ maintainer. This, more often than not, requires pings and long waits.
Getting a patch which basically explains a change already documented in
Bugzilla into changes.html can easily take a week, between Gerald and
Mark/Jason/Nathan. With such an overhead, I'm not surprised that changes.html
is always incomplete, and developers often update it only after explicit
prompts from the bugmasters.

My personal feeling is that the success of the Wiki comes from the fact that
it does not require review, rather than from the Wiki syntax being somewhat
lighter than HTML. The 48-hour rule I propose seems sensible to me. The worst
that can happen is that something incorrect goes live on the site, and it'll
eventually get fixed when someone reads the patch a few days later.
-- 
Giovanni Bajo



Re: documentation on writing testcases?

2005-03-11 Thread Giovanni Bajo
Per Bothner <[EMAIL PROTECTED]> wrote:

> The general frustration is: where is dg-error documented?
> I looked in:
> - the README* files in gcc/testsuite and in gcc.dg;
> - the Test Suites chapter of the internals manual
> (which mentions "special idioms" but not the basics);
> - the "Testsuite Conventions" of codingconventions.html;
> - contribute.html;
> - install/test.html;
> - and of course Google.
>
> In *none* of these did I find documentation on how to write or
> understand a test-case, or a link to such documentation.  I'm inclined
> to think that most of these above places should have such a link.


I had provided such a patch in the past, but it was rejected:
http://gcc.gnu.org/ml/gcc-patches/2004-06/msg00313.html

I never had time to split it, rewrite it in Texinfo, and update it as
requested. Janis recently incorporated some parts into the internals manual,
but I believe we still need to provide a "tutorial for GCC testcase writing".
As I'm trying to explain in another thread, I believe we are being way too
picky about www/documentation patches.

For instance, my patch could have been committed immediately and refined over
time. In fact, I should find a couple of hours to add it to the Wiki.
-- 
Giovanni Bajo



Re: Documentation on writing testcases now in GCC Wiki

2005-03-11 Thread Giovanni Bajo
Michael Cieslinski <[EMAIL PROTECTED]> wrote:

> I formatted the information from Giovanni Bajo's patch and put it in
> the Wiki: http://gcc.gnu.org/wiki/HowToPrepareATestcase

Many thanks!
-- 
Giovanni Bajo


Re: Questions about trampolines

2005-03-14 Thread Giovanni Bajo
Robert Dewar <[EMAIL PROTECTED]> wrote:

>>> Well as I said above, trampolines or an equivalent are currently
>>> critically needed by some front ends (and of course by anyone using
>>> the (very useful IMO) extension of nested functions in C).
>>
>> This is your opinion, but I've yet to find an actual piece of code in a
>> real project that uses that extension.
>
> I have certainly seen it used, but you may well be right that it is
> seldom used. It is certainly reasonable to consider removing this
> extension from C and C++. Anyone using that feature? Or know anyone
> who is.

Last time this was discussed on gcc@, there was an agreement that since we
have to support trampolines for Ada & co. anyway, we may as well keep the
extension in C: it allows easier testcase reductions (as most developers do
not understand Ada) and lets the feature be tested even within gcc.dg.
-- 
Giovanni Bajo



Re: Merging calls to `abort'

2005-03-15 Thread Giovanni Bajo
Richard Kenner <[EMAIL PROTECTED]> wrote:

> However, the idea that users could search for previous bug
> reports is new to me.  That might be an additional reason for
> using fancy_abort.
>
> It's not just users, but first level tech-support.  There, it can help
> in suggesting a workaround to the user and knowing which file the
> abort is in may help assign the bug to the appropriate developer.

Absolutely true. As a GCC bugmaster, I can confirm that receiving bug reports
with a clear indication of the file name and the function name is incredibly
useful. Not only does it let us find duplicates in seconds, or assign recent
regressions to the responsible party in one shot, but it also provides an
immediate indication of what kind of bug it is. Otherwise, we would be forced
to run GDB on the testcases just to categorize a bug.

The abuse of abort() in GNU software is unfortunate. I agree with Mark when he
says that a naked abort should be used only after useful information has
already been printed to the user. In fact, we are in the middle of a conversion
of the whole GCC codebase from abort() to assert() (even if our abort() is a
fancy_abort() in disguise!).

Giovanni Bajo



Re: Why aren't assignment operators inherited automatically?

2005-03-16 Thread Giovanni Bajo
Topi Maenpaa <[EMAIL PROTECTED]> wrote:

> In short, anything inherited from the base class can be used as
> expected, except the assignment operator. What's the deal? I'm doing
> this on Mandrake 
> 10.1, gcc 3.4.1, if that matters.


This is what the standard says.

Giovanni Bajo



Re: Newlib _ctype_ alias kludge now invalid due to PR middle-end/15700 fix.

2005-03-16 Thread Giovanni Bajo
Hans-Peter Nilsson <[EMAIL PROTECTED]> wrote:

> So, the previously-questionable newlib alias-to-offset-in-table
> kludge is finally judged invalid.  This is a heads-up for newlib
> users.  IMHO it's not a GCC bug, though there's surely going to
> be some commotion.  Maybe a NEWS item is called for, I dunno.


It will be in NEWS, since RTH has already updated
http://gcc.gnu.org/gcc-4.0/changes.html. I hope newlib will be fixed promptly.

Giovanni Bajo



Re: Compiler chokes on a simple template - why?

2005-03-17 Thread Giovanni Bajo
Topi Maenpaa <[EMAIL PROTECTED]> wrote:

> ---
> template <class T> class A
> {
> public:
>   template <class U> void test(T value) {}
> };
>
> template <class T> void test2(A<T>& a, T val)
> {
>   a.test<int>(val);
> }
>
> int main()
> {
>   A<int> a;
>   a.test<int>(1); //works fine
> }
> ---

This is ill-formed. You need to write:

a.template test<int>(val);

because 'a' is a dependent name.


> The funny thing is that if I change the name of the "test2" function
> to "test", everything is OK. The compiler complains only if the
> functions have different names. Why does the name matter?

This is surely a bug. Would you please file a bug report about this?

> The code compiles if "test2" is not a template function. Furthermore,
> calling A<int>::test directly from main rather than through the
> template function works fine.

This is correct: once "test2" is no longer a template function, 'a' is no
longer a dependent name, and the 'template' keyword is not needed to
disambiguate for the parser.

Giovanni Bajo



Re: Known regression ? gcc-4.0.0-20050312 FPE's on C++

2005-03-18 Thread Giovanni Bajo
John Vickers <[EMAIL PROTECTED]> wrote:

> I can have another go without the "--disable-checking" if that's
> likely to help. Anything else you'd like in the bug report ?

Please submit the smallest preprocessed source you can machine-generate which
shows the bug.

Thanks!

Giovanni Bajo



Re: reload-branch created (was: What to do with new-ra for GCC 4.0)

2005-03-18 Thread Giovanni Bajo
Bernd Schmidt <[EMAIL PROTECTED]> wrote:

>>> It might also be easier for those of us who want to play with the
>>> code, without having to find a suitable sync point between the
>>> patch and
>>> mainline sources.
>>
>> I have created a new branch, "reload-branch", on which I'm going to
>> check in these changes.  Once I've done that, I'll commit the patch
>> below to mention the branch in the documentation.

Thanks! You should also mention in cvs.html that you are the maintainer of the
branch (together with Ulrich, maybe?).

What is your plan for this branch? Is there more code refactoring/rewriting
planned, or are you just going to give it a wider testing and fix fallout bugs,
in preparation for a merge?

Giovanni Bajo



Re: AVR indirect_jump addresses limited to 16 bits

2005-03-19 Thread Giovanni Bajo
Paul Schlie <[EMAIL PROTECTED]> wrote:

> - Sorry, I'm confused; can you give me an example of legal C
>   expression specifying an indirect jump to an arbitrary location
> within a function?


It is possible in GNU C at least:

int foo(int dest)
{
   __label__ l1, l2, l3;
   void *lb[] = { &&l1, &&l2, &&l3 };
   int x = 0;

   goto *lb[dest];

l1:
   x += 1;
l2:
   x += 1;
l3:
   x += 1;
   return x;
}

I would not design a backend in such a way that this feature becomes impossible
to support.

Giovanni Bajo



Re: maybe a gcc bug

2005-03-25 Thread Giovanni Bajo
zouq <[EMAIL PROTECTED]> wrote:

> /testcom.c
> int main (void)
> {
> int i,j;
> int u[100][100], v[100][100],
> p[100][100], unew[100][100],
> vnew[100][100],pnew[100][100],
> uold[100][100],vold[100][100],
> pold[100][100],cu[100][100],
> cv[100][100],z[100][100],h[100][100],psi[100][100];
>
>  int tdts8=2;
>  int tdtsdx=3;
>  int tdtsdy=4;
>
>  for (i=0;i<100;i++)
>    for (j=0;j<100;j++)
>    {
>      unew[i+1][j]=uold[i+1][j]+tdts8*(z[i+1][j]+z[i+1][j])*
>        (cv[i+1][j+1]+cv[i][j+1]+cv[i][j]+cv[i+1][j])
>        -tdtsdx*(h[i+1][j]-h[i][j]);
>      /*vnew[i][j+1]=vold[i][j+1]-tdts8*(z[i+1][j+1]+z[i][j+1])
>        *(cu[i+1][j+1]+cu[i][j+1]+cu[i][j]+cu[i+1][j])
>        -tdtsdy*(h[i][j+1]-h[i][j]);*/
>      /*pnew[i][j]=pold[i][j]-tdtsdx*(cu[i+1][j]-cu[i][j])-
>        tdtsdy*(cv[i][j+1]-cv[i][j]);*/
>    }
>
>  for (i=0;i<100;i++)
>    for (j=0;j<100;j++)
>      printf ("%d\n%d\n%d\n",unew[i][j], vnew[i][j], pnew[i][j]);
>
>  return 1;
> }
>
> first i made gcc-4.1-20050320 a cross-compiler for powerpc.
> when i compile the above program,  it goes like this:
>
> testcom.c:34: internal compiler error: in schedule_insns, at sched-rgn.c:2549
>
> who can tell me why?
> why can it bring compiler error?

Any internal compiler error, *whatever* source file triggers it, is a bug in
GCC. Would you please submit this as a proper bug report in Bugzilla? Read the
instructions at http://gcc.gnu.org/bugs.html.

Thanks
-- 
Giovanni Bajo



Re: GCC 4.1 bootstrap failed at ia64-*-linux

2005-03-31 Thread Giovanni Bajo
James E Wilson <[EMAIL PROTECTED]> wrote:

>> IA64 bootstrap failed at abi_check stage reporting undefined
>> references from libstdc++ (see log at the bottom).
> 
> This seems indirectly related to bug 20964.  Mark's proposed fix to
> stop building abi-check at bootstrap time means the IA-64 bootstrap
> should now succeed.  This testcase will still be broken, but now it
> will only be a make check failure instead of a make bootstrap failure.

Typo, you meant PR 20694.

Giovanni Bajo



Re: 4.0 regression: g++ class layout on PPC32 has changed

2005-04-04 Thread Giovanni Bajo
Andrew Haley <[EMAIL PROTECTED]> wrote:

> public:
>   long long __attribute__((aligned(__alignof__( ::java::lang::Object )))) l;

I don't recall the exact details, but I have fixed a couple of bugs about
the use of __alignof__ and attribute aligned on members of classes (maybe
templates only?). Are you positive this attribute declaration *does* have an
effect at all in 3.4?
-- 
Giovanni Bajo



Re: 4.0 regression: g++ class layout on PPC32 has changed

2005-04-04 Thread Giovanni Bajo
Andrew Haley <[EMAIL PROTECTED]> wrote:

>  > > public:
>  > >   long long __attribute__((aligned(__alignof__( ::java::lang::Object )))) l;
>  >
>  > I don't recall the exact details, but I have fixed a couple of bugs about
>  > the use of __alignof__ and attribute aligned on members of classes (maybe
>  > templates only?). Are you positive this attribute declaration *does* have
>  > an effect at all in 3.4?
>
> It seems to be the other way around.
>
> The attribute declaration does have an effect in 3.4 -- the offset
> changes from 52 to 56 -- but it does not have any effect in 4.0, and
> this is what breaks my code.


Is __alignof__( ::java::lang::Object ) the same under 3.4 and 4.0 in the
first place?

My fixes were to parse more attributes, not less. So this has to be
unrelated to my patches. I suggest you file a bugreport in Bugzilla, and
mark it as ABI breaking. It should get fixed before 4.0 gets out.
-- 
Giovanni Bajo



Re: GCC 4.0 Status Report (2005-04-05)

2005-04-04 Thread Giovanni Bajo
Mark Mitchell <[EMAIL PROTECTED]> wrote:

>> I've attached a revised summary of the critical bugs open against
>> 4.0. The good news is that there are fewer than last week.

Earlier today, Andrew Haley posted a small C++ snippet showing an ABI change
affecting gcj on PPC32:
http://gcc.gnu.org/ml/gcc/2005-04/msg00139.html

I hope he'll open a PR soon, but you probably want to consider this for 4.0.

Giovanni Bajo



Re: GCC 4.0 Status Report (2005-04-05)

2005-04-05 Thread Giovanni Bajo
Mark Mitchell <[EMAIL PROTECTED]> wrote:

>>> Earlier today, Andrew Haley posted a small C++ snippet showing an ABI
>>> change
>>> affecting gcj on PPC32:
>>> http://gcc.gnu.org/ml/gcc/2005-04/msg00139.html
>>> 
>>> I hope he'll open a PR soon, but you probably want to consider this
>>> for 4.0.
>> 
>> 
> Note it also affects all targets, as shown by my testcase.
> 
> Please put this into a PR.

This is now PR 20763. I took the liberty of CC'ing you in it.
-- 
Giovanni Bajo


Re: Q: C++ FE emitting assignments to global read-only symbols?

2005-04-08 Thread Giovanni Bajo
Dale Johannesen <[EMAIL PROTECTED]> wrote:

>> I do think the C++ FE needs fixing before Diego's change gets merged,
>> though.  I can make the change, but not instantly.  If someone files
>> a PR, and assigns to me, I'll get to it at some not-too-distant
>> point.
>
> It would be good to have a way to mark things as "write once, then
> readonly" IMO.
> It's very common, and you can do some of the same optimizations on
> such things
> that you can do on true Readonly objects.


We had this once, it was called RTX_UNCHANGING_P, and it was a big mess.
Progressively, we have been removing TREE_READONLY from C++ const variables
(which are "write once" from the GCC IL point of view), and this is another
instance of the same problem.

We probably need a better way to describe C++ constructors, maybe something
like a WRITEONCE_EXPR, which would be a MODIFY_EXPR whose lhs is marked
TREE_READONLY, and which would be emitted by the front ends when initializing
such variables.

Giovanni Bajo



Re: GCC 4.0 RC1 Available

2005-04-12 Thread Giovanni Bajo
Andrew Haley <[EMAIL PROTECTED]> wrote:

> There was.  We are now, for the first time ever, in a position where
> we can run a large number of big Java applications using entirely free
> software.


This is really great news!

So great that I wonder why changes.html does not mention it (nor news.html).
Would you please prepare a patch about this?
-- 
Giovanni Bajo



Re: Problem with weak_alias and strong_alias in gcc-4.1.0 with MIPS...

2005-04-17 Thread Giovanni Bajo
Steven J. Hill <[EMAIL PROTECTED]> wrote:

> ../sysdeps/ieee754/dbl-64/s_isinf.c:29: error: 'isinf' aliased to
> undefined symbol '__isinf'
> ../sysdeps/ieee754/dbl-64/s_isinf.c:31: error: '__isinfl' aliased
> to undefined symbol '__isinf'
> ../sysdeps/ieee754/dbl-64/s_isinf.c:32: error: 'isinfl' aliased to
> undefined symbol '__isinf'
>
> I am attempting to try and figure out what changed so drastically to
> cause this. I also looked in GCC and glibc Bugzilla databases, but did
> not find anything addressing this problem. Has anyone seen this
> behavior? Thanks.

http://gcc.gnu.org/gcc-4.0/changes.html

Quote:
Given __attribute__((alias("target"))), it is now an error if target is not a
symbol defined in the same translation unit. This also applies to aliases
created by #pragma weak alias=target. This is because it's meaningless to
define an alias to an undefined symbol. On Solaris, the native assembler would
have caught this error, but GNU as does not.

Giovanni Bajo



GCC 3.3 status

2005-04-20 Thread Giovanni Bajo
Hello Gaby,

do you still confirm the release date which was last reported here:
http://gcc.gnu.org/ml/gcc/2005-01/msg01253.html

that is, will GCC 3.3.6 released on April, 30th? And will it be the last
release of the GCC 3.3 series?

Thanks,
Giovanni Bajo



Re: http://gcc.gnu.org/gcc-3.4/changes.html

2005-04-25 Thread Giovanni Bajo
Gerald Pfeifer <[EMAIL PROTECTED]> wrote:

>> Here is a unified diff for the proposed change (I think).
>
> Johan, Giovanni, I just noticed that this one apparently fell through
> the cracks?
>
> I had assumed that Giovanni would just go ahead and apply it since he's
> an expert in that area and the patch even was rather short, but I do
> not see it in CVS, so I just committed it.

I either forgot about it or waited for an approval. Either way, thank you for
taking care of this!


Giovanni Bajo



Re: GCC 4.1: Buildable on GHz machines only?

2005-04-30 Thread Giovanni Bajo
Lars Segerlund <[EMAIL PROTECTED]> wrote:

>  I have to agree with Richard's assessment, gcc is currently on the
>  verge of being unusable in many instances.
>  If you have a lot of software to build and have to do complete
>  rebuilds it's painful, the binutils guys have a 3x speedup patch
>  coming up, but every time there is a speedup it gets eaten up.

This is simply not true. Most of the benchmarks we have seen posted to the gcc
mailing lists show that GCC4 can be much faster than GCC3 (especially on C++
code). There are of course also regressions, and we are not trying to hide
them.

Did you *ever* provide a preprocessed source file which shows a compile-time
regression? If not, please do NOT hesitate! Post it in Bugzilla. People who
did so in the past were mostly satisfied with how GCC developers improved
things for them.

Otherwise, I do not want to sound rude, but your posts seem more like trolling
to me. I am *ready* to admit that GCC4 is much slower than GCC3 or GCC2, but I
would like to do so in front of real, measurable data, not just random
complaints and urban legends. Thus, I am really awaiting the preprocessed
testcases which prove your points.

Please.

Giovanni Bajo



Re: GCC 4.1: Buildable on GHz machines only?

2005-04-30 Thread Giovanni Bajo
Jason Thorpe <[EMAIL PROTECTED]> wrote:

>> Maybe the older platform should stick to the older compiler then,
>> if it is too slow to support the kind of compiler that modern
>> systems need.
>
> This is an unreasonable request.  Consider NetBSD, which runs on new
> and old hardware.  The OS continues to evolve, and that often
> requires adopting newer compilers (so e.g. other language features
> can be used in the base OS).
>
> The GCC performance issue is not new.  It seems to come up every so
> often... last time I recall a discussion on the topic, it was thought
> that the new memory allocator (needed for pch) was cause cache-thrash
> (what was the resolution of that discussion, anyway?)


There was no outcome, because it is just the Nth legend, like people saying "I
believe GCC is slow because of pointer indirection" or stuff like that.

Please, provide preprocessed sources and we *will* analyze them. Just file a
bug report in Bugzilla; it takes 10 minutes of your time.

Giovanni Bajo



Re: GCC 4.1: Buildable on GHz machines only?

2005-04-30 Thread Giovanni Bajo
Richard Earnshaw <[EMAIL PROTECTED]> wrote:

>> The GCC build times are not unreasonable compared to other,
>> commercial compilers with similar functionality.  And the GCC
>> developers have plans to address inefficiencies -- GCC 4.0 often is
>> faster than GCC 3.4.
>
> If you are going to make sweeping statements like this you need to
> back them up with hard data.


It is surely faster for C++ code thanks to the work done by Codesourcery on the
lexing upfront and the name lookup. "Faster" here means 20-30% faster, so it's
not 1% or something like that.

There are many posts on gcc@ that show this; I can dig them up in the archive
for you if you want, but I'm sure you can use Google as well as I do. Two of
them are very recent (see Karel Gardas' post on this thread, and the recent
benchmark posted by Rene Rebe).

Giovanni Bajo



Re: GCC 4.1: Buildable on GHz machines only?

2005-04-30 Thread Giovanni Bajo
Ian Lance Taylor  wrote:

>> Except it's not just bootstrapping GCC.  It's everything.  When the
>> NetBSD Project switched from 2.95.3 to 3.3, we had a noticeably
>> increase in time to do the "daily" builds because the 3.3 compiler
>> was so much slower at compiling the same OS source code.  And we're
>> talking almost entirely C code, here.
>
> Well, there are two different issues.  Matt was originally talking
> about bootstrap time, at least that is how I took it.  You are talking
> about speed of compilation.  The issues are not unrelated, but they
> are not the same.
>
> The gcc developers have done a lot of work on speeding up the compiler
> for 3.4 and 4.0, with some success.  On many specific test cases, 4.0
> is faster than 3.3 and even 2.95.  The way to help this process along
> is to report bugs at http://gcc.gnu.org/bugzilla.
>
> In particular, if you provide a set of preprocessed .i files, from,
> say, sys, libc, or libcrypto, whichever seems worst, and open a gcc PR
> about them, that would be a great baseline for measuring speed of
> compilation, in a way that particularly matters to NetBSD developers.

I would also like to note that I *myself* requested preprocessed source code
from NetBSD developers at least 6 times in the past 2 years. I am sure Andrew
Pinski did too, a comparable number of times. These requests, as far as I can
tell, were never answered. This also helped build up a stereotype of the
average NetBSD developer being "just a GCC whine troll".

I am sure this is *far* from true, but I would love to see NetBSD developers
*collaborating* with us, especially since what we are asking (filing bug
reports with preprocessed sources) cannot take more than 1-2 hours of their
time.

Giovanni Bajo



Re: GCC 4.1: Buildable on GHz machines only?

2005-05-01 Thread Giovanni Bajo
Jason Thorpe <[EMAIL PROTECTED]> wrote:

>> I would also like to note that I *myself* requested preprocessed
>> source code to
>> NetBSD developers at least 6 times in the past 2 years. I am sure
>> Andrew Pinski
>> did too, a comparable amound of times. These requests, as far as I
>> can understand, were never answered. This also helped building up a
>> stereotype of
>> the average NetBSD developer being "just a GCC whine troll".
>
> While I have not had much time for a quite a while to work on GCC
> myself, I am listed as NetBSD maintainer... you can always drop me a
> note directly when this sort of thing happens.


Thanks! Are you then going to file in Bugzilla some preprocessed sources that
show the 2.95 -> 3.3 slowdown experienced by the NetBSD folks?

Giovanni Bajo



Re: volatile semantics

2005-05-03 Thread Giovanni Bajo
Mike Stump <[EMAIL PROTECTED]> wrote:

> int avail;
> int main() {
>while (*(volatile int *)&avail == 0)
>  continue;
>return 0;
> }
>
>
> Ok, so, the question is, should gcc produce code that infinitely
> loops, or should it be obligated to actually fetch from memory?
> Hint, 3.3 fetched.


I agree it should fetch. Did you try -fno-strict-aliasing? Open a bugreport,
I'd say.

Giovanni Bajo



Re: big slowdown gcc 3.4.3 vs gcc 3.3.4 (64 bit)

2005-05-03 Thread Giovanni Bajo
Kenneth P. Massey wrote:

> The code below runs significantly slower when compiled in 64 bit with
> 3.4.3 than
> it does in 3.3.4, and both are significantly slower than a 32 bit
> compile.

Thanks for the report. Would you please open a bugreport in Bugzilla?
-- 
Giovanni Bajo




Re: GCC 4.0.0 Performance Regressions?

2005-05-09 Thread Giovanni Bajo
Jason Bucata <[EMAIL PROTECTED]> wrote:

> Is anybody collecting information on performance regressions in 4.0.0
> (vs.
> 3.4.3)?  I've got some results on POVRAY and BYTEmark, and BYTEmark
> saw some performance regression, particularly with profiled
> optimization (-fprofile-{generate,use}):
> http://forums.gentoo.org/viewtopic-t-329765.html


You should try and isolate a single BYTEmark test which shows the biggest
regression. It's better if you manage to pack the whole test as a single
preprocessed source file. Theoretically, this file should be compilable and
linkable, and the resulting binary should run for a while doing computations.

With this kind of help, we can analyze the regression and see why it's slower
with 4.0.0.

Giovanni Bajo



Re: GCC 4.0.0 Performance Regressions?

2005-05-10 Thread Giovanni Bajo
Jason Bucata <[EMAIL PROTECTED]> wrote:

>> You should try and isolate a single BYTEmark test which shows the
>> biggest regression. It's better if you manage to pack the whole test
>> as a single preprocessed source file. Theoretically, this file
>> should be compilable and linkable, and the resulting binary should
>> run for a while doing computations. 
>> 
>> With this kind of help, we can analyze the regression and see why
>> it's slower with 4.0.0.
> 
> It was rather time-consuming but I managed to do it.  I picked the
> numsort benchmark which had a serious regression:
> http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21485


Many, many thanks!

Giovanni Bajo



Re: GCC 3.4.4 RC1

2005-05-11 Thread Giovanni Bajo
Etienne Lorrain <[EMAIL PROTECTED]> wrote:

>   Some of those problems may also exist in GCC-4.0 because this
>  version (and the 4.1 I tested) gives me an increase of 60% in
>  code size compared to 3.4.3.


This is a serious regression which should be submitted in Bugzilla. Would
you please take care of that? It is sufficient to provide a single
preprocessed source which shows the code size increase in compilation. GCC4
still needs some tuning for -Os.

Thanks!
-- 
Giovanni Bajo



Re: GCC 3.4.4 RC1

2005-05-11 Thread Giovanni Bajo
Etienne Lorrain <[EMAIL PROTECTED]> wrote:

>   If I compile that with GCC-3.4, I get:
>
> $ size tmp.o
>    text    data     bss     dec     hex filename
>     243       0       0     243      f3 tmp.o
>
>   With GCC-4.0:
>
> $ size tmp.o
>    text    data     bss     dec     hex filename
>     387       0       0     387     183 tmp.o
>
>   Can someone confirm the problem first?


I can confirm that this function shows a regression simply with -Os, on
x86-linux:

GCC 3.4.3:
   text    data     bss     dec     hex filename
    194       0       0     194      c2 test.o

GCC 4.1.0 CVS 20050323:
   text    data     bss     dec     hex filename
    280       0       0     280     118 test.o

So it's a 44% increase. Definitely worth a bugreport in Bugzilla!
-- 
Giovanni Bajo



Re: Proposed resolution to aliasing issue.

2005-05-11 Thread Giovanni Bajo
Mark Mitchell <[EMAIL PROTECTED]> wrote:

> Our proposed approach is to -- by default -- assume that "g" may
> access all of "b".  However, in the event that the corresponding parameter to
> "g" has an attribute (name TBD, possibly the same as the one that
> appears in Danny's recent patch), then we may assume that "g" (and its
> callees) do not use the pointer to obtain access to any fields of "b".
>
> For example:
>
>void g(A *p __attribute__((X)));
>
>void f() {
>  B b;
>  g(&b.a); /* Compiler may assume the rest of b is not accessed
>  in "g".  */
>}

It is not clear whether you are speaking here of the C language or of GENERIC,
but I'll comment anyway.

Can the meaning of the attribute be reversed? I would like it to mark places
where upcasts can happen. In any code base you may pick up, there are probably
just a couple of points where you'd need to say "I'll upcast this pointer",
while *all* the other uses are ok as they are. So I guess it makes sense to
have the default *reversed*. Then, you can make TBAA disabled by default for
the C and C++ language.

After all, there is absolutely no point in enabling it by default if, to make
it useful, you have to decorate billions of function declarations. And if it is
disabled by default, the meaning of the attribute can be reversed without any
risk. You can then state in the documentation how to use the attribute if one
really needs TBAA.

So to recap, my proposal is:

- For the C and the C++ frontend, do not enable TBAA through -Ox, just keep it
a separate option (-ftree-tbaa).
- Add __attribute__((upcast)) that can be used to decorate pointers in
parameter declarations which might be subject to upcasting during the
function's body execution.
- For the C++ frontend, we can have a -fpod-upcast which adds attribute upcast
automatically for all pointers to POD types.
- Explain in the documentation that -ftree-tbaa is risky for C/C++ unless the
code base is audited and __attribute__((upcast)) added wherever appropriate.
- For the Java/Fortran/Ada frontends, TBAA can probably be on by default at
-Ox, but I dunno.

I'm positive that for C++ there would be zero need for manual decoration in,
say, a whole OS distribution (Qt, KDE, Boost, and whatnot) if we use
"-ftree-tbaa -fpod-upcast".

Giovanni Bajo



Re: GCC-4.0 vs GCC-3.3.6 ia32 -Os: code size increase from 261 to 5339 bytes

2005-05-20 Thread Giovanni Bajo
Etienne Lorrain <[EMAIL PROTECTED]> wrote:

> [EMAIL PROTECTED]:~/projet/gujin$ gcc -Os tst.c -c -o tst.o && size tst.o
>    text    data     bss     dec     hex filename
>     261       0       0     261     105 tst.o
> [EMAIL PROTECTED]:~/projet/gujin$ ../toolchain/bin/gcc -Os tst.c -c -o tst.o && size tst.o
>    text    data     bss     dec     hex filename
>    5339       0       0    5339    14db tst.o
> [EMAIL PROTECTED]:~/projet/gujin$

It's another issue with SRA and -Os. With an old 4.1, I have:

$ ./xgcc -c -Os -B. btst.c && size btst.o
   text    data     bss     dec     hex filename
   5339       0       0    5339    14db btst.o
$ ./xgcc -c -Os -fno-tree-sra -B. btst.c && size btst.o
   text    data     bss     dec     hex filename
    224       0       0     224      e0 btst.o

So we're actually better than 3.3, after we disable -ftree-sra. I guess SRA
should be tuned (disabled?) for -Os.

Please, do file a bugreport.
-- 
Giovanni Bajo



Re: GCC-4.0 vs GCC-3.3.6 ia32 -Os: code size increase from 261 to 5339 bytes

2005-05-21 Thread Giovanni Bajo
Daniel Berlin <[EMAIL PROTECTED]> wrote:

>> $ ./xgcc -c -Os -B. btst.c && size btst.o
>>    text    data     bss     dec     hex filename
>>    5339       0       0    5339    14db btst.o
>> $ ./xgcc -c -Os -fno-tree-sra -B. btst.c && size btst.o
>>    text    data     bss     dec     hex filename
>>     224       0       0     224      e0 btst.o
>> 
>> So we're actually better than 3.3, after we disable -ftree-sra. I
>> guess SRA should be tuned (disabled?) for -Os.
> 
> Structure aliasing should be able to make up for turning off SRA, i'm
> guessing, at least as far as propagating constants and DCE is
> concerned. 
> 
> You could test this by seeing if -fno-tree-sra -fno-tree-salias
> produces 
> an increased code size over the above.

Not really: with -fno-tree-salias I get exactly the same result (224 bytes).

Giovanni Bajo



hidden enum constants (Was: Compiling GCC with g++: a report)

2005-05-25 Thread Giovanni Bajo
Zack Weinberg <[EMAIL PROTECTED]> wrote:

>>> This doesn't do what I want at all.  The goal is to make the *symbolic
>>> enumeration constants* inaccessible to most code.
>>
> ...
>> If it's OK to have the enums in a header, provided you can't *use* them...
>>
>> enum {
>> #ifdef TVQ_AUTHORITATIVE_ENUMS
>>  TVQ_FOO1,
>>  TVQ_FOO2,
>>  TVQ_FOO3,
>>  TVQ_NUM_ENTRIES,
>> #endif
>>  TVQ_INT_SIZER = 32767
>> } TheValQuux;
>>
>> This won't stop a suitably enthusiastic programmer from getting to
>> them anyway, but that's always the case.
>
> Ooh, I like this one for enum machine_mode.

I think this is an ODR violation in C++, and I suspect program-at-a-time
compilation would flag it with an error. So even this solution to hide enum
constants (a legitimate design request) does not appear to be C++-compatible
to me.
-- 
Giovanni Bajo



Removal of 4.0.0 last minute page from Wiki?

2005-05-27 Thread Giovanni Bajo
Mark,

is it OK to remove the link "Last-Minute Requests for 4.0.0" from the Wiki
main page? The page is obviously unneeded there. If you want, we can keep the
link somewhere else (collected in an "obsolete misc pages" page, for example).

-- 
Giovanni Bajo



Re: What is wrong with Bugzilla? [Was: Re: GCC and Floating-Point]

2005-05-29 Thread Giovanni Bajo
Michael Veksler <[EMAIL PROTECTED]> wrote:

> Unfortunately, this is not 100% correct. Recently I have filed several
> duplicates, *after* searching Bugzilla.

That is not a problem. Bugmasters are there exactly for that. We realize that
finding duplicates can be very hard (in fact, sometimes duplicates are
acknowledged as such only after a fix appears), so that is our work. We just
ask users for some *basic* duplicate checking, and I think most users do that
before filing bugreports. So it's ok.


> 3. Nontrivial searches of GCC Bugzilla are, sometimes,
>    extremely slow (over a minute).

Point 3 could be worked on (Daniel?).


> Technically speaking, these are not GCC bugs. However, from
> user's POV they *are* GCC bugs. This mismatch between user
> and developer perceptions is not very healthy and may inhibit
> other PRs. Maybe developers should be more open to "fixing"
> things that are not purely GCC bugs?

Maybe. But I don't think people refrain from filing bug reports because of
this. In every software company I have worked at there have always been debates
about bug reports ("it's your code's fault" -- "no, it's yours"), and I expect
GCC users to know/expect this. Anyway, we are speaking of a small minority of
the bug reports we get. Most of the bug reports filed *are* related to GCC, and
among those that are not, most are *obviously* so (in that people realize
immediately it's not GCC's fault).

> [2] GCC could implement a better error message.

This is a bug, too. You can file a PR in Bugzilla explicitly asking for a more
informative error message.

Giovanni Bajo



Re: What is wrong with Bugzilla? [Was: Re: GCC and Floating-Point]

2005-05-29 Thread Giovanni Bajo
Vincent Lefevre <[EMAIL PROTECTED]> wrote:

>> At this point, I wonder what is wrong with Bugzilla, that those
>> programmers don't fill a proper bug report. If there is a problem
>> with GCC, that is so annoying to somebody, I think that at least
>> developers could be informed about it via their standard channels of
>> communication.
>
> Perhaps because GCC developers think that GCC isn't buggy when the
> processor doesn't do the job for them? (I'm thinking of bug 323.)


You are mistaken, we think GCC isn't buggy about 323 because the C/C++
standards do not tell us to do better than this. If you have higher
expectations about floating point and C/C++, you should file a bugreport
against the C/C++ standards.

Really, we can't make everybody happy. The best we can do is to adhere to the
well-known international ISO/ANSI standards.

Giovanni Bajo



Use of check_vect() in vectorizer testsuite

2005-06-09 Thread Giovanni Bajo
Hello,

I have some questions about the use of check_vect() in the vectorizer
testsuite:

1) All the ifcvt tests (vect-ifcvt*) seem to require SSE2 capability to be
vectorized but they do not call check_vect(). Is this a bug? They surely fail
on my platform (which does not have SSE2).

2) The same applies to a vect_int test, vect-dv-2.c. I assume vect_int
tests also require SSE2 capability and thus should call check_vect()?

Thanks,
Giovanni Bajo



Re: Use of check_vect() in vectorizer testsuite

2005-06-09 Thread Giovanni Bajo
Devang Patel <[EMAIL PROTECTED]> wrote:

> check_vect() is used to test whether to run the test or not. Now,
> vect.exp does the job more efficiently.
>
>   29 # If the target system supports vector instructions, the
> default action
>   30 # for a test is 'run', otherwise it's 'compile'.  Save
> current default.
>   31 # Executing vector instructions on a system without hardware
> vector support
>   32 # is also disabled by a call to check_vect, but disabling
> execution here is
>   33 # more efficient.
>
>
> And dg-require-... should control whether this test should be enabled
> or not. So depending upon the failure (runtime or compiler not
> vectorizing loops), you want to update appropriate code block in
> vect.exp for your platform.

The point is that my target is i686-pc-linux-gnu, which supports vector
instructions (through -msse2), but whether the instructions can actually be
run or not depends on the given processor (e.g. Pentium 3 vs Pentium 4).
Even if my processor cannot *execute* the vectorized tests, I would still
like to test whether vectorization succeeds or not (that is, at least as
compile-time tests).

So, the point is that you cannot select between compile-time/run-time based
on a target triplet check, at least for this target. What do you suggest?
All the other tests use check_vect() exactly for this reason, as far as I
can see, so it looks to me that the sensible thing to do is to use
check_vect there as well.

For an example of the failures, see for instance:
http://gcc.gnu.org/ml/gcc-testresults/2005-06/msg00553.html (which is
Diego's tester).
-- 
Giovanni Bajo



Re: Use of check_vect() in vectorizer testsuite

2005-06-09 Thread Giovanni Bajo
Janis Johnson <[EMAIL PROTECTED]> wrote:

> It sounds as if there should be a check in target-supports.exp for
> SSE2 support that determines whether the default test action is 'run'
> or 'compile' for i686 targets.

I am not able to code TCL/Expect. Instead, I can easily provide a patch to
make the few missing testcases call check_vect(), as the rest of the
testsuite does. This would silence the bogus failures while a more
correct solution is being developed. Would such a patch be ok?
-- 
Giovanni Bajo



Re: Getting started with contributing

2005-06-09 Thread Giovanni Bajo
Lee Millward <[EMAIL PROTECTED]> wrote:

> I have spent the last few weeks reading the gcc-patches mailing list
> and the documentation available on GCC from the Wiki and various other
> documents I have found on the Internet to try and get a feel for how
> everything works. I also have the latest CVS code and have spent time
> reading through the source to become familiar with the coding
> conventions in use. I've read through the "beginner" GCC projects on
> the website and would like to hear people's opinions on how useful
> submitting patches for some of these projects would be. Some of
> the work being carried out and posted on the gcc-patches mailing list
> makes those projects seem insignificant in comparison.

There are a couple of ongoing transitions that might be carried out by
beginners. For instance, we are in the process of converting all calls to
"abort()" to calls to gcc_assert()/gcc_unreachable(). Or we are trying to
convert all the VARRAY data structures into VEC data structures. You might have
seen some of these patches in gcc-patches. I think those are a good fit for a
beginner.

Also, remember to file papers for copyright assignment to the FSF, which is a
prerequisite for accepting any patch.

Giovanni Bajo



Question about new warning system

2005-06-10 Thread Giovanni Bajo
Hello DJ,

I'm updating a piece of code in the C++ frontend which involves a warning,
so I was planning on adding the correct warning option to it. The original
code is the following:

  if (warn_missing_braces)
    warning (0, "missing braces around initializer");

So, what is the correct way of fixing this:

[1]
  if (warn_missing_braces)
    warning (OPT_Wmissing_braces, "missing braces around initializer");

[2]
  if (OPT_Wmissing_braces)
    warning (OPT_Wmissing_braces, "missing braces around initializer");

[3]
  warning (OPT_Wmissing_braces, "missing braces around initializer");

What is the difference between [1], [2], [3]?

Thanks,
-- 
Giovanni Bajo



Re: Question about new warning system

2005-06-10 Thread Giovanni Bajo
DJ Delorie <[EMAIL PROTECTED]> wrote:

>>   if (OPT_Wmissing_braces)
>>     warning (OPT_Wmissing_braces, "missing braces around initializer");
>
> FYI OPT_Wmissing_braces is an enum constant; it will always be nonzero.


So, I assume this patch is wrong in this regard:
http://gcc.gnu.org/ml/gcc-cvs/2005-06/msg00392.html

>> [3]
>>   warning (OPT_Wmissing_braces, "missing braces around initializer");
>
> That is what we decided to do.
>
> Note, however, if the logic required to determine if a warning is
> warranted is sufficiently complex, *also* checking the variable is
> considered an optimization:

OK, thanks!
-- 
Giovanni Bajo



Re: Question about new warning system

2005-06-10 Thread Giovanni Bajo
DJ Delorie <[EMAIL PROTECTED]> wrote:

>> So, I assume this patch is wrong in this regard:
>> http://gcc.gnu.org/ml/gcc-cvs/2005-06/msg00392.html
> 
> Yes, it's wrong in that way.

Gaby, can you please fix it then?
-- 
Giovanni Bajo


Re: Question about new warning system

2005-06-10 Thread Giovanni Bajo
Nathan Sidwell <[EMAIL PROTECTED]> wrote:

> I'm inclined to agree it is confusing. especially as in one place one has
> to write warn_ and in the other place one writes OPT_W.  It'd be
> nice if one just wrote
> if (warn_foo && frobbed)
>   warning ("foo is frobbed");
>
> I don't care if it's spelt warn_foo, OPT_Wfoo, warning_p(foo) or whatever,
> so long as it's spelt only one way.  The 'warning (OPT_Wfoo, ...)' syntax
> helps only where there is no conditional before the warning -- how often
> does that occur?  The way it currently is, one runs the risk of writing
> if (warn_c_cast
>&& .
> && .
> && .)
>   warning (OPT_Wconst_cast, ...)

Actually, the point is that you *never* need to explicitly name the
"warn_" variable unless you are optimizing. In other words, code which
presently is:

if (warn_foo
&& frobbed)
   warning (0, "foo is frobbed");

should be replaced with:

if (frobbed)
   warning (OPT_Wfoo, "foo is frobbed");

You need to pass the option to warning() also for another reason: we want to
be able to optionally print which flag can be used to disable each warning,
so warning() has to be smarter than it used to be.
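As a toy illustration of that second reason (all names below are
hypothetical, not the actual GCC diagnostic machinery): once warning()
receives the option code, it can append the controlling flag to the message:

```cpp
#include <string>

// Hypothetical sketch: knowing the controlling option lets the
// diagnostic machinery append "[-Wfoo]" so users learn how to
// silence each warning.
enum opt_code { OPT_NONE, OPT_Wmissing_braces };

std::string format_warning(opt_code opt, const std::string &msg)
{
  static const char *opt_name[] = { "", "-Wmissing-braces" };
  std::string out = "warning: " + msg;
  if (opt != OPT_NONE)
    out += std::string(" [") + opt_name[opt] + "]";
  return out;
}
```

With opt == OPT_NONE (the old `warning (0, ...)` style) there is no flag
to report, which is exactly why form [3] is preferred.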
-- 
Giovanni Bajo



Re: In current gcc trunk: warning: dereferencing type-punned pointer will break strict-aliasing rules

2005-06-13 Thread Giovanni Bajo
Nathan Sidwell <[EMAIL PROTECTED]> wrote:

> Christian Joensson wrote:
>> I'd just like to ask if this is noticed:
>>
>> /usr/local/src/trunk/gcc/gcc/unwind-dw2.c:324: warning: dereferencing
>> type-punned pointer will break strict-aliasing rules
>> /usr/local/src/trunk/gcc/gcc/unwind-dw2.c:789: warning: dereferencing
>> type-punned pointer will break strict-aliasing rules
>> /usr/local/src/trunk/gcc/gcc/unwind-dw2.c:1005: warning:
>> dereferencing type-punned pointer will break strict-aliasing rules
>> /usr/local/src/trunk/gcc/gcc/unwind-dw2-fde.c:1024: warning:
>> dereferencing type-punned pointer will break strict-aliasing rules
>> /usr/local/src/trunk/gcc/gcc/unwind-dw2-fde-glibc.c:393: warning:
>> dereferencing type-punned pointer will break strict-aliasing rules
>
> I had not noticed this, but looking at the first one it must have
> been caused by my patch to the type punning warning.  It also appears
> to be a correct warning, in that we are breaking aliasing.

I see also these in my builds:

../../../gcc/libmudflap/mf-runtime.c:320: warning: dereferencing type-punned
pointer will break strict-aliasing rules
../../../gcc/libmudflap/mf-runtime.c:323: warning: dereferencing type-punned
pointer will break strict-aliasing rules
../../../gcc/libmudflap/mf-runtime.c:326: warning: dereferencing type-punned
pointer will break strict-aliasing rules
../../../gcc/libmudflap/mf-runtime.c:329: warning: dereferencing type-punned
pointer will break strict-aliasing rules
../../../gcc/libmudflap/mf-runtime.c:333: warning: dereferencing type-punned
pointer will break strict-aliasing rules
../../../gcc/libmudflap/mf-runtime.c:336: warning: dereferencing type-punned
pointer will break strict-aliasing rules
../../../gcc/libmudflap/mf-runtime.c:339: warning: dereferencing type-punned
pointer will break strict-aliasing rules
../../../gcc/libmudflap/mf-runtime.c:342: warning: dereferencing type-punned
pointer will break strict-aliasing rules

The code is accessing a variable of enum type through an unsigned pointer.
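A hypothetical reduction of the pattern (not the actual mf-runtime.c code):
reading an enum object through an `unsigned *` is exactly what the warning
flags, and copying the bytes with memcpy is the usual well-defined
replacement:

```cpp
#include <cstring>

// Hypothetical enum standing in for the one in mf-runtime.c.
enum mf_mode { MODE_NOP = 0, MODE_POPULATE = 1, MODE_CHECK = 2 };

// *(unsigned *)&m is the type-punned access GCC warns about; copying
// the object representation is always well-defined. This sketch assumes
// sizeof(mf_mode) == sizeof(unsigned), which holds on mainstream ABIs
// for enums with int-range values.
unsigned mode_bits(mf_mode m)
{
  unsigned u;
  std::memcpy(&u, &m, sizeof u);
  return u;
}
```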

Giovanni Bajo



Re: Fixing Bugs (Was: A Suggestion for Release Testing)

2005-06-14 Thread Giovanni Bajo
Scott Robert Ladd <[EMAIL PROTECTED]> wrote:

> Consider, as an example, the bug/non-bug at http://gcc.gnu.org/PR323,
> which was a matter of recent discussion on this list. Some rather
> respectable people (e.g., Vincent Lefèvre) consider this a bug, and
> have proposed solutions. Yet here's a quote from an earlier message by
> Giovanni Bajo.
> [...]

First of all, I would consider it polite to CC me on the mail if you quote and
debate my statements.

> The ISO Standard doesn't prevent GCC from being *better* than
> specified, does it? Are we somehow breaking ISO compliance by doing
> math right? Is it so wrong to try and fix a problem that frustrates
> many people and makes GCC look bad?

Where exactly do I say that it is wrong to provide a patch that makes GCC
better in this regard? As a bugmaster, I just decided to consider this not a
bug, in the strictest meaning of "bug". If you want to file an enhancement
proposal in Bugzilla about adding options/modes for higher FPU precision, I
would not object.

Also, you seem to hold the wrong belief that, if bug 323 were confirmed in
Bugzilla, a patch would automatically appear. I think the two events are
totally unrelated, especially for an issue which has been debated so much over
the past years, and where most people already have a formed opinion on the
matter.

> With the attitude shown by Giovanni, there's really no point in
> submitting a patch, is there? Dozens of people have reported this
> problem, potential solutions exist, but any patch is going to be
> ignored because the bug isn't considered a bug by the Powers That Be.

You seem to believe that a patch can be accepted in GCC only if it fixes a bug.
That is wrong: a valid patch can also add a new feature. While it is probably
hard to convince the Powers That Be (which, by the way, are surely not me) that
the default setting of GCC should be whatever Bug 323 requests, it is much,
much easier to get a patch adding a new command line option for higher FP
accuracy reviewed and accepted. Of course, people would have to *see* such a
patch. And nobody is blocking people from writing it.

I hope to have clarified my position.

Giovanni Bajo



PR 14814

2005-06-14 Thread Giovanni Bajo
Jeff,

g++.dg/tree-ssa/pr14814.C has been failing since the first day it was added. It
is unclear to me what happened in detail, but in the PR you suggest XFAILing
the testcase. That would be fine: would you please take care of that?

Also, notice that there is a typo in the ChangeLog entry for this testcase. It
reads:

2005-05-17  Jeff Law  <...>

* g++.dg/tree-ssa/pr18414.C: New test.
* gcc.dg/tree-ssa/pr18414.C: New test.

while the correct PR number is "14814". Would you please also fix this?

Thanks,
Giovanni Bajo



Re: Bug in transparent union handling?

2005-06-15 Thread Giovanni Bajo
Richard Henderson <[EMAIL PROTECTED]> wrote:

> This needs to use a VIEW_CONVERT_EXPR, at minimum.

What about a compound literal instead?

Giovanni Bajo



Re: Reporting bugs: there is nothing to gain in frustrating reporters

2005-06-15 Thread Giovanni Bajo
Roberto Bagnara <[EMAIL PROTECTED]> wrote:

> 1) I submit a bug report;
> 2) someone looks at it superficially, too superficially, and
> posts a comment that tends to deny there is a problem;
> 3) I and/or someone else explain that the problem is indeed
> there, possibly citing the points of the standards that
> are being violated;
> 4) the person who said the bug was not (exactly) a bug does
> not even care to reply, but the superficial comments
> remain there, probably killing the bug report.

While I agree that one can sometimes make too superficial a comment on a bug
(I have surely done this in the past, and Andrew seems to do it very
often), I believe that this does not spoil or kill the bug report itself,
once it is agreed that there is indeed a bug.

Surely, it does annoy the reporter, though, which is a serious problem.

> I wonder what is the rationale here.  Discouraging bug
> reporters may be an effective way to avoid bug reports pile up,
> but this is certainly not good for GCC.

I totally agree here. I believe Mark already asked us (bugmasters) to be
more polite in our work, and I believe that another official statement in
this direction would help.

> My advice to people filtering bug reports is: if you only had
> time to look at the issue superficially, either do not post
> any comment or be prepared to continue the discussion on more
> serious grounds if the reporter or someone else comes back
> by offering more insight and/or precise clauses of the
> relevant standards.

Agreed. But keep in mind that it is not necessary to reply: once the bug is
open and confirmed, the last comment "wins", in a way. If the bugmaster
wanted to close it, he would just do it, so an objection in a comment does
not make the bug invalid per se.
-- 
Giovanni Bajo



Re: Bug in transparent union handling?

2005-06-15 Thread Giovanni Bajo
Richard Henderson <[EMAIL PROTECTED]> wrote:

>>> This needs to use a VIEW_CONVERT_EXPR, at minimum.
>>
>> What about a compound literal instead?
>
> I suppose.  Why?

Nothing specifically. I just believe it is a cleaner way to transform the
argument into an aggregate; casts are ugly.

Giovanni Bajo



Your rtti.c changes broke some obj-c++ tests

2005-06-17 Thread Giovanni Bajo
Nathan,

I see some failures in the testsuite which appear to be related to your recent
changes to rtti.c (VECification). For instance:

FAIL: obj-c++.dg/basic.mm (test for excess errors)
Excess errors:/home/rasky/gcc/mainline/gcc/libstdc++-v3/libsupc++/exception:55:
internal compiler error: vector VEC(tinfo_s,base) index domain error, in
get_tinfo_decl at cp/rtti.c:373

Would you please check and possibly fix this?

Thanks,
Giovanni Bajo



Re: How to write testcase with two warnings on one line?

2005-06-21 Thread Giovanni Bajo
Feng Wang <[EMAIL PROTECTED]> wrote:

> I want to write a testcase. The compiler gives two separated warnings on
> one statement. How to write this with Dejagnu?


http://gcc.gnu.org/wiki/HowToPrepareATestcase

-- 
Giovanni Bajo



Re: toplevel bootstrap (stage 2 project)

2005-06-22 Thread Giovanni Bajo
Paolo Bonzini <[EMAIL PROTECTED]> wrote:

> To recap, toplevel bootstrap has several aims, including:
>[...]

I suggest adding this to the Wiki.

> To enable toplevel bootstrap, just configure with --enable-bootstrap.
> Then, "make" will do more or less what "make bubblestrap" used to do:

What does "./configure --enable-bootstrap; make bootstrap" do?

> It supports all the bells and whistles like bubblestraps and restageN,
> which help during development.  "make restrap" is not supported. "make
> restageN" is called "make all-stageN", and there is also "make
> all-stageN-gcc" to rebuild gcc only.

It would also help if you added to the wiki an explanation of what exactly all
these options do, especially bubblestrap vs. quickstrap vs. restrap.

Thanks!
-- 
Giovanni Bajo



Re: toplevel bootstrap (stage 2 project)

2005-06-28 Thread Giovanni Bajo
Gerald Pfeifer <[EMAIL PROTECTED]> wrote:

>> It would help also if you add to the wiki explanation of what exactly all
>> these options do. Especially bubblestrap vs quickstrap vs restrap.
>
> Why to the WIki??  This should be part of the regular documentation,
> and if anything is to improve, the improvements should be made there
> instead of having that on the Wiki (or, even worse, causing duplication).


Well, because the Wiki is more attractive to people writing documentation,
for several reasons (it is faster than writing an HTML/TeX patch and
submitting it for review, etc.). Maybe we should consider using the Wiki for
rapid documentation prototyping: people could write documentation there for
review, have it refined by others, and eventually have it converted to real
TeX documentation.
-- 
Giovanni Bajo



Re: toplevel bootstrap (stage 2 project)

2005-06-28 Thread Giovanni Bajo
Daniel Berlin <[EMAIL PROTECTED]> wrote:

>> Well, because Wiki is more attractive to people writing
>> documentation for several reasons (faster than writing a HTML/TeX
>> patch and submitting it for review, etc.). Maybe we should think if
>> we want to use the Wiki as our rapid documentation prototyping:
>> people could write documentation there for review, be refined by
>> others, and eventually converted to real TeX documentation.
>
> This is what i did with the decl hierarchy documentation i submitted
> as part of the decl cleanup patch.
>
> The Texinfo version took significantly longer than the wiki portion,
> but it was mainly just mechanical formatting, etc, where wiki knows what
> to do automatically or with a trivial command and Texinfo doesn't.

I believe we could devise something to convert raw wiki text into Texinfo
with our commands. It would handle the boring part of the conversion.

Giovanni Bajo



Re: G++ and ISO C++

2005-06-28 Thread Giovanni Bajo
Mirza <[EMAIL PROTECTED]> wrote:

> Can someone point me to list of ISO C++ vs. g++ incompatibilities.

There are very few large issues at this point. The only big missing feature is
"export"; then it's a bunch of relatively minor nits (access checking in friend
declarations within templates, correct semantics for old-style access
declarations vs. using declarations, etc.), plus of course the usual bag of
bugs.

I am not aware of any official list though.

Giovanni Bajo



Re: [RFH] - Less than optimal code compiling 252.eon -O2 for x86

2005-06-30 Thread Giovanni Bajo
Joe Buck <[EMAIL PROTECTED]> wrote:

>>> I'd tend to agree.  I'd rather see the option go away than linger on
>>> if the option is no longer useful.
>>
>> I wouldn't mind that, but I'd also like to point out that there are
>> Makefiles out there which hard-code things like -fforce-mem.  Do we want
>> to keep the option as a stub to avoid breaking them?
>
> It could produce a "deprecated" or "obsolete" warning for 4.1, and then be
> removed for 4.2.

Personally, I don't see the point. -fforce-mem is just an optimization option
which does not affect the semantics of the program in any way. If we removed
it, people would just need to drop it from their Makefiles. There is no
source code adjustment required (which would justify a deprecation cycle).

Or convert it to a no-op, as Bernd suggested.
-- 
Giovanni Bajo



Re: PARM_DECL of DECL_SIZE 0, but TYPE_SIZE of 96 bits

2005-06-30 Thread Giovanni Bajo
Daniel Berlin <[EMAIL PROTECTED]> wrote:

> So we've got a parm decl that if you ask it for the DECL_SIZE, says 0,
> but has a TYPE_SIZE of 12 bytes, and we access fields in it, etc.


I am not sure what the exact relations between DECL_SIZE and TYPE_SIZE
are, but probably some verification could be done, e.g., at gimplification
time, rather than waiting for latent bugs in optimizers to produce wrong
code.
-- 
Giovanni Bajo



Re: potential simple loop optimization assistance strategy?

2005-07-01 Thread Giovanni Bajo
Paul Schlie <[EMAIL PROTECTED]> wrote:

> Where then the programmer could then choose
> to warrant it by preconditioning the loop:
>
>   assert((x < y) && ((y - x) % 4)); // which could throw an exception.
>
>   for ((i = x; i < y; i++){ ... }

There has been some opposition in the past to allowing conditions in
asserts to be used as hints to the optimizers. In fact, I would like to know
if there is a current statement of purpose about this. That is, would there
be strong opposition to patches doing this?
-- 
Giovanni Bajo



Re: potential simple loop optimization assistance strategy?

2005-07-01 Thread Giovanni Bajo
Diego Novillo <[EMAIL PROTECTED]> wrote:

>> There has been some opposition in the past about allowing conditions in
>> asserts to be used as hints to the optimizers. In fact, I would like to
>> know if there is a current statement of purpose about this. That is,
>> would there be strong oppositions to patches doing this?
>>
> VRP naturally takes advantage of assert (though not in some
> occassions involving unsigned types).  Try:
>
> #include <assert.h>
>
> foo (int i)
> {
>   assert (i != 0);
>   if (i == 0)
> return 3;
>
>   return 2;
> }

Agreed, but my point is whether we can do that when NDEBUG is defined.
-- 
Giovanni Bajo



Re: potential simple loop optimization assistance strategy?

2005-07-02 Thread Giovanni Bajo
Tom Tromey <[EMAIL PROTECTED]> wrote:

> Giovanni> Agreed, but my point is whether we can do that when NDEBUG
> Giovanni> is defined.
>
> I thought when NDEBUG is defined, assert expands to something like
> '(void) 0' -- the original expression is no longer around.


Yes, but the condition is still morally true in the code. NDEBUG is meant to
speed up the generated code, and it's actually a pity that it instead
*disables* some optimizations because we no longer see the condition. My
suggestion is that assert(condition) with NDEBUG might expand to something
like:

if (!condition)
   unreachable();

where unreachable is a function call marked with a special attribute saying
that execution can never get there. This way the run-time check is removed from
the code, but the range information can still be propagated and used.

Notice that such an attribute would be needed in the first place for
gcc_unreachable() in our own sources. Right now we expand it to gcc_assert(0),
but we could do much better with a special attribute.
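As a sketch of the idea: later GCC releases did in fact gain a
__builtin_unreachable() built-in that can express exactly this. The macro
below is a hypothetical illustration, not the actual libc assert:

```cpp
#include <cassert>

// Hypothetical assert variant: with NDEBUG the run-time check is
// dropped, but the predicate still reaches the optimizer through an
// unreachable hint (GCC/Clang-specific built-in).
#ifdef NDEBUG
#  define opt_assert(cond) ((cond) ? (void)0 : __builtin_unreachable())
#else
#  define opt_assert(cond) assert(cond)
#endif

int quarter(int n)
{
  opt_assert(n % 4 == 0);  // lets VRP/unrolling assume n is a multiple of 4
  return n / 4;
}
```

With NDEBUG defined, the compiler may, for instance, strengthen the
division or unroll loops over n/4 iterations, because the value range
survives even though no check is emitted.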

Giovanni Bajo



Re: potential simple loop optimization assistance strategy?

2005-07-02 Thread Giovanni Bajo
Michael Veksler <[EMAIL PROTECTED]> wrote:

> have no side effects. So if "condition" contains any side effect,
> or potential side effect (e.g. through function call), then the
> compiler should not generate the code for "condition".


Right. Thus we could find a way to avoid generating code for, e.g., function
calls or similar things, and still get information from the most basic
conditions.

Giovanni Bajo



Re: Can I trace the process of C++ template Instantiation?

2005-07-04 Thread Giovanni Bajo
zheng wang <[EMAIL PROTECTED]> wrote:

>How can I trace the process of C++ template
> Instantiation?  I study Loki and some library is very
> complex, so I want to see how gcc compiler instance
> the template class.

The source code for template processing is mostly contained in the file
cp/pt.c. The instantiation happens in the function tsubst_copy_and_build.
You can place a breakpoint there and see what happens within GDB.
-- 
Giovanni Bajo



Re: Bugzilla does not let me in

2005-07-04 Thread Giovanni Bajo
Ilya Mezhirov <[EMAIL PROTECTED]> wrote:

> I've got a problem: bugzilla sends me no registration e-mails.
> I tried two times with different e-mail addresses and got nothing.
> So I cannot report a gcj bug :(
>
> You've got a nice page "Reporting Bugs" that explains what needs to be
> sent, but doesn't contain a word about *where* to send. That's not a
> flame, just a silly user's troubles.

I see you already have an account; the login is [EMAIL PROTECTED]. Try
logging in with the password you provided. Otherwise, I can reset the
password for you.
-- 
Giovanni Bajo



Re: some errors compiling a free program with gcc-3.2, gcc-3.4.4 and gcc-4.0.0 on i386 freebsd -5.2.

2005-07-06 Thread Giovanni Bajo
wangxiuli <[EMAIL PROTECTED]> wrote:

> why? are those errors related to the version of gcc?
> please help

This mailing list is dedicated to the development of GCC itself. Questions
about problems using GCC should generally be directed to
[EMAIL PROTECTED]. Anyway, this is not strictly a GCC question but
mainly a C++ question: your program is likely invalid and you do not know
why. So probably the best forum to ask is a newsgroup like
comp.lang.c++.moderated.
-- 
Giovanni Bajo



Stage 2 ends?

2005-07-07 Thread Giovanni Bajo
Mark,

I have a simple C++ patch which I need to clean up for submission (it makes us
not print default arguments for template parameters in diagnostics, which is
much requested by the Boost community). It doesn't qualify for Stage 3,
though, so I would like to know the current status of Stage 2. As things
stand now, Stage 2 ends tomorrow.

Giovanni Bajo



Re: Some notes on the Wiki (was: 4.1 news item)

2005-07-11 Thread Giovanni Bajo
Gerald Pfeifer <[EMAIL PROTECTED]> wrote:

> It was reviewed the very same day it was submitted:
>
>   http://gcc.gnu.org/ml/gcc-patches/2004-06/msg00313.html
>   http://gcc.gnu.org/ml/gcc-patches/2004-06/msg00321.html

Yes. And the review was very detailed, and suggested that I redo the work
almost from scratch, scattering the information across a dozen files in
both WWW and TeX format, where it really belongs (including Dejagnu's
documentation), *plus* I could also provide the existing page as a tutorial
with references.

I would like to thank you and Joseph Myers for the accurate and fast review.
The point is that it already took me long enough to prepare that patch, and
I never had the time (nor the will, let me admit) to go back and redo the
work in a different way.

What happened next is that I started providing the link to the gcc-patches
mail over IRC, then on the list, and then in private mail. And people
were thanking me for it, because it was very helpful. So I realized that,
while the improvements you suggested were legitimate and correct, that text
was already very useful *the way it is*, *right now*. So it was put on the
Wiki. And I know many people have read it and found it useful. If you want,
you can add a plea at the bottom of the Wiki page, summarizing the reviews and
asking volunteers to incorporate it into the documentation in the proper
places.

I already expressed my concerns about the way documentation patches work in
other threads. I myself am not interested in contributing documentation
patches in TeX (and I am pretty discouraged about the WWW patches, even if I do
those regularly, as you well know). Instead, I have contributed many things to
the Wiki. Given the way things like Wikipedia work out, I think we need to
either rethink our documentation system or, if there is too much politics
going on with the FSF, accept the fact that the documentation *is* going to
be forked, and set up a workflow to contribute material back from the wiki to
the official documentation.

My personal position is that blocking documentation patches on review (as
happens with code) is wrong. The worst thing that can happen is that a
documentation patch is wrong, and the doc maintainer can revert it
in literally seconds (using the Wiki; in minutes or hours using TeX).
Nobody is going to be blocked by this; no bootstrap will be broken; no wrong
code will be generated. This ain't code. In many common cases, the
documentation will be useful effectively immediately, and
typos/subtleties/formatting can be refined by others over time.
-- 
Giovanni Bajo



Re: Some notes on the Wiki (was: 4.1 news item)

2005-07-11 Thread Giovanni Bajo
Joseph S. Myers <[EMAIL PROTECTED]> wrote:

>> Nobody is going to be blocked by this; no bootstrap will be broken; no
>> wrong code will be generated. This ain't code. In many common cases, the
>
> Wrong code will be generated when someone relies on subtly wrong
> information in the documentation.  It is a well-established failure mode
> even on the best wikis (i.e. Wikipedia) that someone inserts subtly wrong
> information because of not knowing better (or not knowing that there is
> controversy among experts about which of two versions is correct) and this
> does not get corrected soon if at all.

That is right, but there are levels of interest. The debate can be about some
small detail, while the big picture is still good enough for beginners or
intermediates. Having something which is possibly wrong in some subtle
details is still better than having nothing at all. As I said, I am a strong
supporter of "commit now, refine later".

> (b) I don't see Wikipedia edits routinely getting
> reverted simply for lack of substantiating evidence.

Yeah, but I use it widely and it is still very useful. It may not be 1000%
correct in every word; so what? Are printed books any better? Or is there
only one real truth? We're entering philosophy here. What is certain is that
Wikipedia wouldn't be 1/1000th of what it is if there were a review process
for each patch being submitted.

> Perhaps the wiki could automatically send all changes to gcc-patches to
> assist in review?


I strongly support this (and was going to suggest this myself). I'd rather
it be another list though, say wiki-patches or doc-patches, because of the
amount of traffic that is going to be generated (think of all those small
typo fixes, or spam reverts).
-- 
Giovanni Bajo



[ATTN: Steering Committee] Management of new ports

2005-07-12 Thread Giovanni Bajo
Hello,

there seems to be a problem with the management of new ports.

- Documentation is inadequate. There is a picky checklist to follow, but
it does not cover the interesting things, that is, the list of de-facto
technical standards that we expect new ports to follow (with respect to
incomplete transitions). For instance, most maintainers have expressed
negative feelings toward integrating new ports using cc0, or with
define_peephole, or with ASM prologues/epilogues. I proposed a patch in
the past to document this, and it was rejected on the basis that there
is no official statement on the matter. I think the SC should take a position
on how to resolve this issue (see below for my suggestion), otherwise we are
going to really annoy people.

- At this point, there are at least three pending new ports, one of which
has not been reviewed for 9 months already. The only comments they have
received are the usual generic ones, like "document every function and
argument" or "you need a copyright assignment and you should volunteer to
maintain this". Once these nitpicks are addressed, the patch goes unreviewed
forever. I believe we are losing valuable contributions with this behaviour.
The only new port that went in is Blackfin, because it was contributed by a
GWP maintainer who was allowed to commit it without waiting for the review.

- I have read, several times, messages from users who believe that GCC is
changing direction, going towards supporting only a handful of very common
systems instead of being a generic compiler open to every small platform the
open source community needs to support. I do not believe this is the official
mission statement of GCC, but the fact that some users are *starting* to
believe it should make the SC consider that maybe we need to be more open
towards new ports.

I'm kindly asking the SC:

1) An official statement about whether GCC is going to still accept every
new port which is technically fit and actively maintained, or is now going
to restrict the number of targets to the few most important systems for
maintenance reasons.

2) An official up-to-date list of technical requirements for a new port to
be accepted.

3) Some sort of formal procedure which gives some basic guarantee that new
ports will be reviewed and eventually accepted. For instance, one
solution could be auto-approval for new ports which follow the new technical
list and have active maintainers, if the patch has gone unreviewed for some
months.

I think it could be a good idea to appoint a new maintainer in charge of
new ports, at least to do the leg-work (checklist validation of the port)
and to provide some official feedback to the contributors.

Thanks.
-- 
Giovanni Bajo



Re: more on duplicate decls

2005-07-13 Thread Giovanni Bajo
Kenneth Zadeck <[EMAIL PROTECTED]> wrote:

> Are you saying that if I wait til the front end is finished that I do
> not have to worry about the default args problem you mention above?

Yes. The middle-end does not know anything about default arguments: they are
used only internally by the C++ frontend. By the time the trees enter the
middle-end, the FUNCTION_DECLs no longer need the default arguments, the
CALL_EXPRs are filled in correctly, and programmer errors have already been
reported.

> Should I fix it this simple way or should I let a c++ front end person
> fix it as the decls are created?


I believe the simple way is faster and lets you continue with your work. The
proper fix does not look easy at all.
Giovanni Bajo



Re: MEMBER_TYPE and CV qualifiers

2005-07-18 Thread Giovanni Bajo
Mark Mitchell <[EMAIL PROTECTED]> wrote:

> The function type is no more
> cv-qualified than any other function type; the only thing that's
> cv-qualified is the type pointed to by the first argument.

The standard does not agree with you, though; see 9.3.1/3. In fact, what
happens is that we currently deduce the type incorrectly (PR 8271) because the
METHOD_TYPE is not cv-qualified. So either we special-case
this, or we represent it correctly.

FWIW, Jason agreed that the right way to fix this problem is to put cv
qualification on METHOD_TYPE, rather than special-case hacks:

http://gcc.gnu.org/ml/gcc-patches/2004-07/msg00550.html
http://gcc.gnu.org/ml/gcc-patches/2004-07/msg00630.html
-- 
Giovanni Bajo



Re: Problems on Fedora Core 4

2005-07-20 Thread Giovanni Bajo
Michael Gatford <[EMAIL PROTECTED]> wrote:

> std::map::const_iterator functionIterator =
> quickfindtag.find(funcname);

You are missing a typename keyword here:

typename std::map::const_iterator functionIterator = 

See: http://gcc.gnu.org/gcc-3.4/changes.html, C++ section.
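A minimal complete illustration of the rule (has_key is a hypothetical
helper): inside a template, const_iterator is a dependent name, so the
typename keyword is required:

```cpp
#include <map>
#include <string>

template <typename K, typename V>
bool has_key(const std::map<K, V> &m, const K &key)
{
  // Without 'typename', GCC 3.4 and later correctly reject this
  // declaration: const_iterator depends on the template parameters.
  typename std::map<K, V>::const_iterator it = m.find(key);
  return it != m.end();
}
```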
-- 
Giovanni Bajo


Re: extension to -fdump-tree-*-raw

2005-07-22 Thread Giovanni Bajo
Ebke, Hans-Christian <[EMAIL PROTECTED]> wrote:

> So to resolve that problem I took the gcc 4.0.1 source code and patched
> tree.h and tree-dump.c. The patched version introduces two new options for
> -fdump-tree: The "parseable" option which produces unambiguous and easier
> to parse but otherwise similar output to "raw" and the "maskstringcst" option
> which produces output with the string constants masked since this makes
> parsing the output even easier and I'm not interested in the string
> constants.


You could write some code to escape special characters, so as to write
something like:
@54 string_cst   type: @61 strg: "wrong
type:\n\0\0\xaf\x03\x03foo\"bar"  lngt: 19

This would not need a different special option.
-- 
Giovanni Bajo



Re: extension to -fdump-tree-*-raw

2005-07-22 Thread Giovanni Bajo
Ebke, Hans-Christian <[EMAIL PROTECTED]> wrote:

>   I have to write this in Outlook, so I don't even try to get the quoting
> right. Sorry. :-(

http://jump.to/outlook-quotefix

> But it would break applications relying on the old format.

There is no format either. dump-tree output is *very* specific to GCC
internals, and it can change dramatically between releases. OK, maybe not the
syntax, but the semantics. I wouldn't care about the syntax at that point.
-- 
Giovanni Bajo



Re: gcc 3.3.6 - stack corruption questions

2005-07-25 Thread Giovanni Bajo
Louis LeBlanc <[EMAIL PROTECTED]> wrote:

> The problem is I'm getting core dumps (SEGV) that appears to come from
> this code when I know it shouldn't be in the execution path.  The code
> in question is switched on by a command line argument only, and the
> process is managed by a parent process that monitors and manages it's
> execution - reporting crashes and restarting it if necessary.

Looks like a bug hidden in your code. Several things you could try:

- valgrind
- GCC 4.0 with -fmudflap
- GCC 4.1 CVS with -fstack-protector
-- 
Giovanni Bajo


Re: gcc 3.3.6 - stack corruption questions

2005-07-25 Thread Giovanni Bajo
Louis LeBlanc <[EMAIL PROTECTED]> wrote:

> I added the -fstack-check switch to my makefile and recompiled with
> various optimizations.  I was pretty surprised at the file sizes that
> showed up:
> 
> No Optimization:
> -rwxr-xr-x  1 leblanc  daemon  1128660 Jul 25 16:25 myprocess*
> 
> Optimized with -O2
> -rwxr-xr-x  1 leblanc  daemon  1058228 Jul 25 17:36 myprocess*
> 
> Optimized with -O3
> -rwxr-xr-x  1 leblanc  daemon  1129792 Jul 25 17:32 myprocess*
> 
> I would have expected much different results.  Shouldn't the file
> sizes be smaller (at least a little) with the -O3 switch?  Maybe
> there's a loop unrolled to make it faster, resulting in a larger
> codebase?


Or inlining, or many other things. If you care about size, use -Os.
-- 
Giovanni Bajo

