Re: Fix random_sample_n and random_shuffle when RAND_MAX is small

2019-01-20 Thread Giovanni Bajo
Il giorno mar 15 gen 2019 alle ore 03:38 Jonathan Wakely 
ha scritto:

> On 12/12/18 22:31 +0100, Giovanni Bajo wrote:
> >Hello,
> >
> >we hit a bug today while cross-compiling a C++ program with mingw32:
> >if random_shuffle or random_sample_n are called with a sequence of
> >elements whose length is higher than RAND_MAX, the functions don't
> >behave as expected because they ignore elements beyond RAND_MAX. This
> >does not happen often on Linux where glibc defines RAND_MAX to 2**31,
> >but mingw32 (all released versions) relies on the very old msvcrt.lib,
> >where RAND_MAX is just 2**15.
> >
> >I found mentions of this problem in 2011
> >(http://mingw-users.1079350.n2.nabble.com/RAND-MAX-still-16bit-td6299546.html)
> >and 2006
> >(https://mingw-users.narkive.com/gAIO4G5V/rand-max-problem-why-is-it-only-16-bit).
> >
> >I'm attaching a proof-of-concept patch that fixes the problem by
> >introducing an embedded xorshift generator, seeded with std::rand (so
> >that the functions still depend on srand — it looks like this is not
> >strictly required by the standard, but it sounds like a good thing to
> >do for backward compatibility with existing programs). I was wondering
> >if this approach is OK or something else is preferred.
>
> I'd prefer not to introduce that change unconditionally. The existing
> code works fine when std::distance(first, last) < RAND_MAX, and as we
> have random access iterators we can check that cheaply.
>
> We'd prefer a bug report in Bugzilla with a testcase that demonstrates
> the bug. A portable regression test for our testsuite might not be
> practical if it needs more than RAND_MAX elements, but one that runs
> for mingw and verifies the fix there would be needed.
>
> See https://gcc.gnu.org/contribute.html#patches for guidelines for
> submitting patches (and the rest of the page for other requirements,
> like copyright assignment or disclaimers).
>

Thanks Jonathan. We have opened a Bugzilla report here:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=88935

In the bug, we highlighted that the current algorithm is also (less
severely) broken when the number of elements is smaller than but close to
RAND_MAX; the bias shrinks as the length moves further away from RAND_MAX.
Would you still prefer a different version of the algorithm, gated by a
comparison against RAND_MAX? Our patch fixes everything by switching to an
inline 64-bit PRNG seeded by std::rand().
-- 
Giovanni Bajo   ::  ra...@develer.com
Develer S.r.l.  ::  http://www.develer.com


Fix random_sample_n and random_shuffle when RAND_MAX is small

2018-12-12 Thread Giovanni Bajo
Hello,

we hit a bug today while cross-compiling a C++ program with mingw32:
if random_shuffle or random_sample_n is called with a sequence of
elements whose length is greater than RAND_MAX, the functions don't
behave as expected, because they ignore elements beyond RAND_MAX. This
does not happen often on Linux, where glibc defines RAND_MAX as 2**31,
but mingw32 (all released versions) relies on the very old msvcrt.lib,
where RAND_MAX is just 2**15.

I found mentions of this problem in 2011
(http://mingw-users.1079350.n2.nabble.com/RAND-MAX-still-16bit-td6299546.html)
and 2006 
(https://mingw-users.narkive.com/gAIO4G5V/rand-max-problem-why-is-it-only-16-bit).

I'm attaching a proof-of-concept patch that fixes the problem by
introducing an embedded xorshift generator, seeded with std::rand (so
that the functions still depend on srand — it looks like this is not
strictly required by the standard, but it sounds like a good thing to
do for backward compatibility with existing programs). I was wondering
if this approach is OK or something else is preferred.

-- 
Giovanni Bajo   ::  ra...@develer.com
Develer S.r.l.  ::  http://www.develer.com


rand.diff
Description: Binary data


Re: Git and GCC

2007-12-07 Thread Giovanni Bajo
On Fri, 2007-12-07 at 14:14 -0800, Jakub Narebski wrote:

   Is SHA a significant portion of the compute during these repacks?
   I should run oprofile...
   SHA1 is almost totally insignificant on x86. It hardly shows up. But
   we have a good optimized version there.
   zlib tends to be a lot more noticeable (especially the
   *uncompression*: it may be faster than compression, but it's done _so_
   much more that it totally dominates).
  
  Have you considered alternatives, like:
  http://www.oberhumer.com/opensource/ucl/
 
   "As compared to LZO, the UCL algorithms achieve a better compression
   ratio but *decompression* is a little bit slower. See below for some
   rough timings."
 
 It is uncompression speed that is more important, because it is used
 much more often.

I know, but the point is not what is fastest, but whether it's fast enough
to drop out of the profiles. I think UCL is fast enough, since it's still
several times faster than zlib. Anyway, LZO is GPL too, so why not consider
it as well? They are both good libraries.
-- 
Giovanni Bajo



Re: Git and GCC

2007-12-07 Thread Giovanni Bajo

On 12/7/2007 6:23 PM, Linus Torvalds wrote:


Is SHA a significant portion of the compute during these repacks?
I should run oprofile...


SHA1 is almost totally insignificant on x86. It hardly shows up. But we 
have a good optimized version there.


zlib tends to be a lot more noticeable (especially the uncompression: it 
may be faster than compression, but it's done _so_ much more that it 
totally dominates).


Have you considered alternatives, like:
http://www.oberhumer.com/opensource/ucl/
--
Giovanni Bajo



Type system functions to their own file?

2007-06-20 Thread Giovanni Bajo

Hi Richard,

what about moving all the type-system related functions to a new file, 
eg: tree-ssa-type.c? I think that makes the intent even clearer.

--
Giovanni Bajo



Re: More vectorizer testcases?

2007-06-18 Thread Giovanni Bajo

On 6/18/2007 1:26 PM, Dorit Nuzman wrote:


these 3 are actually not so simple... the main thing that's blocking 2 of
them right now is that they need support for stores with gaps, which can be
added except the other problem is that the vectorizer thinks it's not
profitable to vectorize them (or rather 2 of them, as does ICC by the way).


When you say "not profitable", is that target-dependent? I would be
satisfied if the vectorizer *could* vectorize it but preferred not to
because it can be done more efficiently on the specific target.


Of course, it would be interesting to still force the vectorizer to
produce the code, so as to compare the vectorized version with the
non-vectorized version and see if it is really right. Is there (or will
there be) an option to turn off cost-based estimation within the
vectorizer?



Since the time you opened these PRs we came quite a bit closer to
vectorizing these (the support for interleaved accesses and for multiple
data-types were also required). It will be fun to add the last missing bit
- the support for the stores-with-gaps. I hope we'll get to it before too
long...


Nice! I'm looking forward to it!


If you have other (hot) code examples that expose different missing
features I think that's always interesting to know about (but if it's like
the codes above then maybe it will not have much added value...).


I have dozens and dozens of loops which I believe could be vectorized and
are not. I don't know whether they are related to stores-with-gaps or not,
though. So, well, I'll open the bug reports and let you do the analysis.
Feel free to close them as duplicates if you think they're not worth
keeping open on their own.

--
Giovanni Bajo



More vectorizer testcases?

2007-06-17 Thread Giovanni Bajo

Hi Dorit,

some years ago I posted these testcases to Bugzilla's GCC:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=18437
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=18438
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=18439

It looks like none of those are vectorized as of GCC 4.3. I read today that
you're asking for more vectorizer testcases, so I was wondering:


1) Shall we add a GCC Bugzilla component for the vectorizer? Currently the
bugs are filed under tree-optimization, which might be a little too generic
these days.


2) Do you need more testcases from geometric code like those above? Those 3
above are pretty simple in fact, so I doubt more complex ones would help,
but I can extract something more from my code if you want...

--
Giovanni Bajo



Activate -mrecip with -ffast-math?

2007-06-17 Thread Giovanni Bajo

Hello,

I was wondering if there are objections to automatically activating Uros'
new -mrecip flag when -ffast-math is specified. It looks like a good match,
since -mrecip is exactly about fast, non-precise mathematics.

--
Giovanni Bajo



Re: Activate -mrecip with -ffast-math?

2007-06-17 Thread Giovanni Bajo

On 17/06/2007 20.20, Uros Bizjak wrote:

I was wondering if there are objections to automatically activating Uros'
new -mrecip flag when -ffast-math is specified. It looks like a good
match, since -mrecip is exactly about fast, non-precise mathematics.


There is a discussion on the gcc-patches@ mailing list about this topic, in
the "Re: [PATCH, middle-end, i386]: reciprocal rsqrt pass + full recip x86
backend support" thread [1]. The main problem is that one of the Polyhedron
tests segfaults with this patch (not a problem of the recip patch itself,
but the usage of questionable FP equivalence tests and FP indexes into the
array).


My own humble 2c on this is that what Roger Sayle calls the "black & white"
approach is what most users understand. I am no expert on floating point
arithmetic standards; I do understand that by default GCC sticks very
closely to the standards, and that -ffast-math is the option for less
accuracy, more speed. Simple users have simple needs.


I reckon simple users like me want an option that means: activate all
options that speed up floating point calculations at the cost of accuracy.
I believe that option is -ffast-math today. If that's the semantics of the
option, then -mrecip should be added to it.


But if you dispute this, and you believe that the current semantics of
-ffast-math are different (that is: there is a track record of -ffast-math
only including a selection of optimizations by some standard -- like -O2,
which doesn't mean "every optimization"), that's fine by me too. But
please, give me a -ffaster-math or -fuber-fast-math that really means
"turn on everything", thanks.


Either way, -ffast-math should be documented to explain its intended
semantics, and not only how those semantics are currently implemented in
GCC. That way, this discussion will not need to be reopened in the future.

--
Giovanni Bajo



Re: svn problems

2006-05-03 Thread Giovanni Bajo
Mike Stump [EMAIL PROTECTED] wrote:

 Also, with svn 1.4 dev (all I have on this machine)

 Cool, fixed in 1.4 dev.  Now I'm curious if it is fixed in 1.3.x.  I
 really want to update, but, the fortunes of a large company with lots
 of revenue are predicated on this stuff actually working.  :-)  Can I
 rely, given that, on 1.4 dev if it isn't fixed in 1.3.x?

Pay attention: SVN 1.4 silently upgrades your working copy to a new format
the first time it writes to it. This allows it to deliver much better
performance with large working copies like GCC's (IIRC, half the stat
operations for an svn status, for instance), but it makes the working copy
totally incompatible with SVN 1.3 and previous versions. And there is no
official downgrade script (even if Google might turn up some unofficial
script I have seen around).

Giovanni Bajo



Re: GNU Pascal branch

2006-03-31 Thread Giovanni Bajo
Adriaan van Os [EMAIL PROTECTED] wrote:

 and people are
 responsible for fixing all front ends when they do backend changes.

 I don't believe that, they would just say "oh, it is broken" or "oh,
 it is not a primary language" or whatever excuse.

You probably don't follow GCC development closely enough. Every
middle-end/backend change *must* go through a full bootstrap and
regression-free test cycle, for all active languages, whether minor or
major. So it should never happen that a change to the optimizer breaks the
Pascal frontend. If it does, you can blame it on the person who made the
change, and ask him to fix it (or revert it). This is how GCC development
works.

Even for cases where the frontend is not active by default (eg. Ada), people
are still helpful and often test and fix the Ada frontend, when the bugs are
properly reported in Bugzilla. There is cooperation going on. You can often see
GCC middle-end/back-end maintainers cooperating and producing patches to fix
Ada bugs.


 Also, flexibility in choosing the back-end
 version sometimes has its advantages, dependent on the platform,
 given the fact that reported gcc bugs are not always fixed.

 So you could help fix them, instead of forcing people to stick to
 older backends ;-)

 We are not forcing anybody, we offer full choice. Not fixing
 backend-end bugs is what is actually forcing people. And even patches
 that do fix bugs are often not accepted.

There are often reasons for this. Sometimes the patches refer to bugs which
are triggered by the GNU Pascal frontend only, and you need a way to
reproduce the bug to have the patch accepted. Having GNU Pascal integrated
means that you can show a Pascal testcase for the bug, get a proper PR
number in Bugzilla for the issue, and thus have the patch reviewed and
accepted. If you enter the full development cycle of GCC, you make this
process many times easier.

Giovanni Bajo



Re: [PATCH, RFC] Enable IBM long double for PPC32 Linux

2006-02-06 Thread Giovanni Bajo
Mark Mitchell [EMAIL PROTECTED] wrote:

 As I've indicated before, I'm not pleased with this situation either.
 It was as much a surprise to me as anyone.  There is no question that
 this change is not in keeping with our branch policy.

 [...]

 Also, at the time these changes were suggested for 4.1, there were
 none (minimal?) objections; at this point, the developers have been
 working
 on the changes for quite some time.  If there were significant
 objections, they should have been made immediately, and, if necessary,
 the SC involved at that point.

This is a little unfair, though. So now the burden of enforcing the policy
is not on the maintainers who prepare the patches? The people involved in
this change have been working on GCC much longer than those who (later)
objected. They should have known our rules much better, and they should
have asked for buy-in from the SC before starting this work, instead of
silently forcing it in and then seeing whether they could quiet the people
who objected (if any).

I won't buy the "I won't hold up the release for this" argument either,
since it misses the point that many important GCC resources are being used
to fix and test this new feature, instead of putting GCC in shape for the
release. So the release has already been delayed because of this, and will
be delayed even more. That's something which has already happened.

Sorry for the rant, but as a small, minor, spare-time contributor, I have
seen 5-line patches of mine being delayed because "hey, we are in Stage 3
now, are you crazy?". I am not stupid enough to believe that the rules for
RedHat will ever be the same as those enforced against me, but I wouldn't
want to hear that it was *my* duty to monitor RedHat's changes. The SC
could be a little more proactive, rather than waking up only when
explicitly called upon.

Giovanni Bajo



Attribute data structure rewrite?

2006-01-25 Thread Giovanni Bajo
Hi Geoff,

re this mail:
http://gcc.gnu.org/ml/gcc/2004-09/msg01357.html

do you still have the code around? Are you still willing to contribute it?
Maybe you could upload it to a branch just to have it around in case someone is
willing to update/finish it.

Thanks!
Giovanni Bajo



Re: Attribute data structure rewrite?

2006-01-25 Thread Giovanni Bajo
Geoffrey Keating [EMAIL PROTECTED] wrote:

 re this mail:
 http://gcc.gnu.org/ml/gcc/2004-09/msg01357.html
 
 do you still have the code around? Are you still willing to
 contribute it?
 Maybe you could upload it to a branch just to have it around in
 case someone is
 willing to update/finish it.
 
 It's on the stree-branch, I think.  Yes, I'm still willing to
 contribute it and would be very happy to see someone else update &
 commit it.

svn log --stop-on-copy svn://gcc.gnu.org/svn/gcc/branches/stree-branch

shows me only stree-related commits, but not anything about attributes.

Giovanni Bajo



Re: Status and rationale for toplevel bootstrap (was Re: Example of debugging GCC with toplevel bootstrap)

2006-01-16 Thread Giovanni Bajo
Richard Kenner [EMAIL PROTECTED] wrote:

 I would never use ../configure --disable-bootstrap && make bootstrap, but
 I most certainly *would* use ../configure --disable-bootstrap; cd gcc;
 make bootstrap and would be *very* annoyed if it stopped working.

This *will* stop working, but you have good substitutes. Instead of
speaking of makefile target names, let's speak of features. You will always
have a bootstrap sequence and a non-bootstrap sequence, but you'll need to
reconfigure to switch between the two. What used to be make and make
bootstrap are (and will be) ./configure --disable-bootstrap && make and
./configure && make.

So please, propose your usage case. Don't tell us which commands you expect
to be working; tell us about your workflow and why you think it's broken by
the new system. Probably it's just a misunderstanding, since no real
features are being lost with the new system (while many bugs and
annoyances are/will be fixed): it's just a matter of learning how to
reproduce your workflow in the new system.

Paolo posted mails explaining complex workflows like debugging a stage1/2
miscompilation, and how that can be done with the new system. What is that
*you* think you can't do anymore with the new system?
-- 
Giovanni Bajo



Re: Pending bugs for GNU

2006-01-14 Thread Giovanni Bajo
Alfred M. Szmidt [EMAIL PROTECTED] wrote:

Please read the web page:
http://gcc.gnu.org/contribute.html
 
 This assumes a stable access to the 'net so that such information can
 be extracted when one is reading the documentation.  Which isn't
 always the case for everyone.  URL's shouldn't point to important
 information of this type in a info manual.  Is there any way to get it
 included directly?


Yes, contribute a patch using the instructions at:
http://gcc.gnu.org/contribute.html

Giovanni Bajo



Re: merges

2006-01-12 Thread Giovanni Bajo
Bernd Schmidt [EMAIL PROTECTED] wrote:

 mysql> delete from longdescs where length(thetext) > 100;
 Query OK, 251 rows affected (2 min 12.11 sec)
 
 Thank you.
 
 I may just set up a pre-commit hook to check the log message length and
 have it not let you commit if it's ridiculously large.
 
 Maybe checkins on a branch shouldn't affect the bugzilla database at all?

Not sure. We badly need them at least for release branches...
-- 
Giovanni Bajo


Re: keeping branch up to date with mainline

2006-01-10 Thread Giovanni Bajo
Bernd Schmidt [EMAIL PROTECTED] wrote:

 Just tried these instructions on the reload-branch, and it doesn't
 appear to be working as documented:

 [EMAIL PROTECTED]:/local/src/egcs/reload-branch svnmerge.py init
 property 'svnmerge-integrated' set on '.'

This was the correct command to do, assuming that you *never* merged your
branch since its original creation. I inspected the history of the branch
(through 'svn log') and it seems this assumption is correct. So let's spot
the problem...

 [EMAIL PROTECTED]:/local/src/egcs/reload-branch svn status
 ?  svnmerge-commit-message.txt
   M .
 [EMAIL PROTECTED]:/local/src/egcs/reload-branch cat
 svnmerge-commit-message.txt
 Initialized merge tracking via svnmerge with revisions 1-96656 from
 svn+ssh://gcc.gnu.org/svn/gcc/branches/CYGNUS/libjava/testsuite/libjava.lang/Invoke_1.out
 [EMAIL PROTECTED]:/local/src/egcs/reload-branch svn diff

You see that svnmerge believes that the head of your branch is
svn+ssh://gcc.gnu.org/svn/gcc/branches/CYGNUS/libjava/testsuite/libjava.lang/Invoke_1.out.
This is obviously incorrect: it is a branch of the trunk, so this message
should say svn+ssh://gcc.gnu.org/svn/gcc/trunk instead. This appears to be
a bug in svnmerge: it is confused by the weird history created by cvs2svn.
Look at the confusing output of svn log -v -r96657
svn://gcc.gnu.org/svn/gcc/branches/reload-branch, which is the commit that
created the branch. It should just list A /branches/reload-branch (from
/trunk:96656), but instead it contains many other entries.

Anyway, I have fixed the bug in svnmerge and attached a new version for you
in this mail. I have verified that running svnmerge init in a checkout of
the reload-branch now does the right thing. Before doing that, remember to
cleanup the wrong commit you did before: you can remove the svnmerge
property, with this command: svn propdel svnmerge-integrated ., followed
by a commit.

For curiosity: you could have worked around this auto-detection problem by
manually specifying the head of the branch and the branch point: svnmerge
init /trunk -r1-96656.


 (The commit -F/rm combination seems a bit arcane to me, what exactly
 am I doing there?)

Nothing special. The svnmerge tool *never* commits anything on your behalf:
it always modifies your working copy and lets you review what it did and do
the actual commit. To ease your work, it generates a simple commit message
in a text file. When you say svn ci -F svnmerge-commit-message.txt, you're
just telling svn to grab the commit message from the text file that svnmerge
generated for you. Then, you can simply remove the file as it's useless.

Feel free to mail me with other svnmerge problems, I'm happy to provide you
with support.
-- 
Giovanni Bajo
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright (c) 2005, Giovanni Bajo
# Copyright (c) 2004-2005, Awarix, Inc.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA
#
# Author: Archie Cobbs   archie at awarix dot com
# Rewritten in Python by: Giovanni Bajo  rasky at develer dot com
#
# Acknowledgments:
#   John Belmonte john at neggie dot net - metadata and usability
# improvements
#   Blair Zajac blair at orcaware dot com - random improvements
#
# $HeadURL: http://svn.collab.net/repos/svn/trunk/contrib/client-side/svnmerge.py $
# $LastChangedDate: 2006-01-07 18:36:59 +0100 (Sat, 07 Jan 2006) $
# $LastChangedBy: giovannibajo $
# $LastChangedRevision: 18013 $
#
# Differences from official svnmerge:
# - More portable: tested as working in FreeBSD and OS/2.
# - Add double-verbose mode, which shows every svn command executed (-v -v).
# - svnmerge avail now only shows commits in head, not also commits in other
#   parts of the repository.
# - Add svnmerge block to flag some revisions as blocked, so that
#   they will not show up anymore in the available list.  Added also
#   the complementary svnmerge unblock.
# - svnmerge avail has grown two new options:
#   -B to display a list of the blocked revisions
#   -A to display both the blocked and the available revisions.
# - Improved generated commit message to make it machine parsable even when
#   merging commits which are themselves merges.
# - Add --force option to skip working copy check
#
# TODO:
#  - Add svnmerge avail -R: show logs in reverse order

import sys, os, getopt, re

Re: keeping branch up to date with mainline

2006-01-10 Thread Giovanni Bajo
Bernd Schmidt [EMAIL PROTECTED] wrote:

 Anyway, I have fixed the bug in svnmerge and attached a new version for
you
 in this mail.

 Thanks for the quick fix.  This seems to be working now (svnmerge.py
 avail gives me a rather enormous list of revision numbers).

Yes. It'll take some time to merge them, even though it'll go through a
single merge operation. You can use -v -v to add some debug messages and
see the exact svn commands issued, so that you can double-check that it'll
use a single merge operation for all of them.

If you're curious, it displays an enormous list of revisions instead of a
single revision range because svnmerge avail prunes out commits that
happened on other branches, thus showing you only the list of actual
commits to the trunk. This is more useful for release branches, where you
want to review the individual changes and cherry-pick the ones you like
best (of course, it makes more sense for smaller projects anyway). But when
you do an actual merge, it realizes it is sufficient to call svn merge just
once with a single range of revisions (since svn will ignore commits in
other parts of the repository and merge just the changes from the trunk).

 For curiosity: you could have worked around this auto-detection problem by
 manually specifying the head of the branch and the branch point:
 svnmerge init /trunk -r1-96656.

 That complains about
svnmerge: /trunk is not a valid URL or working directory

 but no big deal since the other method works.

Ah right, I forgot you need to use a full URL on the command line. It's
rather clumsy, though: I should add support for repository-relative paths
as well.

 (The commit -F/rm combination seems a bit arcane to me, what exactly
 am I doing there?)


 Nothing special. The svnmerge tool *never* commits anything on your behalf:
 it always modifies your working copy and lets you review what it did and do
 the actual commit. To ease your work, it generates a simple commit message
 in a text file. When you say svn ci -F svnmerge-commit-message.txt, you're
 just telling svn to grab the commit message from the text file that
 svnmerge generated for you. Then, you can simply remove the file as it's
 useless.

 Ah ok.  Somehow I got confused with my old CVS mindset of no files
 changed, what am I committing, but I assume it's this property thing.

Yup :) svn status would show you what's changed:

 M .

Notice the 'M' on the second column. Second column is for properties:

Second column: Modifications of a file's or directory's properties
  ' ' no modifications
  'C' Conflicted
  'M' Modified
-- 
Giovanni Bajo




Re: keeping branch up to date with mainline

2006-01-10 Thread Giovanni Bajo
Bernd Schmidt [EMAIL PROTECTED] wrote:

 One additional question, suppose I don't want to merge a huge number of
 revisions in one go, and I do
svnmerge.py merge -r a-small-list-of-revisions
svn diff ;; to verify everything is OK

 Do I then have to commit the merge so far before running svnmerge.py
 again, or can it get all the information it needs from the local
repository?

No need to commit, but you'll have to use --force (-F), as otherwise it'd
abort saying that your working copy is dirty.
-- 
Giovanni Bajo




Re: keeping branch up to date with mainline

2006-01-10 Thread Giovanni Bajo
Bernd Schmidt [EMAIL PROTECTED] wrote:

 One additional question, suppose I don't want to merge a huge number of
 revisions in one go, and I do
svnmerge.py merge -r a-small-list-of-revisions
svn diff ;; to verify everything is OK

 Do I then have to commit the merge so far before running svnmerge.py
 again, or can it get all the information it needs from the local
repository?

Also note that, theoretically, it's better to do it in one go, as you'd end
up with fewer conflicts. This is why svnmerge does it that way (and goes to
great lengths to automatically minimize the number of merge operations it
performs). But if you know what you're doing, you can merge as much as you
want at each go.
-- 
Giovanni Bajo




Re: keeping branch up to date with mainline

2006-01-10 Thread Giovanni Bajo
Bernd Schmidt [EMAIL PROTECTED] wrote:

 One additional question, suppose I don't want to merge a huge number of
 revisions in one go, and I do
   svnmerge.py merge -r a-small-list-of-revisions
   svn diff ;; to verify everything is OK

 Do I then have to commit the merge so far before running svnmerge.py
 again, or can it get all the information it needs from the local

 repository?

 Also note that, theoretically, it's better to do it in one go as you'd end
 up with fewer conflicts.

 Why is that - do you mean merge conflicts?

Yes. I'm speaking of the general case, of course. Think of two trunk
commits modifying the same file at the same place: if you merge in one go,
you'll end up with a single conflict. If you do more merge operations, the
file might conflict twice, leaving you with two sets of merge markers (and
two sets of .rXXX files, etc.). If you clean up after each merge, it's of
course better, but you'll still have to clean up twice instead of once. But
of course, it's up to you to decide what's best for your branch.

 I imagine that most of the revisions from trunk will just apply cleanly
 since the files are unchanged on the branch.  For changes in reload*.*,
 I'd rather merge one revision at a time and make sure I update the
 affected files properly (I very much doubt any large change will apply
 cleanly in those files; they'll need manual intervention).  I assume
 it'll be easier to correct merge errors for every individual change
 rather than trying to piece them together after a monster merge.

I understand, and I reckon this is the best approach for your situation.
svnmerge can handle ranges of changes as well as cherry-picking single
changes, and will correctly keep track of everything. Let me know if it
works out correctly.
-- 
Giovanni Bajo




Re: keeping branch up to date with mainline

2006-01-10 Thread Giovanni Bajo
Bernd Schmidt [EMAIL PROTECTED] wrote:

 One more question (I'm experimenting... excuse the stupid questions
 but  I'd rather not accidentally mess up the branch).  What I did now was,
 first,

~/svnmerge.py merge  -r 96659-96679

 just to get started, then

~/svnmerge.py merge -F -r 96681,big list of numbers,9

 with the list of numbers cutpasted from the output of ~/svnmerge.py
 avail.  (That second command took a rather long time).

There is no need to cut & paste the exact revision numbers. svnmerge avail
shows you the exact revisions that are not merged yet, but svnmerge merge
is smart and won't merge a revision twice, nor merge a revision which
refers to a commit on another branch. If you want to merge everything
up to revision 10, you can just do: ~/svnmerge.py merge -r96681-10,
or even ~/svnmerge.py merge -r1-10.

 Property changes on: .
 ___
 Name: svnmerge-integrated
 - /trunk:1-96656
 + /trunk:1-10,snip,103808-103809,1

 i.e. it seems to contain exactly the revision numbers I didn't want to
 merge yet.  Is this expected?

Yes, even if it might look confusing. If you try svn log -r103809
http://gcc.gnu.org/svn/gcc/trunk, you'll see that the log is empty. If you
then try svn log -v -r103809 http://gcc.gnu.org/svn/gcc, you'll see that
it's a commit on the 4.0 branch. Basically, svnmerge is reporting as
integrated revisions that won't ever need to be merged because they're not
on trunk. It's doing this so that, when you are finished merging from
trunk, your svnmerge-integrated property will be a solid range, like
1-108904 or whatever. In short: trust it :)

As for the slowness, there are a couple of low-hanging fruits which I
haven't addressed yet. One glaring issue is that generating the log message
for a big merge operation can take longer than the merge itself. If you
feel familiar with Python, feel free to comment out the log generation code
in action_merge.

Giovanni Bajo




Re: make all vs make bootstrap

2005-12-16 Thread Giovanni Bajo
Paolo Bonzini [EMAIL PROTECTED] wrote:

 What about bubblestrap?

 (See also http://gcc.gnu.org/bugzilla/show_bug.cgi?id=25438)


 A make from a toplevel is equivalent to the old make bubblestrap
 or make -C bubblestrap.  In practice make just does the right
 thing, compiling all that is needed to not have comparison failures.


I would also note that using make in the cp/ directory at least used to build
cc1plus with the system compiler, without -Werror and with a different set of
warnings. There have been many cases where a patch tested by "C++ rules" (which
is *not* a full bootstrap, but just "build compiler, build libjava, check c++")
resulted in a bootstrap failure because of an uninitialized variable or
something similar. The correct solution used to be (guess what!) make
bubblestrap to build the compiler. Now, it's simply the default :)

Giovanni Bajo



Re: GCC middle-end

2005-12-15 Thread Giovanni Bajo
Thomas Lavergne [EMAIL PROTECTED] wrote:

 *) is GIMPLE truly front-end independent (no more specific hacks)?

Yes, mostly. There are some so-called lang-hooks through which the middle-end
asks the frontend for additional information, but they are mainly about the
type system (e.g. there is a langhook to ask the frontend whether two types
are equivalent).

 *) are all tree SSA optimizations run on individual function's GIMPLE
 trees or do some start at GENERIC level?

Most passes work with GIMPLE (either high GIMPLE or low GIMPLE) in SSA form,
but others work without SSA form or even on GENERIC (for instance,
tree-nested.c can be called an optimization pass - it un-nests the nested
functions - and it works on GENERIC). Some loop optimizations work in
the so-called loop-closed SSA form. Anyway, you can say that *most* passes
work in GIMPLE form.

 *) are all front-end designed to pass GENERIC to the middle-end (which
 then gimplify) or do some directly provide GIMPLE trees?

C and C++ fully use trees as the internal representation of the frontend data.
Basically, they build and work with GENERIC trees enriched with other
frontend-specific trees. They then gimplify these trees, using
frontend-specific gimplification callbacks that know how to gimplify the
frontend-specific trees. Other frontends like Fortran and Ada work with
totally different data structures and build GENERIC trees only as a last
conversion step to feed the middle-end (the trees are then passed to the
gimplifier).

 *) are the various function's GIMPLE tree the only structure that the
 middle-end needs to write back-end code or are there other data that
 should be created/obtained from the front-end?

Not sure I understand the question. It looks like you could answer this
yourself by opening one of the simplest optimization passes (see tree-*.c)
and looking at how it works.

 *) is there a tool to browse the gimpled tree of your functions (before
 and after some optimizations)?

Not sure what you mean by "browse". You can use -fdump-tree-all to dump the
trees after each and every optimization step, so you can see what each pass
did.

 I rapidly saw an option to dump the generic/gimpled tree to a C form.
 (-fdump-generic-tree). Do you think it is theoretically possible to
 design inverse front-end which would translate the gimple tree to a
 selected language (transforming C++ code to C, C99 to C89, fortran95 to
 C,...)? Another way of putting this is: did we lose information or
 can a smart tool recover from gimplifying? Are some open source projects
 already looking at these aspects?

I believe you could write a special backend which generates C code from the
final GIMPLE form, so after all optimizations happened. You can't do that
from the tree dumps though, as they don't carry enough information.

 I am sorry, these are a lot of questions. You can point me to any
 forum/mailing list archive or document: I can learn by myself. Are gcc's
 internal documentation available somewhere (without installing the gcc
 from source)?


The source code contains the source TeX files for the internals
documentation. "make doc" should build the DVI version even without
compiling the compiler itself.
-- 
Giovanni Bajo



Re: SVN tags, branches and checkouts

2005-12-13 Thread Giovanni Bajo
John David Anglin [EMAIL PROTECTED] wrote:

 I find the documentation on checking out branches, particularly
 for branch releases, confusing.  It doesn't say you need to use tags
 instead of branches for releases.

Which documentation, exactly?

Giovanni Bajo



Re: gcc 4.1 code size

2005-12-06 Thread Giovanni Bajo
Andreas Killaitis [EMAIL PROTECTED] wrote:

 I was now astonished
 that my tests with gcc 4.1 showed that the library size has
 grown by about 10%.

 The used compile options are:

 [...]
 -O3


Don't expect GCC to optimize for code size at -O3. -O3 means -O2 *and* inline
as much as makes sense, so it's really not a good option if you're
interested in code size.

Giovanni Bajo



Re: identifying c++ aliasing violations

2005-12-05 Thread Giovanni Bajo
Jack Howarth [EMAIL PROTECTED] wrote:

 What exactly is the implication of having a hundred or more of these in
 an application being built with gcc/g++ 4.x at -O3? Does it only risk
 random crashes in the generated code or does it also impact the quality
 of the generated code in terms of execution speed?


The main problem is wrong-code generation. Assuming the warning is right and
does not flag false positives, you should have those fixed. I don't think the
quality of the generated code would get better with this change.

However, it's pretty strange that C++ code generation is worse with GCC 4: I
have seen many C++ programs which actually got much faster thanks to
higher-level optimizations (such as SRA). You should really try to identify
the inner loops which might have been slowed down and submit those as bug
reports in our Bugzilla.

Giovanni Bajo



Re: [PATCH] New predicate covering NOP_EXPR and CONVERT_EXPR

2005-12-02 Thread Giovanni Bajo
Richard Kenner [EMAIL PROTECTED] wrote:

  Java has to be fixed (probably with a frontend-specific tree code),
  and maybe also Ada.
 
 Ada does not.  It generates CONVERT_EXPR vs. NOP_EXPR in some attempt
 to preserve some old-semantic difference but always treats them the
 same when looking at trees.


This is jolly good news to me, thanks.
-- 
Giovanni Bajo


Possible size-opt patch

2005-12-02 Thread Giovanni Bajo
Bernd,

I read you're interested in code-size optimizations. I'd like to point you
to this patch:
http://gcc.gnu.org/ml/gcc-patches/2005-05/msg00554.html

which was never finished nor committed (I don't know if RTH has a newer
version, though). It would be of great help for code size issues on AVR,
but I don't know if and how much it'd help for Blackfin.
-- 
Giovanni Bajo



Re: Wiki pages on tests cases

2005-11-27 Thread Giovanni Bajo
Jonathan Wakely [EMAIL PROTECTED] wrote:

 http://gcc.gnu.org/wiki/HowToPrepareATestcase
 http://gcc.gnu.org/wiki/TestCaseWriting

 The second one seems fairly gfortran-specific, but doesn't mention
 that fact anywhere.  If the second page adds info that is generally
 useful to the whole compiler, that info should be in the first page.
 If it doesn't, it should clearly say it is gfortran-specific, or
 should be removed.

 Yes, I know it's a wiki and I can do this myself, but I only have so
 much spare time and maybe the second one was added for a good reason.


I think they should be merged. The second page (which I had never seen before)
takes a more tutorial-like approach. Maybe it could be inserted somewhere at
the start of the first page as a quick example or something.

Giovanni Bajo



Re: Default arguments and FUNCTION_TYPEs

2005-11-24 Thread Giovanni Bajo
Nathan Sidwell [EMAIL PROTECTED] wrote:

  In the C++ front end, default arguments are recorded in
 FUNCTION_TYPEs instead of being part of the FUNCTION_DECLs.  What are
 the reasons for that?


 There used to be an extension that allowed default arguments on
 function pointer types.  We agreed to kill it, although I don't know
 if it was actually removed.  If that's been done, there's no longer
 any reason.

 I took it out the back and shot it.

 The obvious place is on the DECL_INITIAL of the PARM_DECLs, but I
 don't think they exist until the function is defined.


I heard once that there was some long-term project of storing function
declarations (without corresponding definitions) in a more memory-efficient
representation. Moving default parameters into PARM_DECLs seems a
little backward in this respect. And if your memory is right, requiring
PARM_DECLs to be built just to store default arguments would be even worse.

I understand this has to be done in a separate pass: I was just bringing up the
issue so that, if possible, we could find some place which does not conflict
with that project.

Giovanni Bajo



Re: Default arguments and FUNCTION_TYPEs

2005-11-24 Thread Giovanni Bajo
Gabriel Dos Reis [EMAIL PROTECTED] wrote:

  template<int> struct X { };

 void fu(int a, X<sizeof(a)>) { } // #1

I took a look at PR 17395 and you are probably right. This testcase requires
us to build PARM_DECLs even for function declarations. That's really too
bad.

You should, though, measure memory usage on large C++ testcases when building
PARM_DECLs immediately. If it rises too much, that's a serious regression.
-- 
Giovanni Bajo



Re: Default arguments and FUNCTION_TYPEs

2005-11-24 Thread Giovanni Bajo
Gabriel Dos Reis [EMAIL PROTECTED] wrote:

 Once I'm finished, I'll post the patch and I would probably ask you
 help in the testing department and suggest better concrete
 solution. That PR needs to be fixed.


Fixing a PR by introducing a regression is not a proper fix for any bug,
*especially* for a bug which is not a regression itself. Given that it never
worked before, there are no GCC users depending on it. Of course, it'd be
good to fix it, but it must be done in the proper way.

I'm glad to help with testing if I have time.
-- 
Giovanni Bajo



Re: Abnormal behavior of malloc in gcc-3.2.2

2005-11-21 Thread Giovanni Bajo
Sandeep Kumar [EMAIL PROTECTED] wrote:

 I didn't get your point. I am allocating space only for 400 integers;
 then as soon as the loop crosses the value of 400, shouldn't it
 have given a segmentation violation?

No. For that to happen, you need some memory checker. GCC has -fmudflap; try
with that. Recent versions of glibc also have their own internal memory buffer
checker; it probably triggers the segmentation fault when you free the buffer
which you have overrun.

Giovanni Bajo



Re: GCC-3.4.5 Release status report

2005-11-21 Thread Giovanni Bajo
Gabriel Dos Reis [EMAIL PROTECTED] wrote:

 I'm planning a release for the end of the month.
 I've fired the release script to build a pre-release tarball,
 which should be ready any moment now.

Thanks. Are there official plans for the 3.4 branch after this release?
-- 
Giovanni Bajo


Re: typedefs

2005-11-21 Thread Giovanni Bajo
Manu Abraham [EMAIL PROTECTED] wrote:

 When one does a

 typedef uint8_t array[10];

 what does really happen ?

This question does not concern the development of the GCC compiler in any way,
so it does not belong here. Please post it to a support forum for the C
language.

Giovanni Bajo



Re: Overwrite a file with svn update?

2005-11-19 Thread Giovanni Bajo
Steve Kargl [EMAIL PROTECTED] wrote:

 Perhaps, I missed the required options, but I'll
 ask an obvious question anyway.  Often when testing
 a patch, one will often place a new testcase in
 gcc/testsuite/*.  This new file is not under control
 of svn.  After review, the patch is committed to the
 tree.  Now, I want to update my local repository.
 I issue svn update and the result is

 svn: Failed to add file 'gcc/testsuite/gfortran.dg/fgetc_1.f90': \
 object of the same name already exists

 which is indeed correct.  So, is there an option to tell
 svn to blow away files that conflict with files in the
 repository.


Why don't you just "svn add" the file? That way you won't miss it in the
commit, in the diffs, in the stats, and whatnot. "svn add" is a totally local
operation and does not require write access to the remote repository. You can
even do it on a tree checked out with svn:// and later switch the tree to
svn+ssh:// to commit.

Giovanni Bajo



Re: Register Allocation

2005-11-19 Thread Giovanni Bajo
Ian Lance Taylor ian@airs.com wrote:

 Reload inheritance is a complex idea threaded through reload.

In fact, this was cleared up on the reload branch (as documented here:
http://gcc.gnu.org/wiki/BerndSchmidt), which it seems nobody has enough skill
and time to get into a mergeable state. Now that we're entering Stage 1, it'd
be great if somebody like you could find some time to work on it :)

Giovanni Bajo



Re: Register Allocation

2005-11-18 Thread Giovanni Bajo
Andrew MacLeod [EMAIL PROTECTED] wrote:

 It is my intention over the next few months to do some of the initial
 underlying infrastructure bits upon which the entire document is
 based. Presuming that proceeds OK and I can build up the data
 structures I am looking for, I'll move on from there.  If anyone
 wants to help, I'm sure there will be some juicy things to do.


1) Do you believe there will be sub-parts of this project which could be
carried out successfully and efficiently by programmers without previous RTL
experience? IIUC, the optimizers will be basically abstracted away from RTL
details, but I was thinking of something within the critical path.

2) As for the new tables needed by the RTL library, I suppose they will be
generated by some new gen* program. Did you consider using a scripting
language as a fast prototype to munge the .md files and generate those tables?
I believe it would allow faster initial development and more flexibility for
changes. Much later, it could be rewritten in C.

Giovanni Bajo



Re: Link-time optimzation

2005-11-17 Thread Giovanni Bajo
Daniel Berlin [EMAIL PROTECTED] wrote:

 Thanks for woking on this. Any specific reason why using the LLVM
 bytecode wasn't taken into account?

 It was.
 A large number of alternatives were explored, including CIL, the JVM,
 LLVM, etc.

 It is proven to be stable, high-level enough to
 perform any kind of needed optimization,

 This is not true, unfortunately.
 That's why it is called low level virtual machine.
 It doesn't have things we'd like to do high level optimizations on,
 like dynamic_cast removal, etc.


Anyway, *slightly* extending a VM which already exists, is
production-ready, is GPL-compatible, and is supported by a full toolchain
(including interpreters, disassemblers, JITs, loaders, optimizers...) looks
like a much better deal. Also, I'm sure Chris would be willing to provide us
with all the needed help.

I also think CIL would have worked admirably. I'm sure the reasons to refuse
it are more political than technical, so it's useless to go into further
details, I presume.

Giovanni Bajo



Re: Link-time optimzation

2005-11-16 Thread Giovanni Bajo
Mark Mitchell [EMAIL PROTECTED] wrote:

 Thoughts?


Thanks for working on this. Any specific reason why using the LLVM bytecode
wasn't taken into account? It is proven to be stable, high-level enough to
perform any kind of needed optimization, and already features interpreters,
JITs and whatnot.

Giovanni Bajo



Re: Change in order of evaluation in 4.0.2

2005-11-14 Thread Giovanni Bajo
[EMAIL PROTECTED] wrote:

 I appreciate that this is quite valid according to the ANSI C
 standard and the team are within their rights to change this,
 but I am curious to know the reasoning behind the change which
 seems to me to make the object code less optimal.


It is not a deliberate change. GCC 4.0 features more than 40 new optimization
passes, which rearrange and optimize your code in different ways. They all
know that they can freely reorder the evaluation of operands between
sequence points, so the order you end up with is essentially arbitrary.

As for the code being less or more optimal, this is a totally orthogonal
issue. I suggest you inspect the assembly code to see if there is really a
pessimization. If there is, feel free to file a bug report in Bugzilla about
it.
-- 
Giovanni Bajo



Re: [RFC] Enabling loop unrolls at -O3?

2005-11-06 Thread Giovanni Bajo
Gabriel Dos Reis [EMAIL PROTECTED] wrote:

 You must not have been paying attention to one of the most frequent
 complaints about gcc, which is that it is dog slow already ;-)

 Sure, but to me -O2 says you don't care much about compilation time.

 If the Ada front-end wishes, it can make special flags for its own
 needs...


Why are you speaking of the Ada frontend?

If -O1 means "optimize, but be fast", what does -O2 mean? And what does -O3
mean? If -O2 just means "the current set of optimizers that we put in -O2",
that's unsatisfying to me.

Giovanni Bajo



[RFC] Enabling loop unrolls at -O3?

2005-11-05 Thread Giovanni Bajo
Hello,

any specific reason why we still don't unroll loops by default at -O3? It
looks like it gives better results on most benchmarks, and many people always
use it together with -O3 to say "really optimize, I mean it".

Giovanni Bajo



Index: opts.c
===
--- opts.c  (revision 106352)
+++ opts.c  (working copy)
@@ -589,6 +589,7 @@ decode_options (unsigned int argc, const
   flag_inline_functions = 1;
   flag_unswitch_loops = 1;
   flag_gcse_after_reload = 1;
+  flag_unroll_loops = 1;
 }

   if (optimize < 2 || optimize_size)



Re: [RFC] Enabling loop unrolls at -O3?

2005-11-05 Thread Giovanni Bajo
Steven Bosscher [EMAIL PROTECTED] wrote:

 Steven Bosscher [EMAIL PROTECTED] wrote:
 I guess the issue is what does huge mean, it is hard to discuss
 based on loaded adjectives taking the place of data :-)

 Huge here means 15-20% on x86* hosts.

 I don't consider this huge for -O3. I think -O3 can be slower if it
 achieves better code, and -funroll-loops makes it do just that.

 I would certainly agree, I am not sure I even find it huge for -O2.
 After all 20% compile time represents a couple of months advance
 in computer hardware (and that is true across the board, even if
 you are talking about upgrading 1990 hardware to 1991 hardware :-))

 You must not have been paying attention to one of the most frequent
 complaints about gcc, which is that it is dog slow already ;-)


It's not by disabling the optimizers that you make it faster.

I believe you are missing my point. What is the GCC command-line option for
"try to optimize as best as you can, please, I don't care about compile time"?
I believe that should be -O3. Otherwise let's make -O4. Or -O666. The only
real argument I have heard so far is that -funroll-loops isn't valuable
without profile feedback. My experience is that this isn't true; I certainly
use it to good effect in my own code. But it looks like the only argument that
could make a difference is SPEC, and SPEC is not freely available. So I'd love
it if someone could run SPEC with -funroll-loops for me.

Giovanni Bajo



Re: diffing directories with merged-as-deleted files?

2005-11-03 Thread Giovanni Bajo
Joern RENNECKE [EMAIL PROTECTED] wrote:

 P.S.: When I use a diff-cmd with -N, I not only get a diff for the 44
 files that are different,
 but also a header for each of the 752 files that are identical, i.e.
 two lines for each file like:

 Index: gcc/tree-ssa-operands.c
 ===

 cvs would never do such nonsense.


Absolutely! It would just print all the directory names in the middle of the
diffs. I call that nonsense as well.

Giovanni Bajo



Re: svn: Is it there yet?

2005-10-30 Thread Giovanni Bajo
Paul Thomas [EMAIL PROTECTED] wrote:

 [EMAIL PROTECTED] gcc-svn]# svn up
 svn+ssh://[EMAIL PROTECTED]/svn/gcc/trunk svn:
 'svn+ssh://[EMAIL PROTECTED]/svn/gcc' is not a working copy


That command makes no sense, as "svn help up" would tell you. If you want to
check out the trunk, you need "svn co". If you want to update an already
checked-out working copy, "svn up" is sufficient. The wiki covers all of this.

As for your original problem, you can find out names by exploring the
repository through "svn ls" (or ViewCVS). This is covered in the Wiki, too.

Giovanni Bajo



Re: Tag reorg

2005-10-30 Thread Giovanni Bajo
Joseph S. Myers [EMAIL PROTECTED] wrote:

 For old branches that are dead and of no use (because they are
 merged into newer branches), I'm inclined to rm them, and for old
 branches that have ideas, but may never see the light of day, be
 conservative and leave them alone.

 I'd rather put dead branches which had development but have now been
 merged into newer branches or mainline in branches/closed instead of
 removing them

Why?

I fail to see any reason for this. When you don't need a file anymore, you
delete it. When you don't need a directory anymore, you delete it. I can't see
why it should be any different for branches. Deleting a branch makes life
easier for people looking for branches, reduces the noise, and makes the
repository cleaner.

 Note that the list of Inactive Development Branches in svn.html
 includes dormant branches (development not merged but branch not
 active, but could potentially reactivate) as well ones which have
 been merged or superseded by newer branches.  I think for now
 we should leave the dormant branches
 as-is and just move those which are dead.

There is nothing to gain and everything to lose. If somebody wants to revive a
dormant or closed branch, he can do it with a simple SVN command which takes
less than a minute to type and execute. If we fail to remove them, the
branches will be seen many thousands of times in the "svn ls" output by
people looking for active branches, and they will be just unwanted noise.

Giovanni Bajo



Re: Tag reorg

2005-10-30 Thread Giovanni Bajo
Joseph S. Myers [EMAIL PROTECTED] wrote:

 You can always see them with the [EMAIL PROTECTED] syntax

 ie
 svn ls svn://gcc.gnu.org/svn/gcc/[EMAIL PROTECTED]

 Which requires remembering an arbitrary revision number (i.e., making
 life *harder* not *easier* for people looking for that branch)
 rather than a more meaningful branch name.

And? The revision number can easily be found with an automated "svn log | grep"
command, or can be written in svn.html, or wherever. Let's not forget that
inspecting a dead branch is the *rare* case here. I consider the noise in "svn
ls branches" way more important.

 Abstractly, the history where a branch has been merged into mainline
 is

   mainline ------------------- current mainline
      \                        /
       \------- branch -------/

 (where the branch is ancestral to the current mainline, and logically
 branch-of-today is a hard link to mainline-of-today), not

   mainline ------------------- current mainline
      \
       \------- branch ------- dead

 and while version control doesn't effectively represent the first form
 (multiple versions at the same time being ancestral to the same
 current
 version), I don't think tricks with revision numbers should be needed
 to see the ancestry of mainline.

Say you are looking into the history/annotation of a certain file, and you see
that a certain line/function you are interested in was changed exactly by a
commit which merged a branch. Then, the commit log will clearly mention the
branch name. Moreover, the commit number is exactly the revision number you
need to start doing "svn log" on the branch. So, you'll always have what you
need there, handy for use. I would find it *way* more annoying to have to
search for the branch in a complex /branches hierarchy (was it moved to
/branches/closed? Or is there a /branches/Apple/closed? Or is there a /closed?
Ah nice, it was /branches/codesourcery/jsm/closed).

 In order to avoid referencing arbitrary revision numbers

Here's the problem! In SVN, it is really common to reference arbitrary revision
numbers, since the same number conveys multiple meanings (an exact reference to
a tree, an exact reference to an atomic commit). Revision numbers *will*
proliferate in bug reports, commit messages, GCC mailing lists,
and whatnot. This is something which just happens with SVN repositories, and
people get used to it (and mostly find it very handy). You just can't assign a
name to everything, and I don't see how "the commit that broke mainline two
days ago" is less important than "the commit which closed an internal Apple
development branch". If you're trying to fight the use of revision numbers,
you're basically running against a thick wall.

I have a feeling that this discussion is going to be unproductive because it is
made just of our intuitions and expectations. There's no real-world concrete
use case.

Giovanni Bajo



Re: Tag reorg

2005-10-30 Thread Giovanni Bajo
Gabriel Dos Reis [EMAIL PROTECTED] wrote:

 You can always see them with the [EMAIL PROTECTED] syntax
 
 ie
 svn ls svn://gcc.gnu.org/svn/gcc/[EMAIL PROTECTED]
 
 Which requires remembering an arbitrary revision number (i.e.,
 making life *harder* not *easier* for people looking for that
 branch) rather than a more meaningful branch name.
 
 And?
 
 And meaningful name matters.
  I would hate to see the fanaticism taken to the point where just because
  you can do "svn bar | baz fhu" means we should not provide means for
  not-so-machine-mimicking humans.


Fanaticism? Keep your flames to yourself.

Giovanni Bajo



Re: quick way to transition a cvs checkout to svn?

2005-10-28 Thread Giovanni Bajo
Paolo Bonzini [EMAIL PROTECTED] wrote:

 Is there a quick way to turn a CVS checkout to SVN, other than making a
 patch and applying to a fresh SVN checkout?

I believe "cvs diff | patch" is the only way; maybe Daniel knows better. Is
there a specific problem with this?
-- 
Giovanni Bajo



Re: Svn doc edit for 4.0 branch name

2005-10-28 Thread Giovanni Bajo
Mike Stump [EMAIL PROTECTED] wrote:

 $ svn switch svn+ssh://gcc.gnu.org/svn/gcc/branches/gcc-4.0-branch
 
 to:
 
 $ svn switch svn+ssh://gcc.gnu.org/svn/gcc/branches/gcc-4_0-branch
 
 :-(  Took me a while to figure out what was wrong.  :-(

Sorry about that!
-- 
Giovanni Bajo


Re: New SVN Wiki page

2005-10-27 Thread Giovanni Bajo
Mike Stump [EMAIL PROTECTED] wrote:

 Uhm, I'm not sure how to explain this without being too pedantic.
 Does this
 sound clearer?

 This tool tracks each individual change (fine-grained) and will never
 reapply an already applied change.

 I think that is a high level answer, and completely answers the
 question to people that have the question, while doing as little as
 possible to confuse someone that doesn't even have the question.
 Anyone doing merges should have the question and understand the
 answer.

 Or, you can say it merges in N-1 to N, when one requests a merge of
 N.  A totally different way of expressing the same thing, and conveys
 the same information.

 Sound reasonable?

Thanks for the suggestion; I have incorporated it into the Wiki page, and I
hope it's clearer now.

Giovanni Bajo



Re: ISO/IEC 14882:2003

2005-10-26 Thread Giovanni Bajo
Turner, Keith S [EMAIL PROTECTED] wrote:

 The man page information for the gcc  c++98 option says that the
 compiler will be compliant with The 1998 ISO C++ standard plus
 amendments. Are the amendments referring to the changes to the C++
 standard that is now ISO/IEC 14882:2003. I need to know if ISO/IEC
 14882:2003 is supported by gcc 4.0.2.


Yes, it is partly supported. You can look in the source code, directory
gcc/testsuite/g++.dg/tc1, for testcases which record the status with respect
to 14882:2003. There is one file per defect report: if you see "xfail" in the
comments, it means that the test does not work yet, and thus that defect
report is not implemented yet.

Giovanni Bajo



Re: Out of curiosity: [EMAIL PROTECTED]

2005-10-26 Thread Giovanni Bajo
Paolo Carlini [EMAIL PROTECTED] wrote:

 What are the plans for it?
 Often, I find it very useful, will be renamed to gcc-svn and kept alive?

It'll stay alive, but it'll get less and less useful, since it doesn't give
any information you can't find with svn log.
-- 
Giovanni Bajo



Re: New SVN Wiki page

2005-10-25 Thread Giovanni Bajo
Mike Stump [EMAIL PROTECTED] wrote:

 Ok, question about the merge tool.  Does revision N mean all changes
 that predate N that apply, or just N-1 to N?

N-1 to N. You can specify a single commit and it will apply only that.

 So, let's say that 3, 4, 5, 6, 7 are available, can I merge 3 today,
 then 6, the next day, then 5, then all the rest (4 and 7) the next
 day?  If yes, this corresponds to a N-1 to N above.

Yes. The good thing is that it'll keep track of what you've merged, so the list
of available revisions will trim down as you go.

 Could you update the doc to reflect your answer.

Uhm, I'm not sure how to explain this without being too pedantic. Does this
sound clearer?


To do partial merges or cherry-picking, you can pass -r/--revision to svnmerge
merge and name the single commit (or range of commits).


Giovanni Bajo



Re: SVN 1.3?

2005-10-25 Thread Giovanni Bajo
Richard Kenner [EMAIL PROTECTED] wrote:

 The Wiki mentions it, but not where to get it.  Google searches don't
 seem to find the tar file for it.

I'll add a note to the wiki about this. Subversion 1.3 RC1 should be out
real soon now, you can use (and test) that one. The idea is that if the GCC
community finds serious bugs in RC1, those bugs could get fixed before 1.3
final is out.

I believe you could try pulling it from the 1.3 branch on its repository.
Otherwise, use 1.2 for now and wait for 1.3 RC1 in the next couple of days.

The wiki also mentions that you will see a disk usage improvement by using
1.3 RC1. This is also not true *right now*, but will be true after Daniel
converts the CVS repository to SVN for good this weekend.
-- 
Giovanni Bajo



New SVN Wiki page

2005-10-24 Thread Giovanni Bajo
Hello,

I have overhauled http://gcc.gnu.org/wiki/SvnHelp. I'm almost done with it
(it should be ready tomorrow); only a few things are missing now. I believe
I have addressed all the issues raised during the past week on the mailing
list. Also, I have added new information (especially for branch
maintainers), and reorganized things so that they are hopefully easier
to find.

Let me know if I have missed something. Feel free to fix typos and rephrase
my bad English, but I'd like to know if you plan on adding something
substantial, so I can find the best place to put it and avoid duplicating
information.

Thanks,
-- 
Giovanni Bajo





Re: svn tag inspection

2005-10-22 Thread Giovanni Bajo
Jeffrey A Law [EMAIL PROTECTED] wrote:

 In fact, after the subversion conversion is over, we can svn
 delete all those merging tags for good since they're there because
 you can't delete them in CVS but we really don't need them anymore
 (before anybody asks: svn delete keeps history of course).
 Just a minor correction, you can certainly delete cvs tags.  It
 just isn't something I'd necessarily recommend doing :-)

 cvs tag -d tag


I think that is not a versioned operation (but I might be wrong). The point is
that with svn you can remove all those tags as versioned operations, so you
are really not destroying past history, and you're making the tags list much
more meaningful and useful for everyday use. IMO we should do the same with
closed branches: they really serve no purpose in /branches at the current
revision, so we can safely svn rm them (or move them to a /branches/closed
subdir).

Giovanni Bajo



Re: Is the test svn repository working fine?

2005-10-21 Thread Giovanni Bajo
Paolo Carlini [EMAIL PROTECTED] wrote:

 See previous threads about how svn makes multiple connections to the
 server, each requiring authorization.
 
 
 Argh! I glanced briefly over those threads...
 
 While you are at it, are you willing to summarize the solution for this
 annoyance and/or point me to the relevant subthread?

http://gcc.gnu.org/wiki/SSH%20connection%20caching

-- 
Giovanni Bajo


Re: Is the test svn repository working fine?

2005-10-21 Thread Giovanni Bajo
Paolo Carlini [EMAIL PROTECTED] wrote:

 While you are at it, are you willing to summarize the solution for this
 annoyance and/or point me to the relevant subthread?


 http://gcc.gnu.org/wiki/SSH%20connection%20caching


 Ok. The only problem is that, for some reason, I can use only Protocol
 1, not Protocol 2.

 Anyone knows why?

 Maybe I have to send to the overseers a new public key?

Probably. I can never remember which is which, so I always provide both
keys (generated with the -t dsa and -t rsa options of ssh-keygen).

BTW: I just simplified that Wiki page a little.
-- 
Giovanni Bajo



Re: svn tag inspection

2005-10-21 Thread Giovanni Bajo
Mike Stump [EMAIL PROTECTED] wrote:

 I did:

 svn co svn+ssh://gcc.gnu.org/svn/gcc/tags

 in the hopes that I could just update it form time to time, and have
 a list of all tags, but... empty directory...


Besides the fact that you must have used -N, remember that you don't need all
those merging tags anymore. You can either live with a single tag per branch
which you move along, or skip tags entirely and use svnmerge.py (see the
contrib scripts in 1.3). I am going to write some details in the wiki about
this (I wrote svnmerge.py).

In fact, after the subversion conversion is over, we can svn delete all those
merging tags for good since they're there because you can't delete them in CVS
but we really don't need them anymore (before anybody asks: svn delete keeps
history of course).

Giovanni Bajo



Re: A couple more subversion notes

2005-10-20 Thread Giovanni Bajo
Eric Botcazou [EMAIL PROTECTED] wrote:

 I've never created/managed branches or tagged anything in the GCC
 tree.  The important things to me are:

 - time to do a complete check-out on mainline/branch

Check-out is 30% slower because of the time needed to write the duplicate local
copy. On the other hand, there is a nice option, svn switch, which lets you
switch a working copy tree from mainline to any branch and vice versa just by
downloading the delta between the two, which is much faster than checking out
everything from scratch. I can think of this workflow:

- svn co path/to/mainline patch-sparc-frame-pointer   (check out a pristine
tree to work on a specific patch)
- Write/test the patch on mainline. Review/commit it. Committed as r1234567 (a
single number identifying changes to 10 files, 1 rename, 2 deletions).
- From within the patch-sparc-frame-pointer directory, svn switch
path/to/gcc40branch   (switch to the 4.0 branch)
- Backport the patch: svn merge -r1234567 path/to/mainline   (automatically
renames, deletes, and applies modifications).
- Test/commit.
- svn switch path/to/gcc34branch   (switch to the 3.4 branch)
- etc.

So, even if the initial checkout is slower, you have to do it less often.

 - time to do an update on mainline/branch

When updating, cvs/svn first figure out what needs to be updated (in rough
terms) and then start downloading the updates. The latter part (download) is
obviously the same, as they both download compressed deltas over the network.
The former part is many times faster in svn, and takes the same time on
branches as on mainline (while CVS was much slower on a branch). You'll find
that, after you type svn up, it starts downloading the first file *much
faster* than it used to with cvs, especially on a branch.

 - time to do a diff on mainline/branch

svn diff is a disconnected operation that requires no server access, so it
takes milliseconds. cvs diff is dominated by the network connection, so it can
take a while. svn diff can also handle new and removed files, as you can
easily do svn add/svn remove on any file, since they don't write anything to
the server. Also, the new svn status (which is not like cvs status) shows you
the current status of your working copy (which files are added, removed,
modified, unknown) in milliseconds, because it too is a disconnected operation.

 - time to do a commit on mainline/branch

Again, much faster in SVN, and it takes the same time on mainline and branches.
CVS used to be pretty slow at this.

 - space needed locally for mainline/branch

Each working copy takes twice the space. If you add that to the usual build
directory associated with each tree, the difference in space per real-world
tree is smaller, but it's still very noticeable. This issue will probably be
fixed in newer SVN versions. For now, if disk space is critical, one solution
would be to use the 'svk' client tool, which offers many other benefits.

 - portability of svn to non-Linux systems


This has been answered already. It should not be an issue.

Giovanni Bajo



Re: A couple more subversion notes

2005-10-20 Thread Giovanni Bajo
Gabriel Dos Reis [EMAIL PROTECTED] wrote:

 I'm looking forward to solutions that lower the entry barrier,
 specifically with respect to OpenSSH, diff and svk.


I'm going to write something in the wiki about svk. There's a lot of FUD
spreading in this thread.
DanJ put up a wiki page on the OpenSSH configuration (which really could be
found with 3 minutes of googling, which is shorter than writing a mail asking
for information about it [not speaking of you, gaby]). It might end up not
being strictly necessary if DannyB sets up a read-only svn:// repository (no
SSH required), but I'm sure that a release manager like you wants to have very
fast ssh connections to gcc.gnu.org for other reasons as well.
I don't recall an unresolved/unclear issue about diff. Could you refresh my
memory?

Giovanni Bajo



Re: A couple more subversion notes

2005-10-20 Thread Giovanni Bajo
Arnaud Charlet [EMAIL PROTECTED] wrote:

 - portability of svn to non-Linux systems

 This has been answered already. It should not be an issue.

 Note that I found it a real pain to have to install so many
 dependency packages on my linux system, so I suspect building the
 whole set of dependency packages under non linux systems might be slightly
 of a pain. And I am not done yet with
 the OpenSSH update which seems kind of mandatory to do any practical
 work.

Yes. I don't remember if there are issues with setting up a svn:// only
repository, which doesn't go through SSH.

Even if we assume that it's impossible to upgrade OpenSSH on a given platform
for some weird reason, the problem is probably going to be fixed by SVN 1.4 and
the new svn+ssl:// protocol. Meanwhile, unlucky people will have to live with a
slower svn diff -rR1 -rR2 remote operation. Sorry about that, but let's not
forget the dozens of others who work on branches and can do a merge in
seconds instead of literally *hours*, and so on.

I don't think we can uniformly win everywhere right now. I believe I have
already shown and spoken about many day-to-day advantages even for people
working only on mainline/release branches, and I'm sure that people who wanted
to listen have understood. Maybe somebody gave the wrong impression that
changing SCMs would involve no transition issues and no regression whatsoever
for every possible corner case. It's not like that (and we GCC developers
should know better than anyone about small regressions falling out of huge
improvements!)

CVS is a dead end, and it's a huge bottleneck for many of us. SVN offers
many, many improvements for everybody, and a few regressions which are not even
by design and will be fixed someday (if we are lucky, in the next few months).
I believe we should all try to live with that.

As for your specific case, Arnaud, I assume you do a lot of SCM work, given
your merge-centric position. I'm more than happy to help you through this
transition and see if we can find ways to improve your workflow with SVN.
As I told you in another mail, if you are interested, just provide me with more
information and let's see if we can work out something cool for you (in
private mail, if you prefer for some reason).

Giovanni Bajo



Re: A couple more subversion notes

2005-10-20 Thread Giovanni Bajo
Gabriel Dos Reis [EMAIL PROTECTED] wrote:

 Even if we assume that it's impossible to upgrade OpenSSH on a given
 platform for some weird reason,

 I appreciate your effort in this, but I strongly suggest that you
 refrain from calling reasons why people can't install the latest
 versions of supporting tools weird.

Wrong choice of words, I apologize. I should have written "any given
reason", or simply "any reason".

 the problem is probably going to be fixed by SVN 1.4 and
 the new svn+ssl:// protocol. Meanwhile, unlucky people will have to live
 with a slower svn diff -rR1 -rR2 remote operation. Sorry about that, but
 let's not forget the dozens of others who work on branches and can do
 a merge in seconds instead of literally *hours*, and so on.

 I was fearing that. I am of the opinion that we wait for a stable SVN 1.4.


I would like to note that a fair comparison should take into account the
fact that a single svn diff shows a whole changeset (spanning multiple
files). With cvs, you might need to run multiple cvs diff commands (with
multiple SSH handshakes), plus the time to type all those different
revision numbers for each file. So, while the raw time for a single command
is slower, I believe that, in the common case, the operation still ends up
being faster when measured in seconds to do what I want.

In other words, what I see mostly in this thread is that people are worried
because of what we usually call micro-benchmarks (e.g. raw cvs diff time
for a single file across two revisions), which is of course important (and
svn is mostly faster except in a couple of corner cases); but some seem to
miss that real-world workflow benchmarks (e.g. time to backport a patch) are
several times better with svn, because of the higher-level commands and
concepts it provides.

I'd also point out that this issue (diff of a single file across SSH being
slower) can be fixed by an OpenSSH upgrade (which should be flawless
in most cases), or by svn:// read-only access (which I still need to
confirm is possible), or by using svk, which lets you mirror part
of the repository (including history) locally (e.g. everything since gcc
3.0) so that all the diff/checkout/switch/whatnot operations are blazingly
fast (at the expense of some disk space).
-- 
Giovanni Bajo



Re: A couple more subversion notes

2005-10-20 Thread Giovanni Bajo
Richard Kenner [EMAIL PROTECTED] wrote:

 Sorry about that, but let's not remember of the other dozens which
 works on branches and can do a merge in seconds instead of literally
 *hours*, and so on.

 Yes, but how often do even those who work on branches a lot do merges?


Less often than needed or wanted, because it takes way too much time to do
one, instead of the few seconds it should. One may want to merge a
development branch every day or so, but that can't be done right now because
the overhead of the operation is too high. This causes people to batch
merges into big drops, which increases the conflicts and the time to solve
them (when something does not work, you have to investigate a larger
timespan to find out what broke what, and you have to do that without even
seeing atomic changesets in the logs).

 If not very often, why not just start it up, background it, and go to
sleep?

Notice that large merge commits on branches lock the whole CVS repository
for everybody for a long time.
-- 
Giovanni Bajo



Re: using multiple trees with subversion

2005-10-19 Thread Giovanni Bajo
François-Xavier Coudert [EMAIL PROTECTED] wrote:

 I do only have small involvement in gcc, preparing few patches (never
 more than 5 at a time) on limited areas (gcc/fortran, libgfortran and
 gcc/testsuite), always on mainline or 4.0 branch. The way I manage to
 keep mind sanity right now is to have a few complete trees (one for
 4.0 and 3-4 for mainline, each one with a local changes), called
 gcc-newintrinsics, gcc-fpe, ...
 Having 5 subversion trees will need much more space (for local
 pristine copies), which I don't really have. Is there any way to force
 subversion use one pristine tree for all modified trees, or is my way
 of handling things completely rotten?

Not that I know of. As Daniel Berlin said, Subversion 1.4 will probably have
support for checking out repositories with compressed local copies (or no copy
at all -- but I wouldn't suggest this, as you'd start to be slow in svn diff,
svn stat, etc.).

You may want to look into svk, though, which implements a distributed system
on top of an existing subversion repository. svk working copies do not have a
double copy at all.

I also suggest you look into svn switch, which might be useful to
switch an existing working copy from one branch to another, downloading just
the differences instead of the whole thing.

Giovanni Bajo



Re: using multiple trees with subversion

2005-10-19 Thread Giovanni Bajo
François-Xavier Coudert [EMAIL PROTECTED] wrote:

 Not that I know of. As Daniel Berlin said, Subversion 1.4 will probably
 have support for checking out repositories with compressed local copies
 (or no copy at all -- but I wouldn't suggest this, as you'd start to be
 slow in svn diff, svn stat, etc).

 I guess no local copy would be fine with me. diff and stat should not
 be much slower than in CVS, and since I very rarely do a full tree
 diff/stat, this is quite acceptable.

Actually, once you get used to *immediate* replies from svn diff and stat, you
don't want to go back to waiting a few seconds. I know because it happens
to me all the time...

 Is that so hard to implement that it's not done already? Or am I the
 only person to find that disk is expensive (or working on his own
 hardware, maybe)?

Probably you're the only one finding disk space expensive. HDs are quite
cheap nowadays. Anyway, I'm sure the SVN people would be happy if you helped
finish the Summer of Code project that was left undone, if you are really
interested in this. Personally, I'm far more annoyed by the fact that the
local copy confuses grep than by the disk space it takes.
-- 
Giovanni Bajo



Re: A couple more subversion notes

2005-10-19 Thread Giovanni Bajo
Arnaud Charlet [EMAIL PROTECTED] wrote:

 Not clear how to interpret this output without having to go to the doc,
 no easy way to guess with my cvs knowledge, nor with my english knowledge.

 I guess I was expecting something more verbose ala cvs, e.g a real
status
 in english, such as up-to-date, locally modified, needs merge, ...
 instead of nothing or M which are rather cryptic for a subversion
 novice.

It's the same, with minimal non-verbose output, and a default which does
not require any connection to the server. You'll learn to use svn status
without any arguments to find out what's happening in your working copy,
irrespective of the server. If it does not print anything, your working copy
is pristine.

 $ svn status --show-updates Makefile.in
 Status against revision: 105364

This would show pending updates as you expect.

 Note: coming from a cvs background, having non incremental version numbers
 *per file* is very disruptive and non intuitive. I suspect it will take
 me some time to adjust to this. Any suggestions/tricks welcome.

I don't think there are suggestions or tricks. You'll just have to get used
to the idea that changesets are atomic, and that a revision uniquely
identifies the whole tree. When you say that your working copy is at revision
105364, your svn status is empty, and you see a bug, I can download that very
revision and reproduce it, without having to match your top-of-ChangeLog or
other weird things.

Per file, you can look at the history to see when the file was changed. Notice
that you can use --verbose to see the other files changed in the same commit,
which is very handy (no more time wasted looking for the whole patch in
gcc-cvs or gcc-patches).

 took between 16 and 22 seconds. 18 seconds typically.

 Now, did a cvs diff -r1.120 -r1.121 Makefile.in

 took between 3 and 5 seconds. 3.5 seconds typically.

Out of curiosity, are you comparing anonymous CVS versus svn+ssh? In that
case, it's apples and oranges. Do some ssh multiplexing and get the speed back.

 Is there any way to improve such svn diff operation ? (Note that
 I frequently do cvs diff on arbitrary revisions, not just the last two,
 although doing cvs diff -rHEAD is probably the most frequent operation
 I rely upon).

svk is a tool that lets you mirror the entire repository (or a subset of it),
check out many copies from your local mirror, diff anything against anything,
commit into your local repository, and finally push changes into the
official repository. I believe it's going to be very handy for the average
GCC developer. People are still discussing it (see other mails) and I
believe a Wiki page will be set up about it.
-- 
Giovanni Bajo



Re: A couple more subversion notes

2005-10-19 Thread Giovanni Bajo
Andreas Schwab [EMAIL PROTECTED] wrote:

 3. Small operations (IE ls of random dirs, etc) are generally dominated
 by the ssh handshake time.  Using ssh multiplexing will significantly
 speed these up.

 How can I tell ssh not to barf if the ControlPath does not exist?  Also,
 you can't share the config file with an older ssh version because it will
 barf about the unknown config option.


I put ControlPath in the config file, and then run ssh -fMN host at
startup. When does it barf for you? If I remove the socket file, it just
makes a normal connection.
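For reference, this is the kind of ~/.ssh/config fragment being discussed (a
sketch: the Host entry and socket path are placeholders, not the exact ones
from the wiki page):

```
# ~/.ssh/config -- share one connection across ssh/svn+ssh invocations
Host gcc.gnu.org
    ControlPath /tmp/ssh_mux_%h_%p_%r

# Then start a master connection once, in the background:
#   ssh -fMN gcc.gnu.org
# Subsequent connections reuse its socket and skip the SSH handshake.
```

Note this relies on the ControlMaster support introduced in OpenSSH 3.9; older
clients reject the unknown option, which is the version issue raised in this
thread.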
-- 
Giovanni Bajo



Re: A couple more subversion notes

2005-10-19 Thread Giovanni Bajo
Andrew Haley [EMAIL PROTECTED] wrote:

 It seems to be incredibly hard to find out which branch a file is on.

svn info file. More typically, svn info | grep URL will tell you which
branch the current working copy was pulled from.

  [EMAIL PROTECTED] gcc-head-test]$ svn status --verbose ChangeLog
105366   104478 mmitchel ChangeLog

 Now, I happen to know that this is gcc-4_0-branch, and presumably if I
 make any changes and check it back in that's where the changes will
 go.  But svn ls branches says

  105358 dberlin Oct 16 01:53 gcc-4_0-branch/

 So, how on Earth do I go from 105366 104478 to gcc-4_0-branch ?

Revisions and branches have nothing to do with each other. It's not like CVS.
-- 
Giovanni Bajo



Re: A couple more subversion notes

2005-10-19 Thread Giovanni Bajo
Steven Bosscher [EMAIL PROTECTED] wrote:

 Thanks Danny for asking. I'm reading the various messages coming to the
 list and, well, I'm *worried* the benefits will *not* outweigh the costs
 for many of us.

 Sorry for the harsh and naive question: *which* are the benefits for
people
 *not* managing many branches?

 Hmm, let's see.  The ones I care about most are:

 1) Atomic commits, which make regression hunting a lot easier.
You can pinpoint exactly one patch, one revision, as the
thing to blame.  Right now the regression hunter can from
time to time do checkouts from a data+time when someone was
just checking in a patch.  With SVN, this is not a problem.

 2) Ability to rename and move files.  Have you ever looked at
the messy structure of gcc (i.e. the compiler proper)?  And
don't you ever have the feeling that some libstdc++ file is
in the wrong place, but you don't want to move it because
it breaks the revision history?  SVN helps here.

 And less important but still nice:
 3) Faster tagging, so you don't have to worry about not checking
out something when a gcc snapshot cron job is running

I'll add others:

4) Unique identification of a tree with a single number. "In my pristine
tree, revision 567890, I see this bug." That's unambiguous.
5) Much, much faster management of working copies: svn diff / svn status
do not require a server connection. "What's up in my tree" and "what did I
change" can be answered in milliseconds.
6) Much easier reversion of patches for testing purposes, since you can
easily extract and revert an atomic changeset.
7) Much easier generation of proper diffs to send mail to the lists, since
you can svn add and svn delete without write access to the repository.
8) Fast switch of working copies from a branch to another, *maintaining* the
local changes. This is very handy.
9) Much easier backport of patches to release branches: svn
merge -r123456, which also correctly removes/adds/renames files as needed.
10) Getting rid forever of the problem with DOS newlines in source files.

I would also note that most people don't RTFM. I put a lot of effort into
writing the Wiki page, and the benefits of SVN are apparent if you spend
some time reading it and studying the thing a little. To make things better,
something *has* to change. You can't expect SVN to be *identical* to CVS,
but it's very, very close.
-- 
Giovanni Bajo



Re: A couple more subversion notes

2005-10-19 Thread Giovanni Bajo
Andreas Schwab [EMAIL PROTECTED] wrote:

 If I remove the socket file, it just does a normal connection.
 
 It doesn't for me.
 
 $ ssh gcc.gnu.org
 Couldn't connect to /var/tmp/schwab/ssh_%h: No such file or directory

Ah, maybe it's a later fix? I'm using:

$ ssh -V
OpenSSH_4.2p1, OpenSSL 0.9.7f 22 Mar 2005

-- 
Giovanni Bajo


Re: A couple more subversion notes

2005-10-19 Thread Giovanni Bajo
Arnaud Charlet [EMAIL PROTECTED] wrote:

 The main issue is really with svn status and the handling of versions and
 branches which seems to be quite different and quite disruptive for cvs
 users.


Branches are where we expect the most from SVN, compared to CVS. The wiki
section about management of branches is indeed a little confusing. I'll try
to reword it to make it easier. There is also a little tool (svnmerge) which
helps manage branches.

If you care to elaborate on what your typical cvs procedures for branch
management are (I'd separate release branches from dev branches for
clarity), I can work out the correct SVN counterparts for you, which, I'm
sure, will be more than satisfying.
-- 
Giovanni Bajo



Re: A couple more subversion notes

2005-10-19 Thread Giovanni Bajo
Richard Kenner [EMAIL PROTECTED] wrote:

  Are there any maintainers (folks in MAINTAINERS) who have objections
or
  concerns?

 Well, I haven't tried it myself yet, so what I'm going by is hearsay but
 I do share the concern that it's looking like this is a change that may
 make the common things harder and slower in order to make the less common
 operations faster and/or easier.  If so, that may not be the right
tradeoff.

I understand the concern, but let me assure you: I strongly believe that
this is not true. The only issue here is that people are trying to configure
SVN using our Wiki page as the only reference (and not everybody even did
that). Daniel and I wrote that page, but it is not meant to contain the
answers to all questions, nor to solve all configuration problems. We
never had such a page for CVS either: if it didn't work, people simply
googled until they found the solution.

Yet, to help the transition, we *are* preparing documentation and we *are*
helping people move. There will be issues in the transition. But the
result is that the common things will be faster and easier, and the less
common things will be so incredibly faster and easier that they might become
more common among hackers -- which I see as a good thing.
-- 
Giovanni Bajo



Re: Updating a subversion directory?

2005-10-18 Thread Giovanni Bajo
Steve Kargl [EMAIL PROTECTED] wrote:

 Perhaps.  OTOH, the svn documentation, which contains
 an explicit section on URL parsing, does not show a
 username@ as a valid part of a svn flavor URL.

In fact, I believe that using username@ does not work when using SVN over
http://. You have to use --username for that (and yes, it sucks).
-- 
Giovanni Bajo



Re: Updating a subversion directory?

2005-10-18 Thread Giovanni Bajo
Kean Johnston [EMAIL PROTECTED] wrote:

 I fully anticiptae creating a similar 'gccsvn' and adding
 and special args I need to get it working. This way you
 aren't surprised by accidental environment variable changes
 etc. Maybe you want to do that if you dont like using
 --username all the time.

Well, it's actually only needed when checking out the repository. After that,
the working copy stores the information, so you don't need to specify it
anymore.
-- 
Giovanni Bajo



Re: GCC optimization oddity

2005-10-15 Thread Giovanni Bajo
Prosun Niyogi [EMAIL PROTECTED] wrote:

 i.  gcc -S gcc-bug.c -Os -DHAHA; md5sum gcc-bug.s
 ii. gcc -S gcc-bug.c -Os -DHAHA -DBLAH -DAKS -DJKAHSD; md5sum gcc-bug.s

 As you can tell, the -D macros are bogus, and I would expect the md5sums
 from each compile to match. But they dont. The diff of the resulting
 assembly is as follows:

I can't reproduce this with a newer GCC version. If you still see it,
please submit a bug report following the guidelines at
http://gcc.gnu.org/bugs.html.
-- 
Giovanni Bajo



Re: Update on GCC moving to svn

2005-10-10 Thread Giovanni Bajo
Daniel Berlin [EMAIL PROTECTED] wrote:

 Thus, i'm going to put an updated repo on gcc.gnu.org on Monday (i was
 converting it, but it looks like they shutdown the machines at watson)
 and do a few test branch merges to make sure all the commit mails come
 out okay for very large cases.

Will the final conversion include the old-gcc repository?
 
Giovanni Bajo



Re: GCC 4.0.2 and PR 23993

2005-09-21 Thread Giovanni Bajo
Mark Mitchell [EMAIL PROTECTED] wrote:

 1. Release 4.0.2 without fixing this PR.  (The bits are ready, sitting
on my disk.)

 2. Apply the patch, respin the release, and release it.

 3. Apply the patch, spin RC3, and go through another testing cycle.

My feeling is that these 4.0 releases are under the spotlight: everybody
looks at them, since it's a new GCC cycle and they want to see if it's
getting better or worse. I don't think we should rush bugfix releases. My
humble opinion is to go with RC3, and possibly test it with Boost to make
sure the static data member patch didn't totally break it. It would be
unfortunate to regress too much in bugfix releases.
-- 
Giovanni Bajo



Re: RFA: pervasive SSE codegen inefficiency

2005-09-20 Thread Giovanni Bajo
Daniel Berlin [EMAIL PROTECTED] wrote:

 For example, Kenny and I discovered during his prespilling work that the
 liveness is actually calculated wrong.

 It's half-forwards (local), half-backwards (globally), instead of all
 backwards, which is how liveness is normally calculated, so we
 discovered that spilling registers wasn't actually changing the liveness
 calculation due to the forwardness.

I believe another short-term project that could be done is to prepare a good
test infrastructure for RTL passes.

Currently, tree passes get away with checking their own dumps, and/or
scanning the resulting code after the pass. This is not perfect, but it is
surely good enough. In the RTL world, instead, there are absolutely *no* unit
tests, if we exclude some backend-specific passes (e.g. checking that cmov is
generated in the final assembly listing).

Making an example out of Daniel's and Kenny's work, there is no way to test
that liveness is calculated correctly, or that it is correctly updated after
prespilling. I'm not surprised that it turned out that spilling a register
didn't change liveness, since there is no test for it. Also, I would not be
surprised if, 10 days after Kenny and Danny commit their work, someone else
manages to break it again with a stupid typo somewhere, and nobody notices
for another 3-4 years.

For instance, I remember Joern writing a simple reload unit-test module: it
was a file that was able to set up reload, feed it some hand-crafted RTL,
and check whether the output RTL was as expected. It was SH-only and
incomplete, but it would be a good start. I don't think we can make much
progress with RA if people can break other people's work without even
noticing.

Another example could be Paolo's recent fwprop pass: it will be committed
without a way to test that it is actually working. Now imagine if there were
a way to feed it some RTL and check the generated output. That'd be useful!
-- 
Giovanni Bajo



Re: Undefined behavior in genautomata.c?

2005-09-19 Thread Giovanni Bajo
Dave Korn [EMAIL PROTECTED] wrote:

   Do you suppose the idiom is common enough that VRP could special-case
 arrays of size 1 at the end of a struct ?  And still obtain the benefits
 of the optimisation in 99.99% of all non-variable-length-tail-array cases?

It makes sense to me. We could special-case arrays of size 1 at the end of
a struct and treat them as C99 flexible array members. Any other case
could simply be considered broken.
-- 
Giovanni Bajo



Re: New port contribution - picoChip

2005-09-19 Thread Giovanni Bajo
Gerald Pfeifer [EMAIL PROTECTED] wrote:

 I suggest you to double check also the list present in this mail:
 http://gcc.gnu.org/ml/gcc-patches/2004-06/msg01625.html

 This was never publically approved, but it reflects views of many GCC
 maintainers. Surely it does not hurt to follow those guidelines, even if
 they are (yet) not showstoppers for a backend inclusion.

 I see that I reviewed it with two days back then.  Not everything I
 could/can approve as web pages maintainer (because it looks like
 policy changes), but I see that about half of the changes I only
 had minor editorial comments on.

 Would you mind taking and committing those?


Well, I believe the gist of the review was that I needed to get agreements
from GWP or similar about the policy. I also later got a confirmation from
the SC that I don't need their explicit approval on technical issues. I'd
rather have a GWP approval on the technical contents before committing those
changes.
-- 
Giovanni Bajo



Re: Undefined behavior in genautomata.c?

2005-09-19 Thread Giovanni Bajo
Gabriel Dos Reis [EMAIL PROTECTED] wrote:

   Do you suppose the idiom is common enough that VRP could special-case
 arrays of size 1 at the end of a struct ?  And still obtain the
benefits
 of the optimisation in 99.99% of all non-variable-length-tail-array
cases?

 It makes sense to me. We could special case arrays of size 1 at the end
of
 the struct, and treat it as C99 flexible array members. Any other case
 could simply be considered broken.

 broken with respect to what?


Broken as in undefined behaviour? We would still be doing the right thing in
the common case that people use, while optimizing all the others.
-- 
Giovanni Bajo



Re: Undefined behavior in genautomata.c?

2005-09-19 Thread Giovanni Bajo
Gabriel Dos Reis [EMAIL PROTECTED] wrote:

   Do you suppose the idiom is common enough that VRP could
special-case
 arrays of size 1 at the end of a struct ?  And still obtain the
 benefits of the optimisation in 99.99% of all
 non-variable-length-tail-array cases?

 It makes sense to me. We could special case arrays of size 1 at the
end
 of the struct, and treat it as C99 flexible array members. Any other
 case could simply be considered broken.

 broken with respect to what?


 broken as in undefined behaviour?

 Could you explain in detail where you see the undefined behaviour?


Accessing the array beyond its size. From the tone of your concise answers,
I deduce that this is not undefined behaviour as per the ISO C or ISO C++
standards (otherwise it would have been clear why it is undefined
behaviour); in which case, I would appreciate it if you could elaborate on
when it is invalid to access an array outside its declared range.
-- 
Giovanni Bajo



Re: TCP retransmission in Downloading from GDB

2005-09-15 Thread Giovanni Bajo
Dave Korn [EMAIL PROTECTED] wrote:

   Wrong list.  This is the gcc list.  You were right first time when you
 posted this exact same message to the gdb list half an hour ago.

Moreover, people have surely already deleted his message because the
disclaimer at the end of it explicitly says that you must delete the mail
if you are not the intended recipient. I am not gcc@gcc.gnu.org, so I
deleted his message.

Emmanuel, you should read the section Policies at
http://gcc.gnu.org/lists.html before posting again.
-- 
Giovanni Bajo



Re: regmove fixups vs pseudos

2005-09-14 Thread Giovanni Bajo
Ian Lance Taylor ian@airs.com wrote:

 Any reason why we blindly assume destination registers will be hard
 registers here?
 
 Index: regmove.c
 ===
 RCS file: /cvs/gcc/gcc/gcc/regmove.c,v
 retrieving revision 1.173
 diff -p -U3 -r1.173 regmove.c
 --- regmove.c 25 Aug 2005 06:44:09 - 1.173
 +++ regmove.c 14 Sep 2005 00:27:34 -
 @@ -1020,7 +1020,8 @@ fixup_match_2 (rtx insn, rtx dst, rtx sr
if (REG_N_CALLS_CROSSED (REGNO (src)) == 0)
  break;
 
 -   if (call_used_regs [REGNO (dst)]
 +   if ((REGNO (dst) < FIRST_PSEUDO_REGISTER
 +&& call_used_regs [REGNO (dst)])
|| find_reg_fusage (p, CLOBBER, dst))
  break;
  }
 
 The destination register which is set by a CALL will normally be
 FUNCTION_VALUE, which is normally a hard register.


gcc_assert?
-- 
Giovanni Bajo



Re: New port contribution - picoChip

2005-09-12 Thread Giovanni Bajo
Steven Bosscher [EMAIL PROTECTED] wrote:

 The linker and assembler used by the port are proprietary, and can't
 be made publicly available at this point. The port will have to be
 assembler output only.

 I suppose this means that nobody but you will ever be able to run/test
 your backend. If you are fine with this, I don't think anybody will object.

 I think people should object.  What is the point in having a free
 software compiler if e.g. users can't use a complete free toolchain;
 or gcc developers not being able to test changes when some patch
 needs changes in every port.


You can still test compilation, at least. I think it makes sense as an
intermediate step, assuming the port of binutils is in the works. Usually
binutils is contributed first, but hey.
-- 
Giovanni Bajo



Re: Minimum/maximum operators are deprecated?

2005-09-11 Thread Giovanni Bajo
Steven Bosscher [EMAIL PROTECTED] wrote:

 It was an ill-defined and poorly maintained language extension that
 was broken in many cases.

That's an overstatement. I've been using it for years without any problem,
and I was quite sorry to see it removed, though I can understand the "we
don't want extensions" reason. But that's really the only compelling one
that prompted its removal.

Giovanni Bajo



Re: sh64 support deteriorating

2005-09-09 Thread Giovanni Bajo
Joern RENNECKE [EMAIL PROTECTED] wrote:

 I can't justify spending the amount of time that it would take to make
 the sh64 port regression free. The lack of a debugger that works reliably
 with recent gcc versions has led to an increasing backlog of
 uninvestigated execution failures.

For reference, could you post some regression results to gcc-testresults?
-- 
Giovanni Bajo


Re: DCE eliminating valid statement for ACATS c34007p

2005-09-06 Thread Giovanni Bajo
Richard Kenner [EMAIL PROTECTED] wrote:

   /* Otherwise, if we are taking the address of something that is
 neither a reference, declaration, or constant, make a variable for the
 operand here and then take its address.  If we don't do it this way, we
 may confuse the gimplifier because it needs to know the variable is
 addressable at this point.  This duplicates code in
 internal_get_tmp_var, which is unfortunate.  */
   else if (TREE_CODE_CLASS (TREE_CODE (op)) != tcc_reference
 && TREE_CODE_CLASS (TREE_CODE (op)) != tcc_declaration
 && TREE_CODE_CLASS (TREE_CODE (op)) != tcc_constant)
 {
   tree new_var = create_tmp_var (TREE_TYPE (op), "A");
   tree mod = build (MODIFY_EXPR, TREE_TYPE (op), new_var, op);

   TREE_ADDRESSABLE (new_var) = 1;
   if (TREE_CODE (TREE_TYPE (op)) == COMPLEX_TYPE)
 DECL_COMPLEX_GIMPLE_REG_P (new_var) = 1;

   if (EXPR_HAS_LOCATION (op))
 SET_EXPR_LOCUS (mod, EXPR_LOCUS (op));

   gimplify_and_add (mod, pre_p);
   TREE_OPERAND (expr, 0) = new_var;
   recompute_tree_invarant_for_addr_expr (expr);
   return GS_ALL_DONE;
 }

Can't you use get_initialized_tmp_var, then?
-- 
Giovanni Bajo



Re: Language Changes in Bug-fix Releases?

2005-09-03 Thread Giovanni Bajo
Richard B. Kreckel [EMAIL PROTECTED] wrote:

 Since the creation of the GCC 4.0 branch back in February a number of
 minor C++ language changes seem to have slipped in.  Let me mention just
 two examples: [...]

 Are you really, really sure such language tightening is appropriate for
 bug-fix releases?  (Note that the examples above are not regressions,
 since gcc-3.4.y accepts both of them.)

Notice that we consider releases as far back as GCC 2.95 when checking
regression status. If the code was correctly rejected in any version of GCC
up to 2.95, then it is a regression, even if gcc 3.4 was wrong.

But even if that couple of snippets weren't regressions, there is a very
high chance that small variations of them would be -- in fact, if the bug
was fixed, it *must* have been a regression! Somebody found it, reported it
in Bugzilla, and it got fixed. I suggest you do some Bugzilla archaeology.

We don't really fix anything in dot releases unless there is a regression
bug open in Bugzilla. Also, we try and keep the patch to fix the bug as
minimal as possible -- sometimes, there is also a more complete and invasive
patch to properly fix the same regression which is committed to HEAD.

We can't do more than this. If a patch to fix a regression accidentally also
fixes a very similar testcase which is not a regression, then let it be.
-- 
Giovanni Bajo



Re: [RFA] Nonfunctioning split in rs6000 back-end

2005-08-23 Thread Giovanni Bajo
Paolo Bonzini [EMAIL PROTECTED] wrote:

 While researching who is really using flow's computed LOG_LINKS, I found
 a define_split in the rs6000 back-end that uses them through
 find_single_use.  It turns out the only users are combine, this split,
 and a function in regmove.


See also:
http://gcc.gnu.org/ml/gcc-patches/2004-01/msg02371.html

Giovanni Bajo



Re: [PATCH]: Proof-of-concept for dynamic format checking

2005-08-17 Thread Giovanni Bajo
Florian Weimer [EMAIL PROTECTED] wrote:

 I haven't tried to flesh this out any further.  I'd be curious to
 hear how people react to it.

 Can't we just use some inline function written in plain C to check the
 arguments and execute it at compile time using constant folding etc.?


Do we have a sane way to (partially) execute optimizers at -O0 without
disturbing the pass manager too much? It can probably be arranged, but it
might require some work. The idea is neat though, and I prefer it over
introducing a specific pattern-matching language (which sounds like
over-engineering for such a side feature).

Giovanni Bajo



Re: PR 23046. Folding predicates involving TYPE_MAX_VALUE/TYPE_MIN_VALUE (Ada RFC)

2005-08-12 Thread Giovanni Bajo
Robert Dewar [EMAIL PROTECTED] wrote:

 why can't we just
 completely turn off this optimization for Ada since it is wrong!


Well, the point is that Gigi uses the fields TYPE_MIN/MAX_VALUE in a way
which is (now) incorrect, and this causes wrong optimizations. Of course,
this might be seen as an evolution (the exact semantics weren't as clear
before), but it does not change things.

You can either disable the optimization or fix Gigi. I'd also note that you
already have SRA disabled because of other Gigi bugs, and that is an
optimization which would be *very* useful to Ada.
-- 
Giovanni Bajo



Re: gcc 3.3.6 - stack corruption questions

2005-07-25 Thread Giovanni Bajo
Louis LeBlanc [EMAIL PROTECTED] wrote:

 I added the -fstack-check switch to my makefile and recompiled with
 various optimizations.  I was pretty surprised at the file sizes that
 showed up:
 
 No Optimization:
 -rwxr-xr-x  1 leblanc  daemon  1128660 Jul 25 16:25 myprocess*
 
 Optimized with -O2
 -rwxr-xr-x  1 leblanc  daemon  1058228 Jul 25 17:36 myprocess*
 
 Optimized with -O3
 -rwxr-xr-x  1 leblanc  daemon  1129792 Jul 25 17:32 myprocess*
 
 I would have expected much different results.  Shouldn't the file
 sizes be smaller (at least a little) with the -O3 switch?  Maybe
 there's a loop unrolled to make it faster, resulting in a larger
 codebase?


Or inlining, or many other things. If you care about size, use -Os.
-- 
Giovanni Bajo


Re: extension to -fdump-tree-*-raw

2005-07-22 Thread Giovanni Bajo
Ebke, Hans-Christian [EMAIL PROTECTED] wrote:

 So to resolve that problem I took the gcc 4.0.1 source code and patched
 tree.h and tree-dump.c. The patched version introduces two new options for
 -fdump-tree: the "parseable" option, which produces unambiguous and
 easier-to-parse but otherwise similar output to "raw", and the
 "maskstringcst" option, which masks the string constants, since this makes
 parsing the output even easier and I'm not interested in the string
 constants.


You could write some code to escape special characters, so as to write
something like:

@54 string_cst   type: @61   strg: wrong type:\n\0\0\xaf\x03\x03foo\bar   lngt: 19

This would not need a different special option.
-- 
Giovanni Bajo



Re: extension to -fdump-tree-*-raw

2005-07-22 Thread Giovanni Bajo
Ebke, Hans-Christian [EMAIL PROTECTED] wrote:

   I have to write this in Outlook, so I don't even try to get the quoting
 right. Sorry. :-(

http://jump.to/outlook-quotefix

 But it would break applications relying on the old format.

There is no stable format anyway: the tree dumps are *very* specific to
GCC internals, and they can change dramatically between releases. OK, maybe
not the syntax, but the semantics. I wouldn't worry about the syntax at
that point.
-- 
Giovanni Bajo


