Re: Git and GCC

2007-12-05 Thread Harvey Harrison

>   git repack -a -d --depth=250 --window=250
> 

Since I have the whole gcc repo locally I'll give this a shot overnight
just to see what can be done at the extreme end of things.

Harvey



Re: Git and GCC

2007-12-05 Thread Jeff King
On Thu, Dec 06, 2007 at 01:47:54AM -0500, Jon Smirl wrote:

> The key to converting repositories of this size is RAM. 4GB minimum,
> more would be better. git-repack is not multi-threaded. There were a
> few attempts at making it multi-threaded but none were too successful.
> If I remember right, with loads of RAM, a repack on a 450MB repository
> was taking about five hours on a 2.8GHz Core2. But this is something
> you only have to do once for the import. Later repacks will reuse the
> original deltas.

Actually, Nicolas put quite a bit of work into multi-threading the
repack process; the results have been in master for some time, and will
be in the soon-to-be-released v1.5.4.

The downside is that the threading partitions the object space, so the
resulting size is not necessarily as small (but I don't know that
anybody has done testing on large repos to find out how large the
difference is).
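The size cost of partitioning can be illustrated outside git with plain zlib (a toy sketch, not git code): each worker thread only sees redundancy inside its own slice of the object list, so independently compressed partitions generally come out somewhat larger than one global pass.

```python
# Toy illustration of why partitioned packing can cost some size:
# redundancy that spans partition boundaries is invisible to each worker.
import zlib

# 40 highly similar "revisions" concatenated into one object stream.
revisions = [("line %d\n" % i) * 50 for i in range(40)]
stream = "".join(revisions).encode()

# One global pass over the whole stream.
whole = len(zlib.compress(stream, 9))

# Four independent "threads", each compressing only its own partition.
quarter = len(stream) // 4
parts = [stream[i:i + quarter] for i in range(0, len(stream), quarter)]
split = sum(len(zlib.compress(p, 9)) for p in parts)

print(whole, split)
```

The numbers are synthetic, but the partitioned total never beats the global pass on this kind of redundant input.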

-Peff


Re: Git and GCC

2007-12-05 Thread Jon Smirl
On 12/6/07, Daniel Berlin <[EMAIL PROTECTED]> wrote:
> > While you won't get the git svn metadata if you clone the infradead
> > repo, it can be recreated on the fly by git svn if you want to start
> > committing directly to gcc svn.
> >
> I will give this a try :)

Back when I was working on the Mozilla repository we were able to
convert the full 4GB CVS repository complete with all history into a
450MB pack file. That work is where the git-fast-import tool came from.
But it took a month of messing with the import tools to achieve this
and Mozilla still chose another VCS (mainly because of poor Windows
support in git).

Like Linus says, this type of command will yield the smallest pack file:
 git repack -a -d --depth=250 --window=250

I do agree that importing multi-gigabyte repositories is not a daily
occurrence nor a turn-key operation. There are significant issues when
translating from one VCS to another. The lack of global branch
tracking in CVS causes extreme problems on import. Hand editing of CVS
files also caused endless trouble.

The key to converting repositories of this size is RAM. 4GB minimum,
more would be better. git-repack is not multi-threaded. There were a
few attempts at making it multi-threaded but none were too successful.
If I remember right, with loads of RAM, a repack on a 450MB repository
was taking about five hours on a 2.8GHz Core2. But this is something
you only have to do once for the import. Later repacks will reuse the
original deltas.
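When RAM is the constraint, a few pack.* configuration knobs can cap how much memory the delta search uses. The key names are per the git-config documentation (pack.threads only matters once the threaded repack landed); the values here are illustrative guesses, not recommendations:

```
# Illustrative .git/config fragment -- values are made-up examples
[pack]
	window = 250
	depth = 250
	windowMemory = 1g
	threads = 2
```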

-- 
Jon Smirl
[EMAIL PROTECTED]


Re: Git and GCC

2007-12-05 Thread Linus Torvalds


On Thu, 6 Dec 2007, Daniel Berlin wrote:
> 
> Actually, it turns out that git-gc --aggressive does this dumb thing
> to pack files sometimes regardless of whether you converted from an
> SVN repo or not.

Absolutely. git --aggressive is mostly dumb. It's really only useful for 
the case of "I know I have a *really* bad pack, and I want to throw away 
all the bad packing decisions I have done".

To explain this, it's worth explaining (you are probably aware of it, but 
let me go through the basics anyway) how git delta-chains work, and how 
they are so different from most other systems.

In other SCM's, a delta-chain is generally fixed. It might be "forwards" 
or "backwards", and it might evolve a bit as you work with the repository, 
but generally it's a chain of changes to a single file represented as some 
kind of single SCM entity. In CVS, it's obviously the *,v file, and a lot 
of other systems do rather similar things.

Git also does delta-chains, but it does them a lot more "loosely". There 
is no fixed entity. Deltas are generated against any random other version 
that git deems to be a good delta candidate (with various fairly 
successful heuristics), and there are absolutely no hard grouping rules.

This is generally a very good thing. It's good for various conceptual 
reasons (ie git internally never really even needs to care about the whole 
revision chain - it doesn't really think in terms of deltas at all), but 
it's also great because getting rid of the inflexible delta rules means 
that git doesn't have any problems at all with merging two files together, 
for example - there simply are no arbitrary *,v "revision files" that have 
some hidden meaning.

It also means that the choice of deltas is a much more open-ended 
question. If you limit the delta chain to just one file, you really don't 
have a lot of choices on what to do about deltas, but in git, it really 
can be a totally different issue.
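That open-ended choice can be sketched in miniature. This is purely illustrative (git's real heuristics sort candidates by size, path, and recency, not by difflib similarity), but it shows the idea: every object may pick its delta base from *any* candidate in a sliding window, not just the prior revision of "its" file.

```python
# Conceptual sketch only -- NOT git's actual heuristic.
import difflib

def best_base(target, window):
    """Pick the window candidate most similar to `target`,
    i.e. the one that would yield the cheapest delta."""
    return max(window,
               key=lambda cand: difflib.SequenceMatcher(None, cand, target).ratio())

# The window can mix versions of different files, merged files, anything.
window = ["a b c d", "a b c d e", "x y z", "a b d e"]
print(best_base("a b c d e f", window))
```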

And this is where the really badly named "--aggressive" comes in. While 
git generally tries to re-use delta information (because it's a good idea, 
and it doesn't waste CPU time re-finding all the good deltas we found 
earlier), sometimes you want to say "let's start all over, with a blank 
slate, and ignore all the previous delta information, and try to generate 
a new set of deltas".

So "--aggressive" is not really about being aggressive, but about wasting 
CPU time re-doing a decision we already did earlier!

*Sometimes* that is a good thing. Some import tools in particular could 
generate really horribly bad deltas. Anything that uses "git fast-import", 
for example, likely doesn't have much of a great delta layout, so it might 
be worth saying "I want to start from a clean slate".

But almost always, in other cases, it's actually a really bad thing to do. 
It's going to waste CPU time, and especially if you had actually done a 
good job at deltaing earlier, the end result isn't going to re-use all 
those *good* deltas you already found, so you'll actually end up with a 
much worse end result too!

I'll send a patch to Junio to just remove the "git gc --aggressive" 
documentation. It can be useful, but it generally is useful only when you 
really understand at a very deep level what it's doing, and that 
documentation doesn't help you do that.

Generally, doing incremental "git gc" is the right approach, and better 
than doing "git gc --aggressive". It's going to re-use old deltas, and 
when those old deltas can't be found (the reason for doing incremental GC 
in the first place!) it's going to create new ones.

On the other hand, it's definitely true that an "initial import of a long 
and involved history" is a point where it can be worth spending a lot of 
time finding the *really* good deltas. Then, every user ever after (as 
long as they don't use "git gc --aggressive" to undo it!) will get the 
advantage of that one-time event. So especially for big projects with a 
long history, it's probably worth doing some extra work, telling the delta 
finding code to go wild.

So the equivalent of "git gc --aggressive" - but done *properly* - is to 
do (overnight) something like

git repack -a -d --depth=250 --window=250

where that depth thing is just about how deep the delta chains can be 
(make them longer for old history - it's worth the space overhead), and 
the window thing is about how big an object window we want each delta 
candidate to scan.

And here, you might well want to add the "-f" flag (which is the "drop all 
old deltas" flag), since you now are actually trying to make sure that this 
one actually finds good candidates.

And then it's going to take forever and a day (ie a "do it overnight" 
thing). But the end result is that everybody downstream from that 
repository will get much better packs, without having to spend any effort 
on it themselves.
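The one-time "good packing" pass described above can be sketched end-to-end on a throwaway repository. The repack flags are exactly the ones discussed in this thread; the toy history is made up:

```shell
# Build a tiny repo with a few similar revisions, then do the full repack.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/repo"
cd "$tmp/repo"
git config user.name "Demo"
git config user.email "demo@example.com"
for i in 1 2 3 4 5; do
    seq 1 500 > data.txt            # unchanged bulk: good delta material
    echo "revision $i" >> data.txt  # small per-revision change
    git add data.txt
    git commit -qm "revision $i"
done
# -a -d: repack everything into a single pack and drop the old ones;
# -f: throw away reused deltas so the window/depth search starts fresh.
git repack -a -d -f --depth=250 --window=250
ls .git/objects/pack/
```

On a real multi-gigabyte import the same command is the overnight job; only the window/depth search time changes, not the procedure.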

Linus


Re: Git and GCC

2007-12-05 Thread Daniel Berlin
> While you won't get the git svn metadata if you clone the infradead
> repo, it can be recreated on the fly by git svn if you want to start
> committing directly to gcc svn.
>
I will give this a try :)


Re: Git and GCC

2007-12-05 Thread Harvey Harrison
On Thu, 2007-12-06 at 00:11 -0500, Daniel Berlin wrote:
> On 12/5/07, David Miller <[EMAIL PROTECTED]> wrote:
> > From: "Daniel Berlin" <[EMAIL PROTECTED]>
> > Date: Wed, 5 Dec 2007 23:32:52 -0500
> >
> > > On 12/5/07, David Miller <[EMAIL PROTECTED]> wrote:
> > > > From: "Daniel Berlin" <[EMAIL PROTECTED]>
> > > > Date: Wed, 5 Dec 2007 22:47:01 -0500
> > > >
> > > > > The size is clearly not just svn data, it's in the git pack itself.
> > > >
> > > > And other users have shown much smaller metadata from a GIT import,
> > > > and yes those are including all of the repository history and branches
> > > > not just the trunk.
> > > I followed the instructions in the tutorials.
> > > I followed the instructions given to by people who created these.
> > > I came up with a 1.5 gig pack file.
> > > You want to help, or you want to argue with me.
> >
> > Several people replied in this thread showing what options can lead to
> > smaller pack files.
> 
> Actually, one person did, but that's okay, let's assume it was several.
> I am currently trying Harvey's options.
> 
> I asked about using the pre-existing repos so I didn't have to do
> this, but they were all
> 1. done using read-only imports, or
> 2. missing full history
> (i.e. the one that contains full history that is often posted here was
> done as a read-only import and thus doesn't have the metadata).

While you won't get the git svn metadata if you clone the infradead
repo, it can be recreated on the fly by git svn if you want to start
committing directly to gcc svn.

Harvey



Re: Git and GCC

2007-12-05 Thread Daniel Berlin
On 12/5/07, David Miller <[EMAIL PROTECTED]> wrote:
> From: "Daniel Berlin" <[EMAIL PROTECTED]>
> Date: Wed, 5 Dec 2007 23:32:52 -0500
>
> > On 12/5/07, David Miller <[EMAIL PROTECTED]> wrote:
> > > From: "Daniel Berlin" <[EMAIL PROTECTED]>
> > > Date: Wed, 5 Dec 2007 22:47:01 -0500
> > >
> > > > The size is clearly not just svn data, it's in the git pack itself.
> > >
> > > And other users have shown much smaller metadata from a GIT import,
> > > and yes those are including all of the repository history and branches
> > > not just the trunk.
> > I followed the instructions in the tutorials.
> > I followed the instructions given to me by people who created these.
> > I came up with a 1.5 gig pack file.
> > You want to help, or you want to argue with me.
>
> Several people replied in this thread showing what options can lead to
> smaller pack files.

Actually, one person did, but that's okay, let's assume it was several.
I am currently trying Harvey's options.

I asked about using the pre-existing repos so I didn't have to do
this, but they were all
1. done using read-only imports, or
2. missing full history
(i.e. the one that contains full history that is often posted here was
done as a read-only import and thus doesn't have the metadata).

> They also listed what the GIT limitations are that would affect the
> kind of work you are doing, which seemed to mostly deal with the high
> space cost of branching and tags when converting to/from SVN repos.

Actually, it turns out that git-gc --aggressive does this dumb thing
to pack files sometimes regardless of whether you converted from an
SVN repo or not.


Re: Git and GCC

2007-12-05 Thread Harvey Harrison
On Wed, 2007-12-05 at 20:54 -0800, Linus Torvalds wrote:
> 
> On Wed, 5 Dec 2007, Harvey Harrison wrote:
> > 
> > If anyone recalls, my report was something along the lines of
> > git gc --aggressive explodes pack size.

> [ By default, for example, "git svn clone/fetch" seems to create those 
>   horrible fake email addresses that contain the ID of the SVN repo in 
>   each commit - I'm not talking about the "git-svn-id", I'm talking about 
>   the "[EMAIL PROTECTED]" thing for the author. Maybe people don't 
>   really care, but isn't that ugly as hell? I'd think it's worth it doing 
>   a really nice import, spending some effort on it.
> 
>   But maybe those things come from the older CVS->SVN import, I don't 
>   really know. I've done a few SVN imports, but I've done them just for 
>   stuff where I didn't want to touch SVN, but just wanted to track some 
>   project like libgpod. For things like *that*, a totally mindless "git 
>   svn" thing is fine ]
> 

git svn does accept a mailmap at import time, I think with the same
format as the cvs importer.  But for someone who just wants a repo to
check out, this was easiest.  I'd be willing to spend the time to do a
nicer job if there were any interest from the gcc side, but I'm not that
invested (other than owing them for an often-used tool).

Harvey



Re: Git and GCC

2007-12-05 Thread Linus Torvalds


On Wed, 5 Dec 2007, Harvey Harrison wrote:
> 
> If anyone recalls, my report was something along the lines of
> git gc --aggressive explodes pack size.

Yes, --aggressive is generally a bad idea. I think we should remove it or 
at least fix it. It doesn't do what the name implies, because it actually 
throws away potentially good packing, and re-does it all from a clean 
slate.

That said, it's totally pointless for a person who isn't a git proponent 
to do an initial import, and in that sense I agree with Daniel: he 
shouldn't waste his time with tools that he doesn't know or care about, 
since there are people who *can* do a better job, and who know what they 
are doing, and understand and like the tool.

While you can do a half-assed job with just mindlessly running "git 
svnimport" (which is deprecated these days) or "git svn clone" (better), 
the fact is, doing a *good* import likely means spending some effort 
on it. Trying to make the user names / emails better with a mailmap, 
for example. 
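For reference, git svn can read such a mapping at clone time via an authors file (`git svn clone -A authors.txt ...`), one svn username per line. The entries below are hypothetical, showing only the format:

```
# authors.txt -- hypothetical example entries (format: svnuser = Name <email>)
jsmith = J. Smith <jsmith@example.com>
rdoe = R. Doe <rdoe@example.com>
```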

[ By default, for example, "git svn clone/fetch" seems to create those 
  horrible fake email addresses that contain the ID of the SVN repo in 
  each commit - I'm not talking about the "git-svn-id", I'm talking about 
  the "[EMAIL PROTECTED]" thing for the author. Maybe people don't 
  really care, but isn't that ugly as hell? I'd think it's worth it doing 
  a really nice import, spending some effort on it.

  But maybe those things come from the older CVS->SVN import, I don't 
  really know. I've done a few SVN imports, but I've done them just for 
  stuff where I didn't want to touch SVN, but just wanted to track some 
  project like libgpod. For things like *that*, a totally mindless "git 
  svn" thing is fine ]

Of course, that does require there to be git people in the gcc crowd who 
are motivated enough to do the proper import and then make sure it's 
up-to-date and hosted somewhere. If those people don't exist, I'm not sure 
there's much point to it.

The point being, you cannot ask a non-git person to do a major git import 
for an actual switch-over. Yes, it *can* be as simple as just doing a

git svn clone --stdlayout svn://gcc.gnu.org/svn/gcc gcc

but the fact remains, you want to spend more effort and expertise on it if 
you actually want the result to be used as a basis for future work (as 
opposed to just tracking somebody else's SVN tree).

That includes:

 - do the historic import with good packing (and no, "--aggressive" 
   is not it, never mind the misleading name and man-page)

 - probably mailmap entries, certainly spending some time validating the 
   results.

 - hosting it

and perhaps most importantly

 - helping people who are *not* git users get up to speed.

because doing a good job at it is like asking a CVS newbie to set up a 
branch in CVS. I'm sure you can do it from man-pages, but I'm also sure 
you sure as hell won't like the end result.

Linus


Re: Git and GCC

2007-12-05 Thread David Miller
From: "Daniel Berlin" <[EMAIL PROTECTED]>
Date: Wed, 5 Dec 2007 23:32:52 -0500

> On 12/5/07, David Miller <[EMAIL PROTECTED]> wrote:
> > From: "Daniel Berlin" <[EMAIL PROTECTED]>
> > Date: Wed, 5 Dec 2007 22:47:01 -0500
> >
> > > The size is clearly not just svn data, it's in the git pack itself.
> >
> > And other users have shown much smaller metadata from a GIT import,
> > and yes those are including all of the repository history and branches
> > not just the trunk.
> I followed the instructions in the tutorials.
> I followed the instructions given to me by people who created these.
> I came up with a 1.5 gig pack file.
> You want to help, or you want to argue with me.

Several people replied in this thread showing what options can lead to
smaller pack files.

They also listed what the GIT limitations are that would affect the
kind of work you are doing, which seemed to mostly deal with the high
space cost of branching and tags when converting to/from SVN repos.


Re: Git and GCC

2007-12-05 Thread Daniel Berlin
On 12/5/07, David Miller <[EMAIL PROTECTED]> wrote:
> From: "Daniel Berlin" <[EMAIL PROTECTED]>
> Date: Wed, 5 Dec 2007 22:47:01 -0500
>
> > The size is clearly not just svn data, it's in the git pack itself.
>
> And other users have shown much smaller metadata from a GIT import,
> and yes those are including all of the repository history and branches
> not just the trunk.
I followed the instructions in the tutorials.
I followed the instructions given to me by people who created these.
I came up with a 1.5 gig pack file.
You want to help, or you want to argue with me.
Right now it sounds like you are trying to blame me or make it look
like I did something wrong.

You are, of course, welcome to try it yourself.
I can give you the exact commands I used, and with git
1.5.3.7, they will give you a 1.5 gig pack file.


Re: Git and GCC

2007-12-05 Thread Harvey Harrison

On Wed, 2007-12-05 at 20:20 -0800, David Miller wrote:
> From: "Daniel Berlin" <[EMAIL PROTECTED]>
> Date: Wed, 5 Dec 2007 22:47:01 -0500
> 
> > The size is clearly not just svn data, it's in the git pack itself.
> 
> And other users have shown much smaller metadata from a GIT import,
> and yes those are including all of the repository history and branches
> not just the trunk.

David, I think it is actually a bug in git gc with the --aggressive
option... mind you, even if he solves that, the format git svn uses
for its bi-directional metadata is so space-inefficient that Daniel will
be crying for other reasons immediately afterwards: 4MB for every
branch and tag in gcc svn (more than a few thousand).

You only need it around for branches you are planning on committing
to, but it is all created during the default git svn import.

FYI

Harvey



Re: Git and GCC

2007-12-05 Thread Harvey Harrison
I fought with this a few months ago when I did my own clone of gcc svn.
My bad for only discussing this on #git at the time.  Should have put
this to the list as well.

If anyone recalls, my report was something along the lines of
git gc --aggressive explodes pack size.

git repack -a -d --depth=100 --window=100 produced a ~550MB packfile
immediately afterwards a git gc --aggressive produces a 1.5G packfile.

This was for all branches/tags, not just trunk like Daniel's repo.

The best theory I had at the time was that the gc doesn't find as good
deltas, or doesn't allow the same delta chain depth, and so generates a
new object in the pack rather than reusing a good delta it already has
in the well-packed pack.
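That depth theory can be modeled in miniature. The sketch below is an assumption-laden toy (it is NOT git's pack format; the "delta" is a crude XOR against the previous revision), but it shows the mechanism: if the repacker caps delta-chain depth lower, it must store full objects more often, and the pack grows.

```python
# Toy model of delta-chain depth vs. pack size -- not git's real format.
import zlib

def packed_size(revisions, max_depth):
    total, depth, prev = 0, 0, b""
    for rev in revisions:
        if depth < max_depth:
            # Crude "delta": XOR against the previous revision's prefix
            # turns the unchanged part into easily-compressed zero bytes.
            delta = bytes(a ^ b for a, b in zip(rev, prev)) + rev[len(prev):]
            total += len(zlib.compress(delta, 9))
            depth += 1
        else:
            total += len(zlib.compress(rev, 9))  # full snapshot; reset chain
            depth = 0
        prev = rev
    return total

# A growing file: each revision appends a little to the previous one.
revs, data = [], b""
for i in range(60):
    data += ("change %d\n" % i).encode() * 5
    revs.append(data)

deep = packed_size(revs, 100)   # chains allowed to run long
shallow = packed_size(revs, 2)  # frequent full snapshots
print(deep, shallow)
```

With deep chains allowed, every revision stays a small delta; with depth capped at 2, every third revision is a full snapshot and the total balloons.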

Cheers,

Harvey



Re: Git and GCC

2007-12-05 Thread David Miller
From: "Daniel Berlin" <[EMAIL PROTECTED]>
Date: Wed, 5 Dec 2007 22:47:01 -0500

> The size is clearly not just svn data, it's in the git pack itself.

And other users have shown much smaller metadata from a GIT import,
and yes those are including all of the repository history and branches
not just the trunk.


Re: Function specific optimizations call for discussion

2007-12-05 Thread Jonathan Adamczewski

Michael Meissner wrote:

One of the things that I've been interested in is adding support to GCC to
compile individual functions with specific target options.  I first presented a
draft at the Google mini-summit, and then another draft at the GCC developer
summit last July.

...

The proposal is at:
http://gcc.gnu.org/wiki/FunctionSpecificOpt
  



Have you given any thought to specifying --param values?

jonathan.


Re: Git and GCC

2007-12-05 Thread Daniel Berlin
On 12/5/07, David Miller <[EMAIL PROTECTED]> wrote:
> From: "Daniel Berlin" <[EMAIL PROTECTED]>
> Date: Wed, 5 Dec 2007 21:41:19 -0500
>
> > It is true I gave up quickly, but this is mainly because i don't like
> > to fight with my tools.
> > I am quite fine with a distributed workflow, I now use 8 or so gcc
> > branches in mercurial (auto synced from svn) and merge a lot between
> > them. I wanted to see if git would sanely let me manage the commits
> > back to svn.  After fighting with it, i gave up and just wrote a
> > python extension to hg that lets me commit non-svn changesets back to
> > svn directly from hg.
>
> I find it ironic that you were even willing to write tools to
> facilitate your hg based gcc workflow.
Why?

> That really shows what your
> thinking is on this matter, in that you're willing to put effort
> towards making hg work better for you but you're not willing to expend
> that level of effort to see if git can do so as well.
See, now you claim to know my thinking.
I went back to hg because git's space usage wasn't even in the
ballpark, and I couldn't get git-svn rebase to update the revs after the
initial import (even though I had properly used a rewriteRoot).

The size is clearly not just svn data, it's in the git pack itself.

I spent a long time working on SVN to reduce its space usage (on the
repo side, cleaning up the client side, and giving the svn devs a path
to reduce it further), as well as on UI issues, and I really don't feel
like having to do the same for git.

I'm tired of having to spend a large amount of effort to get my tools
to work.  If the community wants to find and fix the problem, I've
already said repeatedly that I'll happily hand over my repo, data,
whatever.  You are correct that I am not going to spend even more effort
when I can be productive with something else much quicker.  The devil
I know (committing to svn) is better than the devil I don't (diving
into git source code and finding/fixing what is causing this space
blowup).
The python extension took me a few hours (< 4).
In git, I spent those hours waiting for git-gc to finish.

> This is what really eats me from the inside about your dissatisfaction
> with git.  Your analysis seems to be a self-fullfilling prophecy, and
> that's totally unfair to both hg and git.
Oh?
You seem to be taking this awfully personally.
I came into this completely open-minded. Really, I did (I'm sure
you'll claim otherwise).
The git people told me it would work great and I'd have a really small
git repo and be able to commit back to svn.
I tried it.
It didn't work out.
It doesn't seem to be usable for whatever reason.
I'm happy to give details, data, whatever.

I made the engineering decision that my effort would be better spent
doing something I knew I could do quickly (make hg commit back to svn
for my purposes) than trying to improve larger issues in git (UI and
space usage).  That took me a few hours, and I was happy again.

I would have been incredibly happy if git had just come up with
a 400 meg gcc repository, and to be happily committing away via
git-svn to gcc's repository ...
But it didn't happen.
So far, you have yet to actually do anything but incorrectly tell me
what I am thinking.

I'll probably try again in 6 months, and maybe it will be better.


Re: Git and GCC

2007-12-05 Thread David Miller
From: "Daniel Berlin" <[EMAIL PROTECTED]>
Date: Wed, 5 Dec 2007 21:41:19 -0500

> It is true I gave up quickly, but this is mainly because i don't like
> to fight with my tools.
> I am quite fine with a distributed workflow, I now use 8 or so gcc
> branches in mercurial (auto synced from svn) and merge a lot between
> them. I wanted to see if git would sanely let me manage the commits
> back to svn.  After fighting with it, i gave up and just wrote a
> python extension to hg that lets me commit non-svn changesets back to
> svn directly from hg.

I find it ironic that you were even willing to write tools to
facilitate your hg based gcc workflow.  That really shows what your
thinking is on this matter, in that you're willing to put effort
towards making hg work better for you but you're not willing to expend
that level of effort to see if git can do so as well.

This is what really eats me from the inside about your dissatisfaction
with git.  Your analysis seems to be a self-fulfilling prophecy, and
that's totally unfair to both hg and git.


Re: Git and GCC

2007-12-05 Thread Daniel Berlin
On 12/5/07, David Miller <[EMAIL PROTECTED]> wrote:
> From: "Daniel Berlin" <[EMAIL PROTECTED]>
> Date: Wed, 5 Dec 2007 14:08:41 -0500
>
> > So I tried a full history conversion using git-svn of the gcc
> > repository (IE every trunk revision from 1-HEAD as of yesterday)
> > The git-svn import was done using repacks every 1000 revisions.
> > After it finished, I used git-gc --aggressive --prune.  Two hours
> > later, it finished.
> > The final size after this is 1.5 gig for all of the history of gcc for
> > just trunk.
> >
> > [EMAIL PROTECTED]:/compilerstuff/gitgcc/gccrepo/.git/objects/pack$ ls -trl
> > total 1568899
> > -r--r--r-- 1 dberlin dberlin 1585972834 2007-12-05 14:01
> > pack-cd328fcf0bd673d8f2f72c42fbe67da64cbcd218.pack
> > -r--r--r-- 1 dberlin dberlin   19008488 2007-12-05 14:01
> > pack-cd328fcf0bd673d8f2f72c42fbe67da64cbcd218.idx
> >
> > This is 3x bigger than hg *and* hg doesn't require me to waste my life
> > repacking every so often.
> > The hg operations run roughly as fast as the git ones
> >
> > I'm sure there are magic options, magic command lines, etc, i could
> > use to make it smaller.
> >
> > I'm sure if i spent the next few weeks fucking around with git, it may
> > even be usable!
> >
> > But given that git is harder to use, requires manual repacking to get
> > any kind of sane space usage, and is 3x bigger anyway, i don't see any
> > advantage to continuing to experiment with git and gcc.
>
> I would really appreciate it if you would share experiences
> like this with the GIT community, who have been now CC:'d.
>
> That's the only way this situation is going to improve.
>
> When you don't CC: the people who can fix the problem, I can only
> speculate that perhaps at least subconsciously you don't care if
> the situation improves or not.
>
I didn't cc the git community for three reasons

1. It's not the nicest message in the world, and thus, more likely to
get bad responses than constructive ones.

2. Based on the level of usability, I simply assume it is too young
for regular developers to use.  At least, I hope this is the case.

3. People I know have had bad experiences discussing usability issues
with the git community in the past.  I am not likely to fare any
better, so I would rather have someone who is involved with both our
community and theirs raise these issues, rather than a complete
newcomer.

But hey, whatever floats your boat :)

It is true I gave up quickly, but this is mainly because I don't like
to fight with my tools.
I am quite fine with a distributed workflow; I now use 8 or so gcc
branches in mercurial (auto-synced from svn) and merge a lot between
them. I wanted to see if git would sanely let me manage the commits
back to svn.  After fighting with it, I gave up and just wrote a
python extension to hg that lets me commit non-svn changesets back to
svn directly from hg.

--Dan


Re: Git and GCC

2007-12-05 Thread David Miller
From: "Daniel Berlin" <[EMAIL PROTECTED]>
Date: Wed, 5 Dec 2007 14:08:41 -0500

> So I tried a full history conversion using git-svn of the gcc
> repository (IE every trunk revision from 1-HEAD as of yesterday)
> The git-svn import was done using repacks every 1000 revisions.
> After it finished, I used git-gc --aggressive --prune.  Two hours
> later, it finished.
> The final size after this is 1.5 gig for all of the history of gcc for
> just trunk.
> 
> [EMAIL PROTECTED]:/compilerstuff/gitgcc/gccrepo/.git/objects/pack$ ls -trl
> total 1568899
> -r--r--r-- 1 dberlin dberlin 1585972834 2007-12-05 14:01
> pack-cd328fcf0bd673d8f2f72c42fbe67da64cbcd218.pack
> -r--r--r-- 1 dberlin dberlin   19008488 2007-12-05 14:01
> pack-cd328fcf0bd673d8f2f72c42fbe67da64cbcd218.idx
> 
> This is 3x bigger than hg *and* hg doesn't require me to waste my life
> repacking every so often.
> The hg operations run roughly as fast as the git ones
> 
> I'm sure there are magic options, magic command lines, etc, i could
> use to make it smaller.
> 
> I'm sure if i spent the next few weeks fucking around with git, it may
> even be usable!
> 
> But given that git is harder to use, requires manual repacking to get
> any kind of sane space usage, and is 3x bigger anyway, i don't see any
> advantage to continuing to experiment with git and gcc.

I would really appreciate it if you would share experiences
like this with the GIT community, who have been now CC:'d.

That's the only way this situation is going to improve.

When you don't CC: the people who can fix the problem, I can only
speculate that perhaps at least subconsciously you don't care if
the situation improves or not.

The OpenSolaris folks behaved similarly, and that really ticked me
off.


Re: libbid and floatingpoint exception access funcs

2007-12-05 Thread H.J. Lu
Hi Bernhard,

Please open a gcc bug and assign it to me.

Thanks.


H.J.
---
On Wed, Dec 05, 2007 at 03:02:32PM +0100, Bernhard Fischer wrote:
> Hi,
> 
> My libc is configured to omit any FP support (UCLIBC_HAS_FLOATS is not set),
> but the recent libbid update seems to unconditionally pull in floating-point
> accessor functions, thus breaking bootstrap. My notes on this read:
> 
> 8<
> Follows: 
> Precedes: 
> 
> do not pull in allegedly unneeded floatingpoint exception access funcs
> 
>   HJL's recent update of libbid would pull in Floating-point exception
>   handling, although __GCC_FLOAT_NOT_NEEDED is defined.
> 
>   Prevent pulling in feclearexcept, feraiseexcept et al for now.
>   FIXME: revisit
> 8<
> 
> H.J., please advise.
> 
> PS: I currently do:
> libgcc/ChangeLog:
> 2007-10-13  Bernhard Fischer  <>
> 
>   * config/libbid/bid_conf.h: Do not define
>   DECIMAL_GLOBAL_EXCEPTION_FLAGS_ACCESS_FUNCTIONS if
>   __GCC_FLOAT_NOT_NEEDED is defined.

> Index: gcc-4.3.0/libgcc/config/libbid/bid_conf.h
> ===
> --- gcc-4.3.0/libgcc/config/libbid/bid_conf.h (revision 129202)
> +++ gcc-4.3.0/libgcc/config/libbid/bid_conf.h (working copy)
> @@ -535,7 +535,9 @@ Software Foundation, 51 Franklin Street,
>  #define DECIMAL_GLOBAL_ROUNDING 1
>  #define DECIMAL_GLOBAL_ROUNDING_ACCESS_FUNCTIONS 1
>  #define DECIMAL_GLOBAL_EXCEPTION_FLAGS 1
> +#ifndef __GCC_FLOAT_NOT_NEEDED
>  #define DECIMAL_GLOBAL_EXCEPTION_FLAGS_ACCESS_FUNCTIONS 1
> +#endif
>  #define BID_HAS_GCC_DECIMAL_INTRINSICS 1
>  #endif /* IN_LIBGCC2 */
>  



Re: Rant about ChangeLog entries and commit messages

2007-12-05 Thread Robert Dewar

Ben Elliston wrote:


Something else that hasn't been raised is that ChangeLogs can be
revised.  We often see people making mistakes with their ChangeLog
entries, but since the ChangeLog is versioned, they can revise it.  If
you screw up a commit message, it's much harder to fix it (and a purist
might argue that to do so would be destroying revision history).


What we do with Ada is to allow *additions* to an existing revision
history entry, but not modifications of what is already there.


Re: Rant about ChangeLog entries and commit messages

2007-12-05 Thread Ben Elliston
On Wed, 2007-12-05 at 18:35 -0500, Daniel Berlin wrote:

> svn propedit --revision  svn:log

OK, well, it used to be a bit trickier in CVS .. :-)

Ben




Re: iWMMXt/Linux EABI toolchain

2007-12-05 Thread Paul Brook
> > > > Thanks for the quick response!
> > > > I'm sure it seems I like to make hard wok for myself! It gets worse,
> > > > I'm porting Gentoo Linux to iWMMXt with pure EABI kernel and
> > > > userspace.  I'm not concerned about being able to run old binaries.
> > > > So is using abi=iwmmxt really not what I want? A really bad idea?
> > >
> > > Absolutely.  You want the AAPCS, not Intel's pre-AAPCS ABI.
> >
> > Actually, -mabi=iwmmxt is AAPCS based. It's different from the old Intel
> > iWMMXt ABI.

Yes, but not all AAPCS ABIs are equal. There are some aspects of the ABI (e.g. 
enum sizes) that are target specific. gcc currently does not have an option 
for both Linux and iwmmxt.

Paul


Re: Git and GCC

2007-12-05 Thread Ollie Wild
On Dec 5, 2007 1:40 PM, Daniel Berlin <[EMAIL PROTECTED]> wrote:
>
> > Out of curiosity, how much of that is the .git/svn directory?  This is
> > where git-svn-specific data is stored.  It is *very* inefficient, at
> > least for the 1.5.2.5 version I'm using.
> >
>
> I was only counting the space in the packs dir.

In my personal client, which includes the entire history of GCC, the
packs dir is only 652MB.

Obviously, you're not a big fan of Git, and you're entitled to your
opinion.  I, however, find it very useful.  Given a choice between Git
and Mercurial, I choose git, but only because I have prior experience
working with the Linux kernel.  From what I've heard, both do the job
reasonably well.

Thanks to git-svn, using Git to develop GCC is practical with or
without explicit support from the GCC maintainers.  As I see it, the
main barrier is the inordinate amount of time it takes to bring up a
repository from scratch.  As has already been noted, Harvey has
provided a read-only copy, but it (a) only allows access to a subset
of GCC's branches and (b) doesn't provide a mechanism for developers
to push changes directly via git-svn.

This sounds like a homework project.  I'll do some investigation and
see if I can come up with a good bootstrap process.

Ollie


Re: Git and GCC

2007-12-05 Thread Harvey Harrison
On Thu, 2007-12-06 at 00:34 +0100, Andreas Schwab wrote:
> Harvey Harrison <[EMAIL PROTECTED]> writes:
> 
> > On Wed, 2007-12-05 at 21:23 +0100, Samuel Tardieu wrote:
> >> > "Daniel" == Daniel Berlin <[EMAIL PROTECTED]> writes:
> >> 
> >> Daniel> So I tried a full history conversion using git-svn of the gcc
> >> Daniel> repository (IE every trunk revision from 1-HEAD as of
> >> Daniel> yesterday) The git-svn import was done using repacks every
> >> Daniel> 1000 revisions.  After it finished, I used git-gc --aggressive
> >> Daniel> --prune.  Two hours later, it finished.  The final size after
> >> Daniel> this is 1.5 gig for all of the history of gcc for just trunk.
> >> 
> >> Most of the space is probably taken by the SVN specific data. To get
> >> an idea of how GIT would handle GCC data, you should clone the GIT
> >> directory or checkout one from infradead.org:
> >> 
> >>   % git clone git://git.infradead.org/gcc.git
> >> 
> >
> > Actually I went through and created the basis for that repo.  It
> > contains all branches and tags in the gcc svn repo and the final
> > pack comes to about 600M.  This has _everything_, not just trunk.
> 
> Not everything.  Only trunk and a few selected branches, and no tags.
> 

Yes, everything; by default you only get the more modern branches/tags,
but it's all in there.  If there is interest I can work with Bernardo
and get the rest publicly exposed.

Harvey



Re: Rant about ChangeLog entries and commit messages

2007-12-05 Thread Daniel Berlin
On Dec 5, 2007 6:15 PM, Ben Elliston <[EMAIL PROTECTED]> wrote:
> On Tue, 2007-12-04 at 09:18 -0700, Tom Tromey wrote:
>
> > First, continuing to have good quality messages.  Right now at the
> > very least you get a (semi-) accurate record of what was touched.
> > I've seen plenty of ChangeLog-less projects out there that end up with
> > commits like "fixed a bug", or even worse.
>
> Something else that hasn't been raised is that ChangeLogs can be
> revised.  We often see people making mistakes with their ChangeLog
> entries, but since the ChangeLog is versioned, they can revise it.  If
> you screw up a commit message, it's much harder to fix it (and a purist
> might argue that to do so would be destroying revision history).
Uh?


svn propedit --revision REV svn:log

Hope this helps!


Re: Git and GCC

2007-12-05 Thread Andreas Schwab
Harvey Harrison <[EMAIL PROTECTED]> writes:

> On Wed, 2007-12-05 at 21:23 +0100, Samuel Tardieu wrote:
>> > "Daniel" == Daniel Berlin <[EMAIL PROTECTED]> writes:
>> 
>> Daniel> So I tried a full history conversion using git-svn of the gcc
>> Daniel> repository (IE every trunk revision from 1-HEAD as of
>> Daniel> yesterday) The git-svn import was done using repacks every
>> Daniel> 1000 revisions.  After it finished, I used git-gc --aggressive
>> Daniel> --prune.  Two hours later, it finished.  The final size after
>> Daniel> this is 1.5 gig for all of the history of gcc for just trunk.
>> 
>> Most of the space is probably taken by the SVN specific data. To get
>> an idea of how GIT would handle GCC data, you should clone the GIT
>> directory or checkout one from infradead.org:
>> 
>>   % git clone git://git.infradead.org/gcc.git
>> 
>
> Actually I went through and created the basis for that repo.  It
> contains all branches and tags in the gcc svn repo and the final
> pack comes to about 600M.  This has _everything_, not just trunk.

Not everything.  Only trunk and a few selected branches, and no tags.

Andreas.

-- 
Andreas Schwab, SuSE Labs, [EMAIL PROTECTED]
SuSE Linux Products GmbH, Maxfeldstraße 5, 90409 Nürnberg, Germany
PGP key fingerprint = 58CA 54C7 6D53 942B 1756  01D3 44D5 214B 8276 4ED5
"And now for something completely different."


Re: Rant about ChangeLog entries and commit messages

2007-12-05 Thread Ben Elliston
On Tue, 2007-12-04 at 09:18 -0700, Tom Tromey wrote:

> First, continuing to have good quality messages.  Right now at the
> very least you get a (semi-) accurate record of what was touched.
> I've seen plenty of ChangeLog-less projects out there that end up with
> commits like "fixed a bug", or even worse.

Something else that hasn't been raised is that ChangeLogs can be
revised.  We often see people making mistakes with their ChangeLog
entries, but since the ChangeLog is versioned, they can revise it.  If
you screw up a commit message, it's much harder to fix it (and a purist
might argue that to do so would be destroying revision history).

> Also it seems to me that this will make it a bit harder for developers
> without write access to get their patches checked in ... because it
> will mean even more work for whoever does the commit.

That's a good point.

Ben




Re: Git and GCC

2007-12-05 Thread J.C. Pizarro
On 12/5/07, Daniel Berlin <[EMAIL PROTECTED]> wrote:
> So I tried a full history conversion using git-svn of the gcc
> repository (IE every trunk revision from 1-HEAD as of yesterday)
> The git-svn import was done using repacks every 1000 revisions.
> After it finished, I used git-gc --aggressive --prune.  Two hours
> later, it finished.
> The final size after this is 1.5 gig for all of the history of gcc for
> just trunk.
>
> [EMAIL PROTECTED]:/compilerstuff/gitgcc/gccrepo/.git/objects/pack$ ls -trl
> total 1568899
> -r--r--r-- 1 dberlin dberlin 1585972834 2007-12-05 14:01
> pack-cd328fcf0bd673d8f2f72c42fbe67da64cbcd218.pack
> -r--r--r-- 1 dberlin dberlin   19008488 2007-12-05 14:01
> pack-cd328fcf0bd673d8f2f72c42fbe67da64cbcd218.idx
>
> This is 3x bigger than hg *and* hg doesn't require me to waste my life
> repacking every so often.
> The hg operations run roughly as fast as the git ones
>
> I'm sure there are magic options, magic command lines, etc, i could
> use to make it smaller.
>
> I'm sure if i spent the next few weeks fucking around with git, it may
> even be usable!
>
> But given that git is harder to use, requires manual repacking to get
> any kind of sane space usage, and is 3x bigger anyway, i don't see any
> advantage to continuing to experiment with git and gcc.
>
> I already have two way sync with hg.
> Maybe someday when git is more usable than hg to a normal developer,
> or it at least is significantly smaller than hg, i'll look at it
> again.
> For now, it seems a net loss.
>
> --Dan
> >
> > git clone --depth 100 git://git.infradead.org/gcc.git
> >
> > should give around ~50mb repository with usable trunk. This is all thanks to
> > Bernardo Innocenti for setting up an up-to-date gcc git repo.
> >
> > P.S:Please cut down on the usage of exclamation mark.
> >
> > Regards,
> > ismail
> >
> > --
> > Never learn by your mistakes, if you do you may never dare to try again.
> >

To see "Re: svn trunk reaches nearly 1 GiB!!! That massive!!!"

http://gcc.gnu.org/ml/gcc/2007-11/msg00805.html
http://gcc.gnu.org/ml/gcc/2007-11/msg00770.html
http://gcc.gnu.org/ml/gcc/2007-11/msg00769.html
http://gcc.gnu.org/ml/gcc/2007-11/msg00768.html
http://gcc.gnu.org/ml/gcc/2007-11/msg00767.html


* In http://gcc.gnu.org/ml/gcc/2007-11/msg00675.html , i did put

The files generated from flex/bison are a lot of "trashing hexadecimals" and
must not be committed to any cvs/svn/git/hg repository, because they consume
a lot of disk space for a modification of only a few lines of the flex/bison sources.

* In http://gcc.gnu.org/ml/gcc/2007-11/msg00683.html , i did put

I hate considering temporary files as sources of the tree. They aren't sources.

It's a good idea to remove ALL generated files from the sources:

A) generated *.c, *.h from lex/bison s
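The point about generated lexer/parser files can be captured with ignore
patterns (a sketch of my own; the exact patterns depend on the build setup,
whether as `.gitignore` entries or equivalent `svn:ignore` properties):

```
# Keep flex/bison outputs out of version control; regenerate at build time.
*.yy.c
*.tab.c
*.tab.h
```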

gcc-4.2-20071205 is now available

2007-12-05 Thread gccadmin
Snapshot gcc-4.2-20071205 is now available on
  ftp://gcc.gnu.org/pub/gcc/snapshots/4.2-20071205/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 4.2 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/branches/gcc-4_2-branch 
revision 130635

You'll find:

gcc-4.2-20071205.tar.bz2  Complete GCC (includes all of below)

gcc-core-4.2-20071205.tar.bz2 C front end and core compiler

gcc-ada-4.2-20071205.tar.bz2  Ada front end and runtime

gcc-fortran-4.2-20071205.tar.bz2  Fortran front end and runtime

gcc-g++-4.2-20071205.tar.bz2  C++ front end and runtime

gcc-java-4.2-20071205.tar.bz2 Java front end and runtime

gcc-objc-4.2-20071205.tar.bz2 Objective-C front end and runtime

gcc-testsuite-4.2-20071205.tar.bz2  The GCC testsuite

Diffs from 4.2-20071128 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.2
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.


Re: Patch manager dying for a week or two

2007-12-05 Thread Rask Ingemann Lambertsen
On Wed, Dec 05, 2007 at 04:32:00PM -0500, NightStrike wrote:
> On 12/5/07, Daniel Berlin <[EMAIL PROTECTED]> wrote:
> > Patch manager will be dying for a week or two while i change hosting.
> >
> > of course, if nobody is still using it, i can just kill it permanently.

   grep -F -e patchapp gcc-bugs@ says it is being used. I use it and would
like to keep doing so. As well as tracking my patches, I find the notice
automatically posted to the bug database a lot more convenient than having
to do so manually.
 
> What is the patch manager?

http://gcc.gnu.org/wiki/GCC_Patch_Tracking

-- 
Rask Ingemann Lambertsen
Danish law requires addresses in e-mail to be logged and stored for a year


RE: Patch manager dying for a week or two

2007-12-05 Thread Dave Korn
On 05 December 2007 21:04, Daniel Berlin wrote:

> Patch manager will be dying for a week or two while i change hosting.
> 
> of course, if nobody is still using it, i can just kill it permanently.

  Well I haven't submitted any patches just lately, but I always use it when I
do, I think it's very useful indeed.  Thanks for organising it.

cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: iWMMXt/Linux EABI toolchain

2007-12-05 Thread Daniel Jacobowitz
On Wed, Mar 01, 2006 at 06:20:53PM +, Steven Newbury wrote:
> OK, thank you.  I'll target "arm-iwmmxt-linux-gnueabi" with --with-cpu= etc.
> and --disable-multilib.  The vendor string is for my build scripts and also
> will help differentiate the toolchain; is that valid?

Yep.

-- 
Daniel Jacobowitz
CodeSourcery



Re: Designs for better debug info in GCC

2007-12-05 Thread Joe Buck
On Wed, Dec 05, 2007 at 09:05:33AM -0500, Diego Novillo wrote:
> In my simplistic view of this problem, I've always had the idea that -O0 
> -g means "full debugging bliss", -O1 -g means "tolerable debugging" 
> (symbols shouldn't disappear, for instance, though they do now) and -O2 
> -g means "you can probably know what line+function you're executing".

I'd be happy enough if the state of -O1 -g debugging were improved,
perhaps using some of Alexandre's ideas so that it could be "full
debugging bliss" with some optimization as well.  Speeding up the
compile/test/debug/modify cycle would result.  We could then have fast
but fully debuggable code at -O1, and even faster code at -O2 not
constrained by the requirement of, as Diego says, "deconstructing
arbitrary transformations done by the optimizers". 



Re: Git and GCC

2007-12-05 Thread NightStrike
On 12/5/07, Daniel Berlin <[EMAIL PROTECTED]> wrote:
> As I said, maybe i'll look at git in another year or so.
> But i'm certainly going to ignore all the "git is so great, we should
> move gcc to it" people until it works better, while i am much more
> inclined to believe the "hg is so great, we should move gcc to it"
> people.

Just out of curiosity, is there something wrong with the current
choice of svn?  As I recall, it wasn't too long ago that gcc converted
from cvs to svn.  What's the motivation to change again?  (I'm not
trying to oppose anything.. I'm just curious, as I don't know much
about this kind of thing).


Broken regression testing on Intel Darwin9

2007-12-05 Thread Dominique Dhumieres
At revision 130629 regtesting on Intel Darwin9 gives a dozen instances of

The process has forked and you cannot use this CoreFoundation functionality 
safely. You MUST exec().
Break on 
__THE_PROCESS_HAS_FORKED_AND_YOU_CANNOT_USE_THIS_COREFOUNDATION_FUNCTIONALITY___YOU_MUST_EXEC__()
 to debug.

then it stops doing anything until I kill it. I did not see that with revision
130589.
What is the meaning of the message, and what could I do?

TIA

Dominique


Re: Git and GCC

2007-12-05 Thread Harvey Harrison
On Wed, 2007-12-05 at 21:23 +0100, Samuel Tardieu wrote:
> > "Daniel" == Daniel Berlin <[EMAIL PROTECTED]> writes:
> 
> Daniel> So I tried a full history conversion using git-svn of the gcc
> Daniel> repository (IE every trunk revision from 1-HEAD as of
> Daniel> yesterday) The git-svn import was done using repacks every
> Daniel> 1000 revisions.  After it finished, I used git-gc --aggressive
> Daniel> --prune.  Two hours later, it finished.  The final size after
> Daniel> this is 1.5 gig for all of the history of gcc for just trunk.
> 
> Most of the space is probably taken by the SVN specific data. To get
> an idea of how GIT would handle GCC data, you should clone the GIT
> directory or checkout one from infradead.org:
> 
>   % git clone git://git.infradead.org/gcc.git
> 

Actually I went through and created the basis for that repo.  It
contains all branches and tags in the gcc svn repo and the final
pack comes to about 600M.  This has _everything_, not just trunk.

For the first time after doing such an import, I found it much better
to do git repack -a -f --depth=100 --window=100.  After that initial
repack a plain git-gc occasionally will be just fine.

If you want any more information about this, let me know.

Cheers,

Harvey



Re: Git and GCC

2007-12-05 Thread Daniel Berlin
On 12/5/07, Samuel Tardieu <[EMAIL PROTECTED]> wrote:
> > "Daniel" == Daniel Berlin <[EMAIL PROTECTED]> writes:
>
> Daniel> So I tried a full history conversion using git-svn of the gcc
> Daniel> repository (IE every trunk revision from 1-HEAD as of
> Daniel> yesterday) The git-svn import was done using repacks every
> Daniel> 1000 revisions.  After it finished, I used git-gc --aggressive
> Daniel> --prune.  Two hours later, it finished.  The final size after
> Daniel> this is 1.5 gig for all of the history of gcc for just trunk.
>
> Most of the space is probably taken by the SVN specific data.

I showed a du of the pack directory.
Everyone tells me that svn specific data is in .svn, so i am
disinclined to believe this.

Also, given that hg can store the svn data without this kind of
penalty, it's just another strike against git.


>  To get
> an idea of how GIT would handle GCC data, you should clone the GIT
> directory or checkout one from infradead.org:
Does infradead have the entire history?

>   % git clone git://git.infradead.org/gcc.git
>
> On my machine, it takes 856M with a checkout copy of trunk and
> contains the trunk, autovect, fixed-point, 4.1 and 4.2 branches. In
> comparison, my checked out copy of trunk using SVN requires 1.2G, and
> I don't have any history around...

This is about git's usability and space usage, not SVN.
People say we should consider GIT. I have been considering GIT and hg,
and right now, GIT looks like a massive loser in every respect.
It's harder to use.
It takes up more space than hg to store the same data.
It requires manual repacking.
Its diff/etc commands are not any faster.

Humorously, i tried to verify whether infradead has full history or
not, but of course git log git://git.infradead.org/gcc.git says
"fatal, not a git repository".
(though git clone is happy to clone it, because it is a git repository).
I'm sure there is some magic option or command line i need to use to
view remote log history without cloning the repository.
But all the other systems we look at don't require this kind of
bullshit to actually get things done.

As I said, maybe i'll look at git in another year or so.
But i'm certainly going to ignore all the "git is so great, we should
move gcc to it" people until it works better, while i am much more
inclined to believe the "hg is so great, we should move gcc to it"
people.


Re: Git and GCC

2007-12-05 Thread Daniel Berlin
On 12/5/07, Ollie Wild <[EMAIL PROTECTED]> wrote:
> On Dec 5, 2007 11:08 AM, Daniel Berlin <[EMAIL PROTECTED]> wrote:
> > So I tried a full history conversion using git-svn of the gcc
> > repository (IE every trunk revision from 1-HEAD as of yesterday)
> > The git-svn import was done using repacks every 1000 revisions.
> > After it finished, I used git-gc --aggressive --prune.  Two hours
> > later, it finished.
> > The final size after this is 1.5 gig for all of the history of gcc for
> > just trunk.
>
> Out of curiosity, how much of that is the .git/svn directory?  This is
> where git-svn-specific data is stored.  It is *very* inefficient, at
> least for the 1.5.2.5 version I'm using.
>

I was only counting the space in the packs dir.

> Ollie
>


Re: BITS_PER_UNIT larger than 8 -- word addressing

2007-12-05 Thread Ian Lance Taylor
Boris Boesler <[EMAIL PROTECTED]> writes:

>   I assume that GCC internals assume that memory can be byte (8 bits)
> addressed - for historical reasons. 

No.  gcc internals assume that memory can be addressed in units of
size BITS_PER_UNIT.  The default for BITS_PER_UNIT is 8.  I have
written backends for machines for which that is not true.

It is unusual, and there is only one official target with
BITS_PER_UNIT != 8 (c4x), so there is often some minor breakage.

Ian


Re: Patch manager dying for a week or two

2007-12-05 Thread NightStrike
On 12/5/07, Daniel Berlin <[EMAIL PROTECTED]> wrote:
> Patch manager will be dying for a week or two while i change hosting.
>
> of course, if nobody is still using it, i can just kill it permanently.
>

What is the patch manager?


Re: ACATS c460008 and VRP (was: Bootstrap failure on trunk: x86_64-linux-gnu)

2007-12-05 Thread Richard Kenner
> On GCC we use -gnato on tests known to need it
> (/gcc/testsuite/ada/acats/overflow.lst) since we want to test
> flags the typical GCC/Ada user does use and not what official validation
> requires (which is -gnato -gnatE IIRC).

But you're running a test that's *part* of the official validation and
it assumes the options that implement the full language (including overflow
checks).  I don't see the relevance of what options the "typical user"
specifies: this isn't typical user *code*!


Common logging config

2007-12-05 Thread Richard Almquist

Tony,

To configure commons-logging to use the JDK logger, create a file named
"commons-logging.properties" with the following:


org.apache.commons.logging.Log=org.apache.commons.logging.impl.Jdk14Logger

In a webapp this would go into WEB-INF/classes directory.
I'm not sure where to put it for the routing engine.

Richard



Re: ACATS c460008 and VRP (was: Bootstrap failure on trunk: x86_64-linux-gnu)

2007-12-05 Thread Richard Kenner
> Richard, Arnaud, could you check amongst GNAT experts if for such types
> (non power of two modulus), it's not worth enabling overflow checks by
> default now that we have VRP doing non trivial optimisations? People
> using non power of two modulus are not caring for performance anyway, so
> having a compliant implementation by default won't harm.

I don't think that either of us are the best people to ask, but my sense
is that it's not a great idea to have the default overflow handling differ
between types.  For one thing, what option would then disable overflow
checking for those types?

-gnato is required for ACATS tests because you need -gnato for RM compliance.


Patch manager dying for a week or two

2007-12-05 Thread Daniel Berlin
Patch manager will be dying for a week or two while i change hosting.

of course, if nobody is still using it, i can just kill it permanently.


Re: Git and GCC

2007-12-05 Thread Samuel Tardieu
> "Daniel" == Daniel Berlin <[EMAIL PROTECTED]> writes:

Daniel> So I tried a full history conversion using git-svn of the gcc
Daniel> repository (IE every trunk revision from 1-HEAD as of
Daniel> yesterday) The git-svn import was done using repacks every
Daniel> 1000 revisions.  After it finished, I used git-gc --aggressive
Daniel> --prune.  Two hours later, it finished.  The final size after
Daniel> this is 1.5 gig for all of the history of gcc for just trunk.

Most of the space is probably taken by the SVN specific data. To get
an idea of how GIT would handle GCC data, you should clone the GIT
directory or checkout one from infradead.org:

  % git clone git://git.infradead.org/gcc.git

On my machine, it takes 856M with a checkout copy of trunk and
contains the trunk, autovect, fixed-point, 4.1 and 4.2 branches. In
comparison, my checked out copy of trunk using SVN requires 1.2G, and
I don't have any history around...

  Sam
-- 
Samuel Tardieu -- [EMAIL PROTECTED] -- http://www.rfc1149.net/



Re: BITS_PER_UNIT larger than 8 -- word addressing

2007-12-05 Thread Boris Boesler

On 2007-11-27 18:29, Michael Eager wrote:
> Joseph S. Myers wrote:
> > On Tue, 27 Nov 2007, Michael Eager wrote:
> >
> >> I think that there is a pervasive understanding that SImode is
> >> single precision integer, 32-bits long.
> >
> > Only among contributors not considering non-8-bit bytes.  SImode is 4
> > times QImode, 4*BITS_PER_UNIT bits, and may not exist (or at least not be
> > particularly usable, much like the limitations on TImode on 32-bit
> > targets) with large BITS_PER_UNIT.
>
> I think you just described the majority of contributors.  :-)
>
> It's human nature not to recognize one's tacit assumptions or their
> consequences.

 I assume that GCC internals assume that memory can be byte (8-bit)
addressed, for historical reasons. Therefore, the sizes of all types
are multiples of a byte. The same is true for addressing values in
memory. (Sizes of types and their addresses must be distinguished more
precisely: a 32-bit value could sit on a 4-bit boundary!) But this is
changing. Addressable units are on 32-bit boundaries or even on 4-bit
boundaries today. This is the problem I'm running into right now.


Boris



Re: Git and GCC

2007-12-05 Thread Ollie Wild
On Dec 5, 2007 11:08 AM, Daniel Berlin <[EMAIL PROTECTED]> wrote:
> So I tried a full history conversion using git-svn of the gcc
> repository (IE every trunk revision from 1-HEAD as of yesterday)
> The git-svn import was done using repacks every 1000 revisions.
> After it finished, I used git-gc --aggressive --prune.  Two hours
> later, it finished.
> The final size after this is 1.5 gig for all of the history of gcc for
> just trunk.

Out of curiosity, how much of that is the .git/svn directory?  This is
where git-svn-specific data is stored.  It is *very* inefficient, at
least for the 1.5.2.5 version I'm using.

Ollie


Re: Git and GCC

2007-12-05 Thread Ismail Dönmez
Wednesday 05 December 2007 21:08:41 Daniel Berlin wrote:
> So I tried a full history conversion using git-svn of the gcc
> repository (IE every trunk revision from 1-HEAD as of yesterday)
> The git-svn import was done using repacks every 1000 revisions.
> After it finished, I used git-gc --aggressive --prune.  Two hours
> later, it finished.
> The final size after this is 1.5 gig for all of the history of gcc for
> just trunk.
>
> [EMAIL PROTECTED]:/compilerstuff/gitgcc/gccrepo/.git/objects/pack$ ls -trl
> total 1568899
> -r--r--r-- 1 dberlin dberlin 1585972834 2007-12-05 14:01
> pack-cd328fcf0bd673d8f2f72c42fbe67da64cbcd218.pack
> -r--r--r-- 1 dberlin dberlin   19008488 2007-12-05 14:01
> pack-cd328fcf0bd673d8f2f72c42fbe67da64cbcd218.idx
>
> This is 3x bigger than hg *and* hg doesn't require me to waste my life
> repacking every so often.
> The hg operations run roughly as fast as the git ones

I think this (gcc HG repo) is very good, but the only problem is it's not
always in sync with SVN; it would really rock if a post-commit SVN hook
synced the hg repo.

Thanks for doing this anyhow.

Regards,
ismail

-- 
Never learn by your mistakes, if you do you may never dare to try again.


Re: Git and GCC

2007-12-05 Thread Daniel Berlin
On 12/5/07, NightStrike <[EMAIL PROTECTED]> wrote:
> On 12/5/07, Daniel Berlin <[EMAIL PROTECTED]> wrote:
> > I already have two way sync with hg.
> > Maybe someday when git is more usable than hg to a normal developer,
> > or it at least is significantly smaller than hg, i'll look at it
> > again.
>
> Sorry, what is hg?
>
http://www.selenic.com/mercurial/


Re: Git and GCC

2007-12-05 Thread NightStrike
On 12/5/07, Daniel Berlin <[EMAIL PROTECTED]> wrote:
> I already have two way sync with hg.
> Maybe someday when git is more usable than hg to a normal developer,
> or it at least is significantly smaller than hg, i'll look at it
> again.

Sorry, what is hg?


Re: Git and GCC

2007-12-05 Thread Daniel Berlin
For the record:

 [EMAIL PROTECTED]:/compilerstuff/gitgcc/gccrepo$ git --version
git version 1.5.3.7

(I downloaded it yesterday when i started the import)

On 12/5/07, Daniel Berlin <[EMAIL PROTECTED]> wrote:
> So I tried a full history conversion using git-svn of the gcc
> repository (IE every trunk revision from 1-HEAD as of yesterday)
> The git-svn import was done using repacks every 1000 revisions.
> After it finished, I used git-gc --aggressive --prune.  Two hours
> later, it finished.
> The final size after this is 1.5 gig for all of the history of gcc for
> just trunk.
>
> [EMAIL PROTECTED]:/compilerstuff/gitgcc/gccrepo/.git/objects/pack$ ls -trl
> total 1568899
> -r--r--r-- 1 dberlin dberlin 1585972834 2007-12-05 14:01
> pack-cd328fcf0bd673d8f2f72c42fbe67da64cbcd218.pack
> -r--r--r-- 1 dberlin dberlin   19008488 2007-12-05 14:01
> pack-cd328fcf0bd673d8f2f72c42fbe67da64cbcd218.idx
>
> This is 3x bigger than hg *and* hg doesn't require me to waste my life
> repacking every so often.
> The hg operations run roughly as fast as the git ones
>
> I'm sure there are magic options, magic command lines, etc, i could
> use to make it smaller.
>
> I'm sure if i spent the next few weeks fucking around with git, it may
> even be usable!
>
> But given that git is harder to use, requires manual repacking to get
> any kind of sane space usage, and is 3x bigger anyway, i don't see any
> advantage to continuing to experiment with git and gcc.
>
> I already have two way sync with hg.
> Maybe someday when git is more usable than hg to a normal developer,
> or it at least is significantly smaller than hg, i'll look at it
> again.
> For now, it seems a net loss.
>
> --Dan
> >
> > git clone --depth 100 git://git.infradead.org/gcc.git
> >
> > should give around ~50mb repository with usable trunk. This is all thanks to
> > Bernardo Innocenti for setting up an up-to-date gcc git repo.
> >
> > P.S:Please cut down on the usage of exclamation mark.
> >
> > Regards,
> > ismail
> >
> > --
> > Never learn by your mistakes, if you do you may never dare to try again.
> >
>


Git and GCC

2007-12-05 Thread Daniel Berlin
So I tried a full history conversion using git-svn of the gcc
repository (IE every trunk revision from 1-HEAD as of yesterday)
The git-svn import was done using repacks every 1000 revisions.
After it finished, I used git-gc --aggressive --prune.  Two hours
later, it finished.
The final size after this is 1.5 gig for all of the history of gcc for
just trunk.

[EMAIL PROTECTED]:/compilerstuff/gitgcc/gccrepo/.git/objects/pack$ ls -trl
total 1568899
-r--r--r-- 1 dberlin dberlin 1585972834 2007-12-05 14:01
pack-cd328fcf0bd673d8f2f72c42fbe67da64cbcd218.pack
-r--r--r-- 1 dberlin dberlin   19008488 2007-12-05 14:01
pack-cd328fcf0bd673d8f2f72c42fbe67da64cbcd218.idx

This is 3x bigger than hg *and* hg doesn't require me to waste my life
repacking every so often.
The hg operations run roughly as fast as the git ones

I'm sure there are magic options, magic command lines, etc, i could
use to make it smaller.

I'm sure if I spent the next few weeks fucking around with git, it may
even be usable!

But given that git is harder to use, requires manual repacking to get
any kind of sane space usage, and is 3x bigger anyway, I don't see any
advantage to continuing to experiment with git and gcc.

I already have two-way sync with hg.
Maybe someday, when git is more usable than hg for a normal developer,
or at least significantly smaller than hg, I'll look at it again.
For now, it seems a net loss.

--Dan
>
> git clone --depth 100 git://git.infradead.org/gcc.git
>
> should give around ~50mb repository with usable trunk. This is all thanks to
> Bernardo Innocenti for setting up an up-to-date gcc git repo.
>
> P.S:Please cut down on the usage of exclamation mark.
>
> Regards,
> ismail
>
> --
> Never learn by your mistakes, if you do you may never dare to try again.
>


Re: Designs for better debug info in GCC

2007-12-05 Thread Diego Novillo

On 11/25/07 3:43 PM, Mark Mitchell wrote:


> My suggestion (not as a GCC SC member or GCC RM, but just as a fellow
> GCC developer with an interest in improving the compiler in the same way
> that you're trying to do) is that you stop writing code and start
> writing a paper about what you're trying to do.
>
> Ignore the implementation.  Describe the problem in detail.  Narrow its
> scope if necessary.  Describe the success criteria in detail.  Ideally,
> the success criteria are mechanically checkable properties: i.e., given
> a C program as input, and optimized code + debug information as output,
> it should be possible to algorithmically prove whether the output is
> correct.


Yes, please.  I would very much like to see an abstract design document
on what you are trying to accomplish.  I have been trying to follow this
thread, but I've gotten lost.  It's full of implementation details,
rhetoric, and high-level discussion.


I would like to see exactly what Mark is asking for.  Perhaps a
presentation at next year's Summit?  I don't think I understand the goal
of the project.  "Correct debugging info" means little, particularly if
you say that it's not debuggers you are thinking about.


It's certainly worrisome that your implementation seems to be intrusive 
to the point of brittleness.  Will every new optimization need to think 
about debug information from scratch and refrain from doing certain 
transformations?


In my simplistic view of this problem, I've always had the idea that -O0 
-g means "full debugging bliss", -O1 -g means "tolerable debugging" 
(symbols shouldn't disappear, for instance, though they do now) and -O2 
-g means "you can probably know what line+function you're executing".


But you seem to be addressing other problems.  And it even seems to me
that you want debugging information that is capable of deconstructing
arbitrary transformations done by the optimizers.  But I think I'm just
lost in this thread, so a high-level design document would be perfect to
expose your ideas.



Diego.



libbid and floatingpoint exception access funcs

2007-12-05 Thread Bernhard Fischer
Hi,

My libc is configured to omit any FP support (UCLIBC_HAS_FLOATS is not set),
but the recent libbid update seems to unconditionally pull in floating-point
accessor functions, thus breaking bootstrap. My notes on this read:

8<
Follows: 
Precedes: 

do not pull in allegedly unneeded floatingpoint exception access funcs

  HJL's recent update of libbid would pull in Floating-point exception
  handling, although __GCC_FLOAT_NOT_NEEDED is defined.

  Prevent pulling in feclearexcept, feraiseexcept et al for now.
  FIXME: revisit
8<

H.J., please advise.

PS: I currently do:
libgcc/ChangeLog:
2007-10-13  Bernhard Fischer  <>

* config/libbid/bid_conf.h: Do not define
DECIMAL_GLOBAL_EXCEPTION_FLAGS_ACCESS_FUNCTIONS if
__GCC_FLOAT_NOT_NEEDED is defined.
Index: gcc-4.3.0/libgcc/config/libbid/bid_conf.h
===================================================================
--- gcc-4.3.0/libgcc/config/libbid/bid_conf.h	(revision 129202)
+++ gcc-4.3.0/libgcc/config/libbid/bid_conf.h	(working copy)
@@ -535,7 +535,9 @@ Software Foundation, 51 Franklin Street,
 #define DECIMAL_GLOBAL_ROUNDING 1
 #define DECIMAL_GLOBAL_ROUNDING_ACCESS_FUNCTIONS 1
 #define DECIMAL_GLOBAL_EXCEPTION_FLAGS 1
+#ifndef __GCC_FLOAT_NOT_NEEDED
 #define DECIMAL_GLOBAL_EXCEPTION_FLAGS_ACCESS_FUNCTIONS 1
+#endif
 #define BID_HAS_GCC_DECIMAL_INTRINSICS 1
 #endif /* IN_LIBGCC2 */
 


Re: [RFC] [PATCH] 32-bit pointers in x86-64

2007-12-05 Thread Andrew Pinski
On 12/5/07, Jan Beulich <[EMAIL PROTECTED]> wrote:
> >>> "Andrew Pinski" <[EMAIL PROTECTED]> 25.11.07 19:45 >>>
> >On 11/25/07, Luca <[EMAIL PROTECTED]> wrote:
> >> 7.1. Add __attribute__((pointer_size(XXX))) and #pragma pointer_size
> >> to allow 64-bit pointers in 32-bit mode and viceversa
> >
> >This is already there, try using __attribute__((mode(DI) )).
>
> Hmm, unless this is a new feature in 4.3, I can't seem to get this to work on
> either i386 (using mode DI) or x86-64 (using mode SI). Could you clarify?

This only works when you add support for the different pointer modes.
I was saying the middle-end support for this feature was already there;
just the target support was not.

Also, there are issues with mode on pointers for C++; I don't know what
they are, though.

Note this feature is used on the s390 target and also the ia64-hpux targets.
--Pinski


Re: [RFC] [PATCH] 32-bit pointers in x86-64

2007-12-05 Thread Jan Beulich
>>> "Andrew Pinski" <[EMAIL PROTECTED]> 25.11.07 19:45 >>>
>On 11/25/07, Luca <[EMAIL PROTECTED]> wrote:
>> 7.1. Add __attribute__((pointer_size(XXX))) and #pragma pointer_size
>> to allow 64-bit pointers in 32-bit mode and viceversa
>
>This is already there, try using __attribute__((mode(DI) )).

Hmm, unless this is a new feature in 4.3, I can't seem to get this to work on
either i386 (using mode DI) or x86-64 (using mode SI). Could you clarify? If
this worked consistently on at least all 64-bit architectures, I would have a
use for it in the kernel (cutting down the kernel size by perhaps several
pages). Btw., I continue to think that the error message 'initializer element
is not computable at load time' on 64-bit code like this

extern char array[];
unsigned int p = (unsigned long)array;

or 32-bit code like this

extern char array[];
unsigned long long p = (unsigned long)array;

is incorrect: the compiler generally has no knowledge of what 'array' is
(it may know whether the architecture is generally capable of expressing
the necessary relocation, but if 'array' is really a placeholder for an
assembly-level constant, possibly even defined through __asm__() in the
same translation unit, this diagnostic should at best be a warning). I'm
pretty sure I have an open bug for this, but the sad thing is that bugs
like this never seem to really get looked at.

Thanks, Jan