Re: GCC 4.2.0 Status Report (2007-04-15)
On 4/17/07, Maxim Kuvyrkov [EMAIL PROTECTED] wrote: "There is a patch for this PR29841 in http://gcc.gnu.org/ml/gcc-patches/2007-02/msg01134.html . The problem is that I don't really know which maintainer to ask to review it :("

I think this patch needs re-testing (because of my cfglayout changes). BARRIERs are never inside a basic block, so the patch looks obviously correct to me. I think you should commit it as such if it passes bootstrap/testing (preferably on two or three targets, and with checking enabled, of course). Gr. Steven
Re: GCC 4.2.0 Status Report (2007-04-15)
On 4/17/07, Steven Bosscher [EMAIL PROTECTED] wrote: On 4/17/07, Maxim Kuvyrkov [EMAIL PROTECTED] wrote: "There is a patch for this PR29841 in http://gcc.gnu.org/ml/gcc-patches/2007-02/msg01134.html . The problem is that I don't really know which maintainer to ask to review it :(" "I think this patch needs re-testing (because of my cfglayout changes). BARRIERs are never inside a basic block, so the patch looks obviously correct to me. I think you should commit it as such if it passes bootstrap/testing (preferably on two or three targets, and with checking enabled, of course)."

Indeed. The patch is ok after a re-bootstrap and re-test. Thanks, Richard.
Re: GCC 4.2.0 Status Report (2007-04-15)
On 4/17/07, Richard Guenther [EMAIL PROTECTED] wrote: Indeed. The patch is ok after a re-bootstrap and re-test. Actually, please don't commit that patch. Eric Botcazou has already proposed a fix that looks better: http://gcc.gnu.org/ml/gcc-patches/2007-04/msg01065.html Gr. Steven
Re: GCC 4.2.0 Status Report (2007-04-15)
"You want more bugs fixed, it would seem a better way would be to build a better sense of community (Have bugfix-only days, etc) and encourage it through good behavior, not through negative reinforcement."

I do agree with that in a general way, but I think there should also be a real effort by the various maintainers to make sure people indeed fix the few PRs they created. Maintainers should be able to say: please think of fixing this PR before submitting a patch for that feature. That doesn't introduce administrative overhead, because maintainers should keep track of the various PRs and patches in their area. I think it already works for some areas of the compiler, but doesn't work well for the most common areas.

A few examples of that (maybe I'm always quoting the same examples, but those are the ones I know that impact my own work on GCC):
-- how can bootstrap stay broken (with default configure options) on i386-linux for 3 weeks?
-- how could i386-netbsd bootstrap be broken for months (PR30058), and i386-mingw still be broken after 6 months (PR30589), when the cause of failure is well known?

These are not rhetorical Hows, or finger-pointing. I think these are cases of failure we should analyze to understand what in our development model allows them to happen. FX
[wwwdocs] PATCH for Re: GCC 4.2.0 Status Report (2007-04-15)
Installed.

Index: index.html
===================================================================
RCS file: /cvs/gcc/wwwdocs/htdocs/index.html,v
retrieving revision 1.607
diff -u -3 -p -r1.607 index.html
--- index.html	23 Mar 2007 08:31:00 -0000	1.607
+++ index.html	16 Apr 2007 08:51:28 -0000
@@ -128,7 +128,7 @@ mission statement</a>.</p>
   GCC 4.2.0 (<a href="gcc-4.2/changes.html">changes</a>)
 </dt><dd>
   Status: <a href="develop.html#stage3">Stage 3</a>;
-  <a href="http://gcc.gnu.org/ml/gcc/2007-03/msg00865.html">2007-03-22</a>
+  <a href="http://gcc.gnu.org/ml/gcc/2007-04/msg00509.html">2007-04-15</a>
   (regression fixes &amp; docs only).
   <br />
   <a
Re: GCC 4.2.0 Status Report (2007-04-15)
2007/4/16, François-Xavier Coudert [EMAIL PROTECTED] wrote: "You want more bugs fixed, it would seem a better way would be to build a better sense of community (Have bugfix-only days, etc) and encourage it through good behavior, not through negative reinforcement. I do agree with that in a general way, but I think there should also be a real effort by the various maintainers to make sure people indeed fix the few PRs they created. Maintainers should be able to say: please think of fixing this PR before submitting a patch for that feature. That doesn't introduce administrative overhead, because maintainers should keep track of the various PRs and patches in their area. I think it already works for some areas of the compiler, but doesn't work well for the most common areas. A few examples of that (maybe I'm always quoting the same examples, but those are the ones I know that impact my own work on GCC): -- how can bootstrap stay broken (with default configure options) on i386-linux for 3 weeks? -- how could i386-netbsd bootstrap be broken for months (PR30058), and i386-mingw still be broken after 6 months (PR30589), when the cause of failure is well known? These are not rhetorical Hows, or finger-pointing. I think these are cases of failure we should analyze to understand what in our development model allows them to happen. FX"

One mea culpa is that, for a long time, modifying configure directly has been permitted, instead of modifying the configure.ac or configure.in that autoconf and/or automake use. Another mea culpa is not updating the autoconf/automake versions when GCC's scripts are using very obsolete/deprecated autoconf/automake versions. Currently, autoconf is used less because of these bad practices in GCC. I propose the following:
* have several versions of autoconf/automake in /opt, matching what the current GCC scripts depend on, and set PATH to the corresponding /opt/autoXXX/bin:$PATH;
* diff the checked-in configure against the configure generated from configure.ac by autoconf/automake;
* with these diffs, apply modifications to configure.ac;
* repeat this to verify the scripts with recent versions of autoconf/automake.

Sincerely J.C. Pizarro
Re: GCC 4.2.0 Status Report (2007-04-15)
"One mea culpa is that, for a long time, modifying configure directly has been permitted, instead of modifying the configure.ac or configure.in that autoconf and/or automake use. [...]"

I'm sorry, but I don't understand at all what you propose, what your proposal is supposed to fix, or how it relates to the mail you're replying to. FX
Re: GCC 4.2.0 Status Report (2007-04-15)
On 4/16/07, J.C. Pizarro [EMAIL PROTECTED] wrote: "One mea culpa is that, for a long time, modifying configure directly has been permitted, instead of modifying the configure.ac or configure.in that autoconf and/or automake use. Another mea culpa is not updating the autoconf/automake versions when GCC's scripts are using very obsolete/deprecated autoconf/automake versions."

What world are you living in? Do you even look at the source? Even though http://gcc.gnu.org/install/prerequisites.html has not been updated, the toplevel actually uses autoconf 2.59 already, and has since 2007-02-09. And how can you say 2.59 is obsolete when 90-99% of the distros ship with that version? Plus, automake 1.9.6 is actually the latest version of the 1.9.x automake series. libtool, on the other hand, is the older version, but that is in the process of being fixed; don't you read the mailing lists?

"Currently, autoconf is used less because of bad practices in GCC."

Huh? What do you mean by that? I don't know anyone who touches just configure without using autoconf. Yes, at one point we had an issue with the toplevel needing an old version of autoconf, but that day has passed, two months ago now. Also, usually what happened is that someone would regenerate the toplevel configure with the incorrect version of autoconf, and then someone else would notice that and just regenerate it. Not a big issue.

The big issues are not with the configure scripts at all. They have to do with people sometimes abusing their power of maintainership, or at least that is how I see it. Configure scripts are not even related to what FX is talking about. You should look into the bug reports before saying something about the configure scripts. One of the problems FX is talking about is the fallout from the C99 extern inline patch, which, as I mentioned when the patch was posted, would break targets left and right.
The other problem FX is talking about is the recent fallout from enabling dfp for x86-linux-gnu, which was obviously not tested on all x86-linux-gnu targets anyway :). -- Pinski
Re: GCC 4.2.0 Status Report (2007-04-15)
2007/4/16, François-Xavier Coudert [EMAIL PROTECTED] wrote: "One mea culpa is that, for a long time, modifying configure directly has been permitted, instead of modifying the configure.ac or configure.in that autoconf and/or automake use. [...] I'm sorry, but I don't understand at all what you propose, what your proposal is supposed to fix, or how it relates to the mail you're replying to. FX"

The GCC-4.3 snapshot uses Autoconf 2.59 and Automake 1.9.6, so why does this file say it was generated by aclocal 1.9.5 when it uses 1.9.6?

libdecnumber/aclocal.m4:# generated automatically by aclocal 1.9.5 -*- Autoconf -*-

I say that the generated scripts must be updated automatically and recursively before the tarball is built and distributed, and the GCC site is getting this wrong. The correct task is: 1) to update the generated configure scripts in the tarball before distributing it, or 2) to remove the outdated configure scripts. Sincerely J.C. Pizarro
Re: GCC 4.2.0 Status Report (2007-04-15)
"libdecnumber/aclocal.m4:# generated automatically by aclocal 1.9.5 -*- Autoconf -*-"

That's a problem in the last regeneration of this file. I'm CCing M. Meissner, H. J. Lu and M. Cornea, since they appear to have last changed this file, although there's no ChangeLog entry for it in their commit. PS: it appears that it was updated two days ago by bonzini, and the new version was generated with aclocal 1.9.6.

"1) To update the generated configure scripts of the tarball before distributing it."

It could be done, but there's the risk that an automated process like that might introduce problems. I'd be more in favour of a nightly tester that checks the "Generated by" headers to see if anything has an unexpected version number.

"2) Or to remove the non-updated configure scripts."

That's an annoyance, because it would require the autotools to build the GCC source, which is inconvenient. FX
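The nightly tester FX sketches could be little more than a script that greps the version stamps out of generated files and compares them against the versions the tree is supposed to use. A minimal sketch, assuming the stamp formats shown in this thread; the expected version numbers are illustrative, not GCC policy:

```shell
#!/bin/sh
# Scan a source tree for the version stamps that aclocal/autoconf
# leave in generated files, and print any that do not match the
# versions we expect.  Expected versions are assumptions here.
EXPECTED_ACLOCAL="1.9.6"
EXPECTED_AUTOCONF="2.59"

check_generated() {
    dir=$1
    # aclocal.m4 records the aclocal version that produced it, e.g.
    #   # generated automatically by aclocal 1.9.5 -*- Autoconf -*-
    find "$dir" -name aclocal.m4 | while read -r f; do
        ver=$(sed -n 's/^# generated automatically by aclocal \([0-9.]*\).*/\1/p' "$f")
        [ -n "$ver" ] && [ "$ver" != "$EXPECTED_ACLOCAL" ] \
            && echo "$f: aclocal $ver (expected $EXPECTED_ACLOCAL)"
    done
    # configure records the autoconf version, e.g.
    #   # Generated by GNU Autoconf 2.59 ...
    find "$dir" -name configure | while read -r f; do
        ver=$(sed -n 's/^# Generated by GNU Autoconf \([0-9.]*\).*/\1/p' "$f")
        [ -n "$ver" ] && [ "$ver" != "$EXPECTED_AUTOCONF" ] \
            && echo "$f: autoconf $ver (expected $EXPECTED_AUTOCONF)"
    done
}
```

Run nightly over a snapshot tree, a check along these lines would have flagged the stale libdecnumber/aclocal.m4 stamp immediately, without the risks of automated regeneration.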
Re: GCC 4.2.0 Status Report (2007-04-15)
2007/4/16, Andrew Pinski [EMAIL PROTECTED] wrote: On 4/16/07, J.C. Pizarro [EMAIL PROTECTED] wrote: "One mea culpa is that, for a long time, modifying configure directly has been permitted, instead of modifying the configure.ac or configure.in that autoconf and/or automake use. Another mea culpa is not updating the autoconf/automake versions when GCC's scripts are using very obsolete/deprecated autoconf/automake versions." "What world are you living in? Do you even look at the source? Even though http://gcc.gnu.org/install/prerequisites.html has not been updated, the toplevel actually uses autoconf 2.59 already, and has since 2007-02-09. And how can you say 2.59 is obsolete when 90-99% of the distros ship with that version? Plus, automake 1.9.6 is actually the latest version of the 1.9.x automake series."

That "since 2007-02-09" is exactly the problem: that is little time since such a drastic modification. Such a drastic modification could have lost arguments or flags, or incorrectly changed the behaviour between before and after. Because of this, there has not been enough time to release or freeze after it.

"libtool, on the other hand, is the older version, but that is in the process of being fixed; don't you read the mailing lists?" "Huh? What do you mean by that? I don't know anyone who touches just configure without using autoconf. Yes, at one point we had an issue with the toplevel needing an old version of autoconf, but that day has passed, two months ago now."

For example, http://gcc.gnu.org/ml/gcc/2007-04/msg00525.html ... "-- Pinski"

J.C. Pizarro :)
Re: GCC 4.2.0 Status Report (2007-04-15)
2007/4/16, François-Xavier Coudert [EMAIL PROTECTED] wrote: "1) To update the generated configure scripts of the tarball before distributing it." "It could be done, but there's the risk that an automated process like that might introduce problems. I'd be more in favour of a nightly tester that checks the Generated by headers to see if anything has an unexpected version number."

if [ $? -eq 0 ]; then
    echo "OK. All configure scripts are generated."
else
    echo "Renaming the old configure scripts XXX to non-updated_XXX"
fi

Is it complicated? I believe not.

"2) Or to remove the non-updated configure scripts." "That's an annoyance, because it would require the autotools to build the GCC source, which is inconvenient. FX"

The GCC scripts use the autotools, but the site doesn't use the autotools because, it says, that is inconvenient. What??? Don't use the autotools, or do use the autotools? Yes or no? Or yes-and-no? J.C. Pizarro
RE: GCC 4.2.0 Status Report (2007-04-15)
On 16 April 2007 10:56, J.C. Pizarro wrote: The GCC scripts use autotools but the site don't use autotools because it says which is inconvenient. What??? sigh Why don't you ever go and actually *find something out* about what you're talking about before you spout nonsense all over the list? This is not a remedial class for people who can't be bothered to read the docs. Yes, gcc uses autoconf. But the end-users who just want to compile gcc from a tarball do not have to have autoconf installed, because we supply all the generated files for them in the tarball. cheers, DaveK -- Can't think of a witty .sigline today
Re: GCC 4.2.0 Status Report (2007-04-15)
2007/4/16, Dave Korn [EMAIL PROTECTED] wrote: On 16 April 2007 10:56, J.C. Pizarro wrote: "The GCC scripts use the autotools, but the site doesn't use the autotools because, it says, that is inconvenient. What???" "sigh Why don't you ever go and actually *find something out* about what you're talking about before you spout nonsense all over the list? This is not a remedial class for people who can't be bothered to read the docs. Yes, gcc uses autoconf. But the end-users who just want to compile gcc from a tarball do not have to have autoconf installed, because we supply all the generated files for them in the tarball. cheers, DaveK -- Can't think of a witty .sigline today"

I follow: "The end-users who just want to compile gcc from a tarball do not have to have autoconf installed, because we supply all the generated files for them in the tarball."

Well, what happens if the generated files aren't up to date? Users will then report broken situations many times, like "bootstrap doesn't work" or similar. J.C. Pizarro
RE: GCC 4.2.0 Status Report (2007-04-15)
On 16 April 2007 11:17, J.C. Pizarro wrote: "I follow,"

No, not very well.

"The end-users who just want to compile gcc from a tarball do not have to have autoconf installed, because we supply all the generated files for them in the tarball. - Well, what is the matter if the generated files aren't updated?"

This has never happened as far as I know. Can you point to a single release that was ever sent out with out-of-date generated files?

"The users will say many times broken situations like bootstrap doesn't work or else."

I haven't seen that happening either. Releases get tested before they are released. Major failures get spotted. Occasionally, there might be a bug that causes a problem building on one of the less-used (and hence less-well-tested) platforms, but this is caused by an actual bug in the configury, and not by the generated files being out of date w.r.t. the source files from which they are generated; regenerating them would only do the same thing again. If you have a counter-example of where this has /actually/ happened, I would be interested to see it. cheers, DaveK -- Can't think of a witty .sigline today
Re: GCC 4.2.0 Status Report (2007-04-15)
2007/4/16, Dave Korn [EMAIL PROTECTED] wrote: "This has never happened as far as I know. Can you point to a single release that was ever sent out with out-of-date generated files? I haven't seen that happening either. Releases get tested before they are released. Major failures get spotted. Occasionally, there might be a bug that causes a problem building on one of the less-used (and hence less-well-tested) platforms, but this is caused by an actual bug in the configury, and not by the generated files being out of date w.r.t. the source files from which they are generated; regenerating them would only do the same thing again. If you have a counter-example of where this has /actually/ happened, I would be interested to see it. cheers, DaveK -- Can't think of a witty .sigline today"

$ ./configure
...
checking for i686-pc-linux-gnu-ld... /usr/lib/gcc/i486-slackware-linux/3.4.6/../../../../i486-slackware-linux/bin/ld   # -- i don't like this
...
$ grep \-ld configure
(COMPILER_LD_FOR_TARGET appears)
$ gcc --print-prog-name=ld
/usr/lib/gcc/i486-slackware-linux/3.4.6/../../../../i486-slackware-linux/bin/ld

This absolute path has broken things for me many times recently. J.C. Pizarro
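The transcript above can be mechanised: if the worry is that a generated configure script has a builder's absolute tool path baked into it, a quick scan finds such assignments. A rough sketch; the variable-name pattern is an assumption generalised from the COMPILER_LD_FOR_TARGET case quoted above, not an exhaustive rule:

```shell
#!/bin/sh
# Print any line of a configure script that assigns an absolute path
# to an ALL_CAPS *_FOR_TARGET variable, like the
# COMPILER_LD_FOR_TARGET=/usr/.../ld case in the transcript above.
# "|| true" keeps a clean script (no matches) from looking like an error.
find_hardcoded_tools() {
    grep -n '^[A-Z_]*_FOR_TARGET=/' "$1" || true
}
```

A counter-example hunt like DaveK asks for could start by running this over each release tarball's configure scripts.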
Re: GCC 4.2.0 Status Report (2007-04-15)
Also, beyond that, I would strongly suspect that these PRs haven't been fixed in large part because they're difficult to track down, and possibly if we knew what commit had introduced them, we'd be a good bit farther along in fixing them, even without having the help of whoever introduced them. That's my feeling as well. If we knew which checkin caused a P1 regression, there would be considerable peer pressure for the person who checked in that patch to fix it. I'm not clear that any rule would be stronger. I think the point is that we no longer *know* which checkin caused it in most cases or that it happened so long ago that that information is no longer technically relevant.
Re: GCC 4.2.0 Status Report (2007-04-15)
On 4/16/07, Mark Mitchell [EMAIL PROTECTED] wrote: "29841 [4.2/4.3 regression] ICE with scheduling and __builtin_trap"

Honza, PING!

"31360 [4.2/4.3 Regression] rtl loop invariant is broken"

Zdenek, PING!

"The broader question of why there are so many (124) P3 or higher regressions against 4.2.0 points to a more fundamental problem."

Quick bug breakdown:
-- 46 c++ bugs: 13 of these are assigned
-- 33 missed-optimizations: 7 of these are assigned

So that's 79 of 124 bugs, almost two thirds of all bugs. Only 6 of the 124 bugs are reported against a compiler older than gcc 4.0, so most of these regressions are fairly recent. Only 29 of the 124 bugs are assigned to a developer, but some bugs have been assigned since forever to their assignee without any sign of progress towards a fix, ever. So in reality, even fewer than 29 bugs have an active assignee. That's less than one fourth of all serious regressions being taken seriously.

Then again, all things considered, it seems to me that having 33 missed optimizations as regressions only makes things look terribly bad, while in reality the situation is not so bad at all. The usual discussion about bikeshed regressions vs. real significant progress: even with 33 missed optimizations, GCC 4.2 still produces measurably better scores on the usual benchmarks. So maybe the fundamental problem is that we just have bugs that look more serious than they really are. Certainly some of the missed optimizations are pretty severe, but the majority of these reports are just silly.

"Despite the fact that virtually all of the bugs open against 4.2.0 are also open against 4.1 or 4.3 -- or both! -- there seems to be little interest in fixing them."

Interest in fixing regressions typically peaks in the days after the Release Manager posts a bug overview / release status. We haven't had very many updates for this release... ;-)

"Some have suggested that I try to solve this by closing GCC 4.3 development until 4.2.0 is done. I've considered that, but I don't think it's a good idea. In practice, this whole software freedom thing means that people can go off and do things on their own anyhow; people who are more motivated to add features than fix bugs are likely just to keep doing that, and wait for mainline to reopen."

I think that, as the Release Manager, you should just near-spam the list to death with weekly updates, and keep pushing people to fix bugs. I think most of the time, people don't fix their bugs simply because they're too involved with the fun stuff to even think about fixing bugs. That's how it works for me, at least.

I think the release manager should try to get people to do the hard work of identifying the cause of bugs, which is usually not really hard at all. For example:

* Janis has more than once been asked to reghunt a regression, and she's always been very helpful and quick to respond, in my experience.

* Very few people know how to use Janis' scripts, so to encourage people to use them, the release manager could write a wiki page with a HOWTO for these scripts (or ask someone to do it). Regression hunting should only be easier now, with SVN's atomic commits. But the easier and more accessible you make it for people to use the available tools, the harder it gets for people to justify ignoring their bugs to the rest of us.

* Maintainers of certain areas of the compiler may not be sufficiently aware of some bug in their part of the compiler. For example, only one of the three preprocessor bugs is assigned to a preprocessor maintainer, even though in all three preprocessor bugs a maintainer is in the CC. It's just that the bugs have been inactive for some time. (And in this particular case of preprocessor bugs, it hasn't even been established whether PR30805 is a bug at all, but it is marked as a P2 regression anyway.)

In summary, I just strongly encourage a more active release manager... As of course you understand, this is intended as constructive criticism. Gr. Steven
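With SVN's atomic commits, the core of a reghunt is just a binary search over revision numbers. A minimal sketch of that idea; this is an illustration, not Janis' actual scripts, and `is_good` is a placeholder the user must supply (for instance: update to the given revision, build, and try the failing .i testcase):

```shell
#!/bin/sh
# bisect GOOD BAD: assumes "is_good GOOD" passes and "is_good BAD"
# fails, with a single revision at which behaviour flips; prints the
# first revision at which the test starts failing.
# The caller defines is_good, e.g. (hypothetical, adapt to your setup):
#   is_good() { svn update -r "$1" && make && ./try-testcase; }
bisect() {
    good=$1
    bad=$2
    while [ $((bad - good)) -gt 1 ]; do
        mid=$(((good + bad) / 2))
        if is_good "$mid"; then
            good=$mid    # still works here: first bad revision is later
        else
            bad=$mid     # already broken here: first bad revision is here or earlier
        fi
    done
    echo "$bad"
}
```

With atomic commits, the printed revision identifies a single commit to blame; under CVS the same hunt had to search over dates instead of one commit sequence, which is part of why this got easier.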
Re: GCC 4.2.0 Status Report (2007-04-15)
On Mon, Apr 16, 2007 at 10:58:13AM -0700, Mark Mitchell wrote: Janis Johnson wrote: On Mon, Apr 16, 2007 at 06:36:07PM +0200, Steven Bosscher wrote: * Very few people know how to use Janis' scripts, so to encourage people to use them, the release manager could write a wiki page with a HOWTO for these scripts (or ask someone to do it). Regression hunting should only be easier now, with SVN's atomic commits. But the easier and more accessible you make it for people to use the available tools, the harder it gets for people to justify ignoring their bugs to the rest of us. The RM can encourage me to do this; I've already been meaning to for a long time now. You may certainly consider yourself encouraged. :-)

Gosh, thanks! One silly thing holding me back is not quite knowing what needs copyrights and license notices and what doesn't. Some scripts are large and slightly clever, others are short and obvious.

For safety's sake, we should probably get assignments on them. I'm not sure how hard it is to get IBM to bless contributing the scripts. If it's difficult, but IBM doesn't mind them being made public, perhaps we could just put them somewhere on gcc.gnu.org, outside of the official subversion tree.

I have IBM permission to contribute them to GCC. An earlier version for CVS is in contrib/reghunt with formal FSF copyright and GPL statements. I've sent later versions to gcc-patches as a way to get them to particular people who wanted to try them out. My inclination is to put full copyright/license statements on the bigger ones and just Copyright FSF dates on the small ones. Janis
Re: GCC 4.2.0 Status Report (2007-04-15)
Janis Johnson wrote: The RM can encourage me to do this; I've already been meaning to for a long time now. You may certainly consider yourself encouraged. :-) Gosh, thanks! :-) I have IBM permission to contribute them to GCC. An earlier version for CVS is in contrib/reghunt with formal FSF copyright and GPL statements. I've sent later versions to gcc-patches as a way to get them to particular people who wanted to try them out. My inclination is to put full copyright/license statements on the bigger ones and just Copyright FSF dates on the small ones. I guess I'd tend to be conservative and put the full GPL notice on all of them. If it doesn't apply because the file is too small, whoever wants to use it in some non-GPL way can assert that fact if they want. Is there some reason that putting the GPL on them is bad/wrong? Thanks, -- Mark Mitchell CodeSourcery [EMAIL PROTECTED] (650) 331-3385 x713
Re: GCC 4.2.0 Status Report (2007-04-15)
Janis == Janis Johnson [EMAIL PROTECTED] writes: * Very few people know how to use Janis' scripts, so to encourage people to use them, the release manager could write a wiki page with a HOWTO for these scripts (or ask someone to do it). Regression hunting should only be easier now, with SVN's atomic commits. But the easier and more accessible you make it for people to use the available tools, the harder it gets for people to justify ignoring their bugs to the rest of us. Janis The RM can encourage me to do this; I've already been meaning to for a Janis long time now. Janis My reghunt scripts have grown into a system that works well for me, but Janis I'd like to clean them up and document them so that others can use them. Janis What I've got now is very different from what I used with CVS. I wonder whether there is a role for the gcc compile farm in this? For instance perhaps someone could keep a set of builds there and provide folks with a simple way to regression-test ... like a shell script that takes a .i file, ssh's to the farm, and does a reghunt... ? I think this would only be worthwhile if the farm has enough disk, and if regression hunting is a fairly common activity. Tom
Re: GCC 4.2.0 Status Report (2007-04-15)
On 4/16/07, Janis Johnson [EMAIL PROTECTED] wrote: I'd like at least two volunteers to help me with this cleanup and documentation effort by using my current scripts on regressions for open PRs and finding the places that are specific to my environment. Since I brought this up, I guess I'm on the hook ;-) Gr. Steven
RE: GCC 4.2.0 Status Report (2007-04-15)
On 15 April 2007 23:51, Mark Mitchell wrote: "The broader question of why there are so many (124) P3 or higher regressions against 4.2.0 points to a more fundamental problem. Despite the fact that virtually all of the bugs open against 4.2.0 are also open against 4.1 or 4.3 -- or both! -- there seems to be little interest in fixing them. Some have suggested that I try to solve this by closing GCC 4.3 development until 4.2.0 is done. I've considered that, but I don't think it's a good idea. In practice, this whole software freedom thing means that people can go off and do things on their own anyhow; people who are more motivated to add features than fix bugs are likely just to keep doing that, and wait for mainline to reopen."

So here's a second possibility: delete the 4.2 branch, and start again with a fresh release branch. Call it 4.2 again, although it would be more-or-less what we're expecting to be 4.3. Maybe it would be not just simplest but also most effective to cut our losses and try again.

"However, I would consider asking the SC for permission to institute a rule that would prevent contributors responsible for P1 bugs (in the only possible bright-line sense: that the bug appeared as a result of their patch) from checking in changes on mainline until the P1 bug was resolved. This would provide an individual incentive for each of us to clean up our own mess. I'm certain that someone will raise the latent bug issue, but that's not the common case. And, we can always decide to make an exception if necessary. Of course, if we do this, I'd be happy to recuse myself as necessary, in order to avoid any appearance of favoritism towards CodeSourcery personnel. What do people think of that suggestion?"

I think it runs the risk of seeming finger-pointy and causing political reactions, but I wouldn't object to it as a new working practice. cheers, DaveK -- Can't think of a witty .sigline today
Re: GCC 4.2.0 Status Report (2007-04-15)
Dave Korn wrote: Some have suggested that I try to solve this by closing GCC 4.3 development until 4.2.0 is done. I've considered that, but I don't think it's a good idea. In practice, this whole software freedom thing means that people can go off and do things on their own anyhow; people who are more motivated to add features than fix bugs are likely just to keep doing that, and wait for mainline to reopen. So here's a second possibility: delete the 4.2 branch, and start again with a fresh release branch. Call it 4.2 again, although it would be more-or-less what we're expecting to be 4.3. Maybe it would be not just simplest but also most effective to cut our losses and try again. I've already considered, discussed, and dismissed this possibility -- and I still think it's a bad idea for all the same reasons. See: http://gcc.gnu.org/ml/gcc/2007-02/msg00427.html for my thinking. Of course, this bit: Then, we'll have a 4.2.0 release by (worst case, and allowing for lameness on my part) March 31. has not come to pass. Either I did not allow for sufficient lameness on my part, or I failed to correctly estimate the worst case, or both. :-( I also see no evidence that 4.3 is going to be particularly better. After all, most of the 4.2 bugs are still in 4.3, so there's certainly no reason to think that a new branch today would be any closer to release. And there's a good bit more functionality that we want to get into 4.3, which, in the way of things, is likely to introduce new bugs, no matter how positive its overall impact. -- Mark Mitchell CodeSourcery [EMAIL PROTECTED] (650) 331-3385 x713
RE: GCC 4.2.0 Status Report (2007-04-15)
Despite the fact that virtually all of the bugs open against 4.2.0 are also open against 4.1 or 4.3 -- or both! -- there seems to be little interest in fixing them. Some have suggested that I try to solve this by closing GCC 4.3 development until 4.2.0 is done. So here's a second possibility: delete the 4.2 branch, and start again with a fresh release branch. Call it 4.2 again, although it would be more-or-less what we're expecting to be 4.3. Maybe it would be not just simplest but also most effective to cut our losses and try again. This is a really bad idea. 4.3 is months from being ready to release and there's only a handful of P1 PRs blocking 4.2 from release. From my perspective, 4.2 is better than 4.1 in many ways. As a port maintainer, I have spent a large amount of time trying to ensure that 4.2 is ready. I'd be quite upset if this effort was for naught. If 4.2 has deficiencies that need addressing prior to release, then the specific technical aspects need to be discussed. A general suggestion that the branch be deleted isn't acceptable as the same mistakes will be made all over again. Dave -- J. David Anglin [EMAIL PROTECTED] National Research Council of Canada (613) 990-0752 (FAX: 952-6602)
Re: GCC 4.2.0 Status Report (2007-04-15)
On 4/15/07, Mark Mitchell [EMAIL PROTECTED] wrote: "As has been remarked on the GCC mailing lists, I've not succeeded in getting GCC 4.2.0 out the door. However, with the limited criterion that we target only P1 regressions not present in 4.1.x, we seem to be getting a bit closer. The only regressions in this category are:

26792 [4.2 Regression] need to use autoconf when using newly-ad...
29841 [4.2/4.3 regression] ICE with scheduling and __builtin_trap
30222 [4.2 Regression] gcc.target/i386/vectorize1.c ICEs
30700 [4.2 Regression] YA bogus undefined reference error to st...
31136 [4.2 Regression] FRE ignores bit-field truncation (C and ...
31360 [4.2/4.3 Regression] rtl loop invariant is broken
31513 [4.2/4.3 Regression] Miscompilation of Function Passing B..."

I'm disappointed that the patch for PR 30700 hasn't been applied to the 4.2 branch; according to bugzilla it looks like all that's needed is a build/test cycle there. I'll be tackling 31513 tonight. Perhaps I can get to one or two of the other PRs above.

"The broader question of why there are so many (124) P3 or higher regressions against 4.2.0 points to a more fundamental problem. Despite the fact that virtually all of the bugs open against 4.2.0 are also open against 4.1 or 4.3 -- or both! -- there seems to be little interest in fixing them. Some have suggested that I try to solve this by closing GCC 4.3 development until 4.2.0 is done. I've considered that, but I don't think it's a good idea. In practice, this whole software freedom thing means that people can go off and do things on their own anyhow; people who are more motivated to add features than fix bugs are likely just to keep doing that, and wait for mainline to reopen."

This is certainly what I would do. I fixed all the critical 4.2 bugs I was responsible for. I have no plans to fix the non-critical ones because, to be honest, 4.2 lacks a lot of the infrastructure needed to fix them in sane ways.
However, I would consider asking the SC for permission to institute a rule that would prevent contributors responsible for P1 bugs (in the only possible bright-line sense: that the bug appeared as a result of their patch) from checking in changes on mainline until the P1 bug was resolved. This would provide an individual incentive for each of us to clean up our own mess.

And how exactly would we plan on tracking this? At least to me, it seems like adding more administrative overhead, and I don't see it fixing the underlying problem. If you want more bugs fixed, it would seem a better way would be to build a stronger sense of community (have bugfix-only days, etc.) and encourage it through good behavior, not through negative reinforcement. You can't fix behavior in a volunteer community by beating people into submission.

There will always be those who don't get it, or whom everyone believes aren't doing a good job, and it's not just because they are committing patches and leaving messes. The reality is we just don't want people like that in our community, and they shouldn't have write access, TBQH[1]. This is not to say we shouldn't accept/review their patches and commit them for them. But with write access comes a level of trust.

--Dan

[1] Then again, I think our system of granting more-than-write-after-approval access based on the secret discussions of a bunch of people, some of whom don't even work on GCC anymore, is flawed anyway, so take it for what it's worth. It's really not a dig against the SC as people; most are great people. But as an entity, I don't believe this should be in its job description. I firmly believe the decision of who gets write access should be in the hands of the committers, since they are the ones most affected, and the ones who are continually active.
Re: GCC 4.2.0 Status Report (2007-04-15)
Daniel Berlin wrote:

On 4/15/07, Mark Mitchell [EMAIL PROTECTED] wrote:

However, I would consider asking the SC for permission to institute a rule that would prevent contributors responsible for P1 bugs (in the only possible bright-line sense: that the bug appeared as a result of their patch) from checking in changes on mainline until the P1 bug was resolved. This would provide an individual incentive for each of us to clean up our own mess.

And how exactly would we plan on tracking this? At least to me, it seems like adding more administrative overhead and I don't see it fixing the underlying problem. You want more bugs fixed, it would seem a better way would be to build a better sense of community (have bugfix-only days, etc.) and encourage it through good behavior, not through negative reinforcement. You can't fix behavior in a volunteer community by beating people into submission. There will always be those who don't get it, or who everyone believes isn't doing a good job, and it's not just because they are committing patches and leaving messes. The reality is we just don't want people like this in our community, and they shouldn't have write access, TBQH[1].

It seems to me that having the support infrastructure for this would potentially be quite useful. However, I agree with Daniel that I'm not so sure how much actually having the policy on top of that will help.

We have 124 regression PRs of P3 or above against 4.2.0, and 7 of P1. For how many of those do we know which commit caused the regression? My personal feeling is that it's entirely possible that I might make a mistake in some code that doesn't show up in the regression testing and nobody notices in the code review, and a P1 regression might come out of that mistake a few months later when someone notices the consequences.
I hope this won't happen, and I try hard to keep it from happening, and we've got a lot of process in place to try to prevent it, but with the number of commits we have, statistical anomalies will happen. At that point, if I don't see the bug report (or don't recognize that it's a consequence of my code when I'm looking at the list of regressions), then I'll have no idea that I'm responsible for it.

However, if we had some sort of infrastructure in place that would produce a message to me saying "Your commit #123456 introduced this P1 regression", I'm likely going to say, "Oh, bother," and do my best to fix it. I suspect that the people who had the poor luck to cause the PRs on Mark's current hit list would feel exactly the same way.

Also, beyond that, I would strongly suspect that these PRs haven't been fixed in large part because they're difficult to track down, and possibly, if we knew what commit had introduced them, we'd be a good bit farther along in fixing them, even without the help of whoever introduced them. This is, after all, a large part of why we try to have the "one idea per patch" rule.

- Brooks, who is also wondering who'd get to decide whether a regression qualified as a P1 under this proposal
RE: GCC 4.2.0 Status Report (2007-04-15)
-----Original Message-----
From: Mark Mitchell [mailto:[EMAIL PROTECTED]]
Sent: Sunday, April 15, 2007 4:51 PM
To: GCC
Subject: GCC 4.2.0 Status Report (2007-04-15)

However, I would consider asking the SC for permission to institute a rule that would prevent contributors responsible for P1 bugs (in the only possible bright-line sense: that the bug appeared as a result of their patch) from checking in changes on mainline until the P1 bug was resolved. This would provide an individual incentive for each of us to clean up our own mess. I'm certain that someone will raise the latent bug issue, but that's not the common case. And, we can always decide to make an exception if necessary. Of course, if we do this, I'd be happy to recuse myself as necessary, in order to avoid any appearance of favoritism towards CodeSourcery personnel. What do people think of that suggestion?

Instead of creating a new rule to try to convince people to get things done, why not use your role as release manager and simply not allow a release of 4.2 or 4.3 until the quality is deemed sufficient? If it takes longer to do a release, then so be it. Politely point out what is left to get done. The status reports are a helpful reminder of the state of the branch.

Eric Weddington