Simon Marlow wrote:
> As you know, so far we've been resistant to adding significant amounts of "process" to modifying the HEAD, because it'll necessarily slow down development. Instead, we've added lots of other measures:
Yes, but IMO this is only true if the number of developers is very small. Once a project becomes big enough, having a HEAD which is frequently broken slows down development more than having to be careful about which patches to submit/push.
- The "FIX BUILD" notation for patch naming
FWIW, I don't think this notation is used very consistently at the moment (and I'm not sure it would help us if it were).
> Personally, I think requiring a complete bootstrap/testsuite on two platforms for every patch is still prohibitively expensive: up to 2 hours for each build plus the time and effort to set them up - that's if you even have access to 2 different platforms.
We definitely need to find some middle ground, but a regularly broken HEAD is prohibitively expensive for us (what about others?). In any case, people usually bootstrap and run the testsuite once for a whole bunch of related patches, effectively working on a local branch until things stabilise somewhat. IMO, this is a much better strategy than pushing untested patches to the HEAD.
Also, running the entire testsuite is perhaps unnecessary: for a patch to the typechecker, for instance, it should be sufficient to run only the typechecker-related tests, and only on one platform. However, platform-dependent changes (such as changes to the assembly output) *must* be tested on multiple platforms before going into the HEAD, IMO, unless they are only enabled on a specific platform by appropriate #ifdefs (and even then, personally, I would bootstrap on two platforms to check that the #ifdefs actually work as expected).
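
To make the #ifdef point concrete, here is a minimal sketch of the kind of guard I mean (the module name, the constant and its values are invented for illustration; the mingw32_HOST_OS macro is the one already used for such platform tests throughout the GHC sources):

  {-# LANGUAGE CPP #-}

  -- Hypothetical example module: the name and the constant are made up,
  -- only the platform test follows the usual convention.
  module PlatformExample (pageSize) where

  #if defined(mingw32_HOST_OS)
  -- Windows-only branch: a mistake here can only break Windows builds,
  -- but it still needs a bootstrap on a Windows box before it can be trusted.
  pageSize :: Int
  pageSize = 64 * 1024
  #else
  -- All other platforms take this branch, so a single non-Windows
  -- bootstrap is enough to check that the guard itself does what we expect.
  pageSize :: Int
  pageSize = 4 * 1024
  #endif

Such a guard confines the damage to one platform, but it doesn't remove the need to build on both sides of the #if at least once.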
Even if running the testsuite is too slow, bootstrapping on a development machine really shouldn't be. It takes under 20 minutes on my laptop (and used to take under 12 back when parallel make worked for libraries). Perhaps speeding up the bootstrap/testsuite process should be made a high-priority goal? For me, the recent changes to the build system have made bootstrapping slower by a factor of over 1.5.
> If the developer doesn't have 2 platforms, then we have to do the testing. What's the easiest way to get the testing done? Push the patch, and let buildbot do it.
This might sound too harsh, but if a developer doesn't have access to a couple of different platforms, he/she shouldn't be touching platform-dependent code. Again, platform-specific #ifdefs help here. If all else fails, mailing the patches to cvs-ghc and asking people to test them should work; I would gladly test patches on OS X if necessary - it would even save me time in the long run. It might also be a good idea to have people responsible for individual platforms who can (or have to) look at such patches when needed.
Ultimately, this is a question of who has to do more work: the developer of the patch or the other developers who are affected by a broken HEAD. Personally, I think it should be the former, at least to a large extent.
> This is why I think the staging/tested repositories suit our needs better.
I really have to disagree here. IIUC, this is going to be one repository per platform, i.e., there will be multiple working repositories. Which one do I develop against? In particular, if I want to do the bootstrap/testsuite thing on multiple platforms, which repository (or repositories) do I use? Having to work with a different repository on each platform would be quite inconvenient. (That said, this is not going to affect the NDP project for quite some time, but this design is bound to cause trouble in general, IMO.)
All of the above is intended only as food for thought and is only MHO, of course. It's just that GHC hacking is a bit frustrating at the moment.
Roman
