> It would be a change in culture for us, that's for sure. The question
> becomes whether the drop in patch throughput is justified by an
> increase of patch quality and stability in the code?
Well, let's take the multiprocessing example. The questions are:

- would an a priori review have been able to uncover the most subtle bugs?

- would someone have done that review at all, and how long would it have taken before it got done?

- if we had delayed the inclusion of multiprocessing in the mainline, wouldn't it have received almost no testing, since people are unlikely to test specific branches rather than the "official trunk"?

- isn't the inclusion of multiprocessing itself, even if subtle bugs remain, an increase in quality, since it gives people a standard and rather good package for doing parallel work, rather than baking their own defective ad-hoc solutions?

The point I want to make with the last item is that measuring quality solely by the number of bugs, or by the stability of a bunch of buildbots, is wrong (although of course fixing bugs and keeping buildbots green *is* important). Sometimes committing an imperfect patch can be better than committing nothing at all.

Regards

Antoine.

_______________________________________________
python-committers mailing list
python-committers@python.org
http://mail.python.org/mailman/listinfo/python-committers