George Dinwiddie wrote:
Jonathan Revusky wrote:

I revert to my statement that a version control repository makes it quite easy to restore the code to any point in its history.

In any case, consider some potential bad consequences of letting just about anybody commit:

1. On occasion, people start committing all kinds of bad code and it's a lot of work for you to sort it out. (This very rarely happens: new people are typically very timid in their initial commits and don't do drastic things; their commits are small and localized and can be rolled back easily.)

2. Once in a very long while, say once every 10 or 20 years, somebody with sociopathic tendencies comes along and... I dunno... starts introducing bugs deliberately. (But c'mon, this just about never happens.)

Now, let's consider the consequences of making it very hard, nigh impossible, for new people to get involved.

A talented, energetic person who has a fire in his belly to do some stuff is given the runaround. You drive that person away. You lose all the contributions he would have made. Moreover, that energy gets invested instead in the competing project (in our thought experiment above) with low barriers to entry.

Which is going to be the bigger negative for a project: that, or points 1 and 2 above?


There are other potential bad consequences than the two listed above.
Consider

3. Subtle errors and exploitable security holes get introduced, either
inadvertently or intentionally.

First of all, people seem to be addressing things I never said. For example, I don't think I ever said that people should be allowed to commit _anonymously_. I simply said that I believed you could be quite liberal about granting commit privileges and the sky would not fall in.

Now, here you seem to be suggesting that I see no need for code review on new code that is committed.

No, I certainly don't believe that. Of course code that is committed should be reviewed carefully. However, I don't know that this is such a problem in this kind of situation. If you imagine a situation in which a new guy is given commit access, I think it's totally normal that the established developers will review quite carefully the things this guy does.

So basically, I don't think your point 3 above is much of an objection.

Also, there is a countervailing point here: in terms of subtle errors and so on, simply getting more people involved may well reduce the number of such errors, on the basic principle of more eyeballs. So this works both ways.


While a revision control system allows backing out changes, each change
must be carefully considered.  A security hole or other error may not be
the result of a single change, but of multiple changes made in multiple
locations and, perhaps, at multiple times.

If you are going to go the route of drastically reducing the barriers to people committing code, you do need some people to keep an eye on this, sure. One aspect of this is that security holes can be introduced regardless of whether you let in new committers or not.

Now, of course, if the project simply stagnates because no new blood is allowed in, then no new security holes get introduced, but that is for the trivial reason that nothing is being done... period. But surely that's not your point, because that's kinda silly, right?


While open source allows a large number of eyes to see the code, it's
not that easy to review code in depth and spot such problems.  Much
trust is placed on the skill, attention, and thoroughness of the
committers.

Well, if you don't give somebody commit privileges and they offer a patch, somebody has to review that patch. Is that more work than if you give somebody commit privileges, they commit their patch, and then it has to be reviewed? The argument that it is hard work to diligently review code seems orthogonal to what we are talking about. Surely, in a well-run project, code contributions would be reviewed carefully, right?

So the contributed code needs to be reviewed in either case, right? And requires the same skill, attention, and thoroughness.


Consider the C2 Wiki and Wikipedia as analogies.  Yes, it's easy to
delete obviously false information.  It's just as easy to reintroduce
it.  Keeping the worst of the cruft out is pretty much a full-time job
for volunteers who take on the task, and there's not even agreement
among them about what counts as cruft.  Subtle or infrequently viewed
incorrect information can, and does, remain for long periods of time.
Spectacular failures occur that make headlines in the mass news media.

Just to be clear: are you speculating in the above, or are you speaking from direct experience maintaining such resources?


I, for one, would never recommend to any business enterprise that they
use Struts for important applications if the source was not vetted and
controlled by a small, trusted committee.  Your needs may not have such
requirements for trustworthiness.

Do you have objective proof that Struts is more "trustworthy" -- i.e. has fewer subtle security holes and so on -- than other comparable projects?

I mean, as I said to Frank Zametti, this is all stuff that can, at least in theory, be resolved empirically. If you assert that a software project that lowers the barriers to entry for committers is going to have more security holes than it would otherwise, then this is verifiable empirically, in principle, right?


But if businesses were to abandon use of Struts for important
applications, would that be a reasonable trade-off for the contributions
of your talented, energetic person?  Or would the loss of talented,
careful people, who needed a framework for business use where large sums
of money are at risk, be a larger negative for the project?


George, I'm rather unconvinced by your arguments. If a business is extremely conservative and risk-averse, they can always use a version of the software that is a couple of releases behind (and they typically do that), and thus, in principle, any security holes would likely have come to light already -- admittedly because there are early adopters out there willing to use more bleeding-edge stuff. There's sort of an ecology out there, with more conservative people, earlier adopters, and people willing to run the nightly build.

A countervailing aspect of what you are saying is that any process that got more people involved with the code could actually reduce the number of bugs and other issues, simply by the basic mechanism of bringing more eyeballs to bear.

But I reiterate: whether a more open collaborative model is going to produce software with more bugs or more security holes is an empirical question that cannot be resolved by a priori reasoning.

An interesting related point is that the quality of wiki-based, collaboratively developed materials can be surprisingly high. One recent study (I mention this, BTW, in a blog entry on this kind of topic: http://freemarker.blogspot.com/2006/02/musings-on-wikipedia-and-open-source.html ) shows that the number of serious factual errors in Wikipedia versus Encyclopedia Britannica is not so different.

But to summarize: the basic idea that you need to closely guard commit privileges should not be a dogma. It is a hypothesis that should stand or fall based on empirical evidence. All I see here is people arguing that this is a bad idea based on a priori reasoning. Has anybody jumped up and said something like: "Oh, we tried that and it was a disaster. This happened and the other thing happened and we had to revert to a less open approach"?

With my kind of empirical mind-set, that is the kind of thing that would be more likely to convince me I'm wrong, or at least cause me to doubt.

I see people willing to say that this is a terrible idea, but no empirical evidence is offered. I ask you: what would a fair-minded observer of this discussion conclude from that?

Regards,

Jonathan Revusky
--
lead developer, FreeMarker project, http://freemarker.org/



