Russ P. wrote:
> No one ever claimed that a programming language, no matter how
> rigorous, can eliminate all bugs. All a language can do is to reduce
> their rate of occurrence.
>
> The Ariane fiasco was not a failure of Ada per se but rather a failure
> of people using Ada.

Almost right.

> They attempted to re-use software written for one
> rocket for another without proper testing. No language can prevent
> that sort of error.

Now this is plain right.


> We can argue forever about the usefulness of language-enforced
> restriction of access to private data and methods. I have no doubt
> whatsoever that it is very useful at least for the extreme cases of
> very large, safety-critical systems.

And my POV is that it's just plain useless.

> If you don't think access to
> private data needs to be restricted for control of strategic nuclear
> arsenals, for example, I think you're crazy, but that's just my
> opinion.

If you think that this kind of access restriction makes software controlling strategic nuclear arsenals any safer, then *you* are totally crazy. As *you* stated above, "no language can prevent this kind of error".

> The only reasonable question in my mind is where the line should be
> drawn between systems that should have enforced restrictions and those
> that can rely on coding standards and voluntary cooperation among
> programmers.

The only reasonable question in *my* mind is whether you think it's better, especially for safety-critical stuff, to hire people you can trust or to rely on technology.

> A while back, I read something about the final integration of the
> flight software on the Boeing 777, which was written mainly in Ada.
> The claim was made that this integration took only three days, whereas
> normally it would be expected to take more like three months with a
> less rigorous language such as C++.  The reason for the simplified
> integration is that Ada enforces interfaces and prevents access to
> private data and methods.

C++ does it too. Or at least, that's a very common "claim" about C++.

> Apparently such enforcement can improve the
> efficiency of software production -- and that's not just in "theory."

My own experience is that too much rigidity in a language only leads to more boilerplate code and more workarounds (design patterns, anyone?), IOW more accidental complexity, hence more space for bugs to creep in.
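To make that concrete - a toy sketch of my own, not something from your post - compare the accessor boilerplate that the "lock everything down" style pushes you towards with the plain attribute (or property, if you later need validation) that idiomatic Python would use. All three class names are invented for the example:

    # Accessors written "just in case", enforced-privacy style:
    class RigidPoint(object):
        def __init__(self, x):
            self.__x = x              # name-mangled, "private"

        def get_x(self):
            return self.__x

        def set_x(self, x):
            self.__x = x


    # Idiomatic Python: a plain attribute is enough to start with...
    class Point(object):
        def __init__(self, x):
            self.x = x


    # ...and if validation becomes necessary later, a property adds it
    # without breaking any caller that already writes p.x = 3.
    class CheckedPoint(object):
        def __init__(self, x):
            self.x = x                # goes through the setter below

        @property
        def x(self):
            return self._x

        @x.setter
        def x(self, value):
            if value < 0:
                raise ValueError("x must be non-negative")
            self._x = value

The first version is pure boilerplate written to satisfy a style rule, which is exactly the kind of accidental complexity I mean.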

My very humble (no, don't say it) opinion about this is that what's important is how you handle and manage the whole project (including both technical and non-technical aspects), not the technology used. The technology might be relevant - I of course wouldn't use Python for anything real-time - or it might not be - most of the monstrously large "enterprise" software written in Java would work just as well in Python or any other decent language.

No technology will prevent human errors - I think this point is clear, and that we both agree on it. OTOH, some technologies can at least help reduce the opportunities for human error - and I think we also agree on this. Now the point is: *how* does a given technology help with this second goal? Here you have two philosophies. One is that you reduce errors by improving simplicity and readability, and by promoting trust (within the team), a sense of responsibility, and communication. The other is that you reduce errors by not allowing those stupid code-monkeys to do anything that *might* (according mostly to unproven assertions) be "dangerous" - hence promoting distrust and irresponsibility, and adding quite a lot of accidental complexity.

BTW, do you know the by far most common Java idiom? It's called the "do-nothing catch-all exception handler". As an exercise, explain the reasons behind this idiom, and its practical results.
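For readers who haven't met it, here's roughly what that idiom looks like, transposed into Python (a made-up sketch, not a quote from any real codebase):

    def save_record(record, path):
        try:
            with open(path, "w") as f:
                f.write(str(record))
        except Exception:
            # The "do-nothing catch-all": swallow every possible error
            # so that neither this function nor its caller ever has to
            # deal with a failure - the call just appears to succeed.
            pass

I'll leave the "reasons behind it" and "practical results" parts of the exercise to you.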

As a last point: I certainly don't think Python is perfect in any way, *nor* that it's the appropriate tool for each and every project. I just disagree with your assertion that Python is not ok for large or critical projects because of its "lack" of language-enforced access restriction.
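For completeness, here's the kind of thing Python gives you instead of enforcement - a minimal sketch, with invented names: a leading underscore says "internal, touch at your own risk", a double underscore adds name mangling, and neither actually stops a determined caller.

    class Launcher(object):          # purely illustrative name
        def __init__(self):
            self._armed = False      # single underscore: "internal" by convention
            self.__code = "0000"     # double underscore: name-mangled

        def arm(self, code):
            if code == self.__code:
                self._armed = True
            return self._armed

    l = Launcher()
    print(l._armed)                  # nothing stops you, but the underscore warns you
    print(l._Launcher__code)         # even name mangling is only a speed bump

Whether you see that as a "lack" or as a reasonable trade-off is exactly where we disagree.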
--
http://mail.python.org/mailman/listinfo/python-list
