On 29/09/2014 05:03, Sean Kelly wrote:
I recall Toyota got into trouble with their computer-controlled cars
because of their approach to handling inevitable bugs and errors. It
was one process that controlled everything. When something unexpected
went wrong, it kept right on operating, any unknown and unintended
consequences be damned.

The way to get reliable systems is to design to accommodate errors,
not to pretend they didn't happen or hope that nothing else was
affected. In critical software systems, that means shutting down and
restarting the offending system, or engaging the backup.
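
To make that strategy concrete, here is a minimal D sketch of a
restart supervisor (runSubsystem and the retry limit of 3 are
hypothetical placeholders, not anyone's actual code). It leans on D's
split between Exception (recoverable) and Error (unrecoverable):
recoverable failures restart the subsystem, while an Error would still
propagate and abort the process.

import std.stdio;

void runSubsystem()
{
    // Hypothetical worker standing in for the real subsystem.
    throw new Exception("transient fault");
}

void main()
{
    // Recoverable failures (Exception) restart the subsystem;
    // logic errors (Error) propagate and abort the process.
    foreach (attempt; 0 .. 3)
    {
        try
        {
            runSubsystem();
            break;  // clean run, nothing to restart
        }
        catch (Exception e)
        {
            writeln("subsystem failed: ", e.msg, " -- restarting");
        }
    }
}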

My point was that it's often more complicated than that.  There have
been papers written on self-repairing systems, for example, and on
ways to design systems that are inherently durable even in the face of
internal errors.  What I'm trying to say is that simply aborting on
error is too brittle in some cases, because it only addresses one
failure vector: memory corruption that is unlikely to recur.  But I've
watched always-on systems fall apart under some unexpected ongoing
condition, where simply restarting doesn't actually help.
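
For illustration, a rough D sketch of that failure mode, assuming a
hypothetical runService and an arbitrary failure budget of 5: when the
fault is a persistent external condition rather than one-off
corruption, a bare restart loop just spins, so the supervisor has to
detect that and escalate instead.

import core.thread : Thread;
import core.time : seconds;
import std.stdio;

bool runService()
{
    // Hypothetical service hit by a persistent external fault
    // (say, a dependency that stays down), so every restart
    // fails the same way.
    return false;
}

void main()
{
    int consecutiveFailures = 0;
    while (true)
    {
        if (runService())
        {
            consecutiveFailures = 0;  // healthy run resets the budget
            continue;
        }
        if (++consecutiveFailures >= 5)
        {
            // Restarting is not helping; escalate rather than spin.
            writeln("persistent fault: engaging backup / paging a human");
            break;
        }
        // Back off so the restart loop doesn't amplify the problem.
        Thread.sleep(seconds(1 << consecutiveFailures));
    }
}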

Sean, I fully agree with the points you have been making so far.
But if Walter is fixated on the idea that all practical uses of D will be either critical systems or simple (i.e., single-use, non-interactive) command-line applications, it will be hard for him to grasp the point that "simply aborting on error is too brittle in some cases".

PS: Walter, what browser do you use?

--
Bruno Medeiros
https://twitter.com/brunodomedeiros
