On Friday, 24 February 2017 at 14:35:44 UTC, Jack Stouffer wrote:
> On Friday, 24 February 2017 at 13:38:57 UTC, Moritz Maxeiner wrote:
>> This isn't evidence that memory safety is "the future", though.
>> This is evidence that people do not follow basic engineering practices (for whatever seemingly valid reasons - such as a project deadline - at the time).
>>
>> Writing a program (with manual memory management) that does not have dangerous memory issues is not an intrinsically hard task. It does, however, require you to *design* your program, not *grow* it (which, btw, is what a software *engineer* should do anyway).

> If the system in practice does not bear any resemblance to the system in theory, then one cannot defend the theory. If, in practice, programming languages without safety checks produce very common bugs which have caused millions of dollars in damage, then defending the language on the theory that you might be able to make it safe with the right effort is untenable.

Since I have not defended anything, this is missing the point.


> Why is it that CI test runs catch bugs when people should be running tests locally? Why is it that adding unittest blocks to the language made unit tests in D way more popular when people should always be writing tests?

These are fallacies of presupposition.
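
For anyone reading along who doesn't know D: the feature referenced above is the built-in unittest block. A minimal sketch (the square function here is a made-up example; compile and run with `dmd -unittest -run file.d`):

    // square is a hypothetical function, used only to illustrate the syntax.
    int square(int x) { return x * x; }

    // unittest blocks sit right next to the code they test; they are
    // compiled in and executed at program startup when built with -unittest.
    unittest
    {
        assert(square(3) == 9);
        assert(square(-2) == 4);
    }

    void main() {} // tests run before main when compiled with -unittest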

> Because we're human. We make mistakes.

I agree, but still missing the point I made.

> We put things off that shouldn't be put off.

Assumption, but I won't dispute it in my personal case.


> It's like the new safety features on handheld buzzsaws which make it basically impossible to cut yourself. Should people be using these things safely? Yes. But, accidents happen, so the tool's design takes human behavior into account and we're all the better for it.

Quite, but that's not exclusive to memory bugs (though they are usually the ones with the most serious implications), and it still misses the point of my argument.

If you want *evidence of memory safety being the future*, you have to write programs making use of *memory safety*, put them out into the wild, and let people try to break them for at least 10-15 years (the test of time). *Then* you have to provide conclusive (or at the very least hard-to-refute) proof that the reason no one could break them was the memory safety features. And then, *finally*, you can point to all the people *still not using memory safe languages* and say "Told you so". I know it sucks, but that's the price as far as I'm concerned; and it's one *I'm* trying to help pay by using a language like D with a GC, automatic reference counting, and scope guards for memory safety.

You *cannot* appropriate one (or even a handful of) examples of someone doing something wrong in language A as evidence for language feature C (still missing from A) being *the future*, just because feature C is *supposed* to make doing those things wrong harder. They are evidence that something is wrong and needs fixing. I personally think memory safety might be one viable option for that (even if it only addresses one symptom), but I've only ever witnessed engineering over-promises such as "X is the future" play out to less than what was promised.
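
To make that last bit concrete, here is a minimal sketch of the three mechanisms I mean (the file name and values are just placeholders for illustration):

    import std.stdio : File;
    import std.typecons : RefCounted;

    void main()
    {
        // GC-managed allocation: the collector frees it, so there is
        // no manual delete and no dangling pointer.
        auto buf = new int[](1024);

        // Automatic reference counting: the payload is destroyed when
        // the last copy of rc goes out of scope.
        auto rc = RefCounted!int(42);

        // Scope guard: cleanup is guaranteed on every exit path,
        // including early returns and exceptions.
        auto f = File("example.log", "w");
        scope(exit) f.close();

        f.writeln("buffer length: ", buf.length, ", payload: ", rc);
    }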


> Using a programming language which doesn't take human error into account is a recipe for disaster.

Since you're going for extreme generalization, I'll bite: Humans are a recipe for disaster.
