On 5/26/2013 3:48 PM, Timon Gehr wrote:
> On 05/26/2013 08:46 PM, Walter Bright wrote:
>> On 5/26/2013 7:26 AM, Peter Alexander wrote:
>>> A language that statically forces the programmer to check for null
>>> would help here.
>>
>> I'm not arguing it won't help. I've been working in the background on a
>> NotNull!T template.
>>
>> I'm arguing that the benefits are being oversold.
>
> IIRC, the damage done by software bugs to the US economy alone is
> estimated at around $60 billion a year. One billion dollars of damage
> from null pointer dereferences appears to be an optimistic estimate.

Still seems like hyperbole to me. Has anyone gone through the bug database of a long-running project, categorized the fixes, and counted? What about all the bugs from:

1. other uninitialized data
2. memory corruption
3. running out of memory
4. failure to detect error conditions
5. failure to recover properly from error conditions
6. hanging
7. race conditions
8. deadlocks
9. stack overflows
10. read the specs wrong
11. wrote the specs wrong
12. didn't understand the problem
13. operator precedence
14. copy/paste errors
15. misuse of APIs
16. buffer overruns and underruns
17. endless security issues
18. requirements change
19. overflows and underflows
20. loss of precision
21. accumulated roundoff errors
22. wrote '-' instead of '+'
23. wrote 'i' instead of 'j'
24. relying on undefined behavior
25. untested code
26. leaky abstractions
27. typed 0x100000000 instead of 0x10000000
28. fencepost bugs

I could go on. Like I said, one hole in a cheese grater of holes.
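
To make just one of those concrete, here's a fencepost bug (item 28) in a few lines of D. This is purely illustrative, but note that the array can be perfectly non-null and the code still blows up, so a null checker is no help:

// Illustrative only: a classic off-by-one that no null check catches.
void zeroFill(int[] a)
{
    foreach (i; 0 .. a.length + 1) // bug: should be 0 .. a.length
        a[i] = 0;                  // out of bounds on the final pass
}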

>> It's like saying null pointers are the vast bulk of programming bugs.
>> I just find that to be so obviously untrue.
>
> Who is claiming this?

Anyone claiming that it is a "deal-breaker".
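
As a footnote on the NotNull!T template mentioned in the quoted text above, here is a minimal sketch of what such a wrapper might look like. This is purely illustrative; the names NotNull, notNull, and get are mine, it assumes a runtime check at construction, and it is not the actual work-in-progress code:

import std.traits : isPointer;

// Sketch only. The null check is paid once, at construction; after
// that, the type guarantees the payload is non-null at every use site.
struct NotNull(T)
    if (is(T == class) || isPointer!T)
{
    private T payload;

    this(T value)
    {
        assert(value !is null, "NotNull constructed from null");
        payload = value;
    }

    // Default construction would leave payload null, so forbid it.
    @disable this();

    // Let a NotNull!T be used anywhere a T is expected.
    inout(T) get() inout { return payload; }
    alias get this;
}

NotNull!T notNull(T)(T value)
{
    return NotNull!T(value);
}

Usage would look like:

class C { void use() {} }
auto nc = notNull(new C);
nc.use();  // no null check needed here or at any later use

The @disable this() closes the default-initialization hole, though not every hole: NotNull!C[] still gets .init'd elements with null payloads, which is part of why this is harder than it looks.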
