Walter Bright wrote:
> Daniel Keep wrote:
>> "But the user will just assign to something useless to get around
>> that!"
>>
>> You mean like how everyone wraps every call in try{...}catch(Exception
>> e){} to shut the damn exceptions up?
>
> They do just that in Java because of the checked-exceptions thing. I
> have a reference to Bruce Eckel's essay on it somewhere in this thread.
> The observation in the article was it wasn't just moron idiot
> programmers doing this. It was the guru programmers doing it, all the
> while knowing it was the wrong thing to do. The end result was the
> feature actively created the very problems it was designed to prevent.
Checked exceptions are a bad example: you can't not use them. No one is
proposing to remove null from the language. If we WERE, you would be
quite correct. But we're not. If someone doesn't want to use non-null
references, then they don't use them.

>> Or uses pointer arithmetic and
>> casts to get at those pesky private members?
>
> That's entirely different, because privacy is selected by the
> programmer, not the language. I don't have any issue with a
> user-defined type that is non-nullable (Andrei has designed a type
> constructor for that).

Good grief, that's what non-null references are!

Object foo = new Object;    // Dear Mr. Compiler, I would like a
                            // non-nullable reference to an Object,
                            // please! Here's the object I want you
                            // to use.

Object? bar;                // Dear Mr. Compiler, I would like a
                            // nullable reference to an object,
                            // please! Just initialise with null,
                            // thanks.

How is that not selected by the programmer? The programmer is in
complete control. We are not asking for the language to unilaterally
declare null to be a sin; we want to be given the choice to say we
don't want it!

Incidentally, on the subject of non-null as a UDT, that would be a
largely acceptable solution for me. The trouble is that in order to do
it, you'd need to be able to block default initialisation, which is
*precisely* what you're arguing against. You can't have it both ways.

>> If someone is actively trying to break the type system, it's their
>> goddamn fault! Honestly, I don't care about the hacks they employ to
>> defeat the system because they're going to go around blindly shooting
>> themselves in the foot no matter what they do.
>
> True, but it's still not a good idea to design a language feature that
> winds up, in reality, encouraging bad programming practice. It
> encourages bad practice in a way that is really, really hard to detect
> in a code review.
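For what it's worth, the `Object` / `Object?` split already exists in languages shipping today. A minimal sketch in TypeScript (with `strictNullChecks`, standing in for the proposed D syntax; `Widget` and `describe` are made-up names for illustration):

```typescript
class Widget {
  name = "widget";
}

// "Object foo = new Object;" — non-nullable by declaration:
// the type says this ALWAYS refers to a Widget.
const foo: Widget = new Widget();

// "Object? bar;" — nullable by declaration: the programmer
// explicitly opts in to "maybe no value yet".
let bar: Widget | null = null;

function describe(w: Widget | null): string {
  // The compiler rejects `w.name` here until w is proven non-null.
  if (w === null) {
    return "no widget";
  }
  return w.name; // fine: w has been narrowed to Widget
}

console.log(describe(foo)); // "widget"
console.log(describe(bar)); // "no widget"
```

Either way the choice sits with the programmer at the declaration site, not with the language.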
Whether or not it encourages it is impossible to determine at this
juncture, because I can't think of a language comparable to D that has
it. Things that are "like" it don't count.

Ignoring that, you're correct that if someone decides to abuse non-null
references, it's going to be less than trivial to detect.

> I like programming mistakes to be obvious, not subtle. There's nothing
> subtle about a null pointer exception. There's plenty subtle about the
> wrong default value.

I think this is a fallacy. You're assuming a person who is actively
going out of their way to misuse the type system. I'll repeat myself:

Foo bar = arbitrary_default;

is harder to do than

Foo? bar;

which does exactly what they want: it relieves them of the need to
initialise, and gives a relatively safe default value.

I mean, people could abuse a lot of things in D. Pointers, certainly.
DEFINITELY inline assembler. But we don't get rid of them, because at
some point you have to say: "You know what? If you're going to play
with fire, that's your own lookout." The only way you're ever going to
have a language that's actually safe no matter how ignorant, stupid or
just outright suicidal the programmer is would be to implement a
compiler for SIMPLE:

http://esoteric.voxelperfect.net/wiki/SIMPLE

>> And what about the people who AREN'T complete idiots, who maybe
>> sometimes just accidentally trip and would quite welcome a safety
>> rail there?
>
> Null pointer seg faults *are* a safety rail. They keep an errant
> program from causing further damage.

Really?

"I used to work at Boeing designing critical flight systems. Absolutely
the WRONG failure mode is to *pretend nothing went wrong* and happily
return *default values* and show lovely green lights on the instrument
panel. The right thing is to *immediately inform the pilot that
something went wrong and INSTANTLY SHUT THE BAD SYSTEM DOWN* before it
does something really, really bad, because now it is in an unknown
state.
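The "harder to do" point above can be made concrete. A hedged sketch in TypeScript (names like `Connection` and the dummy host are invented for illustration, not taken from anyone's code):

```typescript
class Connection {
  constructor(public host: string) {}
}

// The feared abuse: inventing an arbitrary default just to silence the
// compiler. This takes deliberate effort — you have to construct a
// meaningless dummy value by hand.
const hack: Connection = new Connection("0.0.0.0");

// Whereas the sanctioned escape hatch is shorter AND safer: the type
// itself records "no value yet", and every later use of `lazy` is
// checked before dereference.
let lazy: Connection | null = null;
```

The lazy path and the safe path are the same path, which is exactly why the hypothetical abuser has little incentive to reach for a fake default.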
The pilot then follows the procedure he's trained to, such as engage
the backup."

Think of the compiler as the autopilot. Pretending nothing went wrong
is passing a null into a function that doesn't expect it, or shoving it
into a field that's not meant to be null. Null IS a happy default value
that can be passed around without consequence from the type system.
Immediately informing the pilot is refusing to compile because the code
looks like it's doing something wrong.

A null pointer exception is the thermonuclear option of error handling.
Your program blows up, tough luck, try again. Debugging is forensics,
just like picking through a mound of dead bodies and bits of fuselage;
if it's come to that, there's a problem.

Non-nullable references are the compiler (or autopilot) putting up the
red flag and saying: "Are you really sure you want to do this? I mean,
it LOOKS wrong to me!"

>> Finally, let me re-post something I wrote the last time this came up:
>>
>>> The problem with null dereference problems isn't knowing that
>>> they're there: that's the easy part. You helpfully get an exception
>>> to the face when that happens. The hard part is figuring out *where*
>>> the problem originally occurred. It's not when the exception is
>>> thrown that's the issue; it's the point at which you placed a null
>>> reference in a slot where you shouldn't have.
>
> It's a lot harder to track down a bug when the bad initial value gets
> combined with a lot of other data first. The only time I've had a
> problem finding where a null came from (because they tend to fail very
> close to their initialization point) is when the null was caused by
> another memory corruption problem. Non-nullable references won't
> mitigate that.

Only when the nulls are assigned and used locally. I've had code before
where a null accidentally snuck into an object through a constructor
that was written before the field existed. The object gets passed
around. No problem; it's not null.
It gets stored inside other things, pulled out. The field itself is
pulled out and passed around, put into other things. And THEN the
program blows up. You can't run a debugger backwards through time, but
that's what you'd need to do to figure out where the bloody thing came
from. The NPE tells you there IS a problem, but it doesn't tell you WHY
or WHERE.

It's your leg dropping off from necrosis and the doctor going, "Gee, I
guess you're sick." It's the plane smashing into the ground and killing
everyone inside, a specialised team spending a month analysing the
wreckage, and saying: "Well, this screw came loose, but BUGGERED if we
can work out why." Then, after several more crashes, someone finally
realises that it didn't come loose: it was never there to begin with.

"Oh! THAT'S why they keep crashing!"

"Gee, would've been nice if the plane wouldn't have taken off without
it."
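The smuggled-null scenario above can be sketched in a few lines of TypeScript (under `strictNullChecks`; `Plane`, `Screw`, and the fleet are invented names for illustration). The point is *where* the mistake surfaces:

```typescript
class Screw {
  constructor(public torque: number) {}
}

class Plane {
  // Declared non-nullable: the constructor MUST supply a real Screw.
  // Assigning null to this field is rejected at compile time, at the
  // constructor — the source of the bug — not miles downstream.
  screw: Screw;

  constructor(screw: Screw) {
    // With nullable-by-default references, a null passed in here is
    // legal and silent; the crash comes much later, far from this line.
    this.screw = screw;
  }
}

// The object gets stored inside other things, pulled out again...
const fleet: Plane[] = [new Plane(new Screw(12))];
const recovered = fleet[0];

// ...and only THEN is the field used. No forensics needed: the type
// guarantees the screw was never missing to begin with.
console.log(recovered.screw.torque); // 12
```

In other words, the check moves from the crash site back to the assembly line.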