Andrei Alexandrescu wrote:
[ ... ]

Although this case is not about default values but about the result of a computation (0.0/0.0 in this case), I think it still shows the usefulness of having a singular value like NaN in the floating-point realm.

My argument was never against the usefulness of NaN for debugging... only that it should be treated as a debugging feature and written explicitly, rather than intruding on convenience and consistency (with int) by being the default.
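To make the distinction concrete, here's a small sketch in D of what I mean (the variable names are mine, just for illustration):

    // What I'd prefer: NaN as an explicit, opt-in debugging sentinel
    double scale = double.nan; // deliberately marked "not yet computed"

    // What D does today: every uninitialized double is implicitly NaN
    double ratio; // ratio is double.nan whether I wanted a sentinel or not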

I completely agree NaNs are important for debugging floating-point math; in fact, D's default-to-NaN has caught a couple of my construction mistakes before. The problem is that this sort of construction mistake is bigger than just floating point and NaN. You can mis-set a variable, float or not, or you can fail to set an int when you should have.

So the question becomes not what benefit NaN has for debugging, but what a person's thought process is when creating and debugging code, and herein lies the heart of my qualm. In D we have a bit of a conceptual double standard among the numeric types: ints default to zero, floats default to NaN. I have to remember these rules when I'm creating something, not just when I'm debugging it. As often as D may have caught a float-specific construction mistake in my code, ten times more often it has produced NaNs where I intended a number, because I forgot about the double standard when adding a field or creating a variable.
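Here's the double standard in concrete form, the kind of thing that bites me when I add a field (a minimal sketch; the struct and field names are invented for illustration):

    import std.stdio;

    struct Particle
    {
        int id;       // defaults to 0 -- usable immediately
        double speed; // defaults to double.nan -- poisons any arithmetic
    }

    void main()
    {
        Particle p;
        writeln(p.id);        // prints 0
        writeln(p.speed);     // prints nan
        writeln(p.speed * 2); // prints nan -- the NaN propagates silently
    }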

A C++ guy might not think twice about this because he's used to having to initialize values explicitly all the time (I don't know, I'm not that guy), but to a C# guy, where numeric fields all default to zero, D's approach feels more like a regression, and that's a paper cut on someone's opinion of the language.
