On 2011-04-02 19:58:25 -0400, Walter Bright <newshou...@digitalmars.com> said:

On 4/2/2011 4:11 PM, Michel Fortin wrote:
It's funny that D (the language) has binary notation built-in (which C doesn't have) but no octal notation anymore (which C has).

The problem with the octal literals is, as has often been complained about, people getting surprised by them. I've never heard of anyone being surprised by the binary or hex literals.

Indeed. Isn't that a good argument for implementing octal literals the same way as binary and hex literals?


You now have to resort to a library template for that,

I think it's a feature, not a "resort", that library templates can do this well. I think it's far better than C++0x's user defined literals, for example.

I disagree that it's better. With C++ user-defined literals, the user doesn't have to figure out for himself whether the number fits within the range of a regular integer literal and, if it doesn't, fall back to using a string as the template argument instead.

I don't think it's much worse, but I fail to see how it could be better.
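For what it's worth, here is a minimal sketch of the string-based approach, just to show why a library template handles this fine. The names are hypothetical; this is not the actual std.conv implementation:

ulong parseOctal(string s)
{
    // Plain CTFE-able parser: each character must be an octal digit.
    ulong result = 0;
    foreach (c; s)
    {
        assert(c >= '0' && c <= '7', "invalid octal digit");
        result = result * 8 + (c - '0');
    }
    return result;
}

template myOctal(string s)
{
    // Evaluated at compile time, so the result is an ordinary constant.
    enum ulong myOctal = parseOctal(s);
}

static assert(myOctal!"755" == 493);
// The string form has no trouble with values above long.max:
static assert(myOctal!"1777777777777777777777" == 0xFFFF_FFFF_FFFF_FFFF);

The string argument sidesteps the decimal-literal overflow problem entirely.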


and it doesn't work for big numbers: try assert(octal!1777777777777777777777 == 0xFFFF_FFFF_FFFF_FFFF). Not that I expect anyone to want to write big 64-bit numbers in octal, but it makes the new "official" octal notation more like a hack.

If you use octal!"1777777777777777777777" it will work correctly. You're right in that the decimal literal being "converted" to octal is a bit of a hack.

octal!1777777777777777777777 will fail at compile time with an integer overflow; it never gets to the runtime assert.
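To make the difference between the two forms concrete (assuming std.conv.octal, which accepts both an integer and a string template argument):

import std.conv : octal;

enum a = octal!644;      // integer form: the decimal literal 644 reread as octal, i.e. 420
enum b = octal!"644";    // string form: same value
static assert(a == 420 && b == 420);

// enum c = octal!1777777777777777777777;
// ^ never compiles: the 22-digit decimal literal overflows on its own,
//   before the template is even instantiated.

So the error the user sees comes from the literal itself, not from octal!.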

Which makes me wonder: what does the compiler suggest in the error message when it encounters 01777777777777777777777? I suspect it doesn't add the necessary quotes, does it?

The new syntax is certainly usable; it's just inelegant and hackish. It's your language, it's your choice, and I'll admit it won't affect me much.

--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/
