On 10/22/2010 11:11 PM, bearophile wrote:
This is a minor thing; if you aren't interested, ignore it.

The support for underscores in number literals, as done in D and Ada, is a feature
I like a lot. But you may write:

long x = 1_000_000_000_00;

The underscores there don't correspond to groups of thousands; this may
lead to mistakes, and then possibly to bugs. Something similar may happen with hex
(both integral and floating-point), octal, or binary number literals, which you
usually don't divide into groups of 3.

In D I have written numbers with underscores positioned in a way that I 
consider wrong.

So wouldn't it be better to restrict the underscores to every 3 digits
(starting from the least significant one) in decimal literals, and to every 4, 8,
16, or 32 digits in binary/octal/hex number literals? (4, 8, 16, or 32 means that
you are free to pick any of those four styles, but then you have to use it
consistently within a single number literal.)
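
For concreteness, here is a sketch of how such a rule might judge a few literals. All four are legal D today; the accept/reject comments describe only the proposed restriction, and the variable names are invented for illustration:

long a = 1_000_000;          // accepted: groups of 3 from the least significant digit
long b = 1_000_000_000_00;   // rejected: the trailing group has only 2 digits
uint c = 0xFFFF_FFFF;        // accepted: consistent groups of 4 hex digits
uint d = 0xFFFF_FF_FF;       // rejected: group sizes are mixed within one literal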

A problem with this is that not everybody uses groups of 3 digits in decimal 
number literals (Do Chinese people use groups of four?).

(When I proposed introducing underscores in Python number literals, this
sub-topic was discussed there too.)


I'm pretty opposed to this idea. Not just because it's euro-centric:

==========
From http://en.wikipedia.org/wiki/Decimal_mark#Digit_grouping:

For example, in various countries (e.g., China, India, and Japan), there have been traditional conventions of grouping by 2 or 4 digits.

==========

But also because there's a lot I do that doesn't involve 3-digit grouping. Hex numbers, for example, make sense grouped as 2 or 4 digits.

Binary numbers make sense grouped in 3s (for octal) or 4s (for nibbles),
and bit-masks will frequently be unaligned, or aligned left instead of right (to describe upper-bit masks).
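
To make that concrete, here is a small sketch (variable names invented) of groupings that are all legal D today and that a 3-digit-only rule would forbid:

uint   rgba    = 0xFF_00_CC_80;        // hex grouped by 2: one byte per group
uint   word    = 0xDEAD_BEEF;          // hex grouped by 4: 16-bit halves
ushort perms   = 0b111_101_101;        // binary grouped by 3: octal-style 755
ubyte  nibbles = 0b1010_0110;          // binary grouped by 4: nibbles
ushort signExp = 0b1_11111_0000000000; // left-aligned: sign/exponent fields of a half float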

It may be that a warning is convenient when the radix is 10, but it should probably be a very low-profile warning, and easy to suppress.

=Austin
