@StasB

> Not sure what you mean. Can you state what you think the general rule is?

The general rule is written in the manual: "Literals without a type suffix are 
of the type **int**, unless the literal contains a dot or **E|e** in which 
case it is of type **float**." This is why I asked the question. 
Later in the manual, another rule I had missed states that for **int** "An integer 
literal that has no type suffix is of this type if it is in the range 
low(int32)..high(int32) otherwise the literal's type is **int64**.", which 
contradicts the previous rule. Had I known this second rule, I would not have 
asked the question, but probably filed a report about the inconsistency instead.
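The interaction of the two rules can be observed directly with `typeof` (a sketch; the expected types in the comments assume the second rule and a recent Nim compiler):

```nim
import std/typetraits  # provides `$` for types, so they can be echoed

echo typeof(42)              # int: no dot, no exponent, fits in int32 range
echo typeof(4.2)             # float: the literal contains a dot
echo typeof(4e2)             # float: the literal contains an exponent
echo typeof(10_000_000_000)  # int64: outside low(int32)..high(int32)
```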

> Because the idea that your code can either pass or fail type checking 
> depending on where it's being compiled is absolutely bonkers.

But this is already what happens when you use _when_ conditions: you compile 
code depending on these conditions. You cannot expect to execute exactly the 
same code on all platforms, especially if it depends heavily on the size of 
**int**.
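For instance, a minimal sketch of such a _when_ condition, where only the branch matching the platform's **int** size is type-checked and compiled:

```nim
# sizeof(int) is evaluated at compile time, so the other branch
# is discarded entirely and never type-checked.
when sizeof(int) == 8:
  echo "64-bit int: big literals fit in int"
else:
  echo "32-bit int: big literals need int64"
```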

Now, if you write var x = 10_000_000_000, the type of _x_ is not explicitly 
defined. It is logical to consider it an **int**. Adding a special rule 
specifying that, as it cannot fit in a 32-bit signed integer, it has type 
**int64** has the disadvantage of making its type depend on its value. So 
you have to make sure that changing the value to a smaller integer (such as 
1_000_000_000) doesn't break the code. And the situation becomes really 
complicated on 32-bit platforms: with 10_000_000_000 you get a 64-bit value, 
whereas with 1_000_000_000 you get a 32-bit value. This will be very difficult 
to manage.

But the right way to write portable code here is var x = 10_000_000_000'i64, 
var x: int64 = 10_000_000_000 or var x: int64 = 10_000_000_000'i64. Even if you 
change the value to 1_000_000_000, the code will continue to compile and 
execute on both platforms. There is then no need for a special rule which, as 
we have seen, may be dangerous on 32-bit machines. And, in the future, if we 
have to manage 128-bit integers, we will not have to add yet another special 
rule to give type **int128** to literals that don't fit in a 64-bit signed 
integer.
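The portable declarations above, collected as a sketch; all three give _x_ the type **int64** on both 32-bit and 64-bit machines:

```nim
var a = 10_000_000_000'i64        # suffix fixes the type explicitly
var b: int64 = 10_000_000_000     # declared type fixes it
var c: int64 = 10_000_000_000'i64 # both, fully explicit

# Changing the value to 1_000_000_000 keeps the type int64 everywhere,
# so the code compiles identically on both platforms.
```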

@mashingan

According to the second rule, a big literal on a 32-bit machine will be of 
type **int64**, so it will be impossible to assign it to an **int**, whatever 
its value. So there is no risk, and this is an advantage of the rule.
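A minimal sketch of that rejection, assuming a 32-bit target (on a 64-bit machine the same line compiles, since the value fits in **int**):

```nim
# On a 32-bit machine the literal has type int64 (second rule),
# so this assignment to int is rejected at compile time.
var x: int = 10_000_000_000
```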

Without this rule (and with only the first one, which gives type **int** to 
literals without a suffix), if people used to working on 64-bit machines forget 
to specify the suffix, they will still see an error at the first compilation on 
a 32-bit machine, since the literal does not fit in a 32-bit **int**.

The only problem is when porting a program written carelessly on a 64-bit 
machine to a 32-bit machine. But I think that, in this case, other problems 
unrelated to this one will occur. It's unlikely that a program depending on 
integer size will compile and execute without error on another platform unless 
it was designed carefully (and tested on that platform). For this reason, this 
problem with big literals may not be so important.
