Don:

> Still, I wonder if D could simply do something like defining that for classes:
> if (x)  is always transformed into  if ( x!=0 )
> if (!x) is always transformed into  if ( x==0 )

Explicit is better than implicit. For example, I'd like to have a set collection 
that is "false" when empty. In such a situation I can write just the following 
method; it seems simpler and semantically cleaner than the alternatives:

bool opBool() { return this.length != 0; }
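
For comparison, here is a minimal self-contained sketch of such a collection. 
It uses opCast(T : bool), the hook current D uses when a struct is evaluated in 
an if condition; the names IntSet and items are illustrative only:

import std.stdio;

// Illustrative sketch: a tiny set-like collection that evaluates
// to "false" when empty. opCast(T : bool) is what D calls for
// `if (x)` and `!x` on user-defined types.
struct IntSet {
    int[] items;

    size_t length() const { return items.length; }

    bool opCast(T : bool)() const { return length != 0; }
}

void main() {
    IntSet s;
    if (!s) writeln("empty");      // calls opCast(T : bool)
    s.items ~= 42;
    if (s) writeln("non-empty");
}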


> On second thoughts, y = x.toLong or y = to!(long)(x) is probably better. Casts 
> are evil, implicit casts even more so.

I don't know. Using explicit casts/methods may be safer.

So with your current changes to BigInt, to replace the int "i" with a BigInt in 
the following program:

import core.stdc.stdio;

void main() {
    int i = 1;
    if (i)
        i++;
    auto a = [10, 20, 30, 40];
    printf("%d\n", a[i]);
}

you need to change the code like this:

import core.stdc.stdio;
import std.bigint, std.conv;

void main() {
    BigInt i = 1;
    if (i != 0)
        i++;
    auto a = [10, 20, 30, 40];
    printf("%d\n", a[to!(long)(i)]);
}

The toBool will help avoid the change to the "if" line.

I'd like to change programs as little as possible when I change the type of a 
variable from int to BigInt. This also has the big advantage that I can write 
templated algorithms that work with both ints and BigInts with as few "static 
if"s as possible (used to manage BigInts in a special way, for example adding 
that to!(long), as sketched below). That's why implicit casting is handy in 
such situations.
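
For example, here is a minimal sketch of such a templated function; the names 
sum3 and asIndex are illustrative only, and the "static if" shows the kind of 
special case that is still needed where the two types diverge:

import std.bigint, std.conv, std.stdio;

// Works unchanged for both int and BigInt: only =, += are needed.
T sum3(T)(T a, T b, T c) {
    T total = a;
    total += b;
    total += c;
    return total;
}

// Where the types diverge (e.g. using the value as an array index)
// a "static if" is still needed to manage BigInt in a special way.
size_t asIndex(T)(T x) {
    static if (is(T == BigInt))
        return to!(size_t)(x.toLong());
    else
        return x;
}

void main() {
    writeln(sum3(1, 2, 3));                          // int
    writeln(sum3(BigInt(1), BigInt(2), BigInt(3)));  // BigInt
    auto a = [10, 20, 30, 40];
    writeln(a[asIndex(BigInt(2))]);                  // 30
}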

-------------

Such an almost-transparent replacement of ints with BigInts is practical only if 
BigInts are fast at operations on small integers (for example integers in the 
range -1073741824 .. 1073741824).

I have done a few simple benchmarks (that I can show you if you want) and I 
have seen that when a BigInt contains only ~30 bits of data or less, it is much 
slower than an int. Some speed difference is inevitable, but to help such a 
replacement I think it would be good if BigInts gained speed optimizations for 
such small numbers. (And when you use BigInts that contain 5000+ bits, such 
optimizations don't slow BigInts down significantly.)

Possible idea: you can test if the number needs less than 31 bits; if so, you 
can compute the operation using just a "long". If the result can then be stored 
back in about 31 bits, you are done. This is slower than a simple operation 
among "int"s, but it may be much faster than the same operation done with full 
BigInts.
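
A minimal sketch of the idea, using a hypothetical SmallBigInt type (this is 
not std.bigint's real layout, and the multi-precision slow path is stubbed 
out):

import std.stdio;

// Hypothetical layout: a flag plus a 32-bit value for the common
// case; real multi-precision storage is omitted from this sketch.
struct SmallBigInt {
    bool isSmall = true;
    int small;                 // valid when isSmall is true
    // uint[] digits;          // the full representation would go here

    this(int v) { small = v; }

    SmallBigInt opBinary(string op : "+")(SmallBigInt rhs) const {
        if (isSmall && rhs.isSmall) {
            // Compute in a long: two 31-bit values cannot overflow it.
            long r = cast(long)small + rhs.small;
            // Store back only if the result still fits in ~31 bits.
            if (-1073741824L <= r && r < 1073741824L)
                return SmallBigInt(cast(int)r);
        }
        // Fall back to the real multi-precision code (omitted here).
        assert(0, "slow path not implemented in this sketch");
    }
}

void main() {
    auto a = SmallBigInt(1_000_000_000);
    auto b = SmallBigInt(-999_999_999);
    writeln((a + b).small);    // 1, computed entirely in a long
}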

When numbers are so small, the BigInt may also avoid all heap activity (Lisp 
implementations do this with tagged pointers: the tag bits of a machine word 
mark it as a small integer stored inline, avoiding any memory allocation when 
the number is small enough).
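
A sketch of that tagging trick (the layout is illustrative; real Lisp runtimes 
differ in the details):

import std.stdio;

// The low bit of a machine word tags it: 1 = small integer stored
// inline in the remaining bits, 0 = pointer to heap-allocated data.
alias Word = long;

Word tagSmall(long v) {
    // Shift the value up one bit and set the tag bit: no heap needed.
    return (v << 1) | 1;
}

bool isTaggedSmall(Word w) { return (w & 1) != 0; }

long untagSmall(Word w) {
    // Arithmetic right shift restores the signed value.
    return w >> 1;
}

void main() {
    Word w = tagSmall(-12345);
    assert(isTaggedSmall(w));
    writeln(untagSmall(w));    // -12345, no allocation happened
}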

Bye,
bearophile
