I have the following code in fac.d (modified from the factorial examples on RosettaCode):

#!/usr/bin/rdmd
import std.bigint;

pure BigInt factorial(BigInt n) {
    static pure BigInt inner(BigInt n, BigInt acc) {
        return n == 0 ? acc : inner(n - 1, acc * n);
    }
    return inner(n, BigInt("1"));
}

void main(string[] args) {
    import std.stdio;
    BigInt input = args[1];
    writeln(factorial(input));
}

On my machine, it will (more or less consistently) compute 'fac 47610', and will (more or less consistently) core dump with a segfault on 'fac 47611'.

Interestingly, if I redirect stdout to a file it will usually manage to get to 47612.
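One plausible suspect (my assumption, not something a debugger has confirmed for this program) is stack exhaustion: inner recurses roughly n frames deep, and D compilers are not obliged to turn that tail call into a loop. The shell's soft stack limit can be inspected, and raised for a quick experiment, with ulimit:

```shell
# Show the current soft stack limit in KiB; 8192 (8 MiB) is a common
# Linux default, which ~47,000 stack frames could plausibly exhaust.
ulimit -s

# Hypothetical experiment: raise the limit for this shell, then re-run.
# If the crash threshold moves, stack exhaustion is the likely cause.
# ulimit -s 65536
# ./fac 47611
```

Note that `ulimit -s` only affects the current shell and its children, so it must be run in the same shell that launches the program.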

To satisfy my own curiosity about what's happening, are there any resources I can use to analyse the core dump?
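(For anyone landing here with the same question: the standard Linux workflow, sketched below with the hypothetical `./fac` binary from above, is to enable core files with ulimit and then load the dump into gdb.)

```shell
# Core files are often disabled (size limit 0) by default; lift the limit
# for this shell so the kernel actually writes a dump on the next crash.
ulimit -c unlimited

# Where the dump lands is governed by this kernel setting; on systemd
# machines it is frequently collected by coredumpctl rather than written
# to the working directory.
cat /proc/sys/kernel/core_pattern

# Hypothetical: reproduce the crash, then open binary + core file in gdb
# and print a backtrace non-interactively with -batch.
# ./fac 47611
# gdb -batch -ex bt ./fac core
```

A stack overflow typically shows up in the backtrace as thousands of repeated frames of the same recursive function.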

Thanks.
