On Wednesday, 6 December 2017 at 09:12:08 UTC, helxi wrote:
This question is not directly related to language concepts; it has more to do with the application. I would appreciate it if anyone could point out how I could optimize this bit of code.

Here's my version, based on fast doubling:

auto fib(ulong n) {
    import std.bigint : BigInt;
    import std.meta : AliasSeq;
    import std.typecons : tuple;

    // Invariant: (a, b) == (F(m), F(m+1)), where m is the prefix of n's
    // bits consumed so far (m == 0 before the loop).
    BigInt a = 0;
    BigInt b = 1;
    auto bit = 63;
    while (bit >= 0) {
        // Doubling step, m -> 2m:
        //   F(2m)   = F(m) * (2*F(m+1) - F(m))
        //   F(2m+1) = F(m)^2 + F(m+1)^2
        AliasSeq!(a, b) = tuple(
            a * (2*b - a),
            a*a + b*b);

        // If this bit of n is set, advance m -> 2m + 1.
        if ((n >> bit) & 1) {
            AliasSeq!(a, b) = tuple(b, a+b);
        }
        --bit;
    }
    return a;
}

unittest {
    import std.stdio : writeln;
    import std.datetime : MonoTime;

    auto t0 = MonoTime.currTime;
    writeln(fib(10_000_000));
    writeln(MonoTime.currTime - t0);
}
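
A side note on the AliasSeq!(a, b) = tuple(...) lines: the tuple on the right-hand side is constructed first and then assigned element-wise to a and b, so both new values are computed from the old ones. A minimal sketch of the same idiom (standalone, not part of the code above):

unittest {
    import std.meta : AliasSeq;
    import std.typecons : tuple;

    int x = 1, y = 2;
    // tuple(y, x) is evaluated first, then assigned element-wise,
    // so this swaps x and y without an explicit temporary.
    AliasSeq!(x, y) = tuple(y, x);
    assert(x == 2 && y == 1);
}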

Takes around 600 milliseconds to compute fib(1_000_000), and 50 seconds for fib(10_000_000).

The fast doubling algorithm is described here:
https://www.nayuki.io/page/fast-fibonacci-algorithms
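
For comparison, that page also gives a recursive formulation of the same identities. A rough D translation with BigInt (a sketch for illustration, not the code that produced the timings above) would look like this:

import std.bigint : BigInt;
import std.typecons : Tuple, tuple;

// Returns (F(n), F(n+1)) using the same identities as the loop above:
//   F(2k)   = F(k) * (2*F(k+1) - F(k))
//   F(2k+1) = F(k)^2 + F(k+1)^2
Tuple!(BigInt, BigInt) fibPair(ulong n) {
    if (n == 0)
        return tuple(BigInt(0), BigInt(1)); // (F(0), F(1))
    auto p = fibPair(n / 2);                // (F(k), F(k+1)), k = n / 2
    auto c = p[0] * (2 * p[1] - p[0]);      // F(2k)
    auto d = p[0] * p[0] + p[1] * p[1];     // F(2k+1)
    return (n & 1) ? tuple(d, c + d) : tuple(c, d);
}

fibPair(n)[0] is then F(n); the iterative version above just unrolls this recursion bit by bit.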

I also noticed that if I try to compute it at compile time with enum x = fib(100000), the compiler freezes. What could cause this?

That's because the poor compiler has to interpret the function during CTFE, which is nowhere near as fast as optimized run-time code, and because fib(100000) is frigging ginormous.
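
With a much smaller argument it does finish, assuming your compiler's CTFE copes with the BigInt arithmetic involved (a minimal sketch):

// Sketch: small enough for the CTFE interpreter to chew through quickly.
enum fib20 = fib(20);
static assert(fib20 == 6765); // F(20) = 6765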

--
  Biotronic
