D calculation:
import std.stdio, std.math;
void main() {
    writefln("%12.2F", log(1 - 0.9999) / log(1 - (1 - 0.6) ^^ 20));
}
837675572.38
C++ calculation:
#include <cmath>
#include <iomanip>
#include <iostream>
using namespace std;
int main() {
    cout << setprecision(12)
         << (log(1 - 0.9999) / log(1 - pow(1 - 0.6, 20))) << '\n';
}
837675573.587
As a second data point, changing 0.9999 to 0.75 yields
126082736.96 (Dlang) vs 126082737.142 (C++).
The discrepancy stood out because I was ultimately taking the ceiling
of the result and noticed an off-by-one anomaly. Testing with
Octave, www.desmos.com/scientific, and LibreOffice Calc gave
results consistent with the C++ value. Is the D calculation
within the error bound of what double precision should yield?