Thanks a lot for your comments.

On Wednesday, 28 June 2017 at 23:56:42 UTC, Stefan Koch wrote:
[...]

> Nice work! Can you re- or dual-license under the Boost license?
> I'd like to incorporate the qd type into newCTFE.

The original work is not mine; it traces back to http://crd-legacy.lbl.gov/~dhbailey/mpdist/, which is under a (modified) BSD license. I just posted the link for context, sorry for the confusion. Porting it to D does not allow me to change the license, even if not a single line from the original were to remain (I think?).

I might do a completely new D implementation (still based on the original authors' research paper, not on the details of their code). But 1. I would probably only do the subset of functions I need for my work (i.e. double-double only, no quad-double, and only a limited set of transcendental functions). 2. Given that I have seen the original code, this might still be considered a "derivative work". I'm not sure; copyright law is kinda confusing to me in these cases.

> Indeed you'll have no way to get rid of the excess precision except for creating a function per sub-expression.

No, that doesn't seem to work. Here is a minimal breaking example:

import std.math : isNaN;

double sum(double x, double y) { return x + y; }
bool equals(double x, double y) { return x == y; }

enum pi = ddouble(3.141592653589793116e+00, 1.224646799147353207e-16);

struct ddouble
{
        double hi, lo;

        invariant
        {
                if(!isNaN(hi) && !isNaN(lo))
                        assert(equals(sum(hi, lo),  hi));
        }

        this(double hi, double lo)
        {
                this.hi = hi;
                this.lo = lo;
        }
}

But there are workarounds that seem to work:
1. Remove the constructor (I think this means the invariant is no longer checked?).
2. Disable the invariant during CTFE (using "if(__ctfe) return;").
3. Don't use CTFE at all (replace the enum with immutable globals initialized in "static this").

> I was using the newCTFE fork which fixes this.

Does this mean that in your new CTFE engine (which is quite impressive work, as far as I can tell), floating point no longer gets promoted to higher precision? That would be really good news for hackish floating-point code.

Honestly, this whole "compiler gets to decide which type to actually use" thing really bugs me. Kinda reminiscent of C/C++ integer types, which in principle could be anything at all. I thought D had fixed this by specifying "int = 32-bit, long = 64-bit, float = IEEE single precision, double = IEEE double precision". Apparently not.

If I write "double", I would like to get IEEE-conformant double-precision operations. If I wanted something that depends on the target platform and compiler optimization level, I would have used "real". Also, this 80-bit extended type is just a bad idea in general and should never be used (IMHO). Even on x86 processors it only exists for backward compatibility; no current instruction set (like SSE/AVX) supports it. Sorry for the long rant. But I am puzzled that the spec (https://dlang.org/spec/float.html) actually encourages double<->real conversions while at the same time it (rightfully) disallows "unsafe math optimizations" such as "x-x=0".
