On 19.05.2016 09:09, Walter Bright wrote:
On 5/18/2016 10:10 AM, Timon Gehr wrote:
double kahan(double[] arr){
    double sum = 0.0;
    double c = 0.0;
    foreach(x;arr){
        double y=x-c;
         double y = roundToDouble(x - c);

That those two lines can produce different results is unexpected, because you are explicitly saying that y is a double, and the first line also does implicit rounding (probably to double, on all compilers and targets that will be relevant in the near future). It's obviously bad language design, on multiple levels.

        double t=sum+y;
         double t = roundToDouble(sum + y);
        c = (t-sum)-y;
        sum=t;
    }
    return sum;
}

Those are the only two points in the program that depend on rounding.

If you say so. I would like to see an example that demonstrates that the first roundToDouble is required.

Also, if the compiler chooses to use higher precision internally for all local variables in that program, the 'roundToDouble's you have inserted will reduce precision compared to leaving them out, cancelling the magical precision enhancement that would otherwise happen. (I.e., assuming some of these ideas are valid and it really is desirable to have precision enhancement that varies with the target: if I find a precision bug that I can fix by adding roundToDouble, the fix introduces a potential regression on other targets. And since none of it was guaranteed to work in the first place, the resulting mess is all my fault.)
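
A rough sketch of the kind of code I have in mind (this assumes a compiler/target, e.g. 32-bit x87 code generation, that keeps intermediates at extended precision; on a target with strict IEEE double arithmetic the comments below don't apply):

double sumOfSquares(double[] a){
    double s = 0.0;
    foreach(x;a){
        // On such a target, x*x and the addition may be carried out at
        // 80-bit precision, which can make the final result *more* accurate.
        // Inserting an explicit round-to-double after x*x would take that
        // extra accuracy away again, on exactly those targets.
        s += x*x;
    }
    return s;
}

So whether explicit rounding helps or hurts can depend on the target, which is exactly the regression I am worried about.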

If you're implementing Kahan,

And if you are not? (I find the standard assumption that counterexamples to wrong statements are one-off special cases tiresome. This is not usually the case, even if you cannot construct other examples right away.)

you'd know that,

I would, but what often happens in practice is that people don't, for example because they wouldn't expect roundToDouble to be required anywhere in code that only uses doubles. They are right.

so there shouldn't be anything difficult about it.
...

https://issues.dlang.org/show_bug.cgi?id=13474

Creating useful programs in D shouldn't require knowing (or thinking about) an excessive amount of D-specific, implicit, arcane details of the language semantics. Being easy to pick up and use is a major selling point for D.

I'm not convinced there is nothing difficult about it. It's certainly a generator of incidental complexity. Also, error-prone is not the same as difficult: almost everybody makes some mistakes occasionally.

There's no reason to make the rest of the program suffer inaccuracies.

Indeed there isn't. I'm not suggesting that we allow using a precision lower than the one specified.

If you are talking about programs that "benefit" from automatic extension of precision: do you think their authors are aware that, e.g., their 32-bit release builds with gdc/ldc are producing wrong results?
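
For concreteness, this is the kind of code whose observable behaviour can silently differ between such builds (a sketch only; whether the difference actually manifests depends on the compiler's code generation and register allocation):

bool consistent(double a, double b){
    double s = a*b;   // may be rounded to 64 bits, e.g. when spilled to memory
    return s == a*b;  // this a*b may be kept in an 80-bit register instead
}

With strict IEEE double arithmetic this always returns true; with extended-precision intermediates it can return false for some inputs.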
