Sorry, I stopped reading this thread after my last response, as I felt I was wasting too much time on this discussion, so I didn't read your response till now.

On Saturday, 21 May 2016 at 14:38:20 UTC, Timon Gehr wrote:
On 20.05.2016 13:32, Joakim wrote:
Yet you're the one arguing against increasing precision everywhere in CTFE.
...

High precision is usually good (use high precision, up to arbitrary precision or even symbolic arithmetic whenever it improves your results and you can afford it). *Implicitly* increasing precision during CTFE is bad. *Explicitly* using higher precision during CTFE than at running time /may or may not/ be good. In case it is good, there is often no reason to stop at 80 bits.

It is not "implicitly increasing," Walter has said it will always be done for CTFE, ie it is explicit behavior for all compile-time calculation. And he agrees with you about not stopping at 80 bits, which is why he wanted to increase the precision of compile-time calculation even more.

This example wasn't specifically about CTFE, but just imagine that only part of the computation is done at CTFE, all local variables are
transferred to runtime and the computation is completed there.

Why would I imagine that?

Because that's the most direct way to go from that example to one where implicit precision enhancement during CTFE only is bad.

Obviously, but you still have not said why one would need to do that in some real situation, which is what I was asking for.

And if any part of it is done at runtime using the algorithms you gave, which you yourself admit work fine if you use the right higher-precision types,

What's "right" about them? That the compiler will not implicitly transform some of them to even higher precision in order to break the algorithm again? (I don't think that is even guaranteed.)

What's right is that their precision is high enough to possibly give you the accuracy you want, and increasing their precision will only improve that.

you don't seem to have a point at all.
...

Be assured that I have a point. If you spend some time reading, or ask some constructive questions, I might be able to get it across to you. Otherwise, we might as well stop arguing.

I think you don't really have a point, as your argumentation and examples are labored.

No, it is intrinsic to any floating-point calculation.
...

How do you even define accuracy if you don't specify an infinitely
precise reference result?

There is no such thing as an infinitely precise result. All one can do is compute using even higher precision and compare it to the lower-precision result.
...

If I may ask, how much mathematics have you been exposed to?

I suspect a lot more than you have. Note that I'm talking about calculation and computation, which can only be done at finite precision. One can manipulate symbolic math with all kinds of abstractions, but once you have to plug in arbitrarily, but finitely, precise inputs and _compute_ outputs, you have to round somewhere for any non-trivial calculation.
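
To make that concrete, here is the kind of thing I mean; a throwaway sketch, with names of my own choosing:

import std.stdio : writefln;
import std.math : abs;

// The same naive summation carried out at two precisions.
float sumFloat(const float[] xs) { float s = 0; foreach (x; xs) s += x; return s; }
real  sumReal (const float[] xs) { real  s = 0; foreach (x; xs) s += x; return s; }

void main()
{
    auto xs = new float[](1_000_000);
    xs[] = 0.1f;  // not exactly representable, so rounding error accumulates

    // The higher-precision run stands in for the "reference result":
    // the gap between the two estimates the error of the float version.
    writefln("float sum: %s  real sum: %s  estimated error: %s",
             sumFloat(xs), sumReal(xs), abs(sumReal(xs) - sumFloat(xs)));
}

The real run is not "infinitely precise" either, of course; it is just precise enough to expose how far off the float run is.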

That is a very specific case where they're implementing higher-precision
algorithms using lower-precision registers.

If I argue in the abstract, people request examples. If I provide examples, people complain that they are too specific.

Yes, and? The point of providing examples is to illustrate a general need with a specific case. If your specific case is too niche, it is not a general need, i.e. the people you're complaining about can make both those statements and still make sense.

If you're going to all that
trouble, you should know not to blindly run the same code at compile-time.
...

The point of CTFE is to be able to run any code at compile-time that adheres to a well-defined set of restrictions. Not using floating point is not one of them.

My point is that potentially not being able to use CTFE for floating-point calculation that is highly specific to the hardware is a perfectly reasonable restriction.

The only mention of "the last bit" is

This part is actually funny. Thanks for the laugh. :-)
I was going to say that your text search was too naive, but then I double-checked your claim: there are actually two mentions of "the last bit", and close to the other mention the paper says that "the first double a_0 is a double-precision approximation to the number a, accurate to almost half an ulp."

Is there a point to this paragraph?


I don't think further explanations are required here. Maybe be more careful next time.

Not required because you have some unstated assumptions that we are supposed to read from your mind? Specifically, you have not said why doing the calculation of that "double-precision approximation" at a higher precision and then rounding would necessarily throw their algorithms off.
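
For anyone reading along without the paper handy, the building block its double-double arithmetic rests on is an error-free transformation along the lines of Knuth's TwoSum; roughly (my sketch, not the paper's code):

// Splits the sum of two doubles into s = fl(a + b) and the exact
// rounding error e, so that a + b == s + e holds exactly, provided
// each operation below is itself rounded to double precision.
void twoSum(double a, double b, out double s, out double e)
{
    s = a + b;
    double bPart = s - a;
    e = (a - (s - bPart)) + (b - bPart);
}

The a_0 mentioned in the quote plays, roughly speaking, the role of s here, with the lower-order components carrying the remaining error terms.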

But as long as the way CTFE extends precision is consistently applied and clearly communicated,

It never will be clearly communicated to everyone and it will also hit people by accident who would have been aware of it.

What is so incredibly awesome about /implicit/ 80 bit precision as to justify the loss of control? If I want to use high precision for certain constants precomputed at compile time, I can do it just as well, possibly even at more than 80 bits such as to actually obtain accuracy up to the last bit.

On the other hand, what is so bad about CTFE-calculated constants being computed at a higher precision and then rounded down? Almost any algorithm would benefit from that.
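
To be clear about what we're each describing, both amount to something like this (my example constant; I'm assuming std.math's sqrt and PI work in CTFE, which they do in my experience):

import std.math : sqrt, PI;

// Evaluated entirely at compile time: the arithmetic is done in 80-bit
// real and only rounded down to double once, at the cast.
enum double invSqrtTwoPi = cast(double)(1.0L / sqrt(2.0L * PI));

Timon's point, as I understand it, is that you can write the widening out explicitly like this; mine is that having CTFE do the same thing by default costs such constants nothing and usually gains a bit of accuracy.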

Also, maybe I will need to move the computation to startup at runtime some time in the future because of some CTFE limitation, and then the additional implicit gain from 80 bit precision will be lost and cause a regression. The compiler just has no way to guess what precision is actually needed for each operation.

Another scenario that I find far-fetched.

those people can always opt out and do it some other way.
...

Sure, but now they need to _carefully_ maintain different implementations for CTFE and runtime, for an ugly misfeature. It's a silly magic trick that is not actually very useful and prone to errors.

I think the idea is to give compile-time calculations a boost in precision and accuracy, thus improving the constants computed at compile time for almost every runtime algorithm. There may be some algorithms that have problems with this, but I think Walter and I are saying they're so few that they're not worth worrying about, i.e. the benefits greatly outweigh the costs.
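
For the few that do have problems, the opt-out already exists in the language: __ctfe lets a function take a separate path at compile time. A rough sketch of what that split looks like (my code, purely illustrative):

// The CTFE branch keeps to a plain summation, so any extra precision
// used at compile time only affects accuracy, not correctness; the
// runtime branch is free to use the precision-sensitive version.
// Keeping the two in sync is the maintenance cost Timon mentions.
double sum(const double[] xs)
{
    if (__ctfe)
    {
        double s = 0;
        foreach (x; xs) s += x;
        return s;
    }
    else
    {
        // Kahan-style compensated summation, which relies on each
        // operation being rounded to double.
        double s = 0, c = 0;
        foreach (x; xs)
        {
            double y = x - c;
            double t = s + y;
            c = (t - s) - y;
            s = t;
        }
        return s;
    }
}

That is more code to maintain, sure, but for the handful of algorithms in this category it seems a fair trade.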
