On Wednesday, 7 September 2016 at 14:46:46 UTC, Sai wrote:
I suspected the same; most CPUs support fast floating point operations anyway (with FPUs), so it shouldn't take much more time than doing integer arithmetic, unless we are targeting an 8-bit AVR or something similar.

And the precision argument doesn't seem strong either: which is more precise, 3/7 = 0 or 0.4285?

I am not suggesting we change the promotion rules now; that is most likely never going to happen. But I am trying to find a good rationale for the existing rules and am unable to find one.

x == (x/k)*k + x%k

The above identity holds in integer arithmetic and is fundamental to many operations; you cannot rely on it with floating point calculations. Whenever you run batch jobs and divide the work into groups (you don't really want 2.3 groups), or whenever you iterate over the digits of a number, implicit floating point conversions can really hurt.
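As a minimal illustrative sketch (the variable names and values are just made up for the example), this shows the identity and the "split the work into groups" use case where integer division is what you actually want:

import std.stdio;

void main()
{
    int x = 23; // items of work
    int k = 7;  // group size

    // The fundamental identity of integer division and remainder.
    assert(x == (x / k) * k + x % k);

    // Splitting 23 items into groups of 7: 3 full groups, 2 left over.
    int fullGroups = x / k;
    int leftover   = x % k;
    writefln("%s full groups, %s left over", fullGroups, leftover);

    // With floating point you get ~3.2857 groups, which is not
    // something you can hand to a worker.
    double groups = cast(double) x / k;
    writeln(groups);
}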

I think the current state of affairs is fairly good. Adding implicit conversions would make it worse. What would probably make it better (but can't be changed now) is having two distinct operators: one for integer division and one that implies conversion to floating point.
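For comparison, under the current rules you have to ask for floating point division explicitly, e.g. with a cast (or by making one operand a floating point literal); a hypothetical second division operator would simply build that request into the syntax. A short sketch of what you write today:

import std.stdio;

void main()
{
    int a = 3, b = 7;

    auto intQuotient   = a / b;               // integer division: 0
    auto realQuotient  = cast(double) a / b;  // explicit conversion: 0.42857...

    writeln(intQuotient, " ", realQuotient);
}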
