On 1/5/2021 5:30 AM, Guillaume Piolat wrote:
> It would be nice if no excess precision was ever used. It can sometimes give a false sense of correctness. It has no upside except accidental correctness that can break when compiled for a different platform.

That same argument could be used to always use float instead of double. I hope you see it's fallacious <g>


> What about this plan?
> - use SSE all the time in DMD

That was done for OSX because their baseline CPU had SSE.

> - drop real :)

No.
