On 20 October 2011 17:28, Simen Kjaeraas <simen.kja...@gmail.com> wrote:
> On Thu, 20 Oct 2011 15:54:48 +0200, Manu <turkey...@gmail.com> wrote:
>
>> I could only support 2 if it chooses 'float', the highest performance
>> version on all architectures AND actually available on all architectures;
>> given this is meant to be a systems programming language, and supporting
>> as many architectures as possible?
>
> D specifically supports double (as a 64-bit float), regardless of the
> actual hardware. Also, the D way is to make the correct way simple, the
> fast way possible. This is clearly in favor of not using float, which
> *would* lead to precision loss.

Correct: on every architecture I'm aware of that lacks hardware double
support, double is emulated, and that is EXTREMELY slow. I can't imagine
any case where causing implicit (hidden) emulation of unsupported hardware
should be considered 'correct', and therefore made easy.

The reason I'm so concerned about this is not what I may or may not do in
my own code (I'm likely to be careful), but imagine some cool library that
I want to make use of... some programmer has gone and written 'x = sqrt(2)'
somewhere in that library; they didn't require double precision, but double
was used implicitly regardless of their intent. Now I can't use that
library in my project. Any library that wasn't written with embedded
systems in mind, and that happens to omit all of two characters from a
float literal, can no longer be used in my project. This makes me sad.

I'd also like you to ask yourself, realistically: of all the integers
you've EVER cast to/from a float, how many have been big/huge numbers? And
if/when that occurred, what did you do with the value? Was the precision
important? Was it important enough to justify explicitly stating the cast?
The moment you use the value in a mathematical operation you are likely
throwing away a bunch of precision anyway, especially with the more
complex functions like sqrt/log/etc in question.
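To make the library scenario above concrete, here's a minimal D sketch.
The bare 'sqrt(2)' spelling assumes the implicit-double proposal under
discussion; I've spelled the promotion out so the example compiles as-is:

    import std.math : sqrt;
    import std.stdio : writeln;

    void main()
    {
        // What the library author wrote: under the proposal, the integer
        // literal in 'sqrt(2)' silently becomes a double, selecting the
        // double overload of sqrt.
        double d = sqrt(2.0);   // double precision: software-emulated on
                                // targets without hardware double support

        // What all of two extra characters would have bought them:
        float f = sqrt(2.0f);   // single precision: native on anything
                                // with an FPU

        writeln(d, " ", f);
    }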
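And to put a number on 'big/huge': float's 24-bit significand represents
every integer up to 2^24 = 16,777,216 exactly; only beyond that do
consecutive integers start collapsing together. A quick sketch:

    import std.stdio : writeln;

    void main()
    {
        // Integers up to 2^24 round-trip through float exactly.
        int small = 1_000_000;
        assert(cast(int) cast(float) small == small);

        // Just past 2^24, float can no longer tell neighbours apart.
        int big = 16_777_217;               // 2^24 + 1
        int back = cast(int) cast(float) big;
        writeln(big, " -> ", back);         // prints: 16777217 -> 16777216
    }

Most integers that ever meet a float in practice sit well below that bound.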