> - Its length varies across operating systems: it can be 10, 12, or 16 bytes,
> or even just 8 if reals are implemented with doubles. The 12 and 16 byte
> variants waste space.

Across hardware systems - it's not an operating-system thing. 80 bits is the native size of the x86 FPU.
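
A quick way to see what the quoted sizes look like in practice (a minimal sketch; the exact numbers depend on compiler, target, and ABI padding):

    import std.stdio;

    void main()
    {
        writeln("real.sizeof:     ", real.sizeof);     // e.g. 16 on x86-64 Linux, 12 on 32-bit x86, 10 unpadded
        writeln("real.mant_dig:   ", real.mant_dig);   // 64 where real is the x87 80-bit format
        writeln("double.mant_dig: ", double.mant_dig); // always 53
    }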

> - Results that you can find with programs written in other languages are
> usually computed with just floats or doubles. If I want to test whether a D
> program gives the same results, I can't use reals in D.
> - I don't see reals (long doubles in C) used much in other languages.

But then you can't port Delphi's Extended type (around since Delphi 2, I think). GCC supports it, LLVM supports it, assembly "supports" it, and the Borland and Intel compilers support it.
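
For example, on an x86 target where C's long double is the same 80-bit format as D's real, ported code can call the long double C library directly (a small sketch, assuming that mapping holds on the target):

    import std.stdio;
    import core.stdc.math : sqrtl; // C's long double maps to D's real here

    void main()
    {
        // The 80-bit result crosses the language boundary without truncation.
        writefln("%.20g", sqrtl(2.0L));
    }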

> - Removing a built-in type makes the language and its manual a little simpler.

It doesn't change the code generation that much (OK, OK, there are some FPU instructions whose behaviour isn't fully equal to double precision) - and that has been the case for more than 15 years now.
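
A small illustration of the kind of behavioural difference meant here (whether it triggers depends entirely on compiler and target):

    import std.stdio;

    void main()
    {
        // If the intermediate stays on the x87 stack, a * 10.0 is still
        // representable at 80 bits and this prints true; with strict 64-bit
        // SSE arithmetic it overflows to infinity and this prints false.
        double a = 1.0e308;
        writeln((a * 10.0) / 10.0 == a);
    }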

> - I have used D1 for some time, but so far I have had a hard time finding a
> purpose for 80 bit FP numbers. The slight increase in precision is not so
> useful.
> - D implementations are free to use doubles to implement the real type. So in
> a D program I can't rely on their little extra precision, making them not so
> useful.

But it is an 80-bit precision feature in hardware - why should I use a software-based solution if 80 bits are enough for me? Btw: the precision loss when values move between the FPU stack and D's data space is smaller this way.
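
The extra bits are easy to make visible (a minimal sketch; on a platform where real is implemented as a double, both lines print the same):

    import std.stdio;

    void main()
    {
        // Same constant, two precisions: real keeps 11 more mantissa bits.
        double d = 0.1;
        real   r = 0.1L;
        writefln("double: %.25g", d);
        writefln("real:   %.25g", r);
        writefln("difference: %g", r - d);
    }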

> - While I think the 80 bit FP numbers are not so useful, I think quadruple
> precision FP (128 bit, currently usually software-implemented) can be useful
> in some situations (http://en.wikipedia.org/wiki/Quadruple_precision ). They
> might be useful for high dynamic range imaging too. LLVM SPARC V9 will
> support its quad-precision registers.

Sounds a little bit like: let's throw away the byte type, because we can do better things with int.

> - The D2 specs say real is the "largest hardware implemented floating point
> size"; this means it could be 128 bit too in the future. A numerical
> simulation that is designed to work with 80 bit FP numbers (or 64 bit FP
> numbers) can give strange results with 128 bit precision.

OK, now we've got 32-bit, 64-bit, and 80-bit in hardware - which will (I hope) become 32-bit, 64-bit, 80-bit, 128-bit, etc. But why should we throw away real? Maybe we should alias it to float80 or something, and later there will be a float128, etc.
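
Something like this (float80 and float128 are hypothetical names here, not existing D types):

    // Keep real, but give it an explicit-width alias where it is 80-bit.
    static if (real.mant_dig == 64)
        alias float80 = real; // x87 extended precision
    // and some day, when hardware or a library provides it:
    // alias float128 = ...;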

> So I suggest removing the real type, or eventually replacing it with a
> fixed-size 128 bit floating-point type with the same name (implemented using
> software emulation where the hardware doesn't have it, like GCC's __float128:
> http://gcc.gnu.org/onlinedocs/gcc/Floating-Types.html ). In the far future,
> if CPU hardware supports FP numbers larger than 128 bits, a larger type can
> be added if necessary.

Why should we throw away direct hardware support? Isn't it enough to add your software/hardware float128, and all the others?
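
To show it's not either/or: a wider type can be layered on doubles in a library today, leaving real alone. A toy double-double sketch in that spirit (only addition; it assumes round-to-nearest 64-bit arithmetic, i.e. SSE, because x87 excess precision would break the error-free transform; real implementations like GCC's __float128 or the QD library are far more complete):

    import std.stdio;

    struct DD
    {
        double hi = 0.0;
        double lo = 0.0;

        this(double h, double l = 0.0) { hi = h; lo = l; }

        DD opBinary(string op : "+")(DD b) const
        {
            // Knuth's TwoSum: s + e equals hi + b.hi exactly.
            double s = hi + b.hi;
            double v = s - hi;
            double e = (hi - (s - v)) + (b.hi - v);
            e += lo + b.lo;
            // Renormalise so the low word is a small correction term.
            double h2 = s + e;
            return DD(h2, e - (h2 - s));
        }
    }

    void main()
    {
        auto sum = DD(1.0) + DD(1e-30) + DD(-1.0);
        // Plain double arithmetic would return 0 here.
        writefln("recovered: %g", sum.hi + sum.lo); // ~1e-30
    }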

Btw: the 80-bit code-generator part is much smaller and simpler in code than your 128-bit software-based implementation would be.

