James Richters via fpc-pascal wrote:
What's apparently happening now is:
MyExtended := ReducePrecisionIfNoLossOfData(8246) +
              ReducePrecisionIfNoLossOfData(33.0) / ReducePrecisionIfNoLossOfData(1440.0);
But it is not being done correctly: the 1440.0 is not being reduced all the
way to an integer, because if it were, everything would work.  The 1440.0 is
being considered a single, and the division is now also being considered a
single, even though that is incorrect.  But 1440.1 is not being considered a
single, which is why 1440.1 does not break everything.

Indeed. It is wrong. And if Delphi does it wrong, it is still wrong for modes 
other than Delphi.
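For concreteness, a minimal sketch that reproduces the behaviour described above (the program name and the exact output are illustrative; results depend on compiler version, target and the -CF setting):

program ConstPrecisionDemo;
{ With post-2.2 defaults, 33.0 and 1440.0 are exactly representable as Single
  and so may be stored at single precision, making the division single
  precision despite the Extended target. 1440.1 cannot be stored as a Single
  without loss, so it keeps a higher precision. }
var
  MyExtended: Extended;
begin
  MyExtended := 8246 + 33.0 / 1440.0;   { division may be done at single precision }
  WriteLn(MyExtended:0:18);
  MyExtended := 8246 + 33.0 / 1440.1;   { 1440.1 keeps a higher precision }
  WriteLn(MyExtended:0:18);
end.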


What should be happening is:
MyExtended := ReducePrecisionIfNoLossOfData(8246+33.0/1440.0);

Pascal doesn't attach a floating-point type to a floating-point constant. So the only correct way for the compiler to handle it is to NOT attach a floating-point type to the declared constant in advance; that is, the compiler must store it in a symbol table as BCD or as a string, and decide LATER what type it has. In this case, where the assignment is to an extended, as soon as that is clear, and not earlier, the compiler can convert the BCD or string floating-point constant to the floating-point type in question, in this case extended.
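A purely conceptual sketch of that idea (not FPC's actual internals; the record and helper are made up for illustration): keep the literal as its source text and convert it only once the target type is known.

program DeferredConstSketch;
{$mode objfpc}{$H+}
type
  TFloatLiteral = record
    Text: string;   { the literal exactly as written in the source, e.g. '1440.0' }
  end;

{ Convert the stored literal only now, at the precision of the known target type. }
function AsExtended(const L: TFloatLiteral): Extended;
var
  Code: Word;
begin
  Val(L.Text, Result, Code);   { Val always parses with '.' as decimal separator }
end;

var
  Lit: TFloatLiteral;
  MyExtended: Extended;
begin
  Lit.Text := '1440.0';
  { Only here, where the target is Extended, does the literal get a concrete type }
  MyExtended := AsExtended(Lit);
  WriteLn(MyExtended:0:4);
end.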

If the entire formula was calculated the original way at full precision, and
only the result was reduced, when there was no loss in precision, right before
storing it as a constant, then this would solve the problem for everyone, and
this is the correct way to do it.  Then everyone is happy: no Delphi warnings,
no needlessly complex floating point computations when the result of all the
math is a byte, no confusion as to why it works with 1440.1 and not 1440.0,
compatibility with all versions of Pascal, etc.


This calculation is only done once, by the compiler; it should be done at the
full possible precision, and only the result stored in a reduced form if it
makes sense to do so.

Jonas has argued, not without reason, that calculating everything always at full precision has its disadvantages too.
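Still, to make the proposed "reduce only if no loss of data" rule concrete, here is a small sketch; the helper FitsInSingle is made up for illustration and is not a compiler API. A value is reducible only when a round trip through the lower precision gives the same value back.

program ReduceIfLossless;
{$mode objfpc}

{ Lossless reduction test: true exactly when storing V as a Single loses nothing. }
function FitsInSingle(const V: Extended): Boolean;
begin
  Result := Extended(Single(V)) = V;
end;

var
  FullResult: Extended;
begin
  { 0.5 is exactly representable as a Single, so reducing it would be harmless }
  WriteLn('0.5 reducible:     ', FitsInSingle(0.5));
  { 33/1440 = 0.0229166... is not; reducing the result would change its value }
  FullResult := Extended(33.0) / Extended(1440.0);
  WriteLn('33/1440 reducible: ', FitsInSingle(FullResult));
end.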


The problem I have with the changes made in v2.2 is that it was obvious at the
time that the change was going to introduce a known bug:
"Effect: some expressions, in particular divisions of integer values by
floating point constants, may now default to a lower precision than in the
past."
How is this acceptable, let alone the default??

Delphi/Borland invents some seemingly clever but factually stupid scheme, and FPC wants to be compatible with it. Some applaud, but I am more impressed by logical reason than by what Borland does without logical reason.


"Remedy: if more precision is required than the default, typecast the
floating point constant to a higher precision type, e.g. extended(2.0).
Alternatively, you can use the -CF command line option to change the default
precision of all floating point constants in the compiled modules."

The first remedy is unreasonable: I should not have to go through thousands
of lines of code and cast my constants; it was never a requirement of Pascal
to do this.

Right.
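For reference, this is roughly what the two quoted remedies look like when applied to the example from this thread (program and file names are illustrative):

program RemedyDemo;
var
  MyExtended: Extended;
begin
  { Remedy 1: typecast one constant so the division is carried out at higher precision }
  MyExtended := 8246 + 33.0 / Extended(1440.0);
  WriteLn(MyExtended:0:18);
end.

Remedy 2 is applied at build time instead, e.g. fpc -CF64 RemedyDemo.pas, raising the default precision of all floating point constants in the compiled module.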


It would be great if -CF80 worked, but even if you are happy with -CF64, my
problem is: how is anyone coming to FPC after 2.2 supposed to know that
constants that always worked before are no longer going to be accurate??

The better thing to do would have been to do it RIGHT before releasing the
change, so that it couldn't be a problem for anyone, and to make:
"New behaviour: floating point constants are now considered to be of the
lowest precision which doesn't cause data loss" a true statement.

If the entire formula were evaluated at full precision, and only the result
stored at a lower precision where possible, then there would never be a
problem for anyone.

Regards,

Adriaan van Os
_______________________________________________
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal
