> Jonas has argued, not without reason, that calculating everything always at
> full precision has its disadvantages too.

I agree with that, and I do see the value in reducing precision where
possible, but not when it causes data loss.
The intention is perfectly fine; it's the execution that has a bug in it.

I think that any reasonable person reading the following code would conclude
that FF, GG, HH, and II should be exactly the same.  I define typed
constants; in FF I assign the constants to variables of the same types and
the result comes out correctly; in GG I use the constants directly and the
result is wrong.  No programmer should be expected to understand this
behavior, because it's a bug.

FF and GG both add an integer to a byte divided by a single.  To any
reasonable programmer there is no difference between what FF and GG are
saying, and nobody should have to resort to the ridiculous typecasting in II
to get an answer that is almost correct, yet still wrong.  Notice that even
with the casting, II is still off in the low digits.
II SHOULD have produced the right answer, because it's perfectly legitimate
to divide a byte by a single and expect the answer to be an extended.
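
For what it's worth, the one cast that does give the full-precision answer
here is a cast on an operand of the division rather than on its result, so
that the division itself is no longer performed at single precision.  A
minimal sketch (JJ is an extra variable of my own, not part of the program
below), assuming the folding really does use the precision of the widest
operand:

   JJ := A_Const + B_Const/Extended(C_Const);
   { casting the divisor widens the division itself, }
   { so JJ should match FF rather than II }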

program Constant_Bug;

Const
   A_Const = Integer(8427);
   B_Const = Byte(33);
   C_Const = Single(1440.0);

Var
   A_Var : Integer;
   B_Var : Byte;
   C_Var : Single;
   FF, GG, HH, II : Extended;

begin
   A_Var := A_Const;
   B_Var := B_Const;
   C_Var := C_Const;

   FF := A_Var + B_Var/C_Var;                            { variables of the same types }
   GG := A_Const + B_Const/C_Const;                      { the constants used directly }
   HH := Extended(A_Const + B_Const/C_Const);            { whole expression cast afterwards }
   II := Extended(A_Const + Extended(B_Const/C_Const));  { division result cast afterwards }

   WRITELN('     FF = ', FF:20:20);
   WRITELN('     GG = ', GG:20:20);
   WRITELN('     HH = ', HH:20:20);
   WRITELN('     II = ', II:20:20);
end.

Output:

     FF = 8427.02291666666666625000
     GG = 8427.02246093750000000000
     HH = 8427.02246093750000000000
     II = 8427.02291666716337204000

FF is correct and II is at least close; GG and HH are simply wrong.  I
understand now WHY this is happening, but I argue that it's not obvious to
anyone that it should be happening; it's just a hidden known bug waiting to
bite you.  No reasonable programmer would expect FF and GG to come out
differently: the datatypes are all defined legitimately, and identically,
so the results should be identical too.
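
For anyone wondering where GG's value comes from: as far as I can tell, 2.2
folds the whole constant expression at the precision of the widest operand,
which here is Single.  Near 8427 a Single has only about ten bits left for
the fraction (a spacing of 2^-10 = 0.0009765625), so the sum
8427 + 0.0229167 gets rounded to the nearest representable Single, which is
8427.0224609375, exactly GG's value.  A two-line sketch that should
reproduce it (S is a hypothetical extra Single variable):

   S := 8427 + 33/1440;            { assigning to a Single rounds the sum to single precision }
   WRITELN('      S = ', S:20:20); { should print GG's 8427.02246093750000000000 }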

In my opinion the changes in v2.2 break more things than they fix.  They
should be reverted and enabled ONLY when asked for by a compiler directive;
we should not have to do special things to get correct results.  If you give
the compiler a directive to use this feature, then you know you might have
to cast some things yourself.  But applying it globally, and then requiring
a directive to turn it off, is just not right, unless ALL code can run the
way it did pre-2.2 without modification, and this is CLEARLY not the case.
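
As an aside, if the default stays as it is, at least give us a directive to
force the constant folder back up to a sane precision.  Something along the
lines of the {$MINFPCONSTPREC} directive, which (if I'm not mistaken) FPC
has or is getting for exactly this purpose, to set the minimum precision of
floating point constants:

   {$MINFPCONSTPREC 64}               { fold float constant expressions at 64 bits minimum }
   GG := A_Const + B_Const/C_Const;   { with this in effect, GG should come out much closer to FF }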

James
