Re: [fpc-pascal] Floating point question

2024-02-06 Thread James Richters via fpc-pascal
>IMO, the computations of AA+BB/CC (right hand side) should be carried out the 
>same way, regardless of the type 
>on the left hand side of the assignment. So I would expect the values in DD, 
>EE and FF being the same.

In this example DD holds the value 8427.0224610 because DD is defined as a
single, and a single cannot hold the value 8427.022916625000; there
aren’t enough bits.
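
A minimal sketch of that effect (variable names are mine): assign the same
value to a single and a double, and see how many digits survive.

program SingleBits;
var
   S : Single;
   D : Double;
begin
   D := 8427.02291667;
   S := D;                            { rounded to the nearest single }
   WRITELN ( 'D = ', D : 20 : 20 ) ;
   WRITELN ( 'S = ', S : 20 : 20 ) ;  { roughly 8427.0224609375 }
end.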

There is a typo on this line:
   WRITELN ( 'EE = ',FF: 20 : 20 ) ;

It should have been:
   WRITELN ( 'EE = ',EE: 20 : 20 ) ;

And the result of it should have been:
EE = 8427.022916668000

Which is not the same as
FF = 8427.022916625000
Again, because the extended value 8427.022916625000 won’t fit in a double.

The intention with all that was to show that everything works correctly if
variables are used.
The problem is that when you use constants you get the 8427.0224... for
everything, even when you have defined a double or an extended.


>That said: wouldn't it make more sense to give EVERY FP CONSTANT the FP type 
>with the best available precision? 
Yes, that is the old way, before the change in 2.2, but there are times when
it would be more efficient to reduce the precision, as in this example:
Const
   MyConst = 2.0;
That doesn't have to be floating point, and if you later use it as the
denominator in a division, it's less efficient than it would be if it were an
integer.  I argue that on modern computers, who cares; but if you do want to
reduce precision to increase performance, it NEEDS to be done in a way that
guarantees no loss of precision with no modification of code.  The changes in
v2.2 fail in this regard.  The thing I don't understand is that this was
released as the default for all modes, and indeed with no good way to turn it
off for extended, even though it was already known that there would be
inaccuracies when using constants with divides.  The change in 2.2 should NOT
have been the default for everyone; it should have been an option for those
who want performance at the cost of precision.  Nearly everyone will not
notice the performance increase on a modern computer, but we all do not want
to risk a loss of precision...


>GG in this case would have the same value as HH, because the computation 
>involving the constants 
>(hopefully done by the compiler) would be done with the best available 
>precision. 

Yes, it would be!  And that is precisely why this is a bug!  GG not matching
HH is a problem.
GG and HH should be identical; the compiler should do math exactly the same
way as an executing program, otherwise it's a mess.

The computation SHOULD always be done at maximum precision, and that's the way 
it used to work before this: 
https://wiki.freepascal.org/User_Changes_2.2.0#Floating_point_constants
"Old behaviour: all floating point constants were considered to be of the 
highest precision available on the target platform"
This is the correct way that guarantees precision. 

"New behaviour: floating point constants are now considered to be of the lowest 
precision which doesn't cause data loss"
This is GREAT if you can pull it off in a way that doesn't cause data loss in 
ANY condition.   I believe this bug can be fixed and we can have efficiency and 
guaranteed no data loss, and then the "Effect" and "Remedy" below would not be 
needed...
But if it's not possible to guarantee no data loss, and the "Effect" is still a 
possibility, then this entire thing should be an OPTION.

" Effect: some expressions, in particular divisions of integer values by 
floating point constants, may now default to a lower precision than in the 
past. "  
What?!  This IS data loss!!!  This precisely describes the BUG.  This is the
reason this should NEVER have been made the default for everyone; it should
have required a compiler directive to turn on this behavior.  This lower
precision is in direct violation of the 'New behaviour' statement!
There is NO reason why anyone writing a Pascal program would expect this
behavior.  It's NOT the way Pascal has EVER behaved.

" Remedy: if more precision is required than the default, typecast the floating 
point constant to a higher precision type, e.g. extended(2.0). Alternatively, 
you can use the -CF command line option to change the default precision of all 
floating point constants in the compiled modules. "
This is unreasonable.  How is anyone supposed to know that now, to get it to
work correctly as it should, we need to cast our constants?  Old code
should work the way it always did, and people writing new code will NEVER
expect this needs to be done.
On top of that, the -CF option only works for -CF32 and -CF64, so it's no
solution for Extended.  Why do I need a special option to do things correctly?
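
For reference, here is what the wiki's casting remedy looks like in practice
(a minimal sketch; the variable name is mine):

program CastRemedy;
var
   EE : Extended;
begin
   { the Extended() cast keeps the constant division out of single precision }
   EE := 8427 + 33 / Extended(1440.0);
   WRITELN ( 'EE = ', EE : 20 : 20 ) ;
end.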

How about this: if any variable is defined as a Double or Extended, then shut
this 'feature' off, because it's asking for trouble.  Nobody uses Double or
Extended in a program because they want low-precision results.



Re: [fpc-pascal] Floating point question

2024-02-06 Thread Bernd Oppolzer via fpc-pascal
I didn't follow all the discussions on this topic and all the details of
compiler options of FPC and Delphi compatibility and so on, but I'd like to
comment on this result:

program TESTDBL1 ;

Const
   HH = 8427.02291667;
Var
   AA : Integer;
   BB : Byte;
   CC : Single;
   DD : Single;
   EE : Double;
   FF : Extended;
   GG : Extended;
   


begin
   AA := 8427;
   BB := 33;
   CC := 1440.0;
   DD := AA+BB/CC;
   EE := AA+BB/CC;
   FF := AA+BB/CC;
   GG := 8427+33/1440.0;
   
   WRITELN ( 'DD = ',DD: 20 : 20 ) ;

   WRITELN ( 'EE = ',FF: 20 : 20 ) ;
   WRITELN ( 'FF = ',FF: 20 : 20 ) ;
   WRITELN ( 'GG = ',GG: 20 : 20 ) ;
   WRITELN ( 'HH = ',HH: 20 : 20 ) ;
end.


result:

DD = 8427.02246100
EE = 8427.022916625000
FF = 8427.022916625000
GG = 8427.022460937500
HH = 8427.022916625000


IMO, the computations of AA+BB/CC (right hand side) should be carried 
out the same way, regardless of the type
on the left hand side of the assignment. So I would expect the values in 
DD, EE and FF being the same.


But as it seems, the left hand side (and the type of the target variable)
HAS AN INFLUENCE on the computation on the right hand side, and so we get
(for example)

DD = 8427.02246100

and

EE = 8427.022916625000

which IMHO is plain wrong.

If all computations of AA+BB/CC were carried out involving only single
precision, all results DD, EE, FF (maybe not GG) should be 8427.0224..., with
only minor differences because of the different precisions of the target
variables (but not as large as the difference between DD and EE above).

This would be OK IMHO; it would be easy to explain to everyone the reduced
precision on these computations as a consequence of the types of the operands
involved.

Another question, which should be answered separately:

the compiler apparently assigns types to FP constants.
It does so depending on whether a certain decimal representation can be
represented exactly in the FP format or not.

1440.0 and 1440.5 can be represented in single precision, so the FP type
single is assigned
1440.1 cannot, because 0.1 is an unlimited sequence of hex digits, so (I
guess) the biggest available FP type is assigned

1440.25 probably can, so type single is assigned
1440.3: biggest FP type
1440.375: probably single

and so on

Now: who is supposed to know, for any given decimal representation of an FP
constant, whether it can or cannot be represented in a single precision FP
variable?  This depends on the length of the decimal representation, among
other facts ... and the fraction part has to be a multiple of negative powers
of 2, etc. etc.
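
For what it's worth, one can test that criterion at run time: a value fits in
a single exactly when it survives a round-trip through single.  A minimal
sketch (the helper name is mine, not anything the compiler exposes):

program RoundTrip;

function FitsInSingle(const X: Extended): Boolean;
begin
   { exact if narrowing to single and widening back loses nothing }
   Result := Extended(Single(X)) = X;
end;

begin
   WRITELN ( '1440.0   : ', FitsInSingle(1440.0) ) ;    { TRUE }
   WRITELN ( '1440.375 : ', FitsInSingle(1440.375) ) ;  { TRUE }
   WRITELN ( '1440.1   : ', FitsInSingle(1440.1) ) ;    { FALSE }
end.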


That said: wouldn't it make more sense to give EVERY FP CONSTANT the FP type
with the best available precision?

If the compiler did this, the problems which arise here could be solved, I
think.


GG in this case would have the same value as HH, because the computation
involving the constants (hopefully done by the compiler) would be done with
the best available precision.


HTH, kind regards

Bernd


Am 06.02.2024 um 16:23 schrieb James Richters via fpc-pascal:

program TESTDBL1 ;

Const
HH = 8427.02291667;
Var
AA : Integer;
BB : Byte;
CC : Single;
DD : Single;
EE : Double;
FF : Extended;
GG : Extended;



begin
AA := 8427;
BB := 33;
CC := 1440.0;
DD := AA+BB/CC;
EE := AA+BB/CC;
FF := AA+BB/CC;
GG := 8427+33/1440.0;

WRITELN ( 'DD = ',DD: 20 : 20 ) ;

WRITELN ( 'EE = ',FF: 20 : 20 ) ;
WRITELN ( 'FF = ',FF: 20 : 20 ) ;
WRITELN ( 'GG = ',GG: 20 : 20 ) ;
WRITELN ( 'HH = ',HH: 20 : 20 ) ;
end.

When I do the division of a byte by a single and store it in an extended, I
get the division carried out as an extended.
FF, GG, and HH should all be exactly the same if there is not a bug.
But:

DD = 8427.02246100
EE = 8427.022916625000
FF = 8427.022916625000
GG = 8427.022460937500
HH = 8427.022916625000


Re: [fpc-pascal] Floating point question

2024-02-06 Thread Jean SUZINEAU via fpc-pascal
I've just made a small test with the old Borland Delphi 7.0 build 4453
from 2002:


...

type
  TForm1 = class(TForm)
    m: TMemo;
    procedure FormCreate(Sender: TObject);
  end;

...

procedure TForm1.FormCreate(Sender: TObject);
var
   GG: Extended;
   S: String;
begin
   GG := 8427+33/1440.0;
   Str( GG: 20 : 20, S);
   m.Lines.Add( 'GG = '+S);
end;

I get :

GG = 8427.02291700

But I'm cautious: it's a Delphi 7 running on a broken installation of
Wine on Ubuntu 22.04, and I had to modify an existing Delphi 7 project for
this test; I couldn't save a new project because of problems with Wine.


I have an old astronomical program made with Delphi 7 that I wrote around
2000, and I ported a part of it to Free Pascal. I'm nearly sure there are
unnoticed errors in this Free Pascal port due to this behaviour ...
(Not really a problem because the program isn't sold any more, but I'll
have a look next time I compile it.)


Re: [fpc-pascal] Floating point question

2024-02-06 Thread Rafael Picanço via fpc-pascal
> Why (re-invent the wheel)?
> Why not use Math.Float?
> IIRC then this is Extended, double or Single depending on CPU type.
> And always the largest precision the CPU supports.

Thanks Bart. Math.Float is really great, I will start using it today.

On Tue, Feb 6, 2024 at 2:51 PM Bart  wrote:

> On Tue, Feb 6, 2024 at 6:13 PM Rafael Picanço via fpc-pascal
>  wrote:
>
>
> > type
> >   {$IFDEF CPU86}{$IFDEF CPU32}
> > TLargerFloat = Extended;
> >   {$ENDIF}{$ENDIF}
> >
> >   {$IFDEF CPUX86_64}
> > TLargerFloat = Double;
> >   {$ENDIF}
>
> Why (re-invent the wheel)?
> Why not use Math.Float?
> IIRC then this is Extended, double or Single depending on CPU type.
> And always the largest precision the CPU supports.
>
> --
> Bart
>


Re: [fpc-pascal] Floating point question

2024-02-06 Thread James Richters via fpc-pascal
>Jonas has argued, not without reason, that calculating everything always at
>full precision has its disadvantages too.

I agree with that, and I do see the value in reducing the precision when it
is possible, but not when it's causing data loss.
The intention is perfectly fine; it's the execution that has a bug in it.

I think that any reasonable person reading the following code would conclude
that FF, GG, HH, and II should be exactly the same.  I am defining constants;
in FF I use variables of the same types as the constants, and it comes out
correctly; in GG I use the constants directly, and it's wrong.
There is nothing about this that any programmer should be expected to
understand, because it's a bug.

FF and GG are both adding an integer to a byte divided by a single; there is
no difference, to any reasonable programmer, between what FF and GG are
saying, and the programmer should not have to resort to ridiculous
typecasting as in II to get almost the correct answer.  By the way, notice
that even with the casting, it's still slightly wrong.
II SHOULD have produced the right answer, because it's perfectly legitimate
to divide a byte by a single and expect the answer to be an extended.

program Constant_Bug;

Const
   A_const = Integer(8427);
   B_const = Byte(33);
   C_const = Single(1440.0);

Var
   A_Var : Integer;
   B_Var : Byte;
   C_Var : Single;
   FF, GG, HH, II : Extended;

begin
   A_Var := A_Const;
   B_Var := B_Const;
   C_Var := C_Const;

   FF := A_Var+B_Var/C_Var;
   GG := A_Const+B_Const/C_Const;
   HH := Extended(A_Const+B_Const/C_Const);
   II := Extended(A_Const+Extended(B_Const/C_Const));

   WRITELN ( ' FF = ',FF: 20 : 20 ) ;
   WRITELN ( ' GG = ',GG: 20 : 20 ) ;
   WRITELN ( ' HH = ',HH: 20 : 20 ) ;
   WRITELN ( ' II = ',II: 20 : 20 ) ;
end.

 FF = 8427.022916625000
 GG = 8427.022460937500
 HH = 8427.022460937500
 II = 8427.02291666716337204000

FF and II are correct; GG and HH are wrong.  I understand now WHY this is
happening, but I argue that it's not obvious to anyone that it should be
happening; it's just a hidden known bug waiting to bite you.  No reasonable
programmer would think that FF and GG would come out differently: the
datatypes are all defined legitimately, and identically, so the results
should also be the same.

In my opinion the changes in v2.2 break more things than they fix; they
should be reverted, and used ONLY if asked for by a compiler directive.  We
should not have to do special things to get it to work correctly.  If you
give the compiler directive to use this feature, then you know you might
have to cast some things yourself, but to apply this globally and then
require a directive to not do it is just not right, unless ALL code can be
run the way it did pre-2.2 without modification, and this is CLEARLY not the
case.

James



Re: [fpc-pascal] Floating point question

2024-02-06 Thread Bart via fpc-pascal
On Tue, Feb 6, 2024 at 6:13 PM Rafael Picanço via fpc-pascal
 wrote:


> type
>   {$IFDEF CPU86}{$IFDEF CPU32}
> TLargerFloat = Extended;
>   {$ENDIF}{$ENDIF}
>
>   {$IFDEF CPUX86_64}
> TLargerFloat = Double;
>   {$ENDIF}

Why (re-invent the wheel)?
Why not use Math.Float?
IIRC then this is Extended, double or Single depending on CPU type.
And always the largest precision the CPU supports.
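
For example, a minimal sketch of declaring a variable with it (Float comes
from the Math unit; which type it maps to depends on the target CPU):

program UseFloat;
uses
  Math;
var
  F : Float;                  { Extended, Double or Single, per CPU }
begin
  F := Pi;
  WRITELN ( 'F = ', F : 20 : 20 ) ;
end.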

-- 
Bart


Re: [fpc-pascal] Floating point question

2024-02-06 Thread Rafael Picanço via fpc-pascal
> I’m afraid I don’t qualify for the bonus, because I don’t know what
> LargerFloat is.

I am a little bit embarrassed here. The TLargerFloat is a type I wrote for
a simple test some time ago, and I forgot about it. I was following the
TLargeInteger convention (from struct.inc in my current Windows system).

After realizing that the Extended type is not cross-platform, my
point with TLargerFloat was to have a central place to test some types. I
decided to use Double for everything, following the equivalence with
Python doubles for timestamp synchronization in the systems I use.

unit timestamps.types;

{$mode ObjFPC}{$H+}

interface

type
  {$IFDEF CPU86}{$IFDEF CPU32}
TLargerFloat = Extended;
  {$ENDIF}{$ENDIF}

  {$IFDEF CPUX86_64}
TLargerFloat = Double;
  {$ENDIF}

implementation

end.

___

So, I guess I finally found why precision was better for explicit
typecasts in Linux (despite the higher granularity of clock_monotonic).

I guess {$MINFPCONSTPREC 64} would avoid explicit typecasting in the
following code, is that correct?

unit timestamps;

{$mode objfpc}{$H+}

// {$MINFPCONSTPREC 64}

interface

uses
  SysUtils, timestamps.types

{$IFDEF LINUX}
  , Linux
  , UnixType
{$ENDIF}

{$IFDEF DARWIN}
  , ctypes
  , MachTime
{$ENDIF}

{$IFDEF WINDOWS}
  , Windows
{$ENDIF}
  ;

function ClockMonotonic : TLargerFloat;

implementation

{$IFDEF LINUX}
function ClockMonotonic: TLargerFloat;
var
  tp: timespec;
  a, b : TLargerFloat;
begin
  clock_gettime(CLOCK_MONOTONIC, @tp);
  a := TLargerFloat(tp.tv_sec);
  b := TLargerFloat(tp.tv_nsec) * 1e-9;
  Result := a+b;
end;
{$ENDIF}

{$IFDEF DARWIN}
{credits:
https://github.com/pupil-labs/pyuvc/blob/master/pyuvc-source/darwin_time.pxi
}

var
  timeConvert: TLargerFloat = 0.0;

function ClockMonotonic : TLargerFloat;
var
  timeBase: mach_timebase_info_data_t;
begin
  if timeConvert = 0.0 then begin
    mach_timebase_info(@timeBase);
    { numer/denom converts Mach ticks to nanoseconds; / 1e9 gives seconds }
    timeConvert :=
      TLargerFloat(timeBase.numer) / TLargerFloat(timeBase.denom) /
      TLargerFloat(1000000000.0);
  end;
  Result := mach_absolute_time() * timeConvert;
end;
{$ENDIF}

{$IFDEF WINDOWS}
var
  PerSecond : TLargeInteger;

function ClockMonotonic: TLargerFloat;
var
  Count : TLargeInteger;
begin
  QueryPerformanceCounter(Count);
  Result := TLargerFloat(Count) / TLargerFloat(PerSecond);
end;

initialization
   QueryPerformanceFrequency(PerSecond);
{$ENDIF}

end.
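
If it helps, this is how I'd exercise the unit (a quick sketch, assuming the
unit compiles as above; Sleep comes from SysUtils):

program TestClock;
{$mode objfpc}{$H+}
uses
  SysUtils, timestamps, timestamps.types;
var
  T0, T1 : TLargerFloat;
begin
  T0 := ClockMonotonic;
  Sleep(100);                      { wait roughly 100 ms }
  T1 := ClockMonotonic;
  WRITELN ( 'elapsed = ', T1 - T0 : 0 : 9 ) ;
end.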

On Tue, Feb 6, 2024 at 12:52 PM James Richters
<james.richt...@productionautomation.net> wrote:

> This is my opinion from my testing, but others may have something else to
> say.
>
>
>
> 1) Does it affect constants only?
>
> Not really, if you set a variable with constant terms, it is affected, if
> you set a variable with other variables, it is not affected.
>
> Const
>
>MyConstant = 8432+33/1440.0;   //Is affected
>
> Var
>
>MyDoubleVariable:Double;
>
>
>
> MyDoubleVariable := 8432+33/1440.0;   //Is affected
>
>
>
>
>
> Var
>
>MyInteger : Integer;
>
>MyByte : Byte;
>
>MySingle : Single;
>
>MyDouble : Double;
>
>
>
> MyInteger := 8432;
>
> MyByte := 33;
>
> MySingle := 1440.0;
>
> MyDouble := MyInteger + MyByte / MySingle; //   is NOT affected;
>
>
>
>
>
> 2) Does it affect the LargerFloat type?
>
> I don’t know what you mean by LargerFloat, but Double and Extended are
> affected, and even Real if your platform defines Real as a Double.
>
> Anything that is not Single precision is affected.
>
>
>
> 3) Should I use {$MINFPCONSTPREC 64} in {$mode objfpc} too to avoid it?
>
> Everyone should use {$MINFPCONSTPREC 64} in all programs until the bug is
> fixed, unless you use extended; then you have no good solution, because you
> can’t set it to 80.
>
> 4) BONUS: Is the LargerFloat type really the larger, or should one do
> something else?
>
> I’m afraid I don’t qualify for the bonus, because I don’t know what
> LargerFloat is.
>
>
>
> James
>
>
>
>
>
>
>


Re: [fpc-pascal] Floating point question

2024-02-06 Thread Thomas Kurz via fpc-pascal
I think the reason why this new behavior doesn't occur with 1440.1 is that this 
number cannot be reduced to "single" precision. It will keep "double" precision.

Consider this instead:

program TESTDBL1 ;

var TT : double ; EE: double;

begin (* HAUPTPROGRAMM *)
   TT := 8427 + 33 / 1440.5 ;
   EE := Double(8427) + Double(33) / Double(1440.5);
   WRITELN ( 'tt=' , TT : 20 : 20 ) ;
   WRITELN ( 'ee=' , EE : 20 : 20 ) ;
end (* HAUPTPROGRAMM *) .

Result:

tt=8427.022460937500
ee=8427.0229087122534000

So it's the same as with ".0". FPC treats the constant as type "single". Imho, 
this is perfectly legal, but when assigning an expression to a "double" 
variable, an implicit cast to double should occur.

When using a variable of type "single" (instead of a constant), this casting is 
done:

program TESTDBL1 ;

{$mode objfpc}

var TT : double ;
EE: double;
x: Single = 1440.5;

begin (* HAUPTPROGRAMM *)
   TT := 8427 + 33 / x ;
   EE := 8427 + 33 / Double(x);
   WRITELN ( 'tt=' , TT : 20 : 20 ) ;
   WRITELN ( 'ee=' , EE : 20 : 20 ) ;
end (* HAUPTPROGRAMM *) .

Prints:
tt=8427.0229087122534000
ee=8427.0229087122534000

I don't know whether this is intentional or not, but I cannot see any good 
reason why using a constant in an expression has to be treated differently than 
using a variable. To me as a programmer, this behavior is unexpected.

If the 2.2 change is not going to be reverted (and as far as I understand
Florian, it won't be), maybe one could at least introduce a warning about a
loss of precision when using a constant of type "single" in an expression
which will be assigned to a variable of type "double".

Kind regards,
Thomas




- Original Message - 
From: James Richters via fpc-pascal 
To: 'FPC-Pascal users discussions' 
Sent: Tuesday, February 6, 2024, 16:23:30
Subject: [fpc-pascal] Floating point question

What's apparently happening now is:
MyExtended := ReducePrecisionIfNoLossOfData (8246) +
ReducePrecisionIfNoLossOfData (33.0) / ReducePrecisionIfNoLossOfData
(1440.0);
But it is not being done correctly: the 1440.0 is not being reduced all the
way to an integer, because if it were, everything would work.  The 1440.0 is
being considered a single, and the division is also now being considered a
single, even though that is incorrect.  But 1440.1 is not being considered
a single, so 1440.1 is not breaking everything.

What should be happening is:
MyExtended := ReducePrecisionIfNoLossOfData(8246+33.0/1440.0);


I just realized something...  regardless of when or how the reduction in
precision is happening, the bug is different than that, because the result
of a byte divided by a single, when stored in a double, is a double, NOT a
single.  There should be no problem here; there is a definite bug.

Consider this:
program TESTDBL1 ;

Const
   HH = 8427.02291667;
Var
   AA : Integer;
   BB : Byte;
   CC : Single;
   DD : Single;
   EE : Double;
   FF : Extended;
   GG : Extended;
   

begin
   AA := 8427;
   BB := 33;
   CC := 1440.0;
   DD := AA+BB/CC;
   EE := AA+BB/CC;
   FF := AA+BB/CC;
   GG := 8427+33/1440.0;
   
   WRITELN ( 'DD = ',DD: 20 : 20 ) ;
   WRITELN ( 'EE = ',FF: 20 : 20 ) ;
   WRITELN ( 'FF = ',FF: 20 : 20 ) ;
   WRITELN ( 'GG = ',GG: 20 : 20 ) ;
   WRITELN ( 'HH = ',HH: 20 : 20 ) ;
end.

When I do the division of a byte by a single and store it in an extended, I
get the division carried out as an extended.
FF, GG, and HH should all be exactly the same if there is not a bug.
But:

DD = 8427.02246100
EE = 8427.022916625000
FF = 8427.022916625000
GG = 8427.022460937500
HH = 8427.022916625000

GG,  the one with constants, is doing it wrong... 

If the entire formula were calculated the original way at full precision,
and only the result reduced, if there was no loss in precision, right before
storing as a constant, then this would solve the problems for everyone, and
this is the correct way to do this.  Then everyone is happy: no Delphi
warnings, no needlessly complex floating point computations if the result of
all the math is a byte, and no confusion as to why it works with 1440.1 and
not 1440.0.  Compatibility with all versions of Pascal, etc.

This calculation is only done once, by the compiler; the calculation should
be done at the fullest possible precision, and only the result stored in a
reduced way if it makes sense to do so.

The problem I have with the changes made in v2.2 is that it's obvious
that the change was going to introduce a known bug at the time:
"Effect: some expressions, in particular divisions of integer values by
floating point constants, may now default to a lower precision than in the
past."
How is this acceptable or the default?? 

"Remedy: if more precision is required than the default, typecast the
floating point constant to a higher precision type, e.g. extended(2.0).
Alternatively, you can use the -CF command line option to change the default

Re: [fpc-pascal] Floating point question

2024-02-06 Thread Adriaan van Os via fpc-pascal

James Richters via fpc-pascal wrote:

What's apparently happening now is:
MyExtended := ReducePrecisionIfNoLossOfData (8246) +
ReducePrecisionIfNoLossOfData (33.0) / ReducePrecisionIfNoLossOfData
(1440.0);
But it is not being done correctly: the 1440.0 is not being reduced all the
way to an integer, because if it were, everything would work.  The 1440.0 is
being considered a single, and the division is also now being considered a
single, even though that is incorrect.  But 1440.1 is not being considered
a single, so 1440.1 is not breaking everything.


Indeed. It is wrong. And if Delphi does it wrong, it is still wrong for modes 
other than Delphi.



What should be happening is:
MyExtended := ReducePrecisionIfNoLossOfData(8246+33.0/1440.0);


Pascal doesn't attach a floating-point type to a floating-point constant. So, the only correct way 
for the compiler to handle it is to NOT attach a floating-point type to the declared constant in 
advance, that is, the compiler must store it in a symbol table as BCD or as string. And decide 
LATER what type it has. And in this case, where the assignment is to an extended, as soon as that 
is clear, and not earlier, the compiler can do the conversion of the BCD or string floating-point 
constant to the floating-point type in question, i.c. extended.
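
As a toy illustration of that idea (ordinary program code, not compiler
internals; the constant stays textual until a typed conversion happens):

program DeferredTyping;
const
  CLit = '8427.02291667';   { kept as text, no FP type attached yet }
var
  E    : Extended;
  Code : Integer;
begin
  Val(CLit, E, Code);       { converted only here, at extended precision }
  if Code = 0 then
    WRITELN ( 'E = ', E : 20 : 20 ) ;
end.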



If the entire formula were calculated the original way at full precision,
and only the result reduced, if there was no loss in precision, right before
storing as a constant, then this would solve the problems for everyone, and
this is the correct way to do this.  Then everyone is happy: no Delphi
warnings, no needlessly complex floating point computations if the result of
all the math is a byte, and no confusion as to why it works with 1440.1 and
not 1440.0.  Compatibility with all versions of Pascal, etc.




This calculation is only done once, by the compiler; the calculation should
be done at the fullest possible precision, and only the result stored in a
reduced way if it makes sense to do so.


Jonas has argued, not without reason, that calculating everything always at full precision has its 
disadvantages too.




The problem I have with the changes made in v2.2 is that it's obvious
that the change was going to introduce a known bug at the time:
"Effect: some expressions, in particular divisions of integer values by
floating point constants, may now default to a lower precision than in the
past."
How is this acceptable or the default??


Delphi/Borland invents some seemingly clever but factually stupid scheme, and
FPC wants to be compatible with it. Some applaud, but I am more impressed by
logical reason than by what Borland does without logical reason.




"Remedy: if more precision is required than the default, typecast the
floating point constant to a higher precision type, e.g. extended(2.0).
Alternatively, you can use the -CF command line option to change the default
precision of all floating point constants in the compiled modules."

The first remedy is unreasonable; I should not have to go through thousands
of lines of code and cast my constants. It was never a requirement of Pascal
to do this.


Right.



It would be great if -CF80 worked, but even if you are happy with -CF64, my
problem is: how is anyone coming into FPC after 2.2 supposed to know that
their constants that always worked before are going to no longer be accurate??

The better thing to do would be to do it RIGHT before releasing the change
so that it can't be a problem for anyone, and make:
"New behaviour: floating point constants are now considered to be of the
lowest precision which doesn't cause data loss"  a true statement.

If the entire formula was evaluated at full precision, and only the result
was stored at a lower precision if possible, then there would never be a
problem for anyone.


Regards,

Adriaan van Os


Re: [fpc-pascal] Floating point question (Rafael Picanço)

2024-02-06 Thread James Richters via fpc-pascal
This is my opinion from my testing, but others may have something else to say.
 
1) Does it affect constants only?
Not really; if you set a variable with constant terms, it is affected; if you
set a variable with other variables, it is not affected.
Const
   MyConstant = 8432+33/1440.0;   //Is affected
Var
   MyDoubleVariable:Double;
 
MyDoubleVariable := 8432+33/1440.0;   //Is affected
 
 
Var
   MyInteger : Integer;
   MyByte : Byte;
   MySingle : Single;
   MyDouble : Double;
 
MyInteger := 8432;
MyByte := 33;
MySingle := 1440.0;
MyDouble := MyInteger + MyByte / MySingle; //   is NOT affected;
 
 
2) Does it affect the LargerFloat type?
I don’t know what you mean by LargerFloat, but Double and Extended are
affected, and even Real if your platform defines Real as a Double.
Anything that is not Single precision is affected.
 
3) Should I use {$MINFPCONSTPREC 64} in {$mode objfpc} too to avoid it?   
Everyone should use {$MINFPCONSTPREC 64} in all programs until the bug is
fixed, unless you use extended; then you have no good solution, because you
can’t set it to 80.


4) BONUS: Is the LargerFloat type really the larger, or should one do something 
else?  
I’m afraid I don’t qualify for the bonus, because I don’t know what LargerFloat 
is.
 
James
 
 
 


Re: [fpc-pascal] Floating point question

2024-02-06 Thread James Richters via fpc-pascal
What's apparently happening now is:
MyExtended := ReducePrecisionIfNoLossOfData (8246) +
ReducePrecisionIfNoLossOfData (33.0) / ReducePrecisionIfNoLossOfData
(1440.0);
But it is not being done correctly: the 1440.0 is not being reduced all the
way to an integer, because if it were, everything would work.  The 1440.0 is
being considered a single, and the division is also now being considered a
single, even though that is incorrect.  But 1440.1 is not being considered
a single, so 1440.1 is not breaking everything.

What should be happening is:
MyExtended := ReducePrecisionIfNoLossOfData(8246+33.0/1440.0);


I just realized something...  regardless of when or how the reduction in
precision is happening, the bug is different than that, because the result
of a byte divided by a single, when stored in a double, is a double, NOT a
single.  There should be no problem here; there is a definite bug.

Consider this:
program TESTDBL1 ;

Const
   HH = 8427.02291667;
Var
   AA : Integer;
   BB : Byte;
   CC : Single;
   DD : Single;
   EE : Double;
   FF : Extended;
   GG : Extended;
   

begin
   AA := 8427;
   BB := 33;
   CC := 1440.0;
   DD := AA+BB/CC;
   EE := AA+BB/CC;
   FF := AA+BB/CC;
   GG := 8427+33/1440.0;
   
   WRITELN ( 'DD = ',DD: 20 : 20 ) ;
   WRITELN ( 'EE = ',FF: 20 : 20 ) ;
   WRITELN ( 'FF = ',FF: 20 : 20 ) ;
   WRITELN ( 'GG = ',GG: 20 : 20 ) ;
   WRITELN ( 'HH = ',HH: 20 : 20 ) ;
end.

When I do the division of a byte by a single and store it in an extended, I
get the division carried out as an extended.
FF, GG, and HH should all be exactly the same if there is not a bug.
But:

DD = 8427.02246100
EE = 8427.022916625000
FF = 8427.022916625000
GG = 8427.022460937500
HH = 8427.022916625000

GG,  the one with constants, is doing it wrong... 

If the entire formula were calculated the original way at full precision,
and only the result reduced, if there was no loss in precision, right before
storing as a constant, then this would solve the problems for everyone, and
this is the correct way to do this.  Then everyone is happy: no Delphi
warnings, no needlessly complex floating point computations if the result of
all the math is a byte, and no confusion as to why it works with 1440.1 and
not 1440.0.  Compatibility with all versions of Pascal, etc.

This calculation is only done once, by the compiler; the calculation should
be done at the fullest possible precision, and only the result stored in a
reduced way if it makes sense to do so.

The problem I have with the changes made in v2.2 is that it's obvious
that the change was going to introduce a known bug at the time:
"Effect: some expressions, in particular divisions of integer values by
floating point constants, may now default to a lower precision than in the
past."
How is this acceptable or the default?? 

"Remedy: if more precision is required than the default, typecast the
floating point constant to a higher precision type, e.g. extended(2.0).
Alternatively, you can use the -CF command line option to change the default
precision of all floating point constants in the compiled modules."

The first remedy is unreasonable; I should not have to go through thousands
of lines of code and cast my constants. It was never a requirement of Pascal
to do this.

It would be great if -CF80 worked, but even if you are happy with -CF64, my
problem is: how is anyone coming into FPC after 2.2 supposed to know that
their constants that always worked before are going to no longer be accurate??

The better thing to do would be to do it RIGHT before releasing the change
so that it can't be a problem for anyone, and make:
"New behaviour: floating point constants are now considered to be of the
lowest precision which doesn't cause data loss"  a true statement.

If the entire formula was evaluated at full precision, and only the result
was stored at a lower precision if possible, then there would never be a
problem for anyone.
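
A runtime sketch of that proposed rule (names are mine; the compiler would of
course do this once, at compile time):

program FoldSketch;

function FoldThenNarrow: Extended;
var
  Full : Extended;
begin
  { fold the whole constant expression at the widest precision... }
  Full := 8427 + 33 / Extended(1440.0);
  { ...then narrow only if the narrowing is lossless }
  if Extended(Single(Full)) = Full then
    Result := Single(Full)
  else
    Result := Full;
end;

begin
  WRITELN ( 'result = ', FoldThenNarrow : 20 : 20 ) ;
end.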


James



Re: [fpc-pascal] Floating point question (Rafael Picanço)

2024-02-06 Thread Rafael Picanço via fpc-pascal
I have some questions about {$MINFPCONSTPREC 64} and the mentioned change
introduced by FPC 2.2 (the "it" from here on).

1) Does it affect constants only?

2) Does it affect the LargerFloat type?

3) Should I use {$MINFPCONSTPREC 64} in {$mode objfpc} too to avoid it?

4) BONUS: Is the LargerFloat type really the larger, or should one do
something else?

Best regards,
Rafael


Re: [fpc-pascal] Floating point question

2024-02-06 Thread James Richters via fpc-pascal
I have the exact same intuition and expectation.

I think this whole issue is easy to fix: just detect the .0s and cast them
to integers by default instead of singles, because then everything does work
fine.

If I had a clue where the code for this reduction in precision might
be, I would try to fix it, but it's way over my head, I'm afraid.  I think
the intention and theory behind doing it the new way is great; it just has
this one flaw in it that could be fixed, so that the true behavior matches
what is in the documentation: that things will be reduced only when it would
not cause a loss in precision.  That is true for almost all cases, except
when you put a .0; then it fails... it's losing precision.  Reducing the .0
to an integer solves the problem... and I think if you had X = 2.0 it would
be reduced to an integer or a byte; it's just when it's in a formula that
it's getting set to a single, and that single is throwing everything off...
it just wasn't reduced far enough.
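
To illustrate the difference (a minimal sketch; per the behavior described in
this thread, the integer literal avoids the single-precision fold):

program DotZero;
var
  E1, E2 : Extended;
begin
  E1 := 8427 + 33 / 1440;      { integer literal: full precision }
  E2 := 8427 + 33 / 1440.0;    { x.0 literal: folded as a single }
  WRITELN ( 'E1 = ', E1 : 20 : 20 ) ;
  WRITELN ( 'E2 = ', E2 : 20 : 20 ) ;
end.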

James

-Original Message-
From: fpc-pascal  On Behalf Of
Thomas Kurz via fpc-pascal
Sent: Tuesday, February 6, 2024 7:53 AM
To: 'FPC-Pascal users discussions' 
Cc: Thomas Kurz 
Subject: Re: [fpc-pascal] Floating point question

Well, this is funny because I *did* compile it on DOS with Turbo Pascal 5.5,
and I got the correct result there. Cross-compiling with FPC to msdos target
gave the "wrong" (aka unexpected) result again. There were so many factors
involved which caused great confusion.

From my point of view, an expression being assigned to a variable of type
"double" should be evaluated with double precision, not single. This is
obviously the way integers are handled, by internally using int64. A few
weeks ago, I had inconsistent behavior between x64 and x86 modes, and it
turned out that 32-bit code did internal castings to int64, thus resulting in
the expected value, whereas 64-bit cannot cast to int128 (because there is no
int128) and thus gives an unexpected result (well, at least to me). So my
intuition would (and obviously did!) expect double precision throughout the
calculation.

Kind regards,
Thomas



- Original Message -
From: James Richters 
To: 'FPC-Pascal users discussions' 
Sent: Tuesday, February 6, 2024, 13:44:37
Subject: [fpc-pascal] Floating point question

I don't think you were doing anything wrong; that's what I am simply trying
to point out.  If you ran your code on Turbo Pascal 7.0, you would not have
an issue; it would be fine.  There is no reason for a programmer to expect
this behavior, and it's very confusing when it does come up.

There is a bug here and it should be acknowledged instead of defended.
Discovering bugs is a good thing, it can lead to improvements to make the
system better for everyone, but only if the discovery is learned from and
acted upon.  I'm sure everyone here can relate to how frustrating it can be
to encounter a bug and have no idea whatsoever what the problem is.
Undiscovered bugs are much worse than those which have been figured out.  

I think this is one that can be very frustrating for a lot of people, and
it's very difficult to figure out what's happening, because everything
happens correctly >99.9% of the time.  If you put anything from x.001 to
x.999 it has no problem; if you put x.0, you have a problem.  Put as many
decimals as you like to see that there is no reason why any programmer should
expect this behavior.  On top of that, x has no problem, and many
programmers use x.0 when x would have been fine; they are just in the habit
of putting the .0, and in Turbo Pascal there was never a problem with doing
this.

I am glad we at least have an explanation, but how many others are going to
need to re-discover this issue that should not even be an issue?
It can still be a problem for people who didn't happen to come across this;
I didn't expect it to be an issue.  While compiling with -CF64 or using
{$MINFPCONSTPREC 64} fixes it for programs that use doubles, there is no
good solution I can find for programs that use extended, because you can't
put 80 into either of those.  So for extended programs the only solution I
can think of at the moment is to go through the WHOLE thing and replace all
the x.0's with x, which I have started doing, but it's a tedious chore.

I appreciate the discussion here, because I had noticed inaccuracies from
time to time, but I was never able to get far enough in to realize this is
what was happening.  It's very frustrating indeed, and I think if something
can be done to save others this frustration and unexpected behavior, it
would be helpful.

James

-Original Message-
From: fpc-pascal  On Behalf Of
Thomas Kurz via fpc-pascal
Sent: Tuesday, February 6, 2024 6:59 AM
To: 'FPC-Pascal users discussions' 
Cc: Thomas Kurz 
Subject: Re: [fpc-pascal] Floating point question

I'd like to apologize, because my intention hasn't been to raise
controversial discussions. I'm very thankful for the explanation. From the
beginning, I knew that

Re: [fpc-pascal] Floating point question

2024-02-06 Thread Thomas Kurz via fpc-pascal
Well, this is funny because I *did* compile it on DOS with Turbo Pascal 5.5, 
and I got the correct result there. Cross-compiling with FPC to msdos target 
gave the "wrong" (aka unexpected) result again. There were so many factors 
involved which caused great confusion.

From my point of view, an expression being assigned to a variable of type
"double" should be evaluated with double precision, not single. This is
obviously the way integers are handled, by internally using int64. A few
weeks ago, I had inconsistent behavior between x64 and x86 modes, and it
turned out that 32-bit code did internal castings to int64, thus resulting in
the expected value, whereas 64-bit cannot cast to int128 (because there is no
int128) and thus gives an unexpected result (well, at least to me). So my
intuition would (and obviously did!) expect double precision throughout the
calculation.

Kind regards,
Thomas



- Original Message - 
From: James Richters 
To: 'FPC-Pascal users discussions' 
Sent: Tuesday, February 6, 2024, 13:44:37
Subject: [fpc-pascal] Floating point question

I don't think you were doing anything wrong; that's what I am simply trying
to point out.  If you ran your code on Turbo Pascal 7.0, you would not have
an issue; it would be fine.  There is no reason for a programmer to expect
this behavior, and it's very confusing when it does come up.

There is a bug here and it should be acknowledged instead of defended.
Discovering bugs is a good thing, it can lead to improvements to make the
system better for everyone, but only if the discovery is learned from and
acted upon.  I'm sure everyone here can relate to how frustrating it can be
to encounter a bug and have no idea whatsoever what the problem is.
Undiscovered bugs are much worse than those which have been figured out.  

I think this is one that can be very frustrating for a lot of people, and
it's very difficult to figure out what's happening, because everything
happens correctly >99.9% of the time.  If you put anything from x.001 to
x.999 it has no problem; if you put x.0, you have a problem.  Put as many
decimals as you like to see that there is no reason why any programmer should
expect this behavior.  On top of that, x has no problem, and many
programmers use x.0 when x would have been fine; they are just in the habit
of putting the .0, and in Turbo Pascal there was never a problem with doing
this.

I am glad we at least have an explanation, but how many others are going to
need to re-discover this issue that should not even be an issue?
It can still be a problem for people who didn't happen to come across this;
I didn't expect it to be an issue.  While compiling with -CF64 or using
{$MINFPCONSTPREC 64} fixes it for programs that use doubles, there is no
good solution I can find for programs that use extended, because you can't
put 80 into either of those.  So for extended programs the only solution I
can think of at the moment is to go through the WHOLE thing and replace all
the x.0's with x, which I have started doing, but it's a tedious chore.

I appreciate the discussion here, because I had noticed inaccuracies from
time to time, but I was never able to get far enough in to realize this is
what was happening.  It's very frustrating indeed, and I think if something
can be done to save others this frustration and unexpected behavior, it
would be helpful.

James

-Original Message-
From: fpc-pascal  On Behalf Of
Thomas Kurz via fpc-pascal
Sent: Tuesday, February 6, 2024 6:59 AM
To: 'FPC-Pascal users discussions' 
Cc: Thomas Kurz 
Subject: Re: [fpc-pascal] Floating point question

I'd like to apologize, because my intention hasn't been to raise
controversial discussions. I'm very thankful for the explanation. From the
beginning, I knew that the error was on my side, but I didn't know *what* I
was doing wrong.

Again, thanks for helping.

Kind regards,
Thomas



- Original Message -
From: James Richters via fpc-pascal 
To: 'FPC-Pascal users discussions' 
Sent: Sunday, February 4, 2024, 18:25:39
Subject: [fpc-pascal] Floating point question

I agree with Adriaan 100%
 
"New behaviour: floating point constants are now considered to be of the
lowest precision which doesn't cause data loss"

We are getting data loss. So it's doing it WRONG.

So we are all living with a stupid way of doing things so some Delphi code
won't have warnings?

Who came up with this???

The old way was CORRECT.  Instead of changing it for everyone, making it
wrong for most users, a compiler directive should have been needed to get
rid of the warnings, or the change should have applied ONLY in Mode Delphi,
not making everything incorrect for everyone unless you add a directive.
The problem with this is that no one is expecting to need to add a directive
to do things right.

Consider this:
 
Var
  MyVariable : Extended;

MyVariable := 8427 + 33 / 1440.0;

Since I am storing the result in an Extended, I DO NOT EXPECT the 33/1440 to
be a SINGLE, that is 

Re: [fpc-pascal] Floating point question

2024-02-06 Thread James Richters via fpc-pascal
I don't think you were doing anything wrong; that's what I am simply trying
to point out.  If you ran your code on Turbo Pascal 7.0, you would not have
an issue; it would be fine.  There is no reason for a programmer to expect
this behavior, and it's very confusing when it does come up.

There is a bug here and it should be acknowledged instead of defended.
Discovering bugs is a good thing, it can lead to improvements to make the
system better for everyone, but only if the discovery is learned from and
acted upon.  I'm sure everyone here can relate to how frustrating it can be
to encounter a bug and have no idea whatsoever what the problem is.
Undiscovered bugs are much worse than those which have been figured out.  

I think this is one that can be very frustrating for a lot of people, and
it's very difficult to figure out what's happening, because everything
happens correctly >99.9% of the time.  If you put anything from x.001 to
x.999 it has no problem; if you put x.0, you have a problem.  Put as many
decimals as you like to see that there is no reason why any programmer should
expect this behavior.  On top of that, x has no problem, and many
programmers use x.0 when x would have been fine; they are just in the habit
of putting the .0, and in Turbo Pascal there was never a problem with doing
this.

I am glad we at least have an explanation, but how many others are going to
need to re-discover this issue that should not even be an issue?
It can still be a problem for people who didn't happen to come across this;
I didn't expect it to be an issue.  While compiling with -CF64 or using
{$MINFPCONSTPREC 64} fixes it for programs that use doubles, there is no
good solution I can find for programs that use extended, because you can't
put 80 into either of those.  So for extended programs the only solution I
can think of at the moment is to go through the WHOLE thing and replace all
the x.0's with x, which I have started doing, but it's a tedious chore.

I appreciate the discussion here, because I had noticed inaccuracies from
time to time, but I was never able to get far enough in to realize this is
what was happening.  It's very frustrating indeed, and I think if something
can be done to save others this frustration and unexpected behavior, it
would be helpful.

James

-Original Message-
From: fpc-pascal  On Behalf Of
Thomas Kurz via fpc-pascal
Sent: Tuesday, February 6, 2024 6:59 AM
To: 'FPC-Pascal users discussions' 
Cc: Thomas Kurz 
Subject: Re: [fpc-pascal] Floating point question

I'd like to apologize, because my intention hasn't been to raise
controversial discussions. I'm very thankful for the explanation. From the
beginning, I knew that the error was on my side, but I didn't know *what* I
was doing wrong.

Again, thanks for helping.

Kind regards,
Thomas



- Original Message -
From: James Richters via fpc-pascal 
To: 'FPC-Pascal users discussions' 
Sent: Sunday, February 4, 2024, 18:25:39
Subject: [fpc-pascal] Floating point question

I agree with Adriaan 100%
 
"New behaviour: floating point constants are now considered to be of the
lowest precision which doesn't cause data loss"

We are getting data loss. So it's doing it WRONG.

So we are all living with a stupid way of doing things so some Delphi code
won't have warnings?

Who came up with this???

The old way was CORRECT.  Instead of changing it for everyone, making it
wrong for most users, a compiler directive should have been needed to get
rid of the warnings, or the change should have applied ONLY in Mode Delphi,
not making everything incorrect for everyone unless you add a directive.
The problem with this is that no one is expecting to need to add a directive
to do things right.

Consider this:
 
Var
  MyVariable : Extended;

MyVariable := 8427 + 33 / 1440.0;

Since I am storing the result in an Extended, I DO NOT EXPECT the 33/1440 to
be a SINGLE; that is NUTS!!
I expect it all to be done in Extended. Why would anyone expect the contents
of MyVariable to be butchered by storing the 33/1440 in single precision?

In other words
I expect the result of these both to be the same:

program TESTDBL1 ;

Var
AA : Extended;
BB : Extended;
CC : Extended;
DD : Extended;
EE : Extended;

begin
   AA := 8427;
   BB := 33;
   CC := 1440.0;
   DD := AA+BB/CC;
   EE := 8427+33/1440.0;
   WRITELN ( 'DD =' , DD : 20 : 20 ) ;
   WRITELN ( 'EE =' , EE : 20 : 20 ) ;
end.

But they are NOT
DD =8427.022916625000
EE =8427.022460937500

EE is WRONG and can never be considered right.  Why would ANY user with the
code above expect that the 33/1440 would be done as a single, thus causing a
loss of precision?

Again:
"New behaviour: floating point constants are now considered to be of the
lowest precision which doesn't cause data loss"

This was NOT done in the lowest precision which doesn't cause data loss... we
lost data!  We are no longer at Extended precision; anything at all we use
EE for is WRONG.

This is CLEARLY WRONG!  

Re: [fpc-pascal] Floating point question

2024-02-06 Thread Thomas Kurz via fpc-pascal
I'd like to apologize, because my intention hasn't been to raise
controversial discussions. I'm very thankful for the explanation. From the
beginning, I knew that the error was on my side, but I didn't know *what* I
was doing wrong.

Again, thanks for helping.

Kind regards,
Thomas



- Original Message - 
From: James Richters via fpc-pascal 
To: 'FPC-Pascal users discussions' 
Sent: Sunday, February 4, 2024, 18:25:39
Subject: [fpc-pascal] Floating point question

I agree with Adriaan 100%
 
"New behaviour: floating point constants are now considered to be of the lowest 
precision which doesn't cause data loss"

We are getting data loss. So it's doing it WRONG.

So we are all living with a stupid way of doing things so some Delphi code 
won't have warnings?

Who came up with this???

The old way was CORRECT.  Instead of changing it for everyone, making it
wrong for most users, a compiler directive should have been needed to get
rid of the warnings, or the change should have applied ONLY in Mode Delphi,
not making everything incorrect for everyone unless you add a directive.
The problem with this is that no one is expecting to need to add a directive
to do things right.

Consider this:
 
Var
  MyVariable : Extended;

MyVariable := 8427 + 33 / 1440.0;

Since I am storing the result in an Extended, I DO NOT EXPECT the 33/1440 to
be a SINGLE; that is NUTS!!
I expect it all to be done in Extended. Why would anyone expect the contents
of MyVariable to be butchered by storing the 33/1440 in single precision?

In other words
I expect the result of these both to be the same:

program TESTDBL1 ;

Var
AA : Extended;
BB : Extended;
CC : Extended;
DD : Extended;
EE : Extended;

begin
   AA := 8427;
   BB := 33;
   CC := 1440.0;
   DD := AA+BB/CC;
   EE := 8427+33/1440.0;
   WRITELN ( 'DD =' , DD : 20 : 20 ) ;
   WRITELN ( 'EE =' , EE : 20 : 20 ) ;
end.

But they are NOT
DD =8427.022916625000
EE =8427.022460937500

EE is WRONG and can never be considered right.  Why would ANY user with the
code above expect that the 33/1440 would be done as a single, thus causing a
loss of precision?

Again:
"New behaviour: floating point constants are now considered to be of the lowest 
precision which doesn't cause data loss"

This was NOT done in the lowest precision which doesn't cause data loss... we
lost data!  We are no longer at Extended precision; anything at all we use EE
for is WRONG.

This is CLEARLY WRONG!  The default should be the old way, and if you don't
like the Delphi warnings, you can set a switch to do it this new, stupider,
and WRONG way.

I strongly feel this should be reverted; it's just wrong.  This makes no
sense to me at all.  It's wrong to need a compiler directive to do things
the way the vast majority expect them to be done; the directive should be
needed by those few who even noticed the warnings in Delphi, and they were
just warnings, not a substantial reduction in precision.

James

>But not at the price of loss in precision! Unless an explicit compiler switch
>like --fast-math is passed




Re: [fpc-pascal] Floating point question

2024-02-06 Thread Thomas Kurz via fpc-pascal
Thank you all

Finally I understand what's going wrong and can take care of that.

I'm now using "{$MINFPCONSTPREC 64}" and have the correct result. Again,
thank you for pointing me to that behavior!
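
For anyone skimming the thread later, that fix looks like this (a minimal
sketch of the directive in use):

program WithDirective;
{$MINFPCONSTPREC 64}   { constants get at least double precision }
var
  TT : Double;
begin
  TT := 8427 + 33 / 1440.0;
  WRITELN ( 'tt = ', TT : 20 : 20 ) ;
end.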



- Original Message - 
From: Adriaan van Os via fpc-pascal 
To: FPC-Pascal users discussions 
Sent: Sunday, February 4, 2024, 13:50:48
Subject: [fpc-pascal] Floating point question

Jonas Maebe via fpc-pascal wrote:
> On 03/02/2024 18:42, James Richters via fpc-pascal wrote:
>> Constants are also evaluated wrong, you don’t know what that constant
>> is going to be used for, so all steps of evaluating a constant MUST be
>> done in extended by the compiler, or the answer is just wrong.

> See 
> https://wiki.freepascal.org/User_Changes_2.2.0#Floating_point_constants 
> and https://www.freepascal.org/daily/doc/prog/progsu19.html

I think this discussion shows that the 2.2 compiler change was a bad idea
(for modes other than Delphi).

Regards,

Adriaan van Os