Re: [fpc-pascal] Floating point question

2024-02-22 Thread Peter B via fpc-pascal

On 22/02/2024 14:22, Jean SUZINEAU via fpc-pascal wrote:

As far as I know Extended is not supported on Linux.


This is wrong, sorry.  I'm using Extended on Linux and it works just fine.


Cheers,
Peter
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-22 Thread Marco van de Voort via fpc-pascal



On 22-2-2024 at 15:08, Thomas Kurz via fpc-pascal wrote:

If you're using Win64, then the answer is simple: x86_64-win64 unlike any
other x86 target does not support Extended, so neither the compiler nor the
code in runtime will ever calculate anything with that precision.

That's another thing I've never understood. How can it depend on the OS? It's the CPU 
which does math, and I don't understand what the OS has to do with that? If amd64 
architecture didn't support the extended-type at all, I'd say "ok". But it's 
supported on Linux but not on Windows? Huh?


The problem is not that only Extended is deprecated on win64, but the 
whole of x87. To replace it, the Windows 64-bit ABI points to SSE2 
floating point math, which only goes up to the 64-bit Double.


I.e. it is not that Microsoft merely skips the few extra bits of an 
Extended during an x87 context save; rather, it won't save the x87 
state at all, and will only save the SSE2 state.
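
A quick way to see the practical consequence on any given target is to check 
what FPC maps Extended to there. A minimal sketch (the program name is 
illustrative; the reported sizes are the usual ones and may vary with padding):

program ExtSize;
Begin
   { Typically 10 on i386 and x86_64-linux, where Extended is the real
     80-bit x87 type, and 8 on x86_64-win64, where Extended is just an
     alias for Double. }
   WRITELN ( 'SizeOf(Extended) = ', SizeOf(Extended) );
   WRITELN ( 'SizeOf(Double)   = ', SizeOf(Double) );
End.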


___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-22 Thread James Richters via fpc-pascal
I seem to recall there is some way to get 80-bit Extended on 64-bit Windows, 
but it involved compiling a 64-bit version of FPC myself somehow, and I can't 
remember what it was all about. I'm pretty sure I was doing that for a while, 
but then I wanted to upgrade and couldn't remember how it was all done, so I 
went back to Win32, just to get 80-bit Extended.   It's something to do with 
the cross compiler to 64 bit making Extended a Double on 64-bit, but if you 
weren't cross compiling and had just a native 64-bit compiler, then Extended is 
80 bits again.

James

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-22 Thread Jean SUZINEAU via fpc-pascal
I see that Wikipedia is not very clear on this,  you just find "x86" 
mentioned, but for Pascal: "this Extended type is available on 16, 32 
and 64-bit platforms, possibly with padding"


https://en.wikipedia.org/wiki/Extended_precision
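
For what the extra bits buy you in practice, here is a minimal sketch 
(variables are used to keep the additions out of the compile-time constant 
folding discussed elsewhere in this thread). On targets where Extended really 
is the 80-bit x87 type the first line should print TRUE; on x86_64-win64, 
where Extended is an alias for Double, both lines should print FALSE:

program ExtVsDouble;
Var
   one, tiny, e : Extended;
   d : Double;
Begin
   one  := 1.0;
   tiny := 1e-18;     { below Double's resolution at 1.0, but within Extended's }
   e := one + tiny;
   d := one + tiny;   { storing the sum in a Double drops the extra bits }
   WRITELN ( 'Extended keeps it : ', e > 1.0 );
   WRITELN ( 'Double drops it   : ', d > 1.0 );
End.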

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-22 Thread Thomas Kurz via fpc-pascal
> For constants, the compiler will choose a type and consequently the precision.
> Jonas and others have explained the rules that the compiler uses.
>
> If you don't like the rules that the compiler uses, you can set a type for your
> constants. When you explicitly set a type, you are also specifying the
> precision of the calculation.

If the ruleset won't change - and from what I've read from the developers, it 
won't change - could we please have the compiler issue a warning (or a hint) 
when a loss in precision happens?

E.g. "Warning: constant reduced to single precision. Use {$MINFPCONSTPREC} or 
Double() to keep full precision."

I am aware of this behavior now, but I'd nevertheless like to be warned if 
I forget about either of those.
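
For reference, the directive mentioned above already exists in FPC; a minimal 
sketch of how it might be used (the program name is illustrative, and the exact 
effect follows the documentation of {$MINFPCONSTPREC}):

program MinPrecDemo;
{$MINFPCONSTPREC 64}   { store untyped floating point constants with at least Double precision }
Const
   C = 1/3.5;          { with the directive, this should be evaluated at (at least) double precision }
Begin
   WRITELN ( C:22:20 );
End.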

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-22 Thread Thomas Kurz via fpc-pascal
Aaaah, ok. Thank you very much for clarifying this long-standing question!


- Original Message - 
From: Tomas Hajny via fpc-pascal 
To: FPC-Pascal users discussions 
Sent: Thursday, February 22, 2024, 15:25:34
Subject: [fpc-pascal] Floating point question

On 2024-02-22 15:08, Thomas Kurz via fpc-pascal wrote:
>> If you're using Win64, then the answer is simple: x86_64-win64 unlike 
>> any
>> other x86 target does not support Extended, so neither the compiler 
>> nor the
>> code in runtime will ever calculate anything with that precision.

> That's another thing I've never understood. How can it depend on the
> OS? It's the CPU which does math, and I don't understand what the OS
> has to do with that? If amd64 architecture didn't support the
> extended-type at all, I'd say "ok". But it's supported on Linux but
> not on Windows? Huh?

The reason is that the operating system is, among other things, responsible for 
controlling multitasking. That includes saving the context of the 
originally active task and restoring the context of the new task to the 
state it was in before it was interrupted. If Win64 doesn't guarantee 
restoring the FPU registers specific to the extended type, using this type 
would get you into trouble despite its support in the FPU.

Tomas
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-22 Thread Thomas Kurz via fpc-pascal
> for example, here on Earth, (7 decimal places) 0.0000001 degree latitude is 
> ""only"" 1cm... (8 decimal places) 0.00000001 degree latitude is ""only"" 
> 1mm... 
> longitude, on the other hand, is variable such that 7 decimal places at the 
> equator is the same as latitude, but as you move toward the poles, it changes 
> such that 4 decimal places is 20cm...

My initial problem (i.e. why I asked the original question -- which I meanwhile 
do regret *g*) was that we do use floating-point numbers for date and time 
operations. (TDateTime = Double)

And I had discrepancies of about 40 seconds when converting between 
astronomical dates and TDateTime. This was how it all started...

We need approx. 5 decimals to represent one second (because the non-fractional 
part is considered to be the day number). So "single" precision isn't 
acceptable here. If TDateTime were a Unix timestamp, it wouldn't matter. But 
since TDateTime is a Julian day number (maybe with an offset, but that's 
irrelevant here), it unfortunately does matter.
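
To put a number on that, here is a minimal sketch (the day number is an 
arbitrary illustration) of what happens to a one-second step when a 
TDateTime-style value is squeezed into Single:

program DatePrecision;
Var
   d : Double;
   s : Single;
Begin
   d := 45000.0 + 1.0/86400.0;   { one second past midnight of an arbitrary day number }
   s := d;                       { the same value forced into Single }
   WRITELN ( 'as Double : ', d:0:10 );
   WRITELN ( 'as Single : ', s:0:10 );
   WRITELN ( 'lost      : ', Abs(d - s)*86400.0:0:1, ' seconds' );
End.

At day numbers of this magnitude the spacing between adjacent Single values is 
several minutes, so the one-second step simply disappears.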

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-22 Thread Tomas Hajny via fpc-pascal

On 2024-02-22 15:08, Thomas Kurz via fpc-pascal wrote:
If you're using Win64, then the answer is simple: x86_64-win64 unlike any
other x86 target does not support Extended, so neither the compiler nor the
code in runtime will ever calculate anything with that precision.


That's another thing I've never understood. How can it depend on the
OS? It's the CPU which does math, and I don't understand what the OS
has to do with that? If amd64 architecture didn't support the
extended-type at all, I'd say "ok". But it's supported on Linux but
not on Windows? Huh?


The reason is that the operating system is, among other things, responsible for 
controlling multitasking. That includes saving the context of the 
originally active task and restoring the context of the new task to the 
state it was in before it was interrupted. If Win64 doesn't guarantee 
restoring the FPU registers specific to the extended type, using this type 
would get you into trouble despite its support in the FPU.


Tomas
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-22 Thread Jean SUZINEAU via fpc-pascal

On 22/02/2024 at 15:08, Thomas Kurz via fpc-pascal wrote:

But it's supported on Linux but not on Windows? Huh?

As far as I know Extended is not supported on Linux.
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-22 Thread Thomas Kurz via fpc-pascal
> If you're using Win64, then the answer is simple: x86_64-win64 unlike any
> other x86 target does not support Extended, so neither the compiler nor the
> code in runtime will ever calculate anything with that precision.

That's another thing I've never understood. How can it depend on the OS? It's 
the CPU which does math, and I don't understand what the OS has to do with 
that? If amd64 architecture didn't support the extended-type at all, I'd say 
"ok". But it's supported on Linux but not on Windows? Huh?

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-20 Thread James Richters via fpc-pascal
>>If you're using Win64, then the answer is simple: x86_64-win64 unlike any
other x86 target does not support Extended, so neither the compiler nor the
code in runtime will ever calculate anything with that precision.
To clarify, I am using i386-win32 on a 64-bit system, specifically because Extended
is really just a Double on x86_64-win64.  All my test programs were done
with Win32.

>>you see the pattern? You simply have to rotate the six digits in a certain
manner ...
I see it now that you pointed it out and I think that it is really cool that
it's the same digits rotated!   Thanks!

>>I don't think you need the cast to extended around the divisions;   
Correct, I don't need to do this and I should not need to do it, but I also
should not need to re-cast the terms of the division, but with constants
that's what I must do to get the correct result.

Also, my programs would never re-cast the constants, I was just making it
clear that a byte divided by a single in fact does produce a correct
extended answer when done with variables, and I didn't want any doubt that
it was dividing a byte by a single.
This is the correct behavior.  My point was that the problem isn't that the
3.5 was stored or cast as a single; it's valid for it to be a single, and
that should make no difference at all, and in fact my results are exactly the
same without all the casting.
But the only way to get the correct answer with constants is to do what
should be an unnecessary cast to extended of terms in my expression:

program Const_Vs_Var;

Const
   A_const = 1;
   B_const = 3.5;
Var
   A_Var : Byte;
   B_Var : Single;
   Const_Ans1, Var_Ans1, Difference1 : Extended;
   Const_Ans2, Var_Ans2, Difference2 : Extended;


Begin
   A_Var := A_Const;
   B_Var := B_Const;

   Const_Ans1  := A_Const/B_Const;
   Var_Ans1    := A_Var/B_Var;
   Difference1 := Var_Ans1-Const_Ans1;
   Const_Ans2  := A_Const/Extended(B_Const);   // I should not need to cast B_Const in this way
   Difference2 := Var_Ans1-Const_Ans2;
   
   WRITELN ( '  Const_Ans1 = ', Const_Ans1);
   WRITELN ( 'Var_Ans1 = ', Var_Ans1);
   WRITELN ( ' Difference1 = ', Difference1);
   Writeln;
   WRITELN ( '  Const_Ans2 = ', Const_Ans2);
   WRITELN ( 'Var_Ans1 = ', Var_Ans1);
   WRITELN ( ' Difference2 = ', Difference2);

End.
  Const_Ans1 =  2.85714298486709594727E-0001   // This is a single precision calculation stored in an extended
    Var_Ans1 =  2.85714285714285714282E-0001   // The nice repeating decimal I expect
 Difference1 = -1.27724238804447203649E-0008   // This should have been 0

  Const_Ans2 =  2.85714285714285714282E-0001   // I should not have had to cast Extended(B_Const) to get this
    Var_Ans1 =  2.85714285714285714282E-0001   // The correct answer again, just for clarification
 Difference2 =  0.E+   // Now it is 0 as I expected


>>When casting this way
>>Byte(A_Var)/Single(B_Var)
>>I would expect the division to be done with single precision, but apparently it is done
>>using extended (or another) precision ... on Windows, not on Linux. And this is what
>>causes your headaches.

I would NOT expect this to result in single precision.  When I divide a Byte
by a single in Turbo Pascal and assign it to a Double, the result is correct
in double precision.  
The ONLY way to get a single for an answer in Turbo Pascal is to define a
variable as a single and use that to do that calculation.
MySingle := A_Var/B_Var;

If the variable is a double:
MyDouble := A_Var/B_Var;
Then no matter what B_Var is, whether it's a single or a double, MyDouble is
the same correct number

If I want the result to be a single then:
Single(A_Var/B_Var)  
Should be what I require.

It doesn't matter in Turbo Pascal if I am dividing by a variable defined as
a single or a variable defined as a double, or an undefined constant,
and it does not matter in FPC either as long as my division is done with
variables.

It's FPC with Constants that is making MyByte/MySingle come out as a single,
and that is incorrect.   Division by a single does not force the answer to
be a single.
Please See this:

program Const_Vs_Var;

Const
   A_const = 1;
   B_Const = 3.5;
   C_Const = 7;
Var
   A_Var : Byte;
   B_Var : Single;
   C_Var : Byte;
   VPi : Extended;
   Const_Ans1, Var_Ans1, Difference1 : Extended;
   Const_Ans2, Var_Ans2, Difference2 : Extended;
   Const_Ans3, Var_Ans3, Difference3 : Extended;


Begin
   A_Var := A_Const;
   B_Var := B_Const;
   C_Var := C_Const;
   VPi := Pi;

   Const_Ans1  := A_Const/B_Const;
   Var_Ans1:= A_Var/B_Var;
   Difference1 := Var_Ans1-Const_Ans1;

   Const_Ans2  := A_Const/C_Const;
   Var_Ans2:= A_Var/C_Var;
   Difference2 := Var_Ans2-Const_Ans2;

   Const_Ans3  := Pi/B_Const;
   Var_Ans3:= VPi/B_Var;
   Difference3 := Var_Ans3-Const_Ans3;

   WRITELN ( '  Const_Ans1 = ', Const_Ans1);
   WRITELN ( 'Var_Ans1 = ', Var_Ans1);
   WRITELN ( ' Difference1 = ', Difference1);
   Writeln;
   WRITELN ( '  Const_Ans2 = ', Const_Ans2);
   WRITELN ( '  

Re: [fpc-pascal] Floating point question

2024-02-20 Thread Tomas Hajny via fpc-pascal

On 2024-02-20 08:03, Sven Barth via fpc-pascal wrote:

James Richters via fpc-pascal wrote on Tue., 20 Feb. 2024, 04:42:


I don't know why it would be different in Windows than on Linux.


If you're using Win64, then the answer is simple: x86_64-win64 unlike
any other x86 target does not support Extended, so neither the
compiler nor the code in runtime will ever calculate anything with
that precision.


Well, this probably isn't the (sole) reason in this particular case, 
because the results posted by Michael and coming from Linux (probably 
x86_64) were equal for constants and variables, but less precise than 
the run-time result posted for Windows (unfortunately without 
specifying whether it was i386-win32 or x86_64-win64, but I'd guess 
the former based on the results - exactly due to the reason mentioned by 
you).


Tomas
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-20 Thread Bernd Oppolzer via fpc-pascal

See below ...


On 19.02.2024 at 02:00, James Richters via fpc-pascal wrote:


>And if you have set the precision, then the calculation will be 
identical to the calculation when you use a variable of the same type 
(if not, it's indeed a bug).


This is what I have been trying to point out.  Math with identical 
casting with variables and constants is not the same.


Maybe if I try with a simpler example:

program Const_Vs_Var;

Const
   A_const = Byte(1);
   B_const = Single(3.5);
Var
   A_Var : Byte;
   B_Var : Single;
   Const_Ans1, Var_Ans1 : Extended;

Begin
   A_Var := A_Const;
   B_Var := B_Const;

   Const_Ans1 := Extended(Byte(A_Const)/Single(B_Const));
   Var_Ans1   := Extended(Byte(A_Var)/Single(B_Var));

   WRITELN ( ' Const_Ans1 = ', Const_Ans1);
   WRITELN ( '   Var_Ans1 = ',   Var_Ans1);
End.

 Const_Ans1 =  2.85714298486709594727E-0001
   Var_Ans1 =  2.85714285714285714282E-0001

Windows 10 Calculator shows the answer to be
0.28571428571428571428571428571429  Which matches up with the way 
variables have done this math, not the way constants have done it.




you don't need a calculator for 2 / 7 or 1 / 3.5. There is a simple rule 
for the decimal representation when dividing by 7:


1 / 7 = 0.142857 ...   repeat ad infinitum
2 / 7 = 0.285714
3 / 7 = 0.428571
4 / 7 = 0.571428
5 / 7 = 0.714285
6 / 7 = 0.857142

you see the pattern? You simply have to rotate the six digits in a 
certain manner ...



I am explicitly casting everything I possibly can.



I don't think you need the cast to extended around the divisions;
the divisions are done at different precision, which makes your problem,
but the cast to extended at the end doesn't help ... it will be done 
anyway, because the target field is extended.

The problem indeed is that the division is done differently for consts 
and for vars,
and this seems to be the case for Windows only, as another poster 
pointed out.

This seems to be a real bug.

When casting this way

Byte(A_Var)/Single(B_Var)

I would expect the division to be done with single precision, but 
apparently it is done using extended (or another) precision ... on 
Windows, not on Linux. And this is what causes your headaches.


Without the :20:20 you can see that the result of each of these is in 
fact extended, but they are VERY different numbers, even though my 
casting is IDENTICAL , and I can’t make it any more the same, the 
results are very different.Math with Variables allows the result of a 
low precision entity, in this case a Byte, divided by a low precision 
entity, in this case a Single, to be calculated and stored in an 
Extended, Math with Constants does not allow this possibility, and 
this is where all the confusion is coming from.Two identical pieces of 
code not producing the same results.


Math with Constants is NOT the same as Math with Variables, and if 
this one thing was fixed, then all the other problems go away.


I am doing:

Const_Ans1 := Extended(Byte(A_Const)/Single(B_Const));

Var_Ans1:= Extended(Byte(A_Var)/Single(B_Var));

Just to make a point, but the code:

Const_Ans1 := A_Const/B_Const;

Var_Ans1:= A_Var/B_Var;

Should also produce identical results without re-casting, because 
A_Const and A_Var are both defined to be a Byte, and B_Const and B_Var 
are both defined to be a Single, and Const_Ans1 and Var_Ans1 are both 
defined to be Extended.


Why are the results different?

As I tried to explain before, if I force all constants to be Extended:

Const_Ans1 := Extended(Extended(A_Const)/Extended(B_Const));

Then I do get the correct results, but this should not be needed, and 
this casting is wrong, because a byte divided by a single should be 
able to be extended without first storing them in extended entities, 
the same as it is with variables.


With variables I do not need to re-cast every single term in an 
expression as Extended to get an Extended answer.


With constants this is the ONLY way I can get an extended answer.

Before the changes to 2.2, all constants WERE at highest precision, so 
the math involving constants never had to bother with considering that 
a low precision number divided by a low precision number could end up 
as an extended, because there were no low precision constants at all. 
But now there are, and that’s really fine, because we often have low 
precision variables. However, the math needs to be done the same way, 
whether with constants or variables, to produce identical results; so 
now math with constants also has to take into consideration that math 
with low precision entities can and often does result in a high 
precision answer.


To demonstrate that a low precision entity divided by a low precision 
entity should always be able to be an Extended, use this example with my 
constants as BYTES so there can be no lower precision:


program Const_Vs_Var;

Const
   A_const = Byte(2);
   B_const = Byte(7);
Var
   A_Var : Byte;
   B_Var : Byte;
   Const_Ans1, Const_Ans2, Var_Ans1 : Extended;

Begin
   A_Var := Byte(A_Const);
   B_Var := Byte(B_Const);


Re: [fpc-pascal] Floating point question

2024-02-19 Thread Sven Barth via fpc-pascal
James Richters via fpc-pascal wrote on Tue., 20 Feb. 2024, 04:42:

> I don't know why it would be different in Windows than on Linux.


If you're using Win64, then the answer is simple: x86_64-win64 unlike any
other x86 target does not support Extended, so neither the compiler nor the
code in runtime will ever calculate anything with that precision.

Regards,
Sven
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-19 Thread James Richters via fpc-pascal
>I would not put too much trust in Windows calculator, since there you have
>no control over the precision at all.

The actual CORRECT answer according to
https://www.wolframalpha.com/input?i=1%2F3.5 is
0.285714 Repeating forever

Which is what I get on Windows only when using Variables.
   Var_Ans1 =  2.85714285714285714282E-0001
in which, as you can see, the .285714 repeats exactly 3 times with two more
correct digits, for a total of 20 correct digits before we run out of
precision. 

>It seems to be a windows-specific problem. Here is the result of your
program when executed on Linux:
>  Const_Ans1 =  2.85714298486709594727E-0001
>Var_Ans1 =  2.85714298486709594727E-0001

>As you can see, the result is identical.
Great, Linux is consistently giving you the wrong answer, and I notice it's
the same wrong answer I get when I use constants.

I don't know why it would be different in Windows than on Linux. 
I am curious what you get in Linux for:
A_const = (2/7);   // should be 0.285714 Repeating forever also
Or 
B_const = (2/7)-(1/3.5);  //should always be 0;

Just for fun I ran it on Turbo Pascal 7.0 for DOS and got:
 Const_Ans1 =  2.85714285714286E-0001
 Var_Ans1 =  2.85714285714286E-0001

I also noticed that re-casting constants is not allowed by Turbo Pascal 7.0,
so I removed the unnecessary re-casting so it would compile, but I did get
the expected answer even without re-casting it... as I expected I should. 
The fact that Turbo Pascal gives me the correct answer to 14 digits even
though in Turbo Pascal I also divided a byte by a single shows that I should
expect my pascal programs to provide THIS correct answer, 0.285714 repeating
until you run out of precision,  not something else. And it shows that a
byte divided by a single is in fact able to be an extended... it also proves
that in Pascal the result of a byte divided by a single is NOT limited to
single precision. 

2.85714298486709594727E-0001 makes no sense to me at all. It's like it did
the calculation in Single precision, then stored it in an extended, but if I
wanted to work in single precision, I would not have set my variable to
extended.

There is some flawed logic somewhere that says if you do math with a single,
the result is a single, but this is simply not correct: you can divide a
byte by a byte and get an extended, and you can divide a single by a single
and get an extended; Pascal has ALWAYS worked that way.
Forcing the result of an equation to be a single because one term is a
single is just not right. 

James

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-19 Thread Michael Van Canneyt via fpc-pascal




On Sun, 18 Feb 2024, James Richters via fpc-pascal wrote:


And if you have set the precision, then the calculation will be identical to 
the calculation when you use a variable of the same type (if not, it's indeed a 
bug).


This is what I have been trying to point out.  Math with identical casting with 
variables and constants is not the same.
Maybe if I try with a simpler example:

program Const_Vs_Var;

Const
  A_const = Byte(1);
  B_const = Single(3.5);
Var
  A_Var : Byte;
  B_Var : Single;
  Const_Ans1, Var_Ans1 : Extended;

Begin
  A_Var := A_Const;
  B_Var := B_Const;

  Const_Ans1 := Extended(Byte(A_Const)/Single(B_Const));
  Var_Ans1   := Extended(Byte(A_Var)/Single(B_Var));

  WRITELN ( ' Const_Ans1 = ', Const_Ans1);
  WRITELN ( '   Var_Ans1 = ',   Var_Ans1);
End.

Const_Ans1 =  2.85714298486709594727E-0001
  Var_Ans1 =  2.85714285714285714282E-0001

Windows 10 Calculator shows the answer to be
0.28571428571428571428571428571429  Which matches up with the way variables 
have done this math, not the way constants have done it.


I would not put too much trust in Windows calculator, since there you have
no control over the precision at all.

It seems to be a windows-specific problem. Here is the result of your
program when executed on Linux:

 Const_Ans1 =  2.85714298486709594727E-0001
   Var_Ans1 =  2.85714298486709594727E-0001

As you can see, the result is identical.

As for the explanation, I will have to leave that to the compiler developers.

Michael.
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-19 Thread James Richters via fpc-pascal
>And if you have set the precision, then the calculation will be identical to 
>the calculation when you use a variable of the same type (if not, it's indeed 
>a bug).
 
This is what I have been trying to point out.  Math with identical casting with 
variables and constants is not the same.
Maybe if I try with a simpler example:
 
program Const_Vs_Var;
 
Const
   A_const = Byte(1);
   B_const = Single(3.5);
Var
   A_Var : Byte;
   B_Var : Single;
   Const_Ans1, Var_Ans1 : Extended;
 
Begin
   A_Var := A_Const;
   B_Var := B_Const;
 
   Const_Ans1 := Extended(Byte(A_Const)/Single(B_Const));
   Var_Ans1   := Extended(Byte(A_Var)/Single(B_Var));
 
   WRITELN ( ' Const_Ans1 = ', Const_Ans1);
   WRITELN ( '   Var_Ans1 = ',   Var_Ans1);
End.
 
Const_Ans1 =  2.85714298486709594727E-0001
   Var_Ans1 =  2.85714285714285714282E-0001
 
Windows 10 Calculator shows the answer to be
0.28571428571428571428571428571429  Which matches up with the way variables 
have done this math, not the way constants have done it.
 
I am explicitly casting everything I possibly can.
 
Without the :20:20 you can see that the result of each of these is in fact 
extended, but they are VERY different numbers, even though my casting is 
IDENTICAL, and I can’t make it any more the same; the results are very 
different.  Math with Variables allows the result of a low precision entity, in 
this case a Byte, divided by a low precision entity, in this case a Single, to 
be calculated and stored in an Extended. Math with Constants does not allow 
this possibility, and this is where all the confusion is coming from.  Two 
identical pieces of code not producing the same results.
 
Math with Constants is NOT the same as Math with Variables, and if this one 
thing was fixed, then all the other problems go away.
I am doing:  
   Const_Ans1 := Extended(Byte(A_Const)/Single(B_Const));
   Var_Ans1   := Extended(Byte(A_Var)/Single(B_Var));
 
Just to make a point, but the code:
 
   Const_Ans1 := A_Const/B_Const;
   Var_Ans1   := A_Var/B_Var;
 
Should also produce identical results without re-casting, because A_Const  and 
A_Var are both defined to be a Byte, and B_Const and B_Var are both defined to 
be a Single, and Const_Ans1 and Var_Ans1 are both defined to be Extended. 
 
Why are the results different?  
 
As I tried to explain before, if I force all constants to be Extended:
Const_Ans1 := Extended(Extended(A_Const)/Extended(B_Const));
 
Then I do get the correct results, but this should not be needed, and this 
casting is wrong,  because a byte divided by a single should be able to be 
extended without first storing them in extended entities, the same as it is 
with variables. 
 
With variables I do not need to re-cast every single term in an expression as 
Extended to get an Extended answer. 
With constants this is the ONLY way I can get an extended answer. 
 
Before the changes to 2.2, all constants WERE at highest precision, so the math 
involving constants never had to bother with considering that a low precision 
number divided by a low precision number could end up as an extended, because 
there were no low precision constants at all.  But now there are, and that’s 
really fine, because we often have low precision variables.  However, the math 
needs to be done the same way whether with constants or variables, to produce 
identical results; so now math with constants also has to take into 
consideration that math with low precision entities can and often does result 
in a high precision answer. 
 
To demonstrate that a low precision entity divided by a low precision entity 
should always be able to be an Extended, use this example with my constants as 
BYTES so there can be no lower precision:
 
program Const_Vs_Var;
 
Const
   A_const = Byte(2);
   B_const = Byte(7);
Var
   A_Var : Byte;
   B_Var : Byte;
   Const_Ans1, Const_Ans2, Var_Ans1 : Extended;
 
Begin
   A_Var := Byte(A_Const);
   B_Var := Byte(B_Const);
 
   Const_Ans1 := A_Const/B_Const;
   Var_Ans1   := A_Var/B_Var;
 
   WRITELN ( ' Const_Ans1 = ', Const_Ans1);
   WRITELN ( '   Var_Ans1 = ',   Var_Ans1);
End.
 
Const_Ans1 =  2.85714285714285714282E-0001
   Var_Ans1 =  2.85714285714285714282E-0001
 
Now as you can see math with constants is EXACTLY the same as math with 
variables again, and I did not need to do any ridiculous casting to get the 
correct answer. 
 
I hope this makes sense.  I know exactly what is happening, but I don’t know 
why and I don’t know how to explain it other than to give these examples.
 
1/3.5 == 2/7
 
What should this program produce?
 
program Const_Vs_Var;
 
Const
   A_const = (2/7)-(1/3.5);
Begin
   WRITELN ( ' A_Const = ', A_Const);
End.
 
 
How can this possibly be right?
A_Const = -1.27724238804447203649E-0008
 
 
 
James
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-17 Thread Bernd Oppolzer via fpc-pascal

On 17.02.2024 at 20:18, Florian Klämpfl via fpc-pascal wrote:



const Xconst : single = 1440.0;

var y1, y2 : real;

y1 := 33.0 / 1440.0;

y2 :=  33.0 / Xconst;

the division in the first assignment (to y1) should be done at 
maximum precision, that is,
both constants should be converted by the compiler to the maximum 
available precision and

the division should be done (best at compile time) using this precision.

Constant folding is an optimization technique, so the first expression 
could also be evaluated at run time in the case of a simple compiler 
(constant folding is not something which is mandatory), which means 
that we always have to use full precision (what full means depends on 
the host and target platform, though) for real operations. So either: 
always full precision, with the result that all operations get bloated, 
or some approach to assign a precision to real constants.


no problem here; the result of y1 must be the same, no matter if the 
computation is done at compile time or at run time.
the result should always be computed at the best precision available, 
IMO (maybe controlled by a compiler option,

which I personally would set).

y2: the computation could be done using single precision, because the 
second operand says so.
IMO: even if the first operand was a literal constant which cannot be 
represented exactly in a single FP field


It gets even more hairy if more advanced optimization techniques are 
involved:


Consider

var
   y1,y2 : single;

 y1 := 1440.0;
 y2 := 33.0 / y1;

When constant propagation and constant folding are on (both are 
optimizations), y2 can be calculated at compile time and everything 
reduced to one assignment to y2. So with your proposal the value of y2 
would differ depending on the optimization level.


if y2 is computed at compile time (which is fine), then the result IMO 
is determined by the way the source code is written.
A possible optimization must not change the meaning of the program, 
given by the source code.
So in this case, the compiler would have to do a single precision 
division (if we could agree on the rules that we discussed so far),
and the meaning of the program may not be changed by optimization 
techniques (that is: optimization may not change the
result to a double or extended precision division ... otherwise the 
optimization is wrong).


BTW: many of the ideas about what a compiler should do come from my 30+ 
years of experience with PL/1.
That may be a sort of "déformation professionnelle", as the French call 
it, but that's how it is.


Apart from the proper handling of literal FP constants (which is what we 
discuss here, IMO), there is another topic,

which is IMO also part of the debate:

does

 y2 := 33.1 / y1;

require the division to be done at single precision or not?

We have here a literal constant, which is NOT single (33.1) and a single 
variable operand.
I understood from some postings here, that some people want the 
divisions with singles carried out using
single arithmetic, for performance reasons, so I asked for a single 
division here (in my previous postings).
But IMO that's different in the current implementation ... what do 
others think about this?


I, for my part, would find it strange if the precision of the division 
in this case would depend on the (implicit) type of the operand, that is:

 y2 := 33.015625 / y1;  { single precision, because constant is single - 33 + 1/64 }

 y2 := 33.1 / y1;   { extended precision, because constant is extended }

IMO, both of these divisions should be done at single precision, 
controlled by the type of y1.

But this could be controlled by ANOTHER new option, if someone asks for it.

Kind regards

Bernd
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-17 Thread Florian Klämpfl via fpc-pascal


> On 16.02.2024 at 15:34, Bernd Oppolzer via fpc-pascal wrote:
> 
> On 16.02.2024 at 08:32, Florian Klämpfl via fpc-pascal wrote:
>> On 16.02.2024 at 08:23, Ern Aldo via fpc-pascal wrote:
>>> 
>>>  Compile-time math needs to be as correct as possible. RUN-time math can 
>>> worry about performance.
>> So you are saying when constant propagation is on, an expression should have 
>> a different result than with constant propagation off?
> I don't know exactly, what you mean by constant propagation.
> 
> But IMO, given this (sort of fictive) Pascal code snippet:
> 
> 
> const Xconst : single = 1440.0; 
> 
> var y1, y2 : real; 
> 
> y1 := 33.0 / 1440.0; 
> 
> y2 :=  33.0 / Xconst;
> 
> the division in the first assignment (to y1) should be done at maximum 
> precision, that is, 
> both constants should be converted by the compiler to the maximum available 
> precision and 
> the division should be done (best at compile time) using this precision. 
> 
Constant folding is an optimization technique, so the first expression could 
also be evaluated at run time in the case of a simple compiler (constant folding 
is not something which is mandatory), which means that we always have to use 
full precision (what full means depends on the host and target platform, though) 
for real operations. So either: always full precision, with the result that all 
operations get bloated, or some approach to assign a precision to real constants.

It gets even more hairy if more advanced optimization techniques are involved:

Consider

var
   y1,y2 : single;

 y1 := 1440.0;
 y2 := 33.0 / y1;

When constant propagation and constant folding are on (both are optimizations), 
y2 can be calculated at compile time and everything reduced to one assignment 
to y2. So with your proposal the value of y2 would differ depending on the 
optimization level.
> in the second case, if the compiler supports constants of the reduced type 
> (which I believe it does, 
> 
> no matter how the syntax is), I find it acceptable if the computation is done 
> using single precision, 
> because that's what the developer calls for.
> 
> So probably the answer to your question is: yes.
> 
> Kind regards
> 
> Bernd
> 
> 
> 
> 
> 
> 
> 
> 
> 
> ___
> fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
> https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-17 Thread Bernd Oppolzer via fpc-pascal

On 17.02.2024 at 16:38, Bernd Oppolzer wrote:


IMO, a compiler switch that gives all FP constants the best available 
precision would solve the problem -
BTW: WITHOUT forcing expressions where they appear to use this 
precision, if the other parts of the expression

have lower precision.

In fact, when parsing and compiling the expressions, you always can 
break the problem down to TWO operands
that you have to consider, and if one of them is a literal constant, 
it should not force the type of the operation to

a higher precision ... that's what I would do.



Commenting on my own post (this time):

const xs : single = 1440.5;
      xd : double = 1440.5;
      xu = 1440.5;        /* double or single, depending on new option */

      z : single = 33.0;

y1 := xs / z;          { single precision }
y2 := xd / z;          { double precision }
y3 := xu / z;          { different result, depending on new option }
y4 := 1440.5 / z;      { single, because z dictates it, independent of option }
y5 := 1440.1 / z;      { IMO: single, because z dictates it, independent of option }

y6 := 1440.5 / 33.0;   { depending on new option }


This may be in contrast to what's today done in FPC,
but that's how I (personally) would like to have it done.
Maybe the behaviour without the new option set is the same as now.

Not sure about y5.

Kind regards

Bernd





___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-17 Thread Bernd Oppolzer via fpc-pascal

On 17.02.2024 at 14:38, Michael Van Canneyt via fpc-pascal wrote:


There can be discussion about the rules that the compiler uses when it 
chooses a type, but any given set of rules will always have 
consequences that may or may not be desirable.


Possibly some compiler switches can be invented that modify the 
compiler's

rules for the constant type to use.


If the rules at the moment make this a single:

const xs = 64.015625;   { 64 + 1 / 64 }

because it can be represented correctly (without rounding error) in a 
binary single FP IEEE representation,

and this a double or extended type:

const xd = 64.1;  { no finite representation in binary or hex }

with all the observed effects on computations that the other posters 
here have pointed out


my personal opinion would be:

- never use such (implicitly typed) definitions ... but that's standard 
Pascal, after all

- try to convince the compiler builders that we need a better solution here

IMO, a compiler switch that gives all FP constants the best available 
precision would solve the problem -
BTW: WITHOUT forcing expressions where they appear to use this 
precision, if the other parts of the expression

have lower precision.

In fact, when parsing and compiling the expressions, you always can 
break the problem down to TWO operands
that you have to consider, and if one of them is a literal constant, it 
should not force the type of the operation to

a higher precision ... that's what I would do.

That's why I write all those mails (although I am not an active FPC 
user), because I want all Pascal versions around

to implement a clear and UNDERSTANDABLE language without strange effects.

Kind regards

Bernd


(incidentally, this is one of the reasons the FPC team does not want 
to make
inline variables as Delphi does, since there the type will again be 
determined by

the compiler - just as for constants, leading to ambiguity...)

Michael.
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-17 Thread Michael Van Canneyt via fpc-pascal




On Sat, 17 Feb 2024, wkitty42--- via fpc-pascal wrote:


On 2/16/24 9:57 AM, James Richters via fpc-pascal wrote:
So you are saying when constant propagation is on, an expression should 
have a different result than with constant propagation off?


The result of math when using constants MUST be the same as the result of 
identical math using variables.


As far as I can see, in that case you must simply type your constants.

Variables are always typed - their precision is determined by the type you
have specified.

For constants, the compiler will choose a type and consequently the precision. 
Jonas and others have explained the rules that the compiler uses.


If you don't like the rules that the compiler uses, you can set a type for your
constants. When you explicitly set a type, you are also specifying the 
precision of the calculation.

And if you have set the precision, then the calculation will be identical to the
calculation when you use a variable of the same type (if not, it's indeed a 
bug).

There are 2 ways to do so:

Const
  MyConst : Double = 1.23e45;

or

Const
  MyConst = Double(1.23e45);

The latter can also be used in an expression

 X := Y * Double(1.23e45);
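
A minimal end-to-end sketch contrasting the two forms (the program name and 
values are illustrative; the untyped constant follows the rules described 
above, so on targets where 3.5 is stored as a Single the first result is the 
less precise one):

program TypedConstDemo;
Const
   CUntyped = 1/3.5;                     { compiler picks the type of the constant terms }
   CTyped   = Double(1.0)/Double(3.5);   { explicitly typed terms: the division is done at double precision }
Begin
   WRITELN ( 'untyped = ', CUntyped:22:20 );
   WRITELN ( 'typed   = ', CTyped:22:20 );
End.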

There can be discussion about the rules that the compiler uses when it chooses a type, 
but any given set of rules will always have consequences that may or may not be desirable.


Possibly some compiler switches can be invented that modify the compiler's
rules for the constant type to use.

(incidentally, this is one of the reasons the FPC team does not want to make
inline variables as Delphi does, since there the type will again be determined 
by
the compiler - just as for constants, leading to ambiguity...)

Michael.
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-17 Thread wkitty42--- via fpc-pascal

On 2/16/24 9:57 AM, James Richters via fpc-pascal wrote:

So you are saying when constant propagation is on, an expression should have a 
different result than with constant propagation off?


The result of math when using constants MUST be the same as the result of 
identical math using variables.

There should never be a difference if I did my formula with hard coded 
constants vs variables.

   Const_Ans = 2.0010627116630224
  Const_Ans1 = 2.0010627116630224
Var_Ans1 = 2.

This should not be happening.


i've been quietly reading this entire thread and wow... i mean i do fully 
understand the situation and both sides of the problem... looking at the 
numbers, alone, the difference hits me hard in my OCD and i do agree with the 
assessment that max precision should be used for the const math and the result 
reduced when possible... but then, on the other hand, it really really really 
depends, _a lot_, on exactly /what/ is being calculated and what the numbers 
represent...


for example, here on Earth, (7 decimal places) 0.0000001 degree latitude is 
""only"" 1cm... (8 decimal places) 0.00000001 degree latitude is ""only"" 1mm... 
longitude, on the other hand, is variable such that 7 decimal places at the 
equator is the same as latitude, but as you move toward the poles, it changes 
such that 4 decimal places is 20cm...


for me, this can be important when it comes to placing scenery objects in a 
"true to life" simulator of the Earth, its terrain, and building/object 
placement... a building or road being 1cm out of position isn't noticeable... 
even 20cm out of place isn't really noticeable...


if this math is being used for measuring parsecs, yeah, i don't think i want to 
be on that space ship when it arrives at the wrong position in space... if this 
math is being used to measure angstroms, yeah... it kinda of matters... if this 
math is being used for cutting wood as a carpenter does, it doesn't matter so 
much... so please, put some perspective on this problem...


i'll have to look at my code for calculating a satellite's position in space in 
relation to Earth's center... especially depending on the satellite's altitude 
above the Earth as the higher the satellite is, the further off position the 
calculation could be... depending on what the calculation is being used for, it 
could be the difference between a good pass of two satellites crossing paths or 
it could be quite the ugly loss of two satellites plus a huge spew of debris 
from their impact...



--
 NOTE: No off-list assistance is given without prior approval.
   *Please keep mailing list traffic on the list where it belongs!*
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-17 Thread Bernd Oppolzer via fpc-pascal

On 17.02.2024 at 02:12, Ern Aldo via fpc-pascal wrote:


It is possible math is being done differently by the compiler than by 
programs? For math-related source code, the compiler compiles the 
instructions and writes them to the program file for execution at 
runtime. For compile-time constant calculations that produce run-time 
constant values, one would expect the compiler to compile the 
instructions, execute them during compilation, and write the resulting 
value to the program file for use at runtime. Such instructions are 
discarded, because the program does not need them. If math is being 
compiled differently for program-executed calculations versus 
compiler-executed calculations, then that would be a problem.


I'll try to comment on this using some source code which hopefully does 
conform to FPC,
but I am not sure, because I am not familiar with FPC standards. Please 
look:


Const
   A_const = Integer(8427);
   B_const = Byte(33);
   C_const = Single(1440.5);

y1 := A_const + C_const / B_const;
y2 := 8427 + 1440.5 / 33;

In my understanding, in the first assignment the constants have types, 
which are given to them by the const declarations. And that's why the 
computation is done using single precision.
This would be OK for me, because the developer decided to do the 
definitions this way, and so he or she takes responsibility.
If the computation is done at run time or at compile time, DOESN'T MATTER.

In the second case, using literal constants, the compiler should do the 
math using the maximum precision available (IMO), because one constant 
(1440.5) has a FP representation. It does not, and should not, matter 
that this constant can be stored exactly in a single FP field. Here again:

If the computation is done at run time or at compile time, DOESN'T MATTER.

Maybe this is not how FPC works today, but IMO this is how it should be 
done, because we want
(IMO) Pascal to be a clear language which is simple to explain, easy to 
use and easy to implement.


The case would be different, of course, if you do the same casting in 
the y2 case as in the const

declarations.

Kind regards

Bernd

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-16 Thread Ern Aldo via fpc-pascal

On 16.02.2024 at 08:32, Florian Klämpfl via fpc-pascal wrote:

On 16.02.2024 at 08:23, Ern Aldo via fpc-pascal wrote:
Compile-time math needs to be as correct as possible. RUN-time math can 
worry about performance. 
So you are saying when constant propagation is on, an expression should have 
a different result than with constant propagation off? 

Bernd: I don't know exactly what you mean by constant propagation.
I believe this question is a red herring. If "constant propagation" is an 
optimization aimed primarily at run-time program code (this is mainly why 
compilers exist), then it doesn't exactly apply to compile-time math, aka 
compile-time execution of non-program code. In other words, compile-time 
execution for the purpose of solving compiler math (not program math) could be 
done entirely without any optimizations at all. If this would preserve 
compiler-math correctness and/or make it match run-time program-math 
correctness, then yes, it would be worth doing.


James Richters: The result of math when using constants MUST be the same as 
the result of identical math using variables.

Agree.


if the developer explicitly wants reduced precision, then [that's fine].

Agree.


But reduced precision should not come unexpectedly

Agree.

It is possible math is being done differently by the compiler than by programs? 
For math-related source code, the compiler compiles the instructions and writes 
them to the program file for execution at runtime. For compile-time constant 
calculations that produce run-time constant values, one would expect the 
compiler to compile the instructions, execute them during compilation, and 
write the resulting value to the program file for use at runtime. Such 
instructions are discarded, because the program does not need them. If math is 
being compiled differently for program-executed calculations versus 
compiler-executed calculations, then that would be a problem.
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-16 Thread James Richters via fpc-pascal
>But the reduced precision should not come unexpectedly simply because the
>compiler attaches type attributes to constants (which can't be easily
>explained), and then the outcome of simple decimal arithmetic is incorrect.

How are you disagreeing? This is EXACTLY what I am saying. My math with
variables is ALWAYS correct; my math with Constants is not coming out the
same and can't be easily explained.  I guess I should have re-posted the
example, because it's not as it seems.  This is a counterexample I had
provided to show the incorrect math indeed could force an INCREASE in
Precision, but even if I don't cast the variables you still get the same
unexpected results.  This is just to demonstrate that the problem is not the
reduction and assignment itself, but the way calculations are done by
the compiler being wrong. 

The math with constants is what's wrong, not the reduction in precision.
Look closely at the following example.  I am ALWAYS adding an integer to a
byte divided by a Single... The reduction in precision has been removed from
this example because I explicitly cast what I wanted.

This proves that the math with constants is what the problem is, not that
the constants themselves were reduced in precision.

The results of Const_Ans1 MUST Equal Var_Ans1 EXACTLY or all kinds of
problems will arise. 

I have no reason to expect that the results of Var_Ans1 and Const_Ans1 would
be different, yet they are.   THIS is the Problem, and that's pretty much
exactly what you just said, so I believe we are in agreement.

The problem isn't that the reduction in precision assigned a type attribute
to the constant, it's that the math with that type is being done wrong,
because I can do the math with the EXACT same typed variables and it comes
out correctly.  I'm trying to show where the real problem is, (not very
successfully)

With math on Variables in the executing program, a byte / single can very well
be extended. 
With the compiler math, a byte / single is FORCED to be a single, which is
incorrect.
Dividing by a single does NOT imply the result should be a single.

Before the changes in v2.2, this was NEVER an issue, because all the
constants were full precision, so this could never possibly come up, as there
would be no math with a single in it.

My point is, no matter what is wrong, the result of math with constants must
always be exactly the same as math with variables otherwise no one can
figure out what the heck is going on. 

James


program Const_Vs_Var;

Const
   A_const = Integer(8427);
   B_const = Byte(33);
   C_const = Single(1440.5);
   Win_Calc = 16854.045817424505380076362374176;
   Const_Ans = 16854.045817424505380076362374176 / (8427 + 33 / 1440.5);
Var
   A_Var : Integer;
   B_Var : Byte;
   C_Var : Single;
   Const_Ans1, Var_Ans1 : Extended;

Begin
   A_Var := A_Const;
   B_Var := B_Const;
   C_Var := C_Const;

   Var_Ans1   := Win_Calc / (A_Var+B_Var/C_Var);
   Const_Ans1 := Win_Calc / (A_Const+B_Const/C_Const);

   WRITELN ( '  Const_Ans = ',  Const_Ans:20:20);
   WRITELN ( ' Const_Ans1 = ', Const_Ans1:20:20);
   WRITELN ( '   Var_Ans1 = ',   Var_Ans1:20:20);
End.

The result is:
  Const_Ans = 2.0010627116630224
 Const_Ans1 = 2.0010627116630224
   Var_Ans1 = 2.


When I do:
   Var_Ans1   := Win_Calc / (A_Var+B_Var/C_Var);
   Const_Ans1 := Win_Calc / (A_Const+B_Const/C_Const);





-Original Message-
From: fpc-pascal  On Behalf Of
Bernd Oppolzer via fpc-pascal
Sent: Friday, February 16, 2024 10:48 AM
To: James Richters via fpc-pascal 
Cc: Bernd Oppolzer 
Subject: Re: [fpc-pascal] Floating point question


On 16.02.2024 at 15:57, James Richters via fpc-pascal wrote:
>> So you are saying when constant propagation is on, an expression should
have a different result than with constant propagation off?
> The result of math when using constants MUST be the same as the result of
identical math using variables.
>
> There should never be a difference if I did my formula with hard coded
constants vs variables.
>
>Const_Ans = 2.0010627116630224
>   Const_Ans1 = 2.0010627116630224
> Var_Ans1 = 2.
>
> This should not be happening.
>
> James

See my other post;

if the developer explicitly wants reduced precision, then this is what
happens.
But the reduced precision should not come unexpectedly simply because the
compiler attaches type attributes to constants (which can't be easily
explained), and then the outcome of simple decimal arithmetic is incorrect.

So I have to disagree, sorry.

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-16 Thread Bernd Oppolzer via fpc-pascal



On 16.02.2024 at 15:57, James Richters via fpc-pascal wrote:

So you are saying when constant propagation is on, an expression should have a 
different result than with constant propagation off?

The result of math when using constants MUST be the same as the result of 
identical math using variables.

There should never be a difference if I did my formula with hard coded 
constants vs variables.

   Const_Ans = 2.0010627116630224
  Const_Ans1 = 2.0010627116630224
Var_Ans1 = 2.

This should not be happening.

James


See my other post;

if the developer explicitly wants reduced precision, then this is what 
happens.

But the reduced precision should not come unexpectedly simply because the
compiler attaches type attributes to constants (which can't be easily 
explained),

and then the outcome of simple decimal arithmetic is incorrect.

So I have to disagree, sorry.

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-16 Thread James Richters via fpc-pascal
>So you are saying when constant propagation is on, an expression should have a 
>different result than with constant propagation off?

The result of math when using constants MUST be the same as the result of 
identical math using variables.

There should never be a difference if I did my formula with hard coded 
constants vs variables.

  Const_Ans = 2.0010627116630224
 Const_Ans1 = 2.0010627116630224
   Var_Ans1 = 2.

This should not be happening.

James

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-16 Thread Bernd Oppolzer via fpc-pascal

On 16.02.2024 at 08:32, Florian Klämpfl via fpc-pascal wrote:
On 16.02.2024 at 08:23, Ern Aldo via fpc-pascal wrote:

 Compile-time math needs to be as correct as possible. RUN-time math can worry 
about performance.

So you are saying when constant propagation is on, an expression should have a 
different result than with constant propagation off?


I don't know exactly what you mean by constant propagation.

But IMO, given this (sort of fictive) Pascal code snippet:


const Xconst : single = 1440.0;

var y1, y2 : real;

y1 := 33.0 / 1440.0;

y2 :=  33.0 / Xconst;


the division in the first assignment (to y1) should be done at maximum 
precision, that is,
both constants should be converted by the compiler to the maximum 
available precision and

the division should be done (best at compile time) using this precision.

in the second case, if the compiler supports constants of the reduced 
type (which I believe it does,
no matter how the syntax is), I find it acceptable if the computation is 
done using single precision,

because that's what the developer calls for.

So probably the answer to your question is: yes.

Kind regards

Bernd




___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-15 Thread Florian Klämpfl via fpc-pascal


> On 16.02.2024 at 08:23, Ern Aldo via fpc-pascal wrote:
> 
>  Compile-time math needs to be as correct as possible. RUN-time math can 
> worry about performance.

So you are saying when constant propagation is on, an expression should have a 
different result than with constant propagation off?
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-13 Thread Bernd Oppolzer via fpc-pascal

My opinions about the solutions below ...


On 13.02.2024 at 12:07, Thomas Kurz via fpc-pascal wrote:

But, sorry, because we are talking about compile time math, performance 
(nanoseconds) in this case doesn't count, IMO.



That's what I thought at first, too. But then I started thinking about how to 
deal with it and stumbled upon difficulties very soon:

a) 8427.0 + 33.0 / 1440.0
An easy case: all constants, so do the calculation at highest precision and 
reduce it afterwards, if possible.

I agree; I would say:
all constants, so do the calculation at highest precision and reduce it 
afterwards, if required by the target


b) var_single + 33.0 / 1440.0
Should also be feasible by evaluating the constant expression first, then 
reducing it to single (if possible) and adding the variable in the end.
yes ... first evaluate the constant expression with maximum precision 
(best at compile time), then reduce the result.
The reduction to single must be done in any case, because the var_single 
in the expression dictates it, IMO


c) 8427.0 + var_double / 1440.0
Because of using the double-type variable here, constants should be treated as 
double even at the cost of performance due to not knowing whether the result 
will be assigned to a single or double.

yes


d) 8427.0 + var_single / 1440.0
And this is the one I got to struggle with. And I can imagine this is the 
reason for the decision about how to handle decimal constants.
My first approach would have been to implicitly use single precision values throughout 
the expression. This would mean to lose precision if the result will be assigned to a 
double-precision variable. One could say: "bad luck - if the programmer intended to 
get better precision, he should have used a double-precision variable as in case c". 
But this wouldn't be any better than the current state we have now.

8427.0 + (var_single / 1440.0)

the 1440.0 can be reduced to single, because the other operand is single
and so the whole operation is done using single arithmetic.

If here we had a FP constant instead of var_single, the whole operation 
IMO should be done
with maximum precision and at compile time in the best case. I have no 
problem that this
operation may give a different result with decimal constants than with 
explicitly typed
(reduced) FP variables. This can be easily explained to the users. 
Operations involving
FP variables with reduced precision may give reduced precision results. 
This seems to
be desirable for performance reasons and can be avoided by appropriate 
type casting.
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-13 Thread James Richters via fpc-pascal
Ok, maybe this example will prove why it's not happening correctly:

program Const_Vs_Var;

Const
   A_const = Integer(8427);
   B_const = Byte(33);
   C_const = Single(1440.5);
   Win_Calc = 16854.045817424505380076362374176;
   Const_Ans = 16854.045817424505380076362374176 / (8427 + 33 / 1440.5);
Var
   A_Var : Integer;
   B_Var : Byte;
   C_Var : Single;
   Const_Ans1, Var_Ans1 : Extended;

Begin
   A_Var := A_Const;
   B_Var := B_Const;
   C_Var := C_Const;

   Var_Ans1   := Win_Calc / (A_Var+B_Var/C_Var);
   Const_Ans1 := Win_Calc / (A_Const+B_Const/C_Const);

   WRITELN ( '  Const_Ans = ',  Const_Ans:20:20);
   WRITELN ( ' Const_Ans1 = ', Const_Ans1:20:20);
   WRITELN ( '   Var_Ans1 = ',   Var_Ans1:20:20);
End.

The result is:
  Const_Ans = 2.0010627116630224
 Const_Ans1 = 2.0010627116630224
   Var_Ans1 = 2.



Now you can see, if the math was done the same as the way math is done for
variables, we could have stored the constants as Byte(2).   But because the
math is being carried out after the reduction in precision we are left with
storing this as extended. 

If the result of all the math can be reduced, or if there is no math, then
it's great to reduce precision, but if the reduction in precision happens
before the math, you can end up with the opposite of what you intended.
Sure the compiler is working with faster math, but who cares what the
compiler has to do, now we're going to be stuck with a program using
extended(2.0010627116630224) for any calculations that use Const_Ans
instead of byte(2);  if Const_Ans is used in some kind of iterative process,
the program could be using this extended millions of times when it could
have been using a byte.

Notice when I do the EXACT same math with variables, it DOES give me a
result of 2, and THAT can be reduced.

If the answer after all the math can be reduced, it should be reduced, if it
can't be, then it should not be.

Math with constants should be the same as math with variables.

I'm trying to show there doesn't need to be a trade off at all, the math
with constants just needs to be done correctly... as in the exact same way
math with variables is done.

What has happened is the math with constants was written and tested with the
assumption that all constants would be full precision, because it was
impossible for constants to be anything other than full precision, but now
that is no longer the case and the math with constants isn't working
correctly anymore.  Either the math needs to happen before the reduction in
precision or the math needs to be fixed so it works the same as math with
variables, either way there won't need to be a trade off and everything will
work the way everyone wants it to.. performance when possible and precision
when needed.

James



___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-13 Thread James Richters via fpc-pascal
>As Jonas said, this would result in less efficient code, since all the math
will then be done at full precision, which is slower.
I don't think I'm explaining it well,  I'm saying where there is an entire
formula that the compiler needs to evaluate, what's happening now is that
each term is being reduced in precision first, then the math happens, and
the result is stored.  
If instead the compiler did all the math first, THEN ran the function that
determines if the entire answer should be reduced in precision, then the
math would work correctly. 

But we don't care how long it takes to do the math during the compile, the
constants are only compiled once and stored in the executable.   The reason
to do all this is to make the executing program that ends up using the
constants over and over many times more efficient, the speed of the
compilation is irrelevant.

>As usual, it is a trade-off between size (=precision) and speed.
I agree with that, but only in the executing program, not the compiler.  

James

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-13 Thread Thomas Kurz via fpc-pascal
> But, sorry, because we are talking about compile time math, performance 
(nanoseconds) in this case doesn't count, IMO.

That's what I thought at first, too. But then I started thinking about how to 
deal with it and stumbled upon difficulties very soon:

a) 8427.0 + 33.0 / 1440.0
An easy case: all constants, so do the calculation at highest precision and 
reduce it afterwards, if possible.

b) var_single + 33.0 / 1440.0
Should also be feasible by evaluating the constant expression first, then 
reducing it to single (if possible) and adding the variable in the end.

c) 8427.0 + var_double / 1440.0
Because of using the double-type variable here, constants should be treated as 
double even at the cost of performance due to not knowing whether the result 
will be assigned to a single or double.

d) 8427.0 + var_single / 1440.0
And this is the one I got to struggle with. And I can imagine this is the 
reason for the decision about how to handle decimal constants.
My first approach would have been to implicitly use single precision values 
throughout the expression. This would mean to lose precision if the result will 
be assigned to a double-precision variable. One could say: "bad luck - if the 
programmer intended to get better precision, he should have used a 
double-precision variable as in case c". But this wouldn't be any better than 
the current state we have now.

Overall, I must admit that the choice ain't easy at all.
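
For reference, the four cases can be dropped into a small test program (a 
sketch, not from the original message; the exact output depends on the 
compiler version and on {$MINFPCONSTPREC}):

program ConstCases;
{$mode objfpc}
var
   var_single : Single;
   var_double : Double;
   a, b, c, d : Double;
begin
   var_single := 2.5;
   var_double := 2.5;
   a := 8427.0 + 33.0 / 1440.0;        { case a: constants only }
   b := var_single + 33.0 / 1440.0;    { case b: single variable added to a constant expression }
   c := 8427.0 + var_double / 1440.0;  { case c: double variable inside the division }
   d := 8427.0 + var_single / 1440.0;  { case d: single variable inside the division }
   WRITELN ( 'a = ', a:20:20 );
   WRITELN ( 'b = ', b:20:20 );
   WRITELN ( 'c = ', c:20:20 );
   WRITELN ( 'd = ', d:20:20 );
end.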

In this situation, it might be a good choice to ask "what would other languages 
do here?". As far as I know about C, it treats constants as double-precision by 
default. You have to write "1.0f" if you explicitly want single precision.
But I think it's too late for introducing yet another change. Imho, the correct 
decision at FPC v2.2 would have been to keep the previous behavior and instruct 
those concering performance to use "{$MINFPCONSTPREC 32}" (or using the "1.0f" 
notation) instead of requiring everyone to use "{$MINFPCONSTPREC 64}" to keep 
compatibility with previous releases.

Thomas

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-13 Thread Marco van de Voort via fpc-pascal



On 13-2-2024 at 11:39, Bernd Oppolzer via fpc-pascal wrote:



But, sorry, because we are talking about compile time math, 
performance (nanoseconds) in this case doesn't count, IMO.


But probably compiled code is then automatically upscaled to the higher 
type too, since if one of the terms of an expression is of higher 
precision, then the whole expression is.

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-13 Thread Bernd Oppolzer via fpc-pascal

On 13.02.2024 at 10:54, Michael Van Canneyt via fpc-pascal wrote:



On Tue, 13 Feb 2024, James Richters via fpc-pascal wrote:

Sorry for the kind of duplicate post, I submitted it yesterday morning and I
thought it failed, so I re-did it and tried again.. then after that the
original one showed up.

A thought occurred to me.   Since the compiler math is expecting all the
constants would be in full precision, then the compiler math doesn't need to
change, it's just that the reduction in precision is just happening too
soon.  It's evaluating and reducing each term of an expression, then the
math is happening, and the answer is not coming out right.

If instead everything was left full precision until after the compiler math
(because this is what the compiler math expects), and then the final answer
was reduced in precision where possible, then it would work flawlessly.  So
the reduction in precision function only needs to run once on the final
answer, not on every term before the calculation.


As Jonas said, this would result in less efficient code, since all the 
math will then be done at full precision, which is slower.


As usual, it is a trade-off between size (=precision) and speed.

Michael.



But, sorry, because we are talking about compile time math, performance 
(nanoseconds) in this case doesn't count, IMO.



___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-13 Thread Michael Van Canneyt via fpc-pascal




On Tue, 13 Feb 2024, James Richters via fpc-pascal wrote:


Sorry for the kind of duplicate post, I submitted it yesterday morning and I
thought it failed, so I re-did it and tried again.. then after that the
original one showed up.

A thought occurred to me.   Since the compiler math is expecting all the
constants would be in full precision, then the compiler math doesn't need to
change, it's just that the reduction in precision is just happening too
soon.  It's evaluating and reducing each term of an expression, then the
math is happening, and the answer is not coming out right.

If instead everything was left full precision until after the compiler math
(because this is what the compiler math expects), and then the final answer
was reduced in precision where possible, then it would work flawlessly.  So
the reduction in precision function only needs to run once on the final
answer, not on every term before the calculation.


As Jonas said, this would result in less efficient code, since all the math 
will then be done at full precision, which is slower.


As usual, it is a trade-off between size (=precision) and speed.

Michael.
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-13 Thread James Richters via fpc-pascal
Sorry for the kind of duplicate post, I submitted it yesterday morning and I
thought it failed, so I re-did it and tried again.. then after that the
original one showed up.  
 
A thought occurred to me.   Since the compiler math is expecting all the
constants would be in full precision, then the compiler math doesn't need to
change, it's just that the reduction in precision is just happening too
soon.  It's evaluating and reducing each term of an expression, then the
math is happening, and the answer is not coming out right.
 
If instead everything was left full precision until after the compiler math
(because this is what the compiler math expects), and then the final answer
was reduced in precision where possible, then it would work flawlessly.  So
the reduction in precision function only needs to run once on the final
answer, not on every term before the calculation. 
 
Sorry again for the duplicate.   
 
James
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-13 Thread James Richters via fpc-pascal
>>Overall, the intermediate float precision is a very difficult topic.
 
I agree it's a difficult topic, it all comes down to what your program is
doing, and whether you need performance or precision.
 
>>And generate the slowest code possible on most platforms.
 
I can appreciate the need to reduce precision where it's possible for the
sake of performance, especially when it won't make any difference.
 
What makes it difficult is there are many different reasons for wanting it
one way, or the other, it depends on the purpose of the program, and the
compiler has no way to know what the purpose is.   
 
It occurs to me that one could want part of a program to be optimized for
performance and another part of the same program to be optimized for
precision, for example if you are doing calculations to generate geometry,
and also want to display the geometry on the screen, the data you write out
to a file you would want maximum precision, but since what you will display
on the screen will eventually become only integer values of pixels you want
to do that math as fast as possible, especially if you want to pan / zoom /
rotate, and even though what the screen data is based on might be double
precision or more, I can see how reducing its precision as fast as possible
would be beneficial to increase performance. 
 
So I’m trying to learn something, I agree it would be better to have
performance where it’s possible and precision when needed.  But I just don't
understand what is going on.   I'm not trying to say that this reduction in
precision should not be done, I'm understanding the value in it.  I’m trying
to figure out why the math done with constants where the compiler is doing
the math is not the same as when the program does math with variables.
If the solution is to typecast where needed to get the desired results, then
why isn’t it working the way I expect it to?
 
Below is a sample program, I’m not trying to make everything extended, in
fact quite the opposite, there is no need for the input constants /
variables to be Extended because they all fit perfectly in smaller data
types, so I put them all into smaller datatypes as an example.  I am
defining constants explicitly and defining variables the exact same way, so
I’m comparing apples to apples here, I have A as always an Integer, B as
always a Byte, and C as always a single, with a value the fits in a single. 
 
My goal is to add the integer to a byte that’s been divided by a single and
get the result in Extended.  When I do this with the variables, everything
is as I expected, when I do this with constants, it’s not as I expect.
This is what I don’t understand, and if this worked as expected then I think
everyone is happy.  Whatever is happening for it to work correctly during
program execution should also be happening when the compiler does the math.
The problem isn’t that the constants got stored in lower precision it’s that
they are somehow forcing the result of the calculation to also be at the
lower precision and not re-evaluated after the math.  It’s completely
legitimate to divide a low precision number by a low precision number and
get a high precision result, it works with Variables, why doesn’t it work
with Constants?
 
I suspect that what’s happened is that there is something missing in the way
the compiler does math, something that is not needed if it was always done
at maximum precision, but that is needed with mixed precision.   It’s not
that the fact that the constants were reduced in precision, it’s something
to do with the way the math is done with constants of reduced precision that
isn’t being accounted for, and that is not necessary if calculating with
full precision.   It’s not that the changes in 2.2 are the problem at all,
it’s that something else needed to be done at the same time that was missed.
 
The only way I can get the correct result when using constants is to re-cast
ALL of them as extended, not just the ones involving division, and not the
entire formula, but every single constant.   This is what I don’t
understand.  
 
>>The evaluation of the expression on the right of := does not know (and
should not know) what the type is of the expression on the left.
Why can’t the compiler do all the math at full precision and then evaluate
only the result to see if that can be stored in a lower precision.  If the
expression on the right cannot and should not know the type on the left,
then there is a good possibility that it’s a high precision data type, and
then there should be some provision to safeguard against data loss if the
type is of high precision. 
Why doesn’t this work?  JJ := Extended(A_Const+B_Const/C_Const);  It
requires no knowledge of what is on the left.
Why can’t the math be done with high precision and the result be reduced to
the smallest datatype?  Math with low precision data types often results in
high precision results. 
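
A sketch of the difference, reusing the thread's A_Const/B_Const/C_Const and
JJ names (hypothetical snippet, not from the original message):

   JJ := Extended(A_Const + B_Const/C_Const);   { cast applied to the already-folded result: still the reduced-precision value }
   JJ := A_Const + B_Const/Extended(C_Const);   { one operand cast up: the division itself is carried out in Extended }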
 
If I want to have a mixed program with portions in high precision and

Re: [fpc-pascal] Floating point question

2024-02-13 Thread James Richters via fpc-pascal
It occurs to me that there is merit in reduction of precision to increase
performance, and so I'm trying to learn how to do this correctly, but the
thing that confuses me is that math with constants doesn't seem to be the
same as math with variables, and I don't know why.

It also looks to me like when there is an expression such as:
e := 8427.0 + 33.0 / 1440.0;
what is happening each term of the  expression is evaluated individually to
see if it can be reduced in precision, and then the math is carried out, but
if the math was carried out at full precision first by the compiler, THEN
the entire answer was evaluated to see if it can be reduced in precision,
the results would be what we are all expecting. 

Regardless of that however, when I am working with variables, an integer
added to a byte that has been divided by a single results in an
extended...it's legitimate to expect you could get an extended result from
such an operation, just as dividing a byte by another byte could result in
an extended answer.  With variables, this seems to always be the case, but
with constants, it does not seems to be the case.  If constants just did the
math the same as variables, then all this reduction in precision stuff would
work flawlessly for everyone without re-casting everything.

Please consider the code below, I am comparing the results to what I get
when I perform this math with the Windows Calculator, as you can see no
matter how I cast it, when using variables, I get the expected answer, but
when the compiler does the math, it's not working the same way. 
What seems to be happening with variables is that the answer to lower
precision entities can result in higher precision results, while with
constants, the resulting precision is limited in some way, but in a way I
don't understand, because it's being reduced to single precision, but the
lowest precision element is a byte.

In other words with variables a byte / single is perfectly capable of
producing an extended result, without re-casting.  But with constants doing
the exact same thing forces the result to always be a single.

I don't think the real issue has anything to do with this reduction in
precision at all, I think it has to do with whatever causes the compiler to
do math differently than the executing program does with variables.  I don't
understand why I must individually re-cast every element of the equation
using constants to extended, while when I do the exact same thing with
variables it's not necessary. 

I am wondering if the way the compiler does the math, it is expecting that
all constants would be full precision, and therefore the way it did the math
before always came out right, but when the change was made in 2.2 to reduce
the precision to variables, no corresponding adjustment was made to the way
the compiler carries out math to compensate for the possibility that there
was such a thing as a constant with reduced precision.   So the compiler is
doing math as if all input terms are at highest precision, therefore not
needing to bother considering the answer might be higher precision than the
input terms, but now that there is the possibility of the result being of
higher precision, some adjustment to the way math is done by the compiler is
necessary. 

I just think if the compiler did all the math the same way the executing
program does with math with variables, then everything is solved for
everyone... without any re-casting or unexpected results due to division,
and while also preventing unnecessary precision.  This has nothing to do
with the reduction of precision, only the way the compiler is doing its
calculations needs to be adjusted for this new situation.

Just fixing the way the compiler does the math also requires no knowledge of
the left side of the equation by the right.  The compiler just needs to do
the calculations the same way as variables are calculated with the extra
step of re-evaluating to see if the precision can be reduced when it's done.

James

program Const_Vs_Var;

Const
   A_const = Integer(8427);
   B_const = Byte(33);
   C_const = Single(1440.5);
   Win_Calc = 8427.0229087122526900381811870878;
   Const_Ans = A_Const+B_Const/C_Const;

Var
   A_Var : Integer;
   B_Var : Byte;
   C_Var : Single;
   Const_Ans1, Const_Ans2, Const_Ans3, Var_Ans1, Var_Ans2, Var_Ans3 :
Extended;

Begin
   A_Var := A_Const;
   B_Var := B_Const;
   C_Var := C_Const;

   Var_Ans1   := A_Var+B_Var/C_Var;
   Const_Ans1 := A_Const+B_Const/C_Const;
   Var_Ans2   := Integer(A_Var)+Byte(B_Var)/Single(C_Var);
   Const_Ans2 := Integer(A_Const)+Byte(B_Const)/Single(C_Const);
   Var_Ans3   := Extended(A_Var)+Extended(B_Var)/Extended(C_Var);
   Const_Ans3 := Extended(A_Const)+Extended(B_Const)/Extended(C_Const);

   WRITELN ( '   Win_Calc = ',   Win_Calc:20:20) ;
   WRITELN ( '  Const_Ans = ',  Const_Ans:20:20 ,'  Win_Calc-Const_Ans =
',Win_Calc-Const_Ans:20:20) ;
   WRITELN ( ' Const_Ans1 = ', Const_Ans1:20:20 ,' Win_Calc-Const_Ans1 =

Re: [fpc-pascal] Floating point question

2024-02-13 Thread Bernd Oppolzer via fpc-pascal

In this example below, the performance argument does not count IMO,
because the complete computation can be done at compile time.

That's why IMO in all 3 cases the values on the right side should be 
computed with
maximum precision (of course independent of the left side), and in an 
ideal world

it should be done at compile time. But if not: anyway with max precision.
Tagging the FP constants with FP attributes like single, double and 
extended and
then doing arithmetic on them which leads to MATHEMATICAL results which 
are unexpected
is IMO wrong and would not be accepted in most other programming 
languages or compilers.


This is NOT about variables ... they have attributes and there you can 
explain all sort of
strange behaviour. It's about CONSTANT EXPRESSIONS (which can and should 
be evaluated
at compile time, and the result should be the same, no matter if the 
evaluation is done at

compile time or not).

That said:

if you have arithmetic involving a single variable and a FP constant, say

x + 1440.0

you don't need to handle this as an extended arithmetic IMO, if you 
accept my statement above.
You can treat the 1440.0 as a single constant in this case, if you wish. 
It's all about context ...


Kind regards

Bernd


On 12.02.2024 at 10:44, Thomas Kurz via fpc-pascal wrote:

I wouldn't say so. Or at least, not generally. Why can't the compiler do what 
the programmer intends to do:

var
   s: single;
   d: double;
   e: extended;
   
begin

   s := 8427.0 + 33.0 / 1440.0; // treat all constants as "single"
   d := 8427.0 + 33.0 / 1440.0; // treat all constants as "double"
   e := 8427.0 + 33.0 / 1440.0; // treat all constants as "extended"
end.

Shouldn't this satisfy all the needs? Those caring for precision will work with 
double precision and don't have to take care for a loss in precision. Those 
caring for speed can use the single precision type and be sure that no costly 
conversion to double or extended will take place.




- Original Message -
From: Jonas Maebe via fpc-pascal 
To: fpc-pascal@lists.freepascal.org 
Sent: Sunday, February 11, 2024, 23:29:42
Subject: [fpc-pascal] Floating point question

On 11/02/2024 23:21, Bernd Oppolzer via fpc-pascal wrote:

and this would IMHO be the solution which is the easiest to document and
maybe to implement
and which would satisfy the users.

And generate the slowest code possible on most platforms.


Jonas
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal



Re: [fpc-pascal] Floating point question

2024-02-12 Thread Thomas Kurz via fpc-pascal
>> You cannot do this in Pascal. The evaluation of the expression on the 
>> right of := does not
>> know (and should not know) what the type is of the expression on the left.

> It's even theoretically impossible to do in case the result is passed to 
> a function or intrinsic that is overloaded with single/double/extended 
> parameters.

In other cases, I got a "can't determine which overloaded function to call" 
error, so I think this should be handleable; but I understand the first argument 
of course.

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-12 Thread Jonas Maebe via fpc-pascal

On 12/02/2024 10:55, Michael Van Canneyt via fpc-pascal wrote:

On Mon, 12 Feb 2024, Thomas Kurz via fpc-pascal wrote:

I wouldn't say so. Or at least, not generally. Why can't the compiler 
do what the programmer intends to do:


var
 s: single;
 d: double;
 e: extended;

begin
 s := 8427.0 + 33.0 / 1440.0; // treat all constants as "single"
 d := 8427.0 + 33.0 / 1440.0; // treat all constants as "double"
 e := 8427.0 + 33.0 / 1440.0; // treat all constants as "extended"
end.


You cannot do this in Pascal. The evaluation of the expression on the 
right of := does not

know (and should not know) what the type is of the expression on the left.


It's even theoretically impossible to do in case the result is passed to 
a function or intrinsic that is overloaded with single/double/extended 
parameters.



Jonas
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-12 Thread Michael Van Canneyt via fpc-pascal




On Mon, 12 Feb 2024, Thomas Kurz via fpc-pascal wrote:


I wouldn't say so. Or at least, not generally. Why can't the compiler do what 
the programmer intends to do:

var
 s: single;
 d: double;
 e: extended;

begin
 s := 8427.0 + 33.0 / 1440.0; // treat all constants as "single"
 d := 8427.0 + 33.0 / 1440.0; // treat all constants as "double"
 e := 8427.0 + 33.0 / 1440.0; // treat all constants as "extended"
end.


You cannot do this in Pascal. 
The evaluation of the expression on the right of := does not

know (and should not know) what the type is of the expression on the left.

Michael.
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-12 Thread Thomas Kurz via fpc-pascal
I wouldn't say so. Or at least, not generally. Why can't the compiler do what 
the programmer intends to do:

var
  s: single;
  d: double;
  e: extended;
  
begin
  s := 8427.0 + 33.0 / 1440.0; // treat all constants as "single"
  d := 8427.0 + 33.0 / 1440.0; // treat all constants as "double"
  e := 8427.0 + 33.0 / 1440.0; // treat all constants as "extended"
end.

Shouldn't this satisfy all the needs? Those caring for precision will work with 
double precision and don't have to take care for a loss in precision. Those 
caring for speed can use the single precision type and be sure that no costly 
conversion to double or extended will take place.




- Original Message - 
From: Jonas Maebe via fpc-pascal 
To: fpc-pascal@lists.freepascal.org 
Sent: Sunday, February 11, 2024, 23:29:42
Subject: [fpc-pascal] Floating point question

On 11/02/2024 23:21, Bernd Oppolzer via fpc-pascal wrote:
> and this would IMHO be the solution which is the easiest to document and 
> maybe to implement
> and which would satisfy the users.

And generate the slowest code possible on most platforms.


Jonas
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal



Re: [fpc-pascal] Floating point question

2024-02-11 Thread Jonas Maebe via fpc-pascal

On 11/02/2024 23:21, Bernd Oppolzer via fpc-pascal wrote:
and this would IMHO be the solution which is the easiest to document and 
maybe to implement

and which would satisfy the users.


And generate the slowest code possible on most platforms.


Jonas
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-11 Thread Bernd Oppolzer via fpc-pascal

On 11.02.2024 at 17:31, Florian Klämpfl via fpc-pascal wrote:

On 09.02.24 15:00, greim--- via fpc-pascal wrote:

Hi,

my test with Borland Pascal 7.0 running in dosemu2 running 80x87 code.
The compiler throws an error message for calculating HH and II with 
explicit type conversion.

The results of FF and GG are the same!
Even on 16 bit system!

I think this behavior is right!


The x87 fpu behavior is completely flawed as its precision is not 
dependent on the instruction used but the state of the fpu.


Overall, the intermediate float precision is a very difficult topic. 
The famous Goldberg article 
(https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html) does 
not suggest to use the highest possible precision after all. And an 
additional interesting read: 
https://randomascii.wordpress.com/2012/03/21/intermediate-floating-point-precision/


Many thanks for the links, I read them with interest; for me - working 
almost every day with IBM systems -
the remarks on the old IBM FP hex format (base 16) are very interesting. 
Today's IBM systems support IEEE as well.


IMO, the question regarding FP constants (not variables) in compilers is 
not yet answered fully. If we have an expression
consisting only of FP constants like in the original coding: should the 
FP constants indeed given different
FP types by the compiler? Or should the FP constants maybe have all the 
same type .. the largest type available?
This would automatically lead to a computation using the maximum 
precision, no matter if it is done at compile time
or at run time ... and this would IMHO be the solution which is the 
easiest to document and maybe to implement

and which would satisfy the users.

Kind regards

Bernd Oppolzer

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-11 Thread Florian Klämpfl via fpc-pascal

On 09.02.24 15:00, greim--- via fpc-pascal wrote:

Hi,

my test with Borland Pascal 7.0 running in dosemu2 running 80x87 code.
The compiler throws an error message for calculating HH and II with 
explicit type conversion.

The results of FF and GG are the same!
Even on 16 bit system!

I think this behavior is right!


The x87 fpu behavior is completely flawed as its precision is not 
dependent on the instruction used but the state of the fpu.


Overall, the intermediate float precision is a very difficult topic. The 
famous Goldberg article 
(https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html) does 
not suggest to use the highest possible precision after all. And an 
additional interesting read: 
https://randomascii.wordpress.com/2012/03/21/intermediate-floating-point-precision/
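
A small sketch of that state dependence (hypothetical, assuming a 32-bit x86
target where the x87 is actually used; on x86_64 the SSE2 unit does the
division and the precision mode has no such effect):

program FpuState;
{$mode objfpc}
uses Math;
var
   x, y : Extended;
begin
   x := 1.0;
   y := 3.0;
   SetPrecisionMode(pmDouble);     { same FDIV instruction, result rounded to 53 bits }
   WRITELN ( x / y : 25 : 22 );
   SetPrecisionMode(pmExtended);   { same instruction again, now rounded to 64 bits }
   WRITELN ( x / y : 25 : 22 );
end.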


___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-10 Thread greim--- via fpc-pascal
Hi, 



my test with Borland Pascal 7.0 running in dosemu2 running 80x87 code. 

The compiler throws an error message for calculating HH and II with explicit 
type conversion. 

The results of FF and GG are the same!
Even on 16 bit system!


I think this behavior is right!


In the 80x87 emulation mode data type single is not available and throws also 
an error during compilation. 



PROGRAM Consta;


Const
A_const : integer = 8427;
B_const : byte = 33;
C_const : Single = 1440.0;


Var
A_Var : Integer;
B_Var : Byte;
C_Var : Single;
FF, GG, HH, II : Extended;


begin
A_Var := A_Const;
B_Var := B_Const;
C_Var := C_Const;


FF := A_Var+B_Var/C_Var;
GG := A_Const+B_Const/C_Const;
(* HH := Extended(A_Const 

Re: [fpc-pascal] Floating point question

2024-02-09 Thread James Richters via fpc-pascal
>However, adding support for an option called -CFMax or similar should be no
problem.

It would be VERY much appreciated!

James

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-09 Thread Jean SUZINEAU via fpc-pascal

On 09/02/2024 at 20:53, Jonas Maebe via fpc-pascal wrote:
However, adding support for an option called -CFMax or similar should 
be no problem. 

It would be very nice to compile old code
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-09 Thread Jonas Maebe via fpc-pascal

On 09/02/2024 14:04, James Richters via fpc-pascal wrote:
Is there any way we can please get -CF80 or {$MINFPCONSTPREC 80} or some 
other way to turn off the new behavior for applications that use Extended.


The reason I didn't add it back then is because when parsing the 
options, there is no good way to get the maximum supported floating 
point precision by the target platform.


However, adding support for an option called -CFMax or similar should be 
no problem.



Jonas
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-09 Thread James Richters via fpc-pascal
>Because 1440.1 cannot be represented exactly as a single precision floating
point number. Nor as a double or extended precision floating point number
for that matter, and in that case the compiler uses the maximum precision
supported by the target platform.

I see that now, I think someone pointed out that 1440.5 would also be a
problem since it fits in a single.

So my idea of trying to change all the x.0s to x only helps some cases, not
all cases, as I can't change x.5 to anything quickly with a global search.
There could be anything that happens to fit in a single, making my Extended
calculation come out to a Single.

How does one get the old behavior for programs that use Extended without
analyzing and re-writing thousands of lines of code?

Is there any way we can please get -CF80 or {$MINFPCONSTPREC 80} or some
other way to turn off the new behavior for applications that use Extended.

James

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-08 Thread Jonas Maebe via fpc-pascal

On 05/02/2024 01:31, James Richters via fpc-pascal wrote:

So I need to do this?
AA = Extended(8427+Extended(33/Extended(1440.0)));


Just typecasting 1440.0 to extended should be enough.


This is inconsistent,  why didn't the 1440.1 version reduce to a single?


Because 1440.1 cannot be represented exactly as a single precision 
floating point number. Nor as a double or extended precision floating 
point number for that matter, and in that case the compiler uses the 
maximum precision supported by the target platform.
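
To illustrate the rule with the values from this thread (a sketch, not from
the original message; the resulting constant types follow the behaviour
described above):

const
   C0 = 1440.0;    { exactly representable as Single -> stored as Single }
   C5 = 1440.5;    { exactly representable as Single -> stored as Single }
   C1 = 1440.1;    { not exactly representable in any binary format -> maximum precision of the target }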




Jonas
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-08 Thread Jonas Maebe via fpc-pascal

On 06/02/2024 16:23, James Richters via fpc-pascal wrote:
Great if -CF80 worked, but even if you are happy with -CF64, my problem 
is: how is anyone coming into FPC after 2.2 supposed to know that their 
constants that always worked before are going to no longer be accurate??


By reading the release notes for that compiler version (aka the "user 
changes" wiki doc).



Jonas
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-06 Thread James Richters via fpc-pascal
>IMO, the computations of AA+BB/CC (right hand side) should be carried out the 
>same way, regardless of the type 
>on the left hand side of the assignment. So I would expect the values in DD, 
>EE and FF being the same.

In this example DD holds the value 8427.0224610 because DD is defined as a 
single and a single cannot hold the value 8427.022916625000, there 
aren’t enough bits.

There is a typo on line:
   WRITELN ( 'EE = ',FF: 20 : 20 ) ;

It should have been 
   WRITELN ( 'EE = ',EE: 20 : 20 ) ;

And the result of it should have been:
EE = 8427.022916668000

Which is not the same as 
FF = 8427.022916625000
Again because 8427.022916625000 won’t fit in a double.

The intention with all that was to show that everything works correctly if 
variables are used.  
The problem is when you use constants you get the 8427.0224... for everything, 
even when you have defined a double or an extended


>That said: wouldn't it make more sense to give EVERY FP CONSTANT the FP type 
>with the best available precision? 
Yes, that is the old way, before the change in 2.2, but there are times when it 
would be more efficient to reduce the precision in the example of:
Const 
MyConst = 2.0;
That doesn't have to be floating point, and if you later use it as the 
denominator in a divide, it's less efficient than it would be if it was an 
integer.. I argue that on modern computers, who cares, but if you do want to 
reduce precision to increase performance, it NEEDS to be done in a way that 
guarantees no loss of precision with no modification of code.  The changes in 
v2.2 fails in this regard.   The thing I don't understand is that this was 
released as the default for all modes and indeed with no good way to turn it 
off for extended, even though it was already known that there would be 
inaccuracies when using constants with divides.   The change in 2.2 should NOT 
have been the default for everyone, it should have been an option for those who 
want performance at the cost of precision, but nearly everyone will not notice 
the performance increase on a modern computer, but we will all do not want to 
risk a loss of precision...  


>GG in this case would have the same value as HH, because the computation 
>involving the constants 
>(hopefully done by the compiler) would be done with the best available 
>precision. 

Yes, it would be!  And that is precisely why this is a bug!  GG not matching HH 
is a problem.
GG and HH should be identical, the compiler should do math exactly the same way 
as an executing program, otherwise it's a mess.

The computation SHOULD always be done at maximum precision, and that's the way 
it used to work before this: 
https://wiki.freepascal.org/User_Changes_2.2.0#Floating_point_constants
"Old behaviour: all floating point constants were considered to be of the 
highest precision available on the target platform"
This is the correct way that guarantees precision. 

"New behaviour: floating point constants are now considered to be of the lowest 
precision which doesn't cause data loss"
This is GREAT if you can pull it off in a way that doesn't cause data loss in 
ANY condition.   I believe this bug can be fixed and we can have efficiency and 
guaranteed no data loss, and then the "Effect" and "Remedy" below would not be 
needed...
But if it's not possible to guarantee no data loss, and the "Effect" is still a 
possibility, then this entire thing should be an OPTION.

" Effect: some expressions, in particular divisions of integer values by 
floating point constants, may now default to a lower precision than in the 
past. "  
What?!  This IS data loss!!!  This precisely describes the BUG.  This is the 
reason this should NEVER have been made the default for everyone,  it should 
have required a compiler directive to turn on this behavior.  This lower 
precision is in direct violation of the 'New behavior' statement!
There is NO reason why anyone writing a Pascal program would expect this 
behavior.  It's NOT the way Pascal has EVER behaved.

" Remedy: if more precision is required than the default, typecast the floating 
point constant to a higher precision type, e.g. extended(2.0). Alternatively, 
you can use the -CF command line option to change the default precision of all 
floating point constants in the compiled modules. "
This is unreasonable.  How is anyone supposed to know that now to get it to 
work correctly as it should work, we need to cast our constants, old code 
should work the way it always did, and people writing new code will NEVER 
expect this needs to be done.
On top of that the -CF option only works for -CF32 and -CF64 so it's no solution 
for Extended.. why do I need a special option to do things correctly?
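
In code, the two documented remedies look roughly like this (a sketch, not
from the original message; the module-wide setting is shown for a Double
minimum, since 80-bit cannot currently be requested this way):

   { per expression: cast one operand of the constant expression up }
   const
      AA = 8427 + 33 / Extended(1440.0);

   { per module: raise the minimum precision of floating point constants }
   {$MINFPCONSTPREC 64}

   { or for the whole build, on the command line: fpc -CF64 myprog.pas }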

How about this.. if one variable is defined as a Double or Extended, then shut 
this 'feature' off, because it's asking for trouble.   Nobody uses Doubles or 
Extended in a program because they want low precision results.



Re: [fpc-pascal] Floating point question

2024-02-06 Thread Bernd Oppolzer via fpc-pascal
I didn't follow all the discussions on this topic and all the details of 
compiler options of FPC

and Delphi compatibility and so on, but I'd like to comment on this result:

program TESTDBL1 ;

Const
   HH = 8427.02291667;
Var
   AA : Integer;
   BB : Byte;
   CC : Single;
   DD : Single;
   EE : Double;
   FF : Extended;
   GG : Extended;
   


begin
   AA := 8427;
   BB := 33;
   CC := 1440.0;
   DD := AA+BB/CC;
   EE := AA+BB/CC;
   FF := AA+BB/CC;
   GG := 8427+33/1440.0;
   
   WRITELN ( 'DD = ',DD: 20 : 20 ) ;

   WRITELN ( 'EE = ',FF: 20 : 20 ) ;
   WRITELN ( 'FF = ',FF: 20 : 20 ) ;
   WRITELN ( 'GG = ',GG: 20 : 20 ) ;
   WRITELN ( 'HH = ',HH: 20 : 20 ) ;
end.


result:

DD = 8427.02246100
EE = 8427.022916625000
FF = 8427.022916625000
GG = 8427.022460937500
HH = 8427.022916625000


IMO, the computations of AA+BB/CC (right hand side) should be carried 
out the same way, regardless of the type
on the left hand side of the assignment. So I would expect the values in 
DD, EE and FF being the same.


But as it seems, the left hand side (and the type of the target 
variable) HAS AN INFLUENCE on the computation

on the right hand side, and so we get (for example)

DD = 8427.02246100

and

EE = 8427.022916625000

which IMHO is plain wrong.

If all computations of AA+BB/CC would be carried out involving only 
single precision,

all results DD, EE, FF (maybe not GG) should be 8427.0224...
only minor differences because of the different precisions of the target 
variables

(but not as large as the difference between DD and EE above).

This would be OK IMHO;
it would be easy to explain to everyone the reduced precision on these 
computations

as a consequence of the types of the operands involved.

Another question, which should be answered separately:

the compiler apparently assigns types to FP constants.
It does so depending on the fact if a certain decimal representation can 
exactly be represented

in the FP format or not.

1440.0 and 1440.5 can be represented as single precision, so the FP type 
single is assigned
1440.1 cannot, because 0.1 is an unlimited sequence of hex digits, so (I 
guess), the biggest available FP type is assigned

1440.25 probably can, so type single is assigned
1440.3: biggest FP type
1440.375: probably single

and so on

Now: who is supposed to know for any given decimal representation of a 
FP constant, if it can or cannot
be represented in a single precision FP variable? This depends on the 
length of the decimal representation,
among other facts ... and the fraction part has to be a multiple of 
negative powers of 2 etc. etc.


That said: wouldn't it make more sense to give EVERY FP CONSTANT the FP 
type with the best available precision?


If the compiler did this, the problems which arise here could be solved, 
I think.


GG in this case would have the same value as HH, because the computation 
involving the constants
(hopefully done by the compiler) would be done with the best available 
precision.


HTH, kind regards

Bernd


On 06.02.2024 at 16:23, James Richters via fpc-pascal wrote:

program TESTDBL1 ;

Const
HH = 8427.02291667;
Var
AA : Integer;
BB : Byte;
CC : Single;
DD : Single;
EE : Double;
FF : Extended;
GG : Extended;



begin
AA := 8427;
BB := 33;
CC := 1440.0;
DD := AA+BB/CC;
EE := AA+BB/CC;
FF := AA+BB/CC;
GG := 8427+33/1440.0;

WRITELN ( 'DD = ',DD: 20 : 20 ) ;

WRITELN ( 'EE = ',FF: 20 : 20 ) ;
WRITELN ( 'FF = ',FF: 20 : 20 ) ;
WRITELN ( 'GG = ',GG: 20 : 20 ) ;
WRITELN ( 'HH = ',HH: 20 : 20 ) ;
end.

When I do the division of a byte by a single and store it in an extended, I
get the division carried out as an extended.
FF, GG, and HH should all be exactly the same if there is not a bug.
But:

DD = 8427.02246100
EE = 8427.022916625000
FF = 8427.022916625000
GG = 8427.022460937500
HH = 8427.022916625000
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-06 Thread Jean SUZINEAU via fpc-pascal
I've just made a small test with the old Borland Delphi 7.0 build 4453 
from 2002 :


...

type
  TForm1 = class(TForm)
m: TMemo;
procedure FormCreate(Sender: TObject);
  end;

...

procedure TForm1.FormCreate(Sender: TObject);
var
   GG: Extended;
   S: String;
begin
 GG := 8427+33/1440.0;
 Str( GG: 20 : 20, S);
 m.Lines.Add( 'GG = '+S);
end;

I get :

GG = 8427.02291700

But I'm cautious, it's a delphi 7 running on a broken installation of 
Wine on Ubuntu 22.04,
I had to modify an existing delphi 7 project for this test, I couldn't 
save a new project because of problems with wine.


I have an old astronomical program made with delphi 7 that I wrote around 
2000, and I ported a part of it to Freepascal, I'm nearly sure there are 
unnoticed errors in this freepascal port due to this behaviour ...
(Not really a problem because the program isn't sold any more, but I'll 
have a look next time I compile it)
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-06 Thread Rafael Picanço via fpc-pascal
> Why (re-invent the wheel)?
> Why not use Math.Float?
> IIRC then this is Extended, double or Single depending on CPU type.
> And always the largest precision the CPU supports.

Thanks Bart. Math.Float is really great, I will start using it today.

On Tue, Feb 6, 2024 at 2:51 PM Bart  wrote:

> On Tue, Feb 6, 2024 at 6:13 PM Rafael Picanço via fpc-pascal
>  wrote:
>
>
> > type
> >   {$IFDEF CPU86}{$IFDEF CPU32}
> > TLargerFloat = Extended;
> >   {$ENDIF}{$ENDIF}
> >
> >   {$IFDEF CPUX86_64}
> > TLargerFloat = Double;
> >   {$ENDIF}
>
> Why (re-invent the wheel)?
> Why not use Math.Float?
> IIRC then this is Extended, double or Single depending on CPU type.
> And always the largest precision the CPU supports.
>
> --
> Bart
>
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-06 Thread James Richters via fpc-pascal
>Jonas has argued, not without reason, that calculating everything always at
full precision has its disadvantages too.

I agree with that, and I do see the value in reducing the precision when it
is possible, but not when it's causing data loss. 
The intention is perfectly fine, it's the execution that has a bug in it. 

I think that any reasonable person reading the following code would conclude
that FF, GG, HH, and II should be exactly the same.  I am defining
constants, in FF I define variables of the same type as the constants, and
it comes out correctly, in GG I use the constants directly, and it's wrong.
There is nothing about this that any programmer should understand because
it's a bug. 

FF and GG are both adding an integer to a byte divided by a single, there is
no difference to any reasonable programmer between what FF and GG are
saying, and the programmer should not have to resort to ridiculous
typecasting as in II to get almost the correct answer, but is still wrong.
By the way notice that even with the casting, it's still wrong. 
II SHOULD have produced the right answer, because it's perfectly legitimate
to divide a byte by a single and expect the answer to be an extended. 

program Constant_Bug;

Const
   A_const = Integer(8427);
   B_const = Byte(33);
   C_const = Single(1440.0);

Var
   A_Var : Integer;
   B_Var : Byte;
   C_Var : Single;
   FF, GG, HH, II : Extended;

begin
   A_Var := A_Const;
   B_Var := B_Const;
   C_Var := C_Const;

   FF := A_Var+B_Var/C_Var;
   GG := A_Const+B_Const/C_Const;
   HH := Extended(A_Const+B_Const/C_Const);
   II := Extended(A_Const+Extended(B_Const/C_Const));

   WRITELN ( ' FF = ',FF: 20 : 20 ) ;
   WRITELN ( ' GG = ',GG: 20 : 20 ) ;
   WRITELN ( ' HH = ',HH: 20 : 20 ) ;
   WRITELN ( ' II = ',II: 20 : 20 ) ;
end.

 FF = 8427.022916625000
 GG = 8427.022460937500
 HH = 8427.022460937500
 II = 8427.02291666716337204000

FF and II are correct, GG and HH are wrong.   I understand now WHY this is
happening, but I argue that it's not obvious to anyone that it should be
happening, it's just a hidden known bug waiting to bite you.  No reasonable
programmer would think that FF and GG would come out differently,  the
datatypes are all defined legitimately, and the same, the results should
also be the same.

In my opinion the changes in v2.2 break more things than they fix, and
should be reverted, and used ONLY if asked for by a compiler directive, we
should not have to do special things to get it to work correctly.  If you
give the compiler directive to use this feature, then you know you might
have to cast some things yourself, but to apply this globally and then
require a directive to not do it, is just not right, unless ALL code can be
run the way it did pre 2.2 without modification,  this is CLEARLY not the
case.   

James

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-06 Thread Bart via fpc-pascal
On Tue, Feb 6, 2024 at 6:13 PM Rafael Picanço via fpc-pascal
 wrote:


> type
>   {$IFDEF CPU86}{$IFDEF CPU32}
> TLargerFloat = Extended;
>   {$ENDIF}{$ENDIF}
>
>   {$IFDEF CPUX86_64}
> TLargerFloat = Double;
>   {$ENDIF}

Why (re-invent the wheel)?
Why not use Math.Float?
IIRC then this is Extended, double or Single depending on CPU type.
And always the largest precision the CPU supports.
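
A minimal sketch of that suggestion (assuming only the Math unit's Float
alias; not from the original message):

program FloatDemo;
{$mode objfpc}
uses Math;
var
   t : Float;    { Math.Float: the largest FP type the target supports }
begin
   t := 8427.0 + 33.0 / 1440.0;
   WRITELN ( t:20:20 );
end.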

-- 
Bart
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-06 Thread Rafael Picanço via fpc-pascal
> I’m afraid I don’t qualify for the bonus, because I don’t know what
LargerFloat is.

I am a little bit embarrassed here. The TLargerFloat is a type I wrote for
a simple test some time ago and I forgot about it. I was following the
TLargeInteger convention (from struct.inc in my current windows system):

After realizing that the Extended type was not made for cross-platform, my
point with TLargerFloat was to have a central place to test some types. I
decided to use Double for everything, following the equivalence with
pythonic doubles for timestamp synchronization in the systems I use.

unit timestamps.types;

{$mode ObjFPC}{$H+}

interface

type
  {$IFDEF CPU86}{$IFDEF CPU32}
TLargerFloat = Extended;
  {$ENDIF}{$ENDIF}

  {$IFDEF CPUX86_64}
TLargerFloat = Double;
  {$ENDIF}

implementation

end.

___

So, I guess I finally found why precision was better for explicit
typecasts in Linux (despite the higher granularity of clock_monotonic):

I guess {$MINFPCONSTPREC 64}  would avoid explicit typecasting in the
following code, is it correct?

unit timestamps;

{$mode objfpc}{$H+}

// {$MINFPCONSTPREC 64}

interface

uses
  SysUtils, timestamps.types

{$IFDEF LINUX}
  , Linux
  , UnixType
{$ENDIF}

{$IFDEF DARWIN}
  , ctypes
  , MachTime
{$ENDIF}

{$IFDEF WINDOWS}
  , Windows
{$ENDIF}
  ;

function ClockMonotonic : TLargerFloat;

implementation

{$IFDEF LINUX}
function ClockMonotonic: TLargerFloat;
var
  tp: timespec;
  a, b : TLargerFloat;
begin
  clock_gettime(CLOCK_MONOTONIC, @tp);
  a := TLargerFloat(tp.tv_sec);
  b := TLargerFloat(tp.tv_nsec) * 1e-9;
  Result := a+b;
end;
{$ENDIF}

{$IFDEF DARWIN}
{credits:
https://github.com/pupil-labs/pyuvc/blob/master/pyuvc-source/darwin_time.pxi
}

var
  timeConvert: TLargerFloat = 0.0;

function ClockMonotonic : TLargerFloat;
var
  timeBase: mach_timebase_info_data_t;
begin
  if timeConvert = 0.0 then begin
    mach_timebase_info(@timeBase);
    // numer/denom converts Mach ticks to nanoseconds; dividing by 1e9 gives seconds
    timeConvert :=
      TLargerFloat(timeBase.numer) / TLargerFloat(timeBase.denom) /
      TLargerFloat(1000000000.0);
  end;
  Result := mach_absolute_time() * timeConvert;
end;
{$ENDIF}

{$IFDEF WINDOWS}
var
  PerSecond : TLargeInteger;

function ClockMonotonic: TLargerFloat;
var
  Count : TLargeInteger;
begin
  QueryPerformanceCounter(Count);
  Result := TLargerFloat(Count) / TLargerFloat(PerSecond);
end;

initialization
   QueryPerformanceFrequency(PerSecond);
{$ENDIF}

end.

On Tue, Feb 6, 2024 at 12:52 PM James Richters <
james.richt...@productionautomation.net> wrote:

> This is my opinion from my testing, but others may have something else to
> say.
>
>
>
> 1) Does it affect constants only?
>
> Not really: if you set a variable with constant terms, it is affected; if
> you set a variable with other variables, it is not affected.
>
> Const
>
>    MyConstant = 8432+33/1440.0;  //Is affected
>
> Var
>
>MyDoubleVariable:Double;
>
>
>
> MyDoubleVariable := 8432+33/1440.0;   //Is affected
>
>
>
>
>
> Var
>
>    MyInteger : Integer;
>
>    MyByte : Byte;
>
>MySingle : Single;
>
>MyDouble : Double;
>
>
>
> MyInteger := 8432;
>
> MyByte := 33;
>
> MySingle := 1440.0;
>
> MyDouble := MyInteger + MyByte / MySingle; //   is NOT affected;
>
>
>
>
>
> 2) Does it affect the LargerFloat type?
>
> I don’t know what you mean by LargerFloat, but Double and Extended are
> affected, and even Real if your platform defines Real as a Double.
>
> Anything that is not Single precision is affected.
>
>
>
> 3) Should I use {$MINFPCONSTPREC 64} in {$mode objfpc} too to avoid it?
>
> Everyone should use {$MINFPCONSTPREC 64} in all programs until the bug is
> fixed, unless you use Extended, in which case you have no good solution,
> because you can’t set it to 80.
>
> 4) BONUS: Is the LargerFloat type really the larger, or should one do
> something else?
>
> I’m afraid I don’t qualify for the bonus, because I don’t know what
> LargerFloat is.
>
>
>
> James
>
>
>
>
>
>
>
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-06 Thread Thomas Kurz via fpc-pascal
I think the reason why this new behavior doesn't occur with 1440.1 is that this 
number cannot be reduced to "single" precision. It will keep "double" precision.

Consider this instead:

program TESTDBL1 ;

var TT : double ; EE: double;

begin (* HAUPTPROGRAMM *)
   TT := 8427 + 33 / 1440.5 ;
   EE := Double(8427) + Double(33) / Double(1440.5);
   WRITELN ( 'tt=' , TT : 20 : 20 ) ;
   WRITELN ( 'ee=' , EE : 20 : 20 ) ;
end (* HAUPTPROGRAMM *) .

Result:

tt=8427.022460937500
ee=8427.0229087122534000

So it's the same as with ".0". FPC treats the constant as type "single". Imho, 
this is perfectly legal, but when assigning an expression to a "double" 
variable, an implicit cast to double should occur.

When using a variable of type "single" (instead of a constant), this casting is 
done:

program TESTDBL1 ;

{$mode objfpc}

var TT : double ;
EE: double;
x: Single = 1440.5;

begin (* HAUPTPROGRAMM *)
   TT := 8427 + 33 / x ;
   EE := 8427 + 33 / Double(x);
   WRITELN ( 'tt=' , TT : 20 : 20 ) ;
   WRITELN ( 'ee=' , EE : 20 : 20 ) ;
end (* HAUPTPROGRAMM *) .

Prints:
tt=8427.0229087122534000
ee=8427.0229087122534000

I don't know whether this is intentional or not, but I cannot see any good 
reason why using a constant in an expression has to be treated differently than 
using a variable. To me as a programmer, this behavior is unexpected.

If the 2.2 change is not going to be reverted (and if I understand 
Florian correctly, it won't be changed), maybe one could at least introduce a 
warning about a loss of precision when using a constant of type "single" in an 
expression which will be assigned to a variable of type "double".

Kind regards,
Thomas




- Original Message - 
From: James Richters via fpc-pascal 
To: 'FPC-Pascal users discussions' 
Sent: Tuesday, February 6, 2024, 16:23:30
Subject: [fpc-pascal] Floating point question

What's apparently happening now is:
MyExtended := ReducePrecisionIfNoLossOfData (8246) +
ReducePrecisionIfNoLossOfData (33.0) / ReducePrecisionIfNoLossOfData
(1440.0);
But it is not being done correctly: the 1440.0 is not being reduced all the
way to an integer, because if it were, everything would work.  The 1440.0 is
being considered a single, and the division is also now being considered a
single, even though that is incorrect.   But 1440.1 is not being considered
a single, because 1440.1 is not breaking everything.

What should be happening is:
MyExtended := ReducePrecisionIfNoLossOfData(8246+33.0/1440.0);


I just realized something...  regardless of when or how the reduction in
precision is happening, the bug is different from that, because the result
of a byte divided by a single, when stored in a double, is a double, NOT a
single.  There should be no problem here; there is a definite bug.

Consider this:
program TESTDBL1 ;

Const
   HH = 8427.02291667;
Var
   AA : Integer;
   BB : Byte;
   CC : Single;
   DD : Single;
   EE : Double;
   FF : Extended;
   GG : Extended;
   

begin
   AA := 8427;
   BB := 33;
   CC := 1440.0;
   DD := AA+BB/CC;
   EE := AA+BB/CC;
   FF := AA+BB/CC;
   GG := 8427+33/1440.0;
   
   WRITELN ( 'DD = ',DD: 20 : 20 ) ;
   WRITELN ( 'EE = ',FF: 20 : 20 ) ;
   WRITELN ( 'FF = ',FF: 20 : 20 ) ;
   WRITELN ( 'GG = ',GG: 20 : 20 ) ;
   WRITELN ( 'HH = ',HH: 20 : 20 ) ;
end.

When I do the division of a byte by a single and store it in an extended, I
get the division carried out as an extended.
FF, GG, and HH should all be exactly the same if there is not a bug.
But:

DD = 8427.02246100
EE = 8427.022916625000
FF = 8427.022916625000
GG = 8427.022460937500
HH = 8427.022916625000

GG,  the one with constants, is doing it wrong... 

If the entire formula was calculated the original way at full precision,
and only the result was reduced (if there was no loss in precision) right before
being stored as a constant, then this would solve the problem for everyone, and this
is the correct way to do it.  Then everyone is happy: no Delphi warnings,
no needlessly complex floating point computations if the result of all the
math is a byte, and no confusion as to why it works with 1440.1 and not
1440.0.  Compatibility with all versions of Pascal, etc.

This calculation is only done once, by the compiler; the calculation should
be done at the fullest possible precision and only the result stored in a reduced
way if it makes sense to do so.

The problem I have with the changes made with v2.2, is that it's obvious
that the change was going to introduce a known bug at the time:
"Effect: some expressions, in particular divisions of integer values by
floating point constants, may now default to a lower precision than in the
past."
How is this acceptable or the default?? 

"Remedy: if more precision is required than the default, typecast the
floating point constant to a higher precision type, e.g. extended(2.0).
Alternatively, you can use the -CF command line option to change the default
precision of all floating point constants in the compiled modules."

Re: [fpc-pascal] Floating point question

2024-02-06 Thread Adriaan van Os via fpc-pascal

James Richters via fpc-pascal wrote:

What's apparently happening now is:
MyExtended := ReducePrecisionIfNoLossOfData (8246) +
ReducePrecisionIfNoLossOfData (33.0) / ReducePrecisionIfNoLossOfData
(1440.0);
But it is not being done correctly: the 1440.0 is not being reduced all the
way to an integer, because if it were, everything would work.  The 1440.0 is
being considered a single, and the division is also now being considered a
single, even though that is incorrect.   But 1440.1 is not being considered
a single, because 1440.1 is not breaking everything.


Indeed. It is wrong. And if Delphi does it wrong, it is still wrong for modes 
other than Delphi.



What should be happening is:
MyExtended := ReducePrecisionIfNoLossOfData(8246+33.0/1440.0);


Pascal doesn't attach a floating-point type to a floating-point constant. So, the only correct way 
for the compiler to handle it is to NOT attach a floating-point type to the declared constant in 
advance, that is, the compiler must store it in a symbol table as BCD or as string. And decide 
LATER what type it has. And in this case, where the assignment is to an extended, as soon as that 
is clear, and not earlier, the compiler can do the conversion of the BCD or string floating-point 
constant to the floating-point type in question, in this case extended.



If the entire formula was calculated the original way at full precision,
and only the result was reduced (if there was no loss in precision) right before
being stored as a constant, then this would solve the problem for everyone, and this
is the correct way to do it.  Then everyone is happy: no Delphi warnings,
no needlessly complex floating point computations if the result of all the
math is a byte, and no confusion as to why it works with 1440.1 and not
1440.0.  Compatibility with all versions of Pascal, etc.




This calculation is only done once, by the compiler; the calculation should
be done at the fullest possible precision and only the result stored in a reduced
way if it makes sense to do so.


Jonas has argued, not without reason, that calculating everything always at full precision has its 
disadvantages too.




The problem I have with the changes made with v2.2, is that it's obvious
that the change was going to introduce a known bug at the time:
"Effect: some expressions, in particular divisions of integer values by
floating point constants, may now default to a lower precision than in the
past."
How is this acceptable or the default??


Delphi/Borland invents some seemingly clever but factually stupid scheme and FPC wants to be 
compatible with it. Some applaud, but I am more impressed by logical reason than by what Borland 
does without logical reason.




"Remedy: if more precision is required than the default, typecast the
floating point constant to a higher precision type, e.g. extended(2.0).
Alternatively, you can use the -CF command line option to change the default
precision of all floating point constants in the compiled modules."

The first remedy is unreasonable, I should not have to go through thousands
of lines of code and cast my constants, it was never a requirement of Pascal
to do this. 


Right.



Great if -CF80 worked, but even if you are happy with -CF64, my problem is:
how is anyone coming into FPC after 2.2 supposed to know that their
constants that always worked before are going to no longer be accurate??

The better thing to do would be to do it RIGHT before releasing the change
so that it can't be a problem for anyone, and make:
"New behaviour: floating point constants are now considered to be of the
lowest precision which doesn't cause data loss"  a true statement.

If the entire formula was evaluated at full precision, and only the result
was stored as a lower precision if possible, then there is never a problem
for anyone.


Regards,

Adriaan van Os
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question (Rafael Picanço)

2024-02-06 Thread James Richters via fpc-pascal
This is my opinion from my testing, but others may have something else to say.
 
1) Does it affect constants only?
Not really: if you set a variable with constant terms, it is affected; if you 
set a variable with other variables, it is not affected.
Const
   MyConstant = 8432+33/1440.0;  //Is affected
Var
   MyDoubleVariable:Double;
 
MyDoubleVariable := 8432+33/1440.0;   //Is affected
 
 
Var
   MyInteger : Integer;
   MyByte : Byte;
   MySingle : Single;
   MyDouble : Double;
 
MyInteger := 8432;
MyByte := 33;
MySingle := 1440.0;
MyDouble := MyInteger + MyByte / MySingle; //   is NOT affected;
 
 
2) Does it affect the LargerFloat type?
I don’t know what you mean by LargerFloat, but Double and Extended are 
affected, and even Real if your platform defines Real as a Double.
Anything that is not Single precision is affected.
 
3) Should I use {$MINFPCONSTPREC 64} in {$mode objfpc} too to avoid it?
Everyone should use {$MINFPCONSTPREC 64} in all programs until the bug is 
fixed, unless you use Extended, in which case you have no good solution, 
because you can't set it to 80 (a minimal placement sketch follows after question 4).


4) BONUS: Is the LargerFloat type really the larger, or should one do something 
else?  
I’m afraid I don’t qualify for the bonus, because I don’t know what LargerFloat 
is.
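
For reference, a minimal sketch of where such a directive goes (the program
name and the printed value are illustrative assumptions, not taken from
anyone's code in this thread):

program MinFpConstPrecDemo;

{$MINFPCONSTPREC 64}   // fold floating point constants in at least double precision

var
  EE: Double;
begin
  EE := 8427 + 33 / 1440.0;        // now evaluated in double rather than single
  WriteLn('EE = ', EE: 20: 20);
end.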
 
James
 
 
 
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-06 Thread James Richters via fpc-pascal
What's apparently happening now is:
MyExtended := ReducePrecisionIfNoLossOfData (8246) +
ReducePrecisionIfNoLossOfData (33.0) / ReducePrecisionIfNoLossOfData
(1440.0);
But it is not being done correctly: the 1440.0 is not being reduced all the
way to an integer, because if it were, everything would work.  The 1440.0 is
being considered a single, and the division is also now being considered a
single, even though that is incorrect.   But 1440.1 is not being considered
a single, because 1440.1 is not breaking everything.

What should be happening is:
MyExtended := ReducePrecisionIfNoLossOfData(8246+33.0/1440.0);


I just realized something...  regardless of when or how the reduction in
precision is happening, the bug is different from that, because the result
of a byte divided by a single, when stored in a double, is a double, NOT a
single.  There should be no problem here; there is a definite bug.

Consider this:
program TESTDBL1 ;

Const
   HH = 8427.02291667;
Var
   AA : Integer;
   BB : Byte;
   CC : Single;
   DD : Single;
   EE : Double;
   FF : Extended;
   GG : Extended;
   

begin
   AA := 8427;
   BB := 33;
   CC := 1440.0;
   DD := AA+BB/CC;
   EE := AA+BB/CC;
   FF := AA+BB/CC;
   GG := 8427+33/1440.0;
   
   WRITELN ( 'DD = ',DD: 20 : 20 ) ;
   WRITELN ( 'EE = ',FF: 20 : 20 ) ;
   WRITELN ( 'FF = ',FF: 20 : 20 ) ;
   WRITELN ( 'GG = ',GG: 20 : 20 ) ;
   WRITELN ( 'HH = ',HH: 20 : 20 ) ;
end.

When I do the division of a byte by a single and store it in an extended, I
get the division carried out as an extended.
FF, GG, and HH should all be exactly the same if there is not a bug.
But:

DD = 8427.02246100
EE = 8427.022916625000
FF = 8427.022916625000
GG = 8427.022460937500
HH = 8427.022916625000

GG,  the one with constants, is doing it wrong... 

If the entire formula was calculated the original way at full precision,
and only the result was reduced (if there was no loss in precision) right before
being stored as a constant, then this would solve the problem for everyone, and this
is the correct way to do it.  Then everyone is happy: no Delphi warnings,
no needlessly complex floating point computations if the result of all the
math is a byte, and no confusion as to why it works with 1440.1 and not
1440.0.  Compatibility with all versions of Pascal, etc.

This calculation is only done once, by the compiler; the calculation should
be done at the fullest possible precision and only the result stored in a reduced
way if it makes sense to do so.

The problem I have with the changes made with v2.2, is that it's obvious
that the change was going to introduce a known bug at the time:
"Effect: some expressions, in particular divisions of integer values by
floating point constants, may now default to a lower precision than in the
past."
How is this acceptable or the default?? 

"Remedy: if more precision is required than the default, typecast the
floating point constant to a higher precision type, e.g. extended(2.0).
Alternatively, you can use the -CF command line option to change the default
precision of all floating point constants in the compiled modules."

The first remedy is unreasonable, I should not have to go through thousands
of lines of code and cast my constants, it was never a requirement of Pascal
to do this. 

Great if -CF80 worked, but even if you are happy with -CF64, my problem is:
how is anyone coming into FPC after 2.2 supposed to know that their
constants that always worked before are going to no longer be accurate??

The better thing to do would be to do it RIGHT before releasing the change
so that it can't be a problem for anyone, and make:
"New behaviour: floating point constants are now considered to be of the
lowest precision which doesn't cause data loss"  a true statement.

If the entire formula was evaluated at full precision, and only the result
was stored as a lower precision if possible, then there is never a problem
for anyone.


James

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question (Rafael Picanço)

2024-02-06 Thread Rafael Picanço via fpc-pascal
I have some questions about {$MINFPCONSTPREC 64} and the mentioned change
introduced by FPC 2.2 (the "it" from here after).

1) Does it affect constants only?

2) Does it affect the LargerFloat type?

3) Should I use {$MINFPCONSTPREC 64} in {$mode objfpc} too to avoid it?

4) BONUS: Is the LargerFloat type really the larger, or should one do
something else?

Best regards,
Rafael
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-06 Thread James Richters via fpc-pascal
I have the exact same intuition and expectation.  

I think this whole issue is easy to fix, just detect the .0s and cast them
to integers by default instead of singles, because then everything does work
fine.

If I had a clue where the code for this reduction in precision might
be, I would try to fix it, but it's way over my head I'm afraid.   I think
the intention and theory behind doing it the new way is great, it just has
this one flaw in it that could be fixed so the true behavior matches what is
in the documentation,  that things will be reduced that would not cause a
loss in precision.   That is true for almost all cases except when you put a
.0 then it fails... it's losing precision. Reducing the .0 to an integer
solves the problem... and I think if you had X = 2.0 it would be reduced to
an integer or a byte, it's just when it's in a formula that it's getting set
to a single, and that single is throwing everything off... it just wasn't
reduced far enough. 

James

-Original Message-
From: fpc-pascal  On Behalf Of
Thomas Kurz via fpc-pascal
Sent: Tuesday, February 6, 2024 7:53 AM
To: 'FPC-Pascal users discussions' 
Cc: Thomas Kurz 
Subject: Re: [fpc-pascal] Floating point question

Well, this is funny because I *did* compile it on DOS with Turbo Pascal 5.5,
and I got the correct result there. Cross-compiling with FPC to msdos target
gave the "wrong" (aka unexpected) result again. There were so many factors
involved which caused great confusion.

From my point of view, an expression being assigned to a variable of type
"double" should be evaluated with double precision, not single. This is
obviously the way integers are handled by internally using int64. A few
weeks ago, I had inconsistent behavior between x64 and x86 modes and it
turned out that 32-bit code did internal casts to int64 thus resulting in
the expected value whereas 64-bit cannot cast to int128 (because there is no
int128) and thus gives an unexpected result (well, at least to me). So my
intuition would (and obviously did!) expect double precision throughout the
calculation.

Kind regards,
Thomas



- Original Message -
From: James Richters 
To: 'FPC-Pascal users discussions' 
Sent: Tuesday, February 6, 2024, 13:44:37
Subject: [fpc-pascal] Floating point question

I don't think you were doing anything wrong, that's what I am simply trying
to point out.  If you ran your code on Turbo Pascal 7.0, you would not have
an issue, it would be fine.  There is no reason for a programmer to expect
this behavior and it's very confusing when it does come up.

There is a bug here and it should be acknowledged instead of defended.
Discovering bugs is a good thing, it can lead to improvements to make the
system better for everyone, but only if the discovery is learned from and
acted upon.  I'm sure everyone here can relate to how frustrating it can be
to encounter a bug and have no idea whatsoever what the problem is.
Undiscovered bugs are much worse than those which have been figured out.  

I think this is one that can be very frustrating for a lot of people, and
it's very difficult to figure out what's happening,  because everything
happens correctly >99.9% of the time.  If you put anything from x.001 to
x.999 it has no problem, if you put x.0, you have a problem.  Put as many
decimals as you like to see why there is no reason why any programmer should
expect this behavior.   On top of that x has no problem, and many
programmers use x.0 when x would have been fine, they are just in the habit
of putting the .0 and in Turbo Pascal, there was never a problem with doing
this. 

I am glad we at least have an explanation, but how many others are going to
need to re-discover this issue that should not even be an issue?  
It can still be a problem for people who didn't happen to come across this.
I didn't expect it to be an issue.  While compiling with -CF64 or using
{$MINFPCONSTPREC 64}  fixes it for programs that use doubles, there is no
good solution I can find for programs that use extended, because you can't
put 80 into either of those.  So for extended programs the only solution I
can think of at the moment is to go through the WHOLE thing and replace all
the x.0's with x, which I have started doing, but it's a tedious chore.

I appreciate the discussion here, because I had noticed inaccuracies from
time to time but I was never able to get far enough in to realize this is what was
happening.   It's very frustrating indeed and I think if something can be
done to save others this frustration and unexpected behavior, it would be
helpful.

James

-Original Message-
From: fpc-pascal  On Behalf Of
Thomas Kurz via fpc-pascal
Sent: Tuesday, February 6, 2024 6:59 AM
To: 'FPC-Pascal users discussions' 
Cc: Thomas Kurz 
Subject: Re: [fpc-pascal] Floating point question

I'd like to apologize, because my intention hasn't been to raise controversial
discussions. I'm very thankful for the explanation. 

Re: [fpc-pascal] Floating point question

2024-02-06 Thread Thomas Kurz via fpc-pascal
Well, this is funny because I *did* compile it on DOS with Turbo Pascal 5.5, 
and I got the correct result there. Cross-compiling with FPC to msdos target 
gave the "wrong" (aka unexpected) result again. There were so many factors 
involved which caused great confusion.

From my point of view, an expression being assigned to a variable of type 
"double" should be evaluated with double precision, not single. This is 
obviously the way integers are handled by internally using int64. A few weeks 
ago, I had inconsistent behavior between x64 and x86 modes and it turned out 
that 32-bit code did internal casts to int64 thus resulting in the expected 
value whereas 64-bit cannot cast to int128 (because there is no int128) and 
thus gives an unexpected result (well, at least to me). So my intuition would 
(and obviously did!) expect double precision throughout the calculation.

Kind regards,
Thomas



- Original Message - 
From: James Richters 
To: 'FPC-Pascal users discussions' 
Sent: Tuesday, February 6, 2024, 13:44:37
Subject: [fpc-pascal] Floating point question

I don't think you were doing anything wrong, that's what I am simply trying
to point out.  If you ran your code on Turbo Pascal 7.0, you would not have
an issue, it would be fine.  There is no reason for a programmer to expect
this behavior and it's very confusing when it does come up.

There is a bug here and it should be acknowledged instead of defended.
Discovering bugs is a good thing, it can lead to improvements to make the
system better for everyone, but only if the discovery is learned from and
acted upon.  I'm sure everyone here can relate to how frustrating it can be
to encounter a bug and have no idea whatsoever what the problem is.
Undiscovered bugs are much worse than those which have been figured out.  

I think this is one that can be very frustrating for a lot of people, and
it's very difficult to figure out what's happening,  because everything
happens correctly >99.9% of the time.  If you put anything from x.001 to
x.999 it has no problem, if you put x.0, you have a problem.  Put as many
decimals as you like to see why there is no reason why any programmer should
expect this behavior.   On top of that x has no problem, and many
programmers use x.0 when x would have been fine, they are just in the habit
of putting the .0 and in Turbo Pascal, there was never a problem with doing
this. 

I am glad we at least have an explanation, but how many others are going to
need to re-discover this issue that should not even be an issue?  
It can still be a problem for people who didn't happen to come across this.
I didn't expect it to be an issue.  While compiling with -CF64 or using
{$MINFPCONSTPREC 64}  fixes it for programs that use doubles, there is no
good solution I can find for programs that use extended, because you can't
put 80 into either of those.  So for extended programs the only solution I
can think of at the moment is to go through the WHOLE thing and replace all
the x.0's with x, which I have started doing, but it's a tedious chore.

I appreciate the discussion here, because I had noticed inaccuracies from
time to time but I was never able to get far enough in to realize this is what was
happening.   It's very frustrating indeed and I think if something can be
done to save others this frustration and unexpected behavior, it would be
helpful.

James

-Original Message-
From: fpc-pascal  On Behalf Of
Thomas Kurz via fpc-pascal
Sent: Tuesday, February 6, 2024 6:59 AM
To: 'FPC-Pascal users discussions' 
Cc: Thomas Kurz 
Subject: Re: [fpc-pascal] Floating point question

I'd like to apologize, because my intention hasn't been to raise controversial
discussions. I'm very thankful for the explanation. From the beginning, I
knew that the error was on my side, but I didn't know *what* I'm doing
wrong.

Again, thanks for helping.

Kind regards,
Thomas



- Original Message -
From: James Richters via fpc-pascal 
To: 'FPC-Pascal users discussions' 
Sent: Sunday, February 4, 2024, 18:25:39
Subject: [fpc-pascal] Floating point question

I agree with Adriaan 100%
 
"New behaviour: floating point constants are now considered to be of the
lowest precision which doesn't cause data loss"

We are getting data loss, so it's doing it WRONG.

So we are all living with a stupid way of doing things so some Delphi code
won't have warnings?

Who came up with this???

The old way was CORRECT,   instead of changing it for everyone making it
wrong for most users, a compiler directive should have been needed to get
rid of the warnings, or ONLY applied in Mode Delphi.  Not to make everything
incorrect for everyone unless you add a directive. The problem with this is
that no one is expecting to need to add a directive to do things right. 

Consider this:
 
Var
  MyVariable : Extended;

MyVariable := 8427 + 33 / 1440.0;

Since I am storing the result in an Extended, I DO NOT EXPECT the 33/1440 to
be a SINGLE, that is NUTS!!

Re: [fpc-pascal] Floating point question

2024-02-06 Thread James Richters via fpc-pascal
I don't think you were doing anything wrong, that's what I am simply trying
to point out.  If you ran your code on Turbo Pascal 7.0, you would not have
an issue, it would be fine.  There is no reason for a programmer to expect
this behavior and it's very confusing when it does come up.

There is a bug here and it should be acknowledged instead of defended.
Discovering bugs is a good thing, it can lead to improvements to make the
system better for everyone, but only if the discovery is learned from and
acted upon.  I'm sure everyone here can relate to how frustrating it can be
to encounter a bug and have no idea whatsoever what the problem is.
Undiscovered bugs are much worse than those which have been figured out.  

I think this is one that can be very frustrating for a lot of people, and
it's very difficult to figure out what's happening,  because everything
happens correctly >99.9% of the time.  If you put anything from x.001 to
x.999 it has no problem, if you put x.0, you have a problem.  Put as many
decimals as you like to see why there is no reason why any programmer should
expect this behavior.   On top of that x has no problem, and many
programmers use x.0 when x would have been fine, they are just in the habit
of putting the .0 and in Turbo Pascal, there was never a problem with doing
this. 

I am glad we at least have an explanation, but how many others are going to
need to re-discover this issue that should not even be an issue?  
It can still be a problem for people who didn't happen to come across this.
I didn't expect it to be an issue.  While compiling with -CF64 or using
{$MINFPCONSTPREC 64}  fixes it for programs that use doubles, there is no
good solution I can find for programs that use extended, because you can't
put 80 into either of those.  So for extended programs the only solution I
can think of at the moment is to go through the WHOLE thing and replace all
the x.0's with x, which I have started doing, but it's a tedious chore.

I appreciate the discussion here, because I had noticed inaccuracies from
time to time but I was never able to get far enough in to realize this is what was
happening.   It's very frustrating indeed and I think if something can be
done to save others this frustration and unexpected behavior, it would be
helpful.

James

-Original Message-
From: fpc-pascal  On Behalf Of
Thomas Kurz via fpc-pascal
Sent: Tuesday, February 6, 2024 6:59 AM
To: 'FPC-Pascal users discussions' 
Cc: Thomas Kurz 
Subject: Re: [fpc-pascal] Floating point question

I'd like to apologize, because my intention hasn't been to raise controversial
discussions. I'm very thankful for the explanation. From the beginning, I
knew that the error was on my side, but I didn't know *what* I'm doing
wrong.

Again, thanks for helping.

Kind regards,
Thomas



- Original Message -
From: James Richters via fpc-pascal 
To: 'FPC-Pascal users discussions' 
Sent: Sunday, February 4, 2024, 18:25:39
Subject: [fpc-pascal] Floating point question

I agree with Adriaan 100%
 
"New behaviour: floating point constants are now considered to be of the
lowest precision which doesn't cause data loss"

We are getting data loss, so it's doing it WRONG.

So we are all living with a stupid way of doing things so some Delphi code
won't have warnings?

Who came up with this???

The old way was CORRECT,   instead of changing it for everyone making it
wrong for most users, a compiler directive should have been needed to get
rid of the warnings, or ONLY applied in Mode Delphi.  Not to make everything
incorrect for everyone unless you add a directive. The problem with this is
that no one is expecting to need to add a directive to do things right. 

Consider this:
 
Var
  MyVariable : Extended;

MyVariable := 8427 + 33 / 1440.0;

Since I am storing the result in an Extended, I DO NOT EXPECT the 33/1440 to
be a SINGLE, that is NUTS!!
I expect it to be all done in Extended. Why would anyone expect the contents
of MyVariable to be butchered by storing the 33/1440 in single precision.

In other words
I expect the result of these both to be the same:

program TESTDBL1 ;

Var
AA : Extended;
BB : Extended;
CC : Extended;
DD : Extended;
EE : Extended;

begin
   AA := 8427;
   BB := 33;
   CC := 1440.0;
   DD := AA+BB/CC;
   EE := 8427+33/1440.0;
   WRITELN ( 'DD =' , DD : 20 : 20 ) ;
   WRITELN ( 'EE =' , EE : 20 : 20 ) ;
end.

But they are NOT
DD =8427.022916625000
EE =8427.022460937500

EE is WRONG and can never be considered right.   Why would ANY user with the
code above expect that the 33/1440 would be done as a single, thus causing a
loss of precision. 

Again:
"New behaviour: floating point constants are now considered to be of the
lowest precision which doesn't cause data loss"

This was NOT done in the lowest precision which doesn't cause data loss.. we
lost data   We are no longer Extended precision, anything at all we use
EE for is WRONG.

Re: [fpc-pascal] Floating point question

2024-02-06 Thread Thomas Kurz via fpc-pascal
I'd like to apologize, because my intention hasn't been to raise controversial 
discussions. I'm very thankful for the explanation. From the beginning, I 
knew that the error was on my side, but I didn't know *what* I'm doing wrong.

Again, thanks for helping.

Kind regards,
Thomas



- Original Message - 
From: James Richters via fpc-pascal 
To: 'FPC-Pascal users discussions' 
Sent: Sunday, February 4, 2024, 18:25:39
Subject: [fpc-pascal] Floating point question

I agree with Adriaan 100%
 
"New behaviour: floating point constants are now considered to be of the lowest 
precision which doesn't cause data loss"

We are getting data loss, so it's doing it WRONG.

So we are all living with a stupid way of doing things so some Delphi code 
won't have warnings?

Who came up with this???

The old way was CORRECT,   instead of changing it for everyone making it wrong 
for most users, a compiler directive should have been needed to get rid of the 
warnings, or ONLY applied in Mode Delphi.  Not to make everything incorrect for 
everyone unless you add a directive. The problem with this is that no one is 
expecting to need to add a directive to do things right. 

Consider this:
 
Var
  MyVariable : Extended;

MyVariable := 8427 + 33 / 1440.0;

Since I am storing the result in an Extended, I DO NOT EXPECT the 33/1440 to be 
a SINGLE, that is NUTS!!
I expect it to be all done in Extended. Why would anyone expect the contents of 
MyVariable to be butchered by storing the 33/1440 in single precision.

In other words
I expect the result of these both to be the same:

program TESTDBL1 ;

Var
AA : Extended;
BB : Extended;
CC : Extended;
DD : Extended;
EE : Extended;

begin
   AA := 8427;
   BB := 33;
   CC := 1440.0;
   DD := AA+BB/CC;
   EE := 8427+33/1440.0;
   WRITELN ( 'DD =' , DD : 20 : 20 ) ;
   WRITELN ( 'EE =' , EE : 20 : 20 ) ;
end.

But they are NOT
DD =8427.022916625000
EE =8427.022460937500

EE is WRONG and can never be considered right.   Why would ANY user with the 
code above expect that the 33/1440 would be done as a single, thus causing a 
loss of precision. 

Again:
"New behaviour: floating point constants are now considered to be of the lowest 
precision which doesn't cause data loss"

This was NOT done in the lowest precision which doesn't cause data loss.. we 
lost data   We are no longer Extended precision, anything at all we use EE 
for is WRONG.

This is CLEARLY WRONG!  The default should be the old way and if you don't like 
the Delphi warnings, you can make a switch to do it this new stupider and WRONG 
way.

I strongly feel this should be reverted, it's just wrong.   This makes no sense 
to me at all.   It's wrong to need to add a compiler directive to do things as 
they are expected by the vast majority to be, the directive should be needed 
for those few who even noticed the warnings in Delphi, and they were just 
warnings, not a substantial reduction in precision. 

James

>But not at the price of loss in precision ! Unless an explicit compiler switch 
>like --fast-math is passed 


___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-06 Thread Thomas Kurz via fpc-pascal
Thank you all

Finally I understand what's going wrong and can take care of that.

I'm now using the "{$MINFPCONSTPREC 64}" and have the correct result. Again, 
thank you for pointing me to that behavior!



- Original Message - 
From: Adriaan van Os via fpc-pascal 
To: FPC-Pascal users discussions 
Sent: Sunday, February 4, 2024, 13:50:48
Subject: [fpc-pascal] Floating point question

Jonas Maebe via fpc-pascal wrote:
> On 03/02/2024 18:42, James Richters via fpc-pascal wrote:
>> Constants are also evaluated wrong,you don’t know what that constant 
>> is going to be used for, so all steps of evaluating a constant MUST be 
>> done in extended by the compiler, or the answer is just wrong.

> See 
> https://wiki.freepascal.org/User_Changes_2.2.0#Floating_point_constants 
> and https://www.freepascal.org/daily/doc/prog/progsu19.html

I think this discussion shows that the 2.2 compiler change was a bad idea (for modes other than Delphi).

Regards,

Adriaan van Os
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-05 Thread James Richters via fpc-pascal
What is the proper way to use $EXCESSPRECISION ?   I tried:
 
program TESTDBL1 ;
{$EXCESSPRECISION ON}
 
Const
TT_Const = 8427 + 33 / 1440.0 ;
SS_Const = 8427 + Double(33 / 1440.0) ;
 
Begin
   WRITELN ( 'TT_Const = 8427 + 33 / 1440.0 ;   =' , 
TT_Const  : 20 : 20 ) ;
   WRITELN ( 'SS_Const = Double(8427 + 33 / 1440.0);=' , 
SS_Const  : 20 : 20 ) ;
end.
 
 
I get 
 
TT_Const = 8427 + 33 / 1440.0 ;   =8427.02246100
SS_Const = Double(8427 + 33 / 1440.0);=8427.0229166671634000
 
I expected them to be both the same. 
 
James
 
-Original Message-
From: fpc-pascal  On Behalf Of James 
Richters via fpc-pascal
Sent: Sunday, February 4, 2024 10:52 AM
To: 'FPC-Pascal users discussions' 
Cc: James Richters 
Subject: Re: [fpc-pascal] Floating point question
 
Hi Jonas,
That’s interesting. Thank you very much for the links!! Not only an 
explanation but a solution. 
The original is how I would expect it to work. If it's for Delphi 
compatibility, why not only do that in Mode Delphi?   If not in mode Delphi, 
who cares if it's compatible?
Delphi is completely wrong to do it this way.
I'm glad there is $EXCESSPRECISION. I am immediately putting that in every 
single program I have, because that is how I always thought it would work, and I do 
have divisions where this can be a problem.
 
James
-Original Message-
From: fpc-pascal <fpc-pascal-boun...@lists.freepascal.org> On Behalf Of Jonas Maebe via fpc-pascal
Sent: Sunday, February 4, 2024 7:21 AM
To: fpc-pascal@lists.freepascal.org
Cc: Jonas Maebe <jo...@freepascal.org>
Subject: Re: [fpc-pascal] Floating point question
 
On 03/02/2024 18:42, James Richters via fpc-pascal wrote:
> Constants are also evaluated wrong,you don’t know what that constant 
> is going to be used for, so all steps of evaluating a constant MUST be 
> done in extended by the compiler, or the answer is just wrong.
 
See
https://wiki.freepascal.org/User_Changes_2.2.0#Floating_point_constants
and https://www.freepascal.org/daily/doc/prog/progsu19.html
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-04 Thread James Richters via fpc-pascal
I got the -CF argument to work... it's not just -CF, it is -CF and then the 
limiting precision...
-CF32 for single, or -CF64 for double,   but it won't take -CF80 so Extended 
still doesn't come out right.

With -CF64 I get better results but it's not completely doing it the old way.
BB = 8427+33/1440.0;  comes out the same as doing:
BB = Extended(8427+Double(33/1440));  which is  8427.0229166678793000

But 
BB = 8427+33/1440; still comes out right:  8427.022916625000

I still can't get  $EXCESSPRECISION   to work.

James

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-04 Thread James Richters via fpc-pascal
So I need to do this?
AA = Extended(8427+Extended(33/Extended(1440.0)));

That seems excessive when   BB = 8427+33/1440.1;   Has no problem

The thing I have an issue with is 
BB = 8427+33/1440.1; is done the way I want it to, and
BB = 8427+33/1440.01;   is done the way I want it to, and
BB = 8427+33/1440;is done the way I want it to, but
BB = 8427+33/1440.0; is done a different way.

To me these should all be done the same way.  And they would have all been done 
the same way before the 2.2 change, that change works for every case except 
when there is a .0.  A .01 at the end is fine, a .001234 at the end is fine; 
it's JUST .0 that's not fine.

This is inconsistent,  why didn't the 1440.1 version reduce to a single?  It 
fits in a single, and I didn't specify it any differently than 1440.0.
This lack of consistency is what's leading me to think it's more of a bug...
Everything I put in the denominator, other than something that ends with .0 
works as I expect, it's ONLY when there is a .0 that things go wrong. 

Why is 1440.0 different that 1440.1? 

Say I have a program with some constants at the top, and the program has been 
working flawlessly for years, and now I change one
Constant from something with a .001 in the denominator to a .000 in the 
denominator,  I know it's lazy, just change the 1 to a 0 and don't delete the 
useless 0's after the decimal point, but now I have all kinds of imprecision, 
but if I would have changed it to .001 instead, it is still fine, and 
anything other than .0 is fine.  Why is .0 special and .1 is not?

Having one single way to do it that causes drastically different results is a 
bug; it should be consistent.  That way when I'm testing I will see that I 
need to cast the denominator as an extended and it will always work the same 
way.   It's this working different ways with nearly the same input that I have 
an issue with.When I learned Turbo Pascal in technical school, no one EVER 
said you need to cast your constants, and it wasn't in the text book either.   

All of my examples above should be processed the same way,  if 1440.1 doesn't 
force single precision, then 1440.0 should not force single precision either. 

James

-Original Message-
From: fpc-pascal  On Behalf Of Jonas 
Maebe via fpc-pascal
Sent: Sunday, February 4, 2024 5:25 PM
To: fpc-pascal@lists.freepascal.org
Cc: Jonas Maebe 
Subject: Re: [fpc-pascal] Floating point question

On 04/02/2024 23:21, James Richters via fpc-pascal wrote:
> Shouldn’t this do all calculations as Extended?
> 
> AA = Extended(8427+33/1440.0);

No, this only tells the compiler to interpret the result as extended.


Jonas
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org 
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-04 Thread Jonas Maebe via fpc-pascal

On 04/02/2024 23:21, James Richters via fpc-pascal wrote:

Shouldn’t this do all calculations as Extended?

AA = Extended(8427+33/1440.0);


No, this only tells the compiler to interpret the result as extended.


Jonas
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-04 Thread James Richters via fpc-pascal
Shouldn’t this do all calculations as Extended?
 
   AA = Extended(8427+33/1440.0);
 
It does NOT
 
Const
   AA = Extended(8427+33/1440.0);
   BB = 8427+33/1440;
   CC = 8427.02291667;
 
 
A_Ext = 8427.022460937500
B_Ext = 8427.022916625000
C_Ext = 8427.022916625000
 
A_Dbl = 8427.022460937500
B_Dbl = 8427.022916668000
C_Dbl = 8427.022916668000
 
A_Sgl = 8427.02246100
B_Sgl = 8427.02246100
C_Sgl = 8427.02246100
 
 
James
 
 
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-04 Thread Ralf Quint via fpc-pascal

On 2/4/2024 12:32 PM, James Richters via fpc-pascal wrote:


>> Not specifying in a program, specially in a strict programming
>> language like Pascal, will always result in implementation depending
>> variations/assumptions.

The problem is, I feel that I DID specify what should be by declaring 
my variable as Extended. And apparently FPC agrees with me, because it 
DOES work the way I expect, except if I put a .0 in my constant 
terms. This is all just a bug if you put .0 after any integers in an 
expression. I just put a better example that shows how it works 
correctly except if you put a .0.


Strangely, upon discovering this, the solution is the opposite of what I 
thought it should be. If all the terms of an expression were reduced to 
the lowest precision possible without losing data, then my 1440.0 
would be reduced from a float to a word, and then the entire problem 
would have gone away, because when I put in 1440 without the .0, there 
is no problem. The .0 is apparently defining it to be a floating point 
and the smallest floating point is a single… but that’s not the 
smallest data structure; the smallest data structure that can be used 
is a word and that would have solved it.


Sorry, but that doesn't make any sense. If you just add the .0, you 
specify a floating point value, ANY floating point value. This is kind 
of obvious in a programming language that has only one type of floating 
point value (yes, they are less common these days than they used to be in 
the "days of old"). But you did not specify WHICH of the possible 
floating point types you are expecting, and there are three different 
ones (single, double, extended). What happens when you assume an 
integer/word/longint by omitting the decimal fraction, that's a 
different discussion.


But I would expect that if you explicitly use a typecast as in 
"extended (1440.0)" the compiler is using an extended floating 
point calculation at that point. Specifying the resulting variable to be 
a specific type is IMHO not implying that ALL calculations of a whole 
expression are performed in that variable's type. If the compiler would 
ignore an explicit type cast of a constant, THEN I would call this a bug.
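
A small sketch of that distinction (a hedged illustration, assuming a target
where Extended is the 80-bit type; the program and variable names are made up
for this example): casting the whole constant expression only widens the
already-folded result, while casting the constant operand itself makes the
constant division happen in extended precision, which is the remedy the
documentation describes.

program CastPlacementDemo;

var
  EE, FF: Extended;
begin
  // Cast around the whole constant expression: the division has already been
  // folded (in single precision here); only the result is widened.
  EE := Extended(8427 + 33 / 1440.0);

  // Cast on the constant operand: one operand of the division is now extended,
  // so the constant division itself is carried out in extended precision.
  FF := 8427 + 33 / Extended(1440.0);

  WriteLn('EE = ', EE: 20: 20);
  WriteLn('FF = ', FF: 20: 20);
end.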



Ralf

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-04 Thread James Richters via fpc-pascal
>> Not specifying in a program, specially in a strict programming language like 
>> Pascal, will always result in implementation depending 
>> variations/assumptions.
 
The problem is, I feel that I DID specify what should be by declaring my 
variable as Extended.   And Apparently FPC agrees with me, because it DOES work 
the way I expect, except if I put a .0 in my constant terms.   This is all just 
a bug if you put .0 after any integers in an expression.  I just put a better 
example that shows how it works correctly except if you put a .0

Strangely, upon discovering this, the solution is opposite what I thought it 
should be.  If all the terms of an expression were reduced to the lowest 
precision possible without losing data, then my 1440.0 would be reduced from a 
float to a word, and then the entire problem would have gone away, because when 
I put in 1440 without the .0, there is no problem.  The .0 is apparently 
defining it to be a floating point and the smallest floating point is a single… 
but that’s not the smallest data structure, the smallest data structure that 
can be used is a word and that would have solved it. 
 
James
 
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-04 Thread James Richters via fpc-pascal
Here is a more concise example that illustrates the issue.   For me, being a
human, I see 1440 and 1440.0 as exactly the same thing, but they are not
acting as the same thing, and the 1440.0 is causing all the grief here.
See how it makes a difference whether the .0 is there or not.

Then replace it with 1440.1, and notice how it's no longer an issue; note it's
only a problem with .0: if it's a .1, or anything other than .0, it seems
fine.

program TESTDBL1 ;

Const
   AA = 8427+33/1440.0;
   BB = 8427+33/1440;
   CC = 8427.02291667;   //Windows Calculator
Var
   A_Ext : Extended;
   B_Ext : Extended;
   C_Ext : Extended;
   A_Dbl : Double;
   B_Dbl : Double;
   C_Dbl : Double;
   A_Sgl : Single;
   B_Sgl : Single;
   C_Sgl : Single;

begin
   A_Ext := AA;
   B_Ext := BB;
   C_Ext := CC;
   A_Dbl := AA;
   B_Dbl := BB;
   C_Dbl := CC;
   A_Sgl := AA;
   B_Sgl := BB;
   C_Sgl := CC;

   WRITELN ( 'A_Ext = ',A_Ext: 20 : 20 ) ;
   WRITELN ( 'B_Ext = ',B_Ext: 20 : 20 ) ;
   WRITELN ( 'C_Ext = ',C_Ext: 20 : 20 ) ;
   WRITELN;
   WRITELN ( 'A_Dbl = ',A_Dbl: 20 : 20 ) ;
   WRITELN ( 'B_Dbl = ',B_Dbl: 20 : 20 ) ;
   WRITELN ( 'C_Dbl = ',C_Dbl: 20 : 20 ) ;
   WRITELN;
   WRITELN ( 'A_Sgl = ',A_Sgl: 20 : 20 ) ;
   WRITELN ( 'B_Sgl = ',B_Sgl: 20 : 20 ) ;
   WRITELN ( 'C_Sgl = ',C_Sgl: 20 : 20 ) ;
end.

A_Ext = 8427.022460937500
B_Ext = 8427.022916625000
C_Ext = 8427.022916625000

A_Dbl = 8427.022460937500
B_Dbl = 8427.022916668000
C_Dbl = 8427.022916668000

A_Sgl = 8427.02246100
B_Sgl = 8427.02246100
C_Sgl = 8427.02246100

Notice for Double and Extended they are getting the value for Single for the
division, throwing off the result, but only with 1440.0, not with 1440

With the constants defined as:
Const
   AA = 8427+33/1440.10;
   BB = 8427+33/1440.1;
   CC = 8427.0229150753419901395736407194;   //Windows Calculator

Now notice:

A_Ext = 8427.02291507534198978000
B_Ext = 8427.02291507534198978000
C_Ext = 8427.02291507534198978000

A_Dbl = 8427.0229150753421000
B_Dbl = 8427.0229150753421000
C_Dbl = 8427.0229150753421000

A_Sgl = 8427.02246100
B_Sgl = 8427.02246100
C_Sgl = 8427.02246100

All versions of Extended, Double, and Single are identical. As expected.
Everything I try works, except for .0

James


___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-04 Thread Ralf Quint via fpc-pascal

On 2/4/2024 11:15 AM, James Richters via fpc-pascal wrote:

I understand that the result depends on the variables and expressions,
The problem with constants used in an expression is that some determination
needs to be made because it's not specified.
Since it's not specified, then I think it should be implied to be the same
as the variable it would be stored in, if that determination cannot be made,
then maximum precision should be used.
I don't think that this "implied" applies, in my experience, to pretty 
much any of the programming languages that I have used in the last 47 years 
that offer various forms of floating point formats.
Not specifying in a program, specially in a strict programming language 
like Pascal, will always result in implementation depending 
variations/assumptions.


And if those variations are not to your liking, then simply specify 
(type cast) those constants to more precisely get the result you expect. 
This is Pascal after all, not Python or other over-ooped programming 
language that is making assumptions about your code all the time...



Ralf

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-04 Thread James Richters via fpc-pascal
>No need to yell.
Yes, that's true, I apologize, I did not mean to come across that way.  

>This is how reasonable programing languages work. The result type depends
only on the type of the involved variables/expressions. *Never* the variable
it is assigned to.

If it's never defined by the variable it's assigned to, then maximum
precision should be used, because you don't know how it will be used.

I understand that the result depends on the variables and expressions.
The problem with constants used in an expression is that some determination
needs to be made because it's not specified.
Since it's not specified, then I think it should be implied to be the same
as the variable it would be stored in, if that determination cannot be made,
then maximum precision should be used.

In fact sometimes it does use the precision of the variable being assigned
to; look at this:

program TESTDBL1 ;

Var
AA : Extended;
BB : Extended;
CC : Extended;
DD : Extended;
EE : Extended;
FF : Extended;
GG : Double;
HH : Single;

begin
   AA := 8427;
   BB := 33;
   CC := 1440.0;
   DD := AA+BB/CC;
   EE := 8427+33/1440.0;
   FF := 8427+33/1440;
   GG := 8427+33/1440;
   HH := 8427+33/1440;
   WRITELN ( 'DD =' , DD : 20 : 20 ) ;
   WRITELN ( 'EE =' , EE : 20 : 20 ) ;
   WRITELN ( 'FF =' , FF : 20 : 20 ) ;
   WRITELN ( 'GG =' , GG : 20 : 20 ) ;
   WRITELN ( 'HH =' , HH : 20 : 20 ) ;
end.

DD =8427.022916625000
EE =8427.022460937500
FF =8427.022916625000
GG =8427.022916668000
HH =8427.02246100

For FF, GG, and HH, I did not put the .0, so it must have made them all
integers... but now the division is carried out in the way that makes sense
for the variable it's being stored in,  it's only when I force the 1440 to
be a float by putting the .0 on that it gets it wrong. 
But if it was supposed to be 1440.1 then I couldn't leave the .1 off and
maybe I still have the issue but no.. I  DON'T have it... it's only
getting it wrong if it's 1440.0

Look at THIS:


program TESTDBL1 ;

Var
AA : Extended;
BB : Extended;
CC : Extended;
DD : Extended;
EE : Extended;
FF : Extended;
GG : Double;
HH : Single;

begin
   AA := 8427;
   BB := 33;
   CC := 1440.0;
   DD := AA+BB/CC;
   EE := Extended(8427+Extended(33/Extended(1440.1)));
   FF := 8427+33/1440.1;
   GG := 8427+33/1440.1;
   HH := 8427+33/1440.1;
   WRITELN ( 'DD =' , DD : 20 : 20 ) ;
   WRITELN ( 'EE =' , EE : 20 : 20 ) ;
   WRITELN ( 'FF =' , FF : 20 : 20 ) ;
   WRITELN ( 'GG =' , GG : 20 : 20 ) ;
   WRITELN ( 'HH =' , HH : 20 : 20 ) ;
end.

DD =8427.022916625000
EE =8427.02291507534198978000
FF =8427.02291507534198978000
GG =8427.0229150753421000
HH =8427.02246100

Just FYI, windows calculator gives 8427.0229150753419901395736407194 so I
expect to get the following for this:
FF =8427.02291507534198978000
GG =8427.0229150753421000
HH =8427.02246100
And YES that is what I get.

Things are only broken if I put 1440.0; there is a bug in this condition.

   FF := 8427+33/1440.0;
   GG := 8427+33/1440.0;
   HH := 8427+33/1440.0;

Windows calculator gets: 8,427.02291667
I expect to get:
FF =8427.022916625000
GG =8427.022916668000
HH =8427.02246100

But no, I get:
FF =8427.022460937500
GG =8427.022460937500
HH =8427.02246100


If I leave off the .0 then it's correct:
   FF := 8427+33/1440;
   GG := 8427+33/1440;
   HH := 8427+33/1440;

FF =8427.022916625000
GG =8427.022916668000
HH =8427.02246100


I feel much better about it all now.. I think it's SUPPOSED to work the way
I expect, but there is a bug if you put something like 1440.0 in your
constant expression.

Sorry again for my earlier tone.

James

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-04 Thread Florian Klämpfl via fpc-pascal

On 04.02.2024 at 18:54, James Richters wrote:

I can understand storing the constant in the lowest precision that doesn't 
cause data loss, thus making thing more efficient, but the actual calculation 
done by the compiler should be done at maximum precision and only the final 
result stored in the lowest required precision.
This calculation is only carried out by the compiler once, during compilation, 
not by the executing program.

The calculation should be done completely using extended, and if the result of 
the entire calculation is a 2, then store it as a byte, if it's 1.23 then store 
it as a single, and if it's 1.2324241511343 store it as Extended.   The problem 
is the 33/1440 is being stored as a single and IS LOSING DATA, the division 
should have been detected and therefore the lowest precision that doesn't cause 
data loss is NOT a single.

In all cases in our example, we should not be getting different values for the 
same constant.   The implementation is not the right way of doing it.  It's not 
doing what is required by the statement:

"New behaviour: floating point constants are now considered to be of the lowest 
precision which doesn't cause data loss"

The "New behaviour"  has a DEFINATE bug in it, because we are experiencing data 
loss.


You understand the statement wrong: it says nothing about
operations/expressions, only constants.



The correct way to implement this is to have the compiler ALWAYS evaluate 
everything at highest precision, THEN after all computations are complete 
evaluate the final answer to store in the constant and reduce the precision of 
only the constant if it's justified.   If it was done this way then it would 
always give the expected result.


This is plainly wrong, simply because it would mean that all calculations with variables would also have to be
carried out at the highest precision. And this is not how things are supposed to be done in any reasonable
programming language. The legacy x87 FPU doing so already causes enough headaches.


___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-04 Thread James Richters via fpc-pascal
I can understand storing the constant in the lowest precision that doesn't 
cause data loss, thus making things more efficient, but the actual calculation 
done by the compiler should be done at maximum precision and only the final 
result stored in the lowest required precision.
This calculation is only carried out by the compiler once, during compilation, 
not by the executing program.

The calculation should be done completely using extended, and if the result of 
the entire calculation is a 2, then store it as a byte, if it's 1.23 then store 
it as a single, and if it's 1.2324241511343 store it as Extended.   The problem 
is the 33/1440 is being stored as a single and IS LOSING DATA, the division 
should have been detected and therefore the lowest precision that doesn't cause 
data loss is NOT a single.   

In all cases in our example, we should not be getting different values for the 
same constant.   The implementation is not the right way of doing it.  It's not 
doing what is required by the statement:

"New behaviour: floating point constants are now considered to be of the lowest 
precision which doesn't cause data loss"

The "New behaviour"  has a DEFINATE bug in it, because we are experiencing data 
loss. 

The correct way to implement this is to have the compiler ALWAYS evaluate 
everything at highest precision, THEN after all computations are complete 
evaluate the final answer to store in the constant and reduce the precision of 
only the constant if it's justified.   If it was done this way then it would 
always give the expected result.

James


-Original Message-
From: fpc-pascal  On Behalf Of Florian 
Klämpfl via fpc-pascal
Sent: Sunday, February 4, 2024 8:20 AM
To: FPC-Pascal users discussions 
Cc: Florian Klämpfl 
Subject: Re: [fpc-pascal] Floating point question



> Am 04.02.2024 um 13:50 schrieb Adriaan van Os via fpc-pascal 
> :
> 
> Jonas Maebe via fpc-pascal wrote:
>> On 03/02/2024 18:42, James Richters via fpc-pascal wrote:
>>> Constants are also evaluated wrong,you don’t know what that constant is 
>>> going to be used for, so all steps of evaluating a constant MUST be done in 
>>> extended by the compiler, or the answer is just wrong.
>> See 
>> https://wiki.freepascal.org/User_Changes_2.2.0#Floating_point_constan
>> ts and https://www.freepascal.org/daily/doc/prog/progsu19.html
> 
> I think this discussion shows that the 2.2 compiler change was a bad idea 
> (for modes other than Delphi).

The result with the old code was that all floating point operations involving 
constants were carried out in full precision (normally double or extended) 
which is something unexpected and results in slow code.

Example:

const
  c2 = 2;
var
  s1, s2 : single;

…
s1:=s2/c2;

generated an expensive double division for no good reason.

OTOH:

const
  c2 : single = 2;
var
  s1, s2 : single;

…
s1:=s2/c2;

generated a single division.


 There is still the -CF option as a workaround to get the old behavior.

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org 
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-04 Thread Florian Klämpfl via fpc-pascal

Am 04.02.2024 um 18:25 schrieb James Richters via fpc-pascal:

I agree with Adriaan 100%
  
"New behaviour: floating point constants are now considered to be of the lowest precision which doesn't cause data loss"


We are getting data loss, so it's doing it WRONG.

So we are all living with a stupid way of doing things so some Delphi code 
won't have warnings?

Who came up with this???

The old way was CORRECT. Instead of changing it for everyone and making it wrong 
for most users, a compiler directive should have been needed to get rid of the 
warnings, or it should ONLY have applied in Mode Delphi, not making everything incorrect for 
everyone unless you add a directive. The problem with this is that no one 
expects to need to add a directive to do things right.

Consider this:
  
Var

   MyVariable : Extended;

MyVariable := 8427 + 33 / 1440.0;

Since I am storing the result in an Extended, I DO NOT EXPECT the 33/1440 to be 
a SINGLE, that is NUTS!!


No need to yell.

This is how reasonable programming languages work. The result type depends only on the type of the involved
variables/expressions. *Never* the variable it is assigned to.
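
A small sketch of what that rule means in practice (variable names are purely
illustrative, and the folding of the constant operand follows the post-2.2.0
rules described in this thread): the right-hand side is evaluated from its
operand types alone, and only the finished result is converted to the target
type:

program OPERANDTYPES ;

Var
S  : Single;
SE : Extended;
E  : Extended;

begin
   S := 33;
   { S / 1440 is a Single division: the operand types alone decide, so the }
   { quotient is computed in Single precision and only then widened to     }
   { Extended for the assignment.                                          }
   E := S / 1440;
   WRITELN ( 'E =' , E : 20 : 20 ) ;
   { Widening an operand first changes the computation itself: }
   SE := S;
   E := SE / 1440;
   WRITELN ( 'E =' , E : 20 : 20 ) ;
end.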


___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-04 Thread James Richters via fpc-pascal
How do I get -CF to work with the Text IDE?
I put -CF and just CF in "Additional Compiler Args"; either way I get:

Error: Illegal parameter: -CF   

James


___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-04 Thread James Richters via fpc-pascal
I agree with Adriaan 100%
 
"New behaviour: floating point constants are now considered to be of the lowest 
precision which doesn't cause data loss"

We are getting data loss, so it's doing it WRONG.

So we are all living with a stupid way of doing things so some Delphi code 
won't have warnings?

Who came up with this???

The old way was CORRECT. Instead of changing it for everyone and making it wrong 
for most users, a compiler directive should have been needed to get rid of the 
warnings, or it should ONLY have applied in Mode Delphi, not making everything incorrect for 
everyone unless you add a directive. The problem with this is that no one 
expects to need to add a directive to do things right. 

Consider this:
 
Var
  MyVariable : Extended;

MyVariable := 8427 + 33 / 1440.0;

Since I am storing the result in an Extended, I DO NOT EXPECT the 33/1440 to be 
a SINGLE, that is NUTS!!
I expect it to be all done in Extended. Why would anyone expect the contents of 
MyVariable to be butchered by storing the 33/1440 in single precision?

In other words
I expect the result of these both to be the same:

program TESTDBL1 ;

Var
AA : Extended;
BB : Extended;
CC : Extended;
DD : Extended;
EE : Extended;

begin
   AA := 8427;
   BB := 33;
   CC := 1440.0;
   DD := AA+BB/CC;
   EE := 8427+33/1440.0;
   WRITELN ( 'DD =' , DD : 20 : 20 ) ;
   WRITELN ( 'EE =' , EE : 20 : 20 ) ;
end.

But they are NOT
DD =8427.022916625000
EE =8427.022460937500

EE is WRONG and can never be considered right.   Why would ANY user with the 
code above expect that the 33/1440 would be done as a single, thus causing a 
loss of precision? 

Again:
"New behaviour: floating point constants are now considered to be of the lowest 
precision which doesn't cause data loss"

This was NOT done in the lowest precision which doesn't cause data loss... we 
lost data. We are no longer at Extended precision; anything at all we use EE 
for is WRONG.

This is CLEARLY WRONG!  The default should be the old way and if you don't like 
the Delphi warnings, you can make a switch to do it this new stupider and WRONG 
way.

I strongly feel this should be reverted; it's just wrong.   This makes no sense 
to me at all.   It's wrong to need to add a compiler directive to do things as 
the vast majority expects them to be; the directive should be needed 
for those few who even noticed the warnings in Delphi, and they were just 
warnings, not a substantial reduction in precision. 

James

>But not at the price of loss in precision ! Unless an explicit compiler switch 
>like --fast-math is passed 


___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-04 Thread James Richters via fpc-pascal
Hi Jonas, 
That’s interesting, thank you very much for the links!! Not only an 
explanation but a solution. 
The original is how I would expect it to work. If it's for Delphi 
compatibility, why not only do that when in Mode Delphi?   If not in Mode Delphi, 
who cares if it's compatible?
Delphi is completely wrong to do it this way.
I'm glad there is $EXCESSPRECISION; I am immediately putting that in every 
single program I have, because that is how I always thought it would work, and I do 
have divisions where this can be a problem.

James
-Original Message-
From: fpc-pascal  On Behalf Of Jonas 
Maebe via fpc-pascal
Sent: Sunday, February 4, 2024 7:21 AM
To: fpc-pascal@lists.freepascal.org
Cc: Jonas Maebe 
Subject: Re: [fpc-pascal] Floating point question

On 03/02/2024 18:42, James Richters via fpc-pascal wrote:
> Constants are also evaluated wrong,you don’t know what that constant 
> is going to be used for, so all steps of evaluating a constant MUST be 
> done in extended by the compiler, or the answer is just wrong.

See
https://wiki.freepascal.org/User_Changes_2.2.0#Floating_point_constants
and https://www.freepascal.org/daily/doc/prog/progsu19.html


Jonas
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org 
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-04 Thread Adriaan van Os via fpc-pascal

Jonas Maebe via fpc-pascal wrote:

On 04/02/2024 13:50, Adriaan van Os via fpc-pascal wrote:

Jonas Maebe via fpc-pascal wrote:

On 03/02/2024 18:42, James Richters via fpc-pascal wrote:
Constants are also evaluated wrong,you don’t know what that constant 
is going to be used for, so all steps of evaluating a constant MUST 
be done in extended by the compiler, or the answer is just wrong.


See 
https://wiki.freepascal.org/User_Changes_2.2.0#Floating_point_constants 
and https://www.freepascal.org/daily/doc/prog/progsu19.html


I think this discussion shows that the 2.2 compiler change was a bad 
idea (for modes other than Delphi).


This is not just about Delphi. It's also about being able to perform 
floating point calculations efficiently and getting rid of useless 
warnings.


But not at the price of a loss in precision! Unless an explicit compiler switch like --fast-math is 
passed, but then it is the responsibility of the 
programmer, not of the compiler.


Regards,

Adriaan van Os
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-04 Thread Florian Klämpfl via fpc-pascal


> Am 04.02.2024 um 13:50 schrieb Adriaan van Os via fpc-pascal 
> :
> 
> Jonas Maebe via fpc-pascal wrote:
>> On 03/02/2024 18:42, James Richters via fpc-pascal wrote:
>>> Constants are also evaluated wrong,you don’t know what that constant is 
>>> going to be used for, so all steps of evaluating a constant MUST be done in 
>>> extended by the compiler, or the answer is just wrong.
>> See https://wiki.freepascal.org/User_Changes_2.2.0#Floating_point_constants 
>> and https://www.freepascal.org/daily/doc/prog/progsu19.html
> 
> I think this discussion shows that the 2.2 compiler change was a bad idea 
> (for modes other than Delphi).

The result with the old code was that all floating point operations involving 
constants were carried out in full precision (normally double or extended) 
which is something unexpected and results in slow code.

Example:

const
  c2 = 2;
var
  s1, s2 : single;

…
s1:=s2/c2;

generated an expensive double division for no good reason.

OTOH:

const
  c2 : single = 2;
var
  s1, s2 : single;

…
s1:=s2/c2;

generated a single division.


 There is still the -CF option as a workaround to get the old behavior.
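
(A hedged aside, prompted by the "Error: Illegal parameter: -CF" report
elsewhere in this thread: as far as I know the switch does not stand alone but
takes a minimal-constant-precision argument, along these lines:

fpc -CF64 TESTDBL1.pas

where 64 requests at least Double precision for floating point constants. The
exact accepted values are best checked against "fpc -h" or the compiler
documentation.)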

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-04 Thread Jonas Maebe via fpc-pascal

On 04/02/2024 13:50, Adriaan van Os via fpc-pascal wrote:

Jonas Maebe via fpc-pascal wrote:

On 03/02/2024 18:42, James Richters via fpc-pascal wrote:
Constants are also evaluated wrong,you don’t know what that constant 
is going to be used for, so all steps of evaluating a constant MUST 
be done in extended by the compiler, or the answer is just wrong.


See 
https://wiki.freepascal.org/User_Changes_2.2.0#Floating_point_constants and https://www.freepascal.org/daily/doc/prog/progsu19.html


I think this discussion shows that the 2.2 compiler change was a bad 
idea (for modes other than Delphi).


This is not just about Delphi. It's also about being able to perform 
floating point calculations efficiently and getting rid of useless warnings.



Jonas



___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-04 Thread Adriaan van Os via fpc-pascal

Jonas Maebe via fpc-pascal wrote:

On 03/02/2024 18:42, James Richters via fpc-pascal wrote:
Constants are also evaluated wrong,you don’t know what that constant 
is going to be used for, so all steps of evaluating a constant MUST be 
done in extended by the compiler, or the answer is just wrong.


See 
https://wiki.freepascal.org/User_Changes_2.2.0#Floating_point_constants 
and https://www.freepascal.org/daily/doc/prog/progsu19.html


I think this discussion shows that the 2.2 compiler change was a bad idea (for modes other than 
Delphi).


Regards,

Adriaan van Os
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-04 Thread Jonas Maebe via fpc-pascal

On 03/02/2024 18:42, James Richters via fpc-pascal wrote:
Constants are also evaluated wrong,you don’t know what that constant is 
going to be used for, so all steps of evaluating a constant MUST be done 
in extended by the compiler, or the answer is just wrong.


See 
https://wiki.freepascal.org/User_Changes_2.2.0#Floating_point_constants 
and https://www.freepascal.org/daily/doc/prog/progsu19.html



Jonas
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-04 Thread James Richters via fpc-pascal
 in extended and only the final answer be reduced to fit into a smaller 
variable. 
If this was the case, then the result of ALL would be 8427.0229…   
This may be debatable, but certainly when the result is to be stored in a 
double, then all operations calculated by the compiler should also be stored in 
doubles; I don't see how anything else could be argued to be correct.
This is not the case at all, or DD, EE, FF, and GG would all be 8427.0229…  but 
only  FF is because I explicitly stated the result of the division is to be a 
double.
 
When the program executes and does math, as in the example of BB and CC, and II, 
it’s always correct, but when the compiler evaluates it, it’s doing it wrong, 
storing portions of the calculation in a single even if the final result is 
a double. 
The compiler should ALWAYS use the highest precision possible, because the result can 
be stored in reduced-precision variables, but once it’s been butchered by low 
precision, it can’t be fixed. 
 
Constants are also evaluated wrong,  you don’t know what that constant is going 
to be used for, so all steps of evaluating a constant MUST be done in extended 
by the compiler, or the answer is just wrong. 
TT_Const and SS_Const should have been the same, so that when assigned to 
double variables TT_Double and SS_Double they would also be the same.   
TT_Double and TT_Const are wrong.
 
I think this is a legitimate bug you have discovered.  I shouldn’t have to cast 
the division, it’s not what any user would expect to need to do. 
 
My tests were done on a Windows 10 64 bit machine with FPC Win32.
■ Free Pascal IDE Version 1.0.12 [2023/06/26]
■ Compiler Version 3.3.1-12875-gadf843196a


James
 
-Original Message-
From: fpc-pascal  On Behalf Of Thomas 
Kurz via fpc-pascal
Sent: Friday, February 2, 2024 4:37 PM
To: FPC-Pascal users discussions 
Cc: Thomas Kurz 
Subject: Re: [fpc-pascal] Floating point question
 
Well, 8427.0229, that's what I want.
 
But what I get is 8427.0224
 
And that's what I don't understand.
 
 
 
- Original Message -
From: Bernd Oppolzer via fpc-pascal < <mailto:fpc-pascal@lists.freepascal.org> 
fpc-pascal@lists.freepascal.org>
To: Bart via fpc-pascal < <mailto:fpc-pascal@lists.freepascal.org> 
fpc-pascal@lists.freepascal.org>
Sent: Sunday, January 28, 2024, 10:13:07
Subject: [fpc-pascal] Floating point question
 
To simplify the problem further:
 
the addition of 12 /24.0 and the subtraction of 0.5 should be removed, IMO, 
because both can be done with floats without loss of precision (0.5 can be 
represented exactly in float).
 
So the problem can be reproduced IMO with this small Pascal program:
 
program TESTDBL1 ;
 
var TT : REAL ;
 
begin (* HAUPTPROGRAMM *)
   TT := 8427 + 33 / 1440.0 ;
   WRITELN ( 'tt=' , TT : 20 : 20 ) ;
end (* HAUPTPROGRAMM *) .
 
With my compiler, REAL is always DOUBLE, and the computation is carried out by 
a P-Code interpreter (or call it just-in-time compiler - much like Java), which 
is written in C.
 
The result is:
 
tt=8427.022916667879
 
and it is the same, no matter if I use this simplified computation or the 
original
 
tt := (8427 - 0.5) + (12 / 24.0) + (33 / 1440.0);
 
My value is between the two other values:
 
tt=8427.022916668000
tt=8427.022916667879
ee=8427.022916625000
 
The problem now is:
 
the printout of my value suggests an accuracy which in fact is not there, 
because with double, you can trust only the first 16 decimal digits ... after 
that, all is speculative a.k.a. wrong. That's why FPC IMO rounds at this place, 
prints the 8, and then only zeroes.
 
The extended format internally has more hex digits and therefore can reliably 
show more decimal digits.
But the last two are wrong, too (the exact value is 6... period).
 
HTH,
kind regards
 
Bernd
 
 
 
Am 27.01.2024 um 22:53 schrieb Bart via fpc-pascal:
> On Sat, Jan 27, 2024 at 6:23 PM Thomas Kurz via fpc-pascal 
> < <mailto:fpc-pascal@lists.freepascal.org> fpc-pascal@lists.freepascal.org>  
> wrote:
 
>> Hmmm... I don't think I can understand that. If the precision of "double" 
>> were that bad, it wouldn't be possible to store dates up to a precision of 
>> milliseconds in a TDateTime. I have a discrepancy of 40 seconds here.
> Consider the following simplified program:
> 
> var
>tt: double;
>ee: extended;
 
> begin
>tt := (8427 - Double(0.5)) + (12/ Double(24.0)) +
> (33/Double(1440.0)) + (0/Double(86400.0));
>ee := (8427 - Extended(0.5)) + (12/ Extended(24.0)) +
> (33/Extended(1440.0)) + (0/Extended(86400.0));
>writeln('tt=',tt:20:20);
>writeln('ee=',ee:20:20);
> end.
> ===
 
> Now see what it outputs:
 
> C:\Users\Bart\LazarusProjecten\ConsoleProjecten>fpc test.pas Free 
> Pascal Compiler version 3.2.2 [2021/05/15] for i386 ...
 
> C:\Users\Bart\LazarusProjecten\ConsoleProjecten>test

Re: [fpc-pascal] Floating point question

2024-02-02 Thread Thomas Kurz via fpc-pascal
Well, 8427.0229, that's what I want.

But what I get is 8427.0224

And that's what I don't understand.



- Original Message - 
From: Bernd Oppolzer via fpc-pascal 
To: Bart via fpc-pascal 
Sent: Sunday, January 28, 2024, 10:13:07
Subject: [fpc-pascal] Floating point question

To simplify the problem further:

the addition of 12 /24.0 and the subtraction of 0.5 should be removed, IMO,
because both can be done with floats without loss of precision (0.5 can 
be represented exactly in float).

So the problem can be reproduced IMO with this small Pascal program:

program TESTDBL1 ;

var TT : REAL ;

begin (* HAUPTPROGRAMM *)
   TT := 8427 + 33 / 1440.0 ;
   WRITELN ( 'tt=' , TT : 20 : 20 ) ;
end (* HAUPTPROGRAMM *) .

With my compiler, REAL is always DOUBLE, and the computation is carried 
out by a P-Code interpreter
(or call it just-in-time compiler - much like Java), which is written in C.

The result is:

tt=8427.022916667879

and it is the same, no matter if I use this simplified computation or 
the original

tt := (8427 - 0.5) + (12 / 24.0) + (33 / 1440.0);

My value is between the two other values:

tt=8427.022916668000
tt=8427.022916667879
ee=8427.022916625000

The problem now is:

the printout of my value suggests an accuracy which in fact is not there, 
because with double, you can trust
only the first 16 decimal digits ... after that, all is speculative 
a.k.a. wrong. That's why FPC IMO rounds at this
place, prints the 8, and then only zeroes.

The extended format internally has more hex digits and therefore can 
reliably show more decimal digits.
But the last two are wrong, too (the exact value is 6... period).

HTH,
kind regards

Bernd



Am 27.01.2024 um 22:53 schrieb Bart via fpc-pascal:
> On Sat, Jan 27, 2024 at 6:23 PM Thomas Kurz via fpc-pascal
>   wrote:

>> Hmmm... I don't think I can understand that. If the precision of "double" 
>> were that bad, it wouldn't be possible to store dates up to a precision of 
>> milliseconds in a TDateTime. I have a discrepancy of 40 seconds here.
> Consider the following simplified program:
> 
> var
>tt: double;
>ee: extended;

> begin
>tt := (8427 - Double(0.5)) + (12/ Double(24.0)) +
> (33/Double(1440.0)) + (0/Double(86400.0));
>ee := (8427 - Extended(0.5)) + (12/ Extended(24.0)) +
> (33/Extended(1440.0)) + (0/Extended(86400.0));
>writeln('tt=',tt:20:20);
>writeln('ee=',ee:20:20);
> end.
> ===

> Now see what it outputs:

> C:\Users\Bart\LazarusProjecten\ConsoleProjecten>fpc test.pas
> Free Pascal Compiler version 3.2.2 [2021/05/15] for i386
> ...

> C:\Users\Bart\LazarusProjecten\ConsoleProjecten>test
> tt=8427.022916668000
> ee=8427.022916625000

> C:\Users\Bart\LazarusProjecten\ConsoleProjecten>fpc -Px86_64 test.pas
> Free Pascal Compiler version 3.2.2 [2021/05/15] for x86_64
> ..

> C:\Users\Bart\LazarusProjecten\ConsoleProjecten>test
> tt=8427.022916668000
> ee=8427.022916668000

> On Win64 both values are the same, because there Extended = Double.
> On Win32 the Extended version is a bit closer to the exact solution:
> 8427 - 1/2 + 1/2 + 33/1440 = 8427 + 11/480

> Simple as that.

> Bart
> ___
> fpc-pascal maillist  -fpc-pascal@lists.freepascal.org
> https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-01-28 Thread Bart via fpc-pascal
On Sun, Jan 28, 2024 at 10:21 AM Bernd Oppolzer via fpc-pascal
 wrote:


> The problem now is:
>
> the printout of my value suggests an accuracy which in fact is not there,

Which is because I was too lazy to cater for that.
Notice the :20:20 in the writeln statement: I tell the compiler how
many digits to use there and it just does as I tell it to.

Floating point calculations will always have rounding errors.
Notice that when a calculation can be off by x percent and you do y
calculations then the end result may be off by x*y percent (worst case
scenario).
If you need infinite precision, there are libraries providing that,
they're just not provided with fpc.
E.g. the (old) Windows calculator gives
8427.02291667 as the result.

E.g. you can do the whole calculation using fractions and then convert
the end result to floating point.
This will give:
8427.022916668 using Double (64-bit)
8427.022916625 using Extended (80-bit)
These are the same as in my previous example, indicating that most of
the "inaccuracy" is in the 11/480 part.

Conclusion:
It is **not** a bug, it is as expected.

B.t.w. I tested with Delphi 7 and there the accuracy is even one digit
less precise than in fpc (it goes 1 digit earlier "off").

-- 
Bart
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-01-28 Thread Bernd Oppolzer via fpc-pascal

To simplify the problem further:

the addition of 12 /24.0 and the subtraction of 0.5 should be removed, IMO,
because both can be done with floats without loss of precision (0.5 can 
be represented exactly in float).
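
(In plain arithmetic, the reason for that distinction:

0.5 = 1/2 = 2^(-1), an exact binary fraction, whereas
33/1440 = 11/480 = 11/(2^5 * 15),

and because the reduced denominator still contains the factor 15, 33/1440 has
no finite binary expansion and must be rounded at whatever precision the
division is carried out in.)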


So the problem can be reproduced IMO with this small Pascal program:

program TESTDBL1 ;

var TT : REAL ;

begin (* HAUPTPROGRAMM *)
  TT := 8427 + 33 / 1440.0 ;
  WRITELN ( 'tt=' , TT : 20 : 20 ) ;
end (* HAUPTPROGRAMM *) .

With my compiler, REAL is always DOUBLE, and the computation is carried 
out by a P-Code interpreter

(or call it just-in-time compiler - much like Java), which is written in C.

The result is:

tt=8427.022916667879

and it is the same, no matter if I use this simplified computation or 
the original


tt := (8427 - 0.5) + (12 / 24.0) + (33 / 1440.0);

My value is between the two other values:

tt=8427.022916668000
tt=8427.022916667879
ee=8427.022916625000

The problem now is:

the printout of my value suggests an accuracy which in fact is not there, 
because with double, you can trust
only the first 16 decimal digits ... after that, all is speculative 
a.k.a. wrong. That's why FPC IMO rounds at this

place, prints the 8, and then only zeroes.

The extended format internally has more hex digits and therefore can 
reliably show more decimal digits.

But the last two are wrong, too (the exact value is 6... period).

HTH,
kind regards

Bernd



Am 27.01.2024 um 22:53 schrieb Bart via fpc-pascal:

On Sat, Jan 27, 2024 at 6:23 PM Thomas Kurz via fpc-pascal
  wrote:


Hmmm... I don't think I can understand that. If the precision of "double" were 
that bad, it wouldn't be possible to store dates up to a precision of milliseconds in a 
TDateTime. I have a discrepancy of 40 seconds here.

Consider the following simplified program:

var
   tt: double;
   ee: extended;

begin
   tt := (8427 - Double(0.5)) + (12/ Double(24.0)) +
(33/Double(1440.0)) + (0/Double(86400.0));
   ee := (8427 - Extended(0.5)) + (12/ Extended(24.0)) +
(33/Extended(1440.0)) + (0/Extended(86400.0));
   writeln('tt=',tt:20:20);
   writeln('ee=',ee:20:20);
end.
===

Now see what it outputs:

C:\Users\Bart\LazarusProjecten\ConsoleProjecten>fpc test.pas
Free Pascal Compiler version 3.2.2 [2021/05/15] for i386
...

C:\Users\Bart\LazarusProjecten\ConsoleProjecten>test
tt=8427.022916668000
ee=8427.022916625000

C:\Users\Bart\LazarusProjecten\ConsoleProjecten>fpc -Px86_64 test.pas
Free Pascal Compiler version 3.2.2 [2021/05/15] for x86_64
..

C:\Users\Bart\LazarusProjecten\ConsoleProjecten>test
tt=8427.022916668000
ee=8427.022916668000

On Win64 both values are the same, because there Extended = Double.
On Win32 the Extended version is a bit closer to the exact solution:
8427 - 1/2 + 1/2 + 33/1440 = 8427 + 11/480

Simple as that.

Bart
___
fpc-pascal maillist  -fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-01-27 Thread Bart via fpc-pascal
On Sat, Jan 27, 2024 at 6:23 PM Thomas Kurz via fpc-pascal
 wrote:

> Hmmm... I don't think I can understand that. If the precision of "double" 
> were that bad, it wouldn't be possible to store dates up to a precision of 
> milliseconds in a TDateTime. I have a discrepancy of 40 seconds here.

Consider the following simplified program:

var
  tt: double;
  ee: extended;

begin
  tt := (8427 - Double(0.5)) + (12/ Double(24.0)) +
(33/Double(1440.0)) + (0/Double(86400.0));
  ee := (8427 - Extended(0.5)) + (12/ Extended(24.0)) +
(33/Extended(1440.0)) + (0/Extended(86400.0));
  writeln('tt=',tt:20:20);
  writeln('ee=',ee:20:20);
end.
===

Now see what it outputs:

C:\Users\Bart\LazarusProjecten\ConsoleProjecten>fpc test.pas
Free Pascal Compiler version 3.2.2 [2021/05/15] for i386
...

C:\Users\Bart\LazarusProjecten\ConsoleProjecten>test
tt=8427.022916668000
ee=8427.022916625000

C:\Users\Bart\LazarusProjecten\ConsoleProjecten>fpc -Px86_64 test.pas
Free Pascal Compiler version 3.2.2 [2021/05/15] for x86_64
..

C:\Users\Bart\LazarusProjecten\ConsoleProjecten>test
tt=8427.022916668000
ee=8427.022916668000

On Win64 both values are the same, because there Extended = Double.
On Win32 the Extended version is a bit closer to the exact solution:
8427 - 1/2 + 1/2 + 33/1440 = 8427 + 11/480

Simple as that.

Bart
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-01-27 Thread Adriaan van Os via fpc-pascal

Thomas Kurz via fpc-pascal wrote:


1. The "writeln" in line 32 correctly prints "0." when (cross-) compiling to win64, 
but "39.375" when compiling to win32 (with ppc386).


Maybe the word "cross-compiling" gives a clue ? In a cross-compiler, floating-point operations of 
constants must be software-emulated, as the target hardware is (normally) not available to the host 
compiler. In theory, this shouldn't be noticeable, unless there is a bug somewhere.


Regards,

Adriaan van Os

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-01-27 Thread Thomas Kurz via fpc-pascal
Hmmm... I don't think I can understand that. If the precision of "double" were 
that bad, it wouldn't be possible to store dates up to a precision of 
milliseconds in a TDateTime. I have a discrepancy of 40 seconds here.



- Original Message - 
From: Bart via fpc-pascal 
To: FPC-Pascal users discussions 
Sent: Saturday, January 27, 2024, 17:03:15
Subject: [fpc-pascal] Floating point question

On Sat, Jan 27, 2024 at 1:40 PM Thomas Kurz via fpc-pascal
 wrote:


> My problems are:

> 1. The "writeln" in line 32 correctly prints "0." when (cross-) compiling 
> to win64, but "39.375" when compiling to win32 (with ppc386).
On Win64 all math is done with double precision, on win32 all literal
floating point values in your code will be interpreted as Extended.
Cast everything to Double and the result will be the same on Win64 and Win32.


-- 
Bart
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal

