On 17.02.2024 at 20:18, Florian Klämpfl via fpc-pascal wrote:

const Xconst : single = 1440.0;

var y1, y2 : real;

y1 := 33.0 / 1440.0;

y2 :=  33.0 / Xconst;

the division in the first assignment (to y1) should be done at maximum precision; that is, both constants should be converted by the compiler to the maximum available precision, and
the division should be done (ideally at compile time) using this precision.

Constant folding is an optimization technique, so the first expression could also be evaluated at run time by a simple compiler (constant folding is not mandatory). That means we would always have to use full precision for real operations (what "full" means depends on the host and target platform, though). So either: always full precision, with the result that all operations get bloated, or some approach to assign a precision to real constants.

No problem here; the result of y1 must be the same, no matter whether the computation is done at compile time or at run time. The result should always be computed at the best precision available, IMO (maybe controlled by a compiler option,
which I personally would set).

y2: the computation could be done using single precision, because the second operand says so. IMO this holds even if the first operand were a literal constant which cannot be represented exactly in a single-precision FP field.

It gets even more hairy if more advanced optimization techniques are involved:

Consider

var
   y1,y2 : single;

 y1 := 1440.0;
 y2 := 33.0 / y1;

When constant propagation and constant folding are on (both are optimizations), y2 can be calculated at compile time and everything reduced to one assignment to y2. So with your proposal the value of y2 would differ depending on the optimization level.

If y2 is computed at compile time (which is fine), then the result IMO is determined by the way the source code is written. An optimization must not change the meaning of the program as given by the source code. So in this case the compiler would have to do a single-precision division (if we could agree on the rules discussed so far), and optimization techniques may not change that (that is: an optimization may not turn it into a double- or extended-precision division ... otherwise the optimization is wrong).

BTW: many of the ideas about what a compiler should do come from my 30+ years of experience with PL/1. That may be a sort of "déformation professionnelle", as the French call it, but that's how it is.

Apart from the proper handling of literal FP constants (which is what we are discussing here, IMO), there is another topic
which is IMO also part of the debate:

does

 y2 := 33.1 / y1;

require the division to be done at single precision or not?

We have here a literal constant which is NOT single (33.1) and a single variable operand. I understood from some postings here that some people want divisions involving singles carried out using single arithmetic, for performance reasons, so I asked for a single-precision division here (in my previous postings). But IMO the current implementation handles this differently ... what do others think about this?

I, for my part, would find it strange if the precision of the division in this case depended on the (implicit)
type of the constant operand, that is:

 y2 := 33.015625 / y1;  { single precision, because constant is single - 33 + 1 / 64 }
 y2 := 33.1 / y1;   { extended precision, because constant is extended }

IMO, both of these divisions should be done at single precision, controlled by the type of y1.
But this could be controlled by ANOTHER new option, if someone asks for it.

Kind regards

Bernd
_______________________________________________
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal
