Hello Daniel!

Let me make a side comment not related to the core threading issue.

On Tue, Mar 1, 2016 at 10:58 AM, Daniel Franzini
<daniel.franz...@gmail.com> wrote:
> I think that comparing floating-point numbers this way is wrong (well, at
> least in C it is) because you can never know what the precision of this
> comparison is. It might be the case that in C++ the == operator is overloaded
> and it performs correctly using some pre-defined precision constant. I'm
> not sure about that.
>
>                 if (b == c)
>                         continue;

In practice, this kind of floating-point comparison is perfectly reasonable
(if you know what you're doing).

However, as far as I can tell, the standard is not inconsistent with what
you say.  The standard -- unless I've overlooked something -- is mightily
silent about floating-point comparisons, and floating-point arithmetic in
general.

The problem (of course) is that computer integers are not math integers
(computer integers have a finite range), nor are computer floating-point
numbers math real numbers (floating-point numbers have both a finite
range and precision, as well as potentially some special singular values).

The standard (I happen to be looking at the N3485 draft) does say, for
example (5.7/3):

   The result of the binary + operator is the sum of the operands.
   The result of the binary - operator is the difference resulting from
   the subtraction of the second operand from the first.

We know what "sum" and "subtraction" mean for math numbers, but not
necessarily for computer numbers.

The standard helpfully adds that (3.9.1/4):

   Unsigned integers, declared unsigned, shall obey the laws of
   arithmetic modulo 2^n where n is the number of bits in the value
   representation of that particular size of integer.

This clears things up for unsigned (computer) integers, but not
for signed integers nor floating-point numbers.
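
You can watch that modulo behavior directly (a self-contained snippet;
the wrap-around point depends on how wide unsigned int is on your
platform):

   #include <iostream>
   #include <limits>

   int main() {
       // max + 1 wraps to 0 -- guaranteed by the modulo-2^n rule above
       unsigned int u = std::numeric_limits<unsigned int>::max();
       std::cout << u + 1u << "\n";   // prints 0 on every conforming platform
   }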

The standard is equally silent about relational operators (<, ==, etc.).
Now, in practice, it is "implicit" in the standard or "understood" that
for computer integers the relational operators are defined the same
as they are for math integers.  It is also "understood" that the relational
operators are defined sensibly for floating-point numbers, but the
standard doesn't say this anywhere (as far as I know).

I don't think you can use the language that "the == operator is
overloaded" for floating-point numbers because I believe the standard
forbids overloading operators for fundamental types.  But I don't see
that the standard prohibits an implementation from defining (rather than
"overloading") floating-point == in terms of some "precision constant"
(epsilon tolerance).

Now, having gotten the lack of any standard legalese out of the way,
in practice, floating-point arithmetic is sensibly defined and deterministic.
You can be sure that 2.0 + 2.0 = 4.0, and that 2.1 + 2.1 is pretty darn
close to 4.2.  And code that relies on the fact that 2.1 + 2.1 != 177.9
is -- the standard notwithstanding -- well-formed, well-defined, and
portable.  Exactly how floating-point arithmetic works is (as it should
be) implementation dependent.  Some implementations comply with
the IEEE 754 floating-point standard (which itself gives some latitude
for differing implementations).
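
By the way, you can ask your implementation whether it claims IEEE 754
(IEC 559) conformance.  A quick check (the answer, of course, depends
on your compiler and platform):

   #include <iostream>
   #include <limits>

   int main() {
       // is_iec559 is true if double follows IEC 559 (i.e., IEEE 754)
       std::cout << std::boolalpha
                 << "double is IEEE 754: "
                 << std::numeric_limits<double>::is_iec559 << "\n";
   }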

There are some weird things in many implementations (including IEEE
754), for example:

   0.0 == -0.0  (two different bit patterns test equal)
   NaN != NaN  (the same bit pattern tests unequal)
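
Both are easy to verify for yourself (a minimal check, assuming your
doubles are IEEE 754):

   #include <iostream>
   #include <limits>

   int main() {
       double nan = std::numeric_limits<double>::quiet_NaN();
       std::cout << std::boolalpha;
       std::cout << (0.0 == -0.0) << "\n";  // true: two bit patterns, equal
       std::cout << (nan == nan)  << "\n";  // false: one bit pattern, unequal
   }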

But the bottom line is you really are allowed to perform equality tests
on floating-point numbers (if you know what you are doing).  The test
is done exactly -- no approximate equality (done by hand, if you want
that) -- and exact tests are fully legitimate and are appropriate in some
cases (including Benjamin's, if I understand what he's about here).

For example:

   2.1 == 2.1
   2.1 + 2.1 == 2.1 + 2.1
   21.0 + 21.0 == 42.0

but the following might not hold:

   2.1 + 2.1 == 4.2
   3.0 * (1.0 / 3.0) == 1.0
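
Here are those same tests in runnable form (a little sketch; as noted,
whether the last two hold is implementation dependent, so the output
may vary):

   #include <iostream>

   int main() {
       std::cout << std::boolalpha;
       // exact tests that should hold
       std::cout << (2.1 == 2.1)               << "\n";  // true
       std::cout << (2.1 + 2.1 == 2.1 + 2.1)   << "\n";  // true
       std::cout << (21.0 + 21.0 == 42.0)      << "\n";  // true: exact values
       // tests that might not hold (rounding of inexactly representable values)
       std::cout << (2.1 + 2.1 == 4.2)         << "\n";  // maybe
       std::cout << (3.0 * (1.0 / 3.0) == 1.0) << "\n";  // maybe
   }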

And lastly (relevant to Benjamin's original post):

   2.1 + 2.1 (calculated in the main thread) == 2.1 + 2.1 (in a spawned thread)

really should hold.

(Benjamin's "if (b == c)" is meaningful and perfectly reasonable, and it's okay
if it sometimes fails.  He's asking whether sqrt(a) and pow(a, 0.5) are exactly
the same.  The standard is fully silent on this.  I believe that IEEE 754
gives latitude for these to differ.  When I run this with -O0, the test never
fails, but with -O1 I see differences at the level of 10^-16.  This is not
ideal, but it is permitted and reasonable.  Benjamin's complaint is not that
the test
fails, but that it fails differently depending upon which thread the calculation
is run in.  I agree with Benjamin that this shouldn't happen (unless done on
purpose) and I believe that Ruben's explanation that the "floating-point
environment" is set differently for different threads is correct.  This seems
wrong to me, but I have a vague recollection that in an earlier discussion
an arguably legitimate reason for this was put forth.)
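
For what it's worth, here is roughly the shape of the test I have in
mind -- my own reconstruction, not Benjamin's actual code, with a
made-up loop range and step -- run once in the main thread and once
in a spawned thread:

   #include <cmath>
   #include <iostream>
   #include <thread>

   // Count how often sqrt(a) and pow(a, 0.5) differ -- an exact test.
   void compare(const char* where) {
       int mismatches = 0;
       for (double a = 0.0; a < 100.0; a += 0.001) {
           double b = std::sqrt(a);
           double c = std::pow(a, 0.5);
           if (b == c)
               continue;
           ++mismatches;
       }
       std::cout << where << ": " << mismatches << " mismatches\n";
   }

   int main() {
       compare("main thread");
       std::thread t(compare, "spawned thread");  // same computation, new thread
       t.join();   // the two counts really should agree
   }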

> On Tue, Mar 1, 2016 at 9:04 AM, Benjamin Bihler
> <benjamin.bih...@compositence.de> wrote:
>>
>> Hello,
>>
>> I have found a behaviour of MinGW-W64 5.3.0 that disturbs me very much:
>> the results of floating-point operations may differ, if they are run on
>> different threads.
>> ...
> --
> Daniel


Happy Floating-Point Hacking!


K. Frank
