Always false float comparisons
Don Clugston pointed out in his DConf 2016 talk that:

float f = 1.30;
assert(f == 1.30);

will always be false since 1.30 is not representable as a float. However,

float f = 1.30;
assert(f == cast(float)1.30);

will be true.

So, should the compiler emit a warning for the former case?
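The thread's snippets are D, but the effect is pure IEEE 754 and can be reproduced in any language. As a sketch, here it is in Python, where `struct` packing emulates the 32-bit rounding that `float f = 1.30;` performs (the helper name `to_float32` is made up for illustration):

```python
import struct

def to_float32(x: float) -> float:
    """Round a Python float (an IEEE 754 double) to the nearest 32-bit float."""
    return struct.unpack("f", struct.pack("f", x))[0]

f = to_float32(1.30)          # what `float f = 1.30;` actually stores
print(f)                      # 1.2999999523162842 -- not 1.3
print(f == 1.30)              # False: f is promoted to double, which differs
print(f == to_float32(1.30))  # True: both sides rounded to float first
```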
Re: Always false float comparisons
On Monday, 9 May 2016 at 09:10:19 UTC, Walter Bright wrote: Don Clugston pointed out in his DConf 2016 talk that: float f = 1.30; assert(f == 1.30); will always be false since 1.30 is not representable as a float. However, float f = 1.30; assert(f == cast(float)1.30); will be true. So, should the compiler emit a warning for the former case? What is the actual reason for the mismatch? Does f lose precision as a float, while the 1.30 literal is a more precise double/real? Comparing float and double might be worth a warning. Does it encode the two literals differently? If so, why?
Re: Always false float comparisons
On Monday, 9 May 2016 at 09:10:19 UTC, Walter Bright wrote: So, should the compiler emit a warning for the former case? I'm not for a compiler change. IMO a library called std.sanity_float with an equal and a notequal function would be better.
Re: Always false float comparisons
Walter Bright via Digitalmars-d wrote:
> Don Clugston pointed out in his DConf 2016 talk that:
>
> float f = 1.30;
> assert(f == 1.30);
>
> will always be false since 1.30 is not representable as a float. However,
>
> float f = 1.30;
> assert(f == cast(float)1.30);
>
> will be true.
>
> So, should the compiler emit a warning for the former case?

Since assert(f == 1.30f); passes I find the root cause lies in the implicit type conversion from float to double. Warning for those comparisons should be fine. Shouldn't mix them anyway. I wonder what's the difference between 1.30f and cast(float)1.30.

Jens
Re: Always false float comparisons
On Monday, 9 May 2016 at 10:16:54 UTC, Jens Mueller wrote: Walter Bright via Digitalmars-d wrote: Don Clugston pointed out in his DConf 2016 talk that: float f = 1.30; assert(f == 1.30); will always be false since 1.30 is not representable as a float. However, float f = 1.30; assert(f == cast(float)1.30); will be true. So, should the compiler emit a warning for the former case? Since assert(f == 1.30f); passes I find the root cause lies in the implicit type conversion from float to double. Warning for those comparisons should be fine. Shouldn't mix them anyway. I wonder what's the difference between 1.30f and cast(float)1.30. Jens +1
Re: Always false float comparisons
On Monday, 9 May 2016 at 09:10:19 UTC, Walter Bright wrote: Don Clugston pointed out in his DConf 2016 talk that: float f = 1.30; assert(f == 1.30); will always be false since 1.30 is not representable as a float. However, float f = 1.30; assert(f == cast(float)1.30); will be true. So, should the compiler emit a warning for the former case? Yes, I think it is a good idea, just like emitting a warning for mismatched signed/unsigned comparison.
Re: Always false float comparisons
On 5/9/2016 2:25 AM, qznc wrote: What is the actual reason for the mismatch? floats cannot represent 1.30 exactly, and promoting it to a double gives a different result than 1.30 as a double.
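Walter's point can be made concrete by looking at the bits. A hedged Python sketch (not D; `to_float32` is an illustrative helper that rounds a double to binary32 via `struct`):

```python
import struct

def to_float32(x):
    """Round an IEEE 754 double to the nearest 32-bit float."""
    return struct.unpack("f", struct.pack("f", x))[0]

# The nearest double to the decimal 1.30:
print(float.hex(1.30))               # 0x1.4cccccccccccdp+0
# The nearest float to 1.30, widened back to double for printing:
print(float.hex(to_float32(1.30)))   # 0x1.4ccccc0000000p+0
# The low bits are gone, so promoting the float cannot recover them:
print(to_float32(1.30) == 1.30)      # False
```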
Re: Always false float comparisons
On 5/9/2016 3:16 AM, Jens Mueller via Digitalmars-d wrote:
> Since assert(f == 1.30f); passes I find the root cause lies in the implicit type conversion from float to double.

That isn't going to change.

> Warning for those comparisons should be fine. Shouldn't mix them anyway.

Too onerous.

> I wonder what's the difference between 1.30f and cast(float)1.30.

There isn't one.
Re: Always false float comparisons
On Monday, 9 May 2016 at 09:10:19 UTC, Walter Bright wrote: So, should the compiler emit a warning for the former case? Yes, please. I would prefer, at least, a warning flag for that.
Re: Always false float comparisons
On Monday, 9 May 2016 at 09:10:19 UTC, Walter Bright wrote: So, should the compiler emit a warning for the former case? Would that include comparison of variables only as well? float f = 1.3; double d = 1.3; assert(f == d); // compiler warning
Re: Always false float comparisons
On Monday, 9 May 2016 at 09:10:19 UTC, Walter Bright wrote: Don Clugston pointed out in his DConf 2016 talk that: float f = 1.30; assert(f == 1.30); will always be false since 1.30 is not representable as a float. However, float f = 1.30; assert(f == cast(float)1.30); will be true. So, should the compiler emit a warning for the former case? I'd assume in the first case that the float is being promoted to double for the comparison. Is there already a warning for loss of precision? We treat warnings as errors in our C++ code, so C4244 triggers all the time in MSVC with integer operations. I just tested that float initialisation in MSVC, initialising a float with a double triggers C4305. So my preference is "Yes please". https://msdn.microsoft.com/en-us/library/th7a07tz.aspx https://msdn.microsoft.com/en-us/library/0as1ke3f.aspx
Re: Always false float comparisons
On Monday, 9 May 2016 at 11:26:55 UTC, Walter Bright wrote: On 5/9/2016 3:16 AM, Jens Mueller via Digitalmars-d wrote: Warning for those comparisons should be fine. Shouldn't mix them anyway. Too onerous. Surely not too onerous if we're only talking about == ? Mixing floating point types on either side of == seems like a pretty solidly bad idea.
Re: Always false float comparisons
On 5/9/2016 4:16 AM, ZombineDev wrote: just like emitting a warning for mismatched signed/unsigned comparison. Not at all the same, such are not always false.
Re: Always false float comparisons
On 5/9/2016 4:38 AM, Nordlöw wrote: Would that include comparison of variables only aswell? float f = 1.3; double d = 1.3; assert(f == d); // compiler warning No.
Re: Always false float comparisons
On Monday, 9 May 2016 at 12:28:04 UTC, Walter Bright wrote: On 5/9/2016 4:38 AM, Nordlöw wrote: Would that include comparison of variables only as well? float f = 1.3; double d = 1.3; assert(f == d); // compiler warning No. Just get rid of the problem: remove == and != from floats.
Re: Always false float comparisons
On 5/9/2016 5:21 AM, Ethan Watson wrote: I'd assume in the first case that the float is being promoted to double for the comparison. Is there already a warning for loss of precision? Promoting to double does not lose precision. We treat warnings as errors in our C++ code, so C4244 triggers all the time in MSVC with integer operations. I just tested that float initialisation in MSVC, initialising a float with a double triggers C4305. That's going quite a bit further.
Re: Always false float comparisons
On Monday, 9 May 2016 at 12:30:13 UTC, Walter Bright wrote: Promoting to double does not lose precision. Yes, badly worded on my part, I was getting at the original assignment from double to float.
Re: Always false float comparisons
On 5/9/16 7:26 AM, Walter Bright wrote: I wonder what's the difference between 1.30f and cast(float)1.30. There isn't one. I know this is a bit band-aid-ish, but if one is comparing literals to a float, why not treat the literal as the type being compared against? In other words, imply the 1.3f. This isn't integer-land where promotions do not change the outcome. What I see here is that double(1.3) cannot be represented as a float. So right there, the compiler can tell you, no, this is never going to be true. Something stinks when you can write an always-false expression as an if conditional by accident. -Steve
Re: Always false float comparisons
On Monday, 9 May 2016 at 12:24:05 UTC, John Colvin wrote: On Monday, 9 May 2016 at 11:26:55 UTC, Walter Bright wrote: On 5/9/2016 3:16 AM, Jens Mueller via Digitalmars-d wrote: Warning for those comparisons should be fine. Shouldn't mix them anyway. Too onerous. Surely not too onerous if we're only talking about == ? Mixing floating point types on either side of == seems like a pretty solidly bad idea. Why only == ? The example also applies to opCmp. float f = 1.30; assert(f >= 1.30); assert(f <= 1.30);
Re: Always false float comparisons
On Monday, 9 May 2016 at 12:28:04 UTC, Walter Bright wrote: On 5/9/2016 4:38 AM, Nordlöw wrote: Would that include comparison of variables only aswell? No. Why?
Re: Always false float comparisons
On Monday, 9 May 2016 at 09:10:19 UTC, Walter Bright wrote: Don Clugston pointed out in his DConf 2016 talk that: float f = 1.30; assert(f == 1.30); will always be false since 1.30 is not representable as a float. However, float f = 1.30; assert(f == cast(float)1.30); will be true. So, should the compiler emit a warning for the former case?

(1) Yes, emit a warning for this case.
(2) Generalize it to all variables, like Nordlöw suggested.
(3) Generalize it to all comparisons as well, including < and >.
(4) While we're at it, let's also emit a warning when comparing signed and unsigned types.
(5) Dare I say it... warn against implicit conversions of double to float.
(6) The same applies to "real" as well.

All of these scenarios are capable of producing "incorrect" results, are a source of discrete bugs (often corner cases that we failed to consider and test), and can be hard to detect. It's about time we stopped being stubborn and flagged these things as warnings. Even if they require a special compiler flag and are disabled by default, that's better than nothing.
Re: Always false float comparisons
On Monday, May 09, 2016 02:10:19 Walter Bright via Digitalmars-d wrote:
> Don Clugston pointed out in his DConf 2016 talk that:
>
> float f = 1.30;
> assert(f == 1.30);
>
> will always be false since 1.30 is not representable as a float. However,
>
> float f = 1.30;
> assert(f == cast(float)1.30);
>
> will be true.
>
> So, should the compiler emit a warning for the former case?

It does seem like having implicit conversions with floating point numbers is problematic in general, though warning about it or making it illegal could very well be too annoying to be worth it. But at bare minimum, warning about literals not matching the type that they're being compared against when there _is_ a literal that would be of the same type is probably worth warning about - and that could apply to more than just floating point values. But figuring out when implicit conversions are genuinely useful and should be allowed and when they're more trouble than they're worth is surprisingly hard to get right. :(

- Jonathan M Davis
Re: Always false float comparisons
On Monday, 9 May 2016 at 18:37:10 UTC, Xinok wrote: (1) Yes, emit a warning for this case. (2) Generalize it to all variables, like Nordlöw suggested. (3) Generalize it to all comparisons as well, including < and > . (4) While we're at it, let's also emit a warning when comparing signed and unsigned types. (5) Dare I say it... warn against implicit conversions of double to float. (6) The same applies to "real" as well. All of these scenarios are capable of producing "incorrect" results, are a source of discrete bugs (often corner cases that we failed to consider and test), and can be hard to detect. It's about time we stopped being stubborn and flagged these things as warnings. Even if they require a special compiler flag and are disabled by default, that's better than nothing. (1) is good, because the code in question is always wrong. (2) is a logical extension, in those cases where constant folding and VRP can prove that the code is always wrong. (3) Makes no sense though; inequalities with mixed floating-point types are perfectly safe. (Well, as safe as any floating-point code can be, anyway.) (4) is already planned; it's just taking *a lot* longer than anticipated to actually implement it: https://issues.dlang.org/show_bug.cgi?id=259 https://github.com/dlang/dmd/pull/1913 https://github.com/dlang/dmd/pull/5229
Re: Always false float comparisons
On Monday, 9 May 2016 at 18:51:58 UTC, tsbockman wrote: On Monday, 9 May 2016 at 18:37:10 UTC, Xinok wrote: ... (3) Generalize it to all comparisons as well, including < and ... (3) Makes no sense though; inequalities with mixed floating-point types are perfectly safe. (Well, as safe as any floating-point code can be, anyway.) ... Not necessarily. Reusing 1.3 from the original case, the following assertion passes: float f = 1.3; assert(f < 1.3); And considering we also have <= and >=, we may as well check all of the comparison operators. It's a complex issue because there isn't necessarily right or wrong behavior, only things that *might* be wrong. But if we want to truly detect all possible cases of incorrect behavior, then we have to be exhaustive in our checks.
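Xinok's inequality example checks out numerically: the decimal 1.3 happens to round *down* in binary32, so the float sits strictly below the double. A Python sketch (not D; `to_float32` is an illustrative helper for binary32 rounding, and note the inequality's direction depends on which way a given literal rounds):

```python
import struct

def to_float32(x):
    # round to the nearest 32-bit float (illustrative helper, not a D API)
    return struct.unpack("f", struct.pack("f", x))[0]

f = to_float32(1.3)   # 1.2999999523162842: 1.3 rounds down in binary32
print(f < 1.3)        # True: a strict inequality that always holds
print(f >= 1.3)       # False
```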
Re: Always false float comparisons
On Monday, 9 May 2016 at 19:15:20 UTC, Xinok wrote:
> It's a complex issue because there isn't necessarily right or wrong behavior, only things that *might* be wrong. But if we want to truly detect all possible cases of incorrect behavior, then we have to be exhaustive in our checks.

We absolutely, emphatically, DO NOT want to "detect all possible cases of incorrect behavior". Given the limited information available to the compiler and the intractability of the Halting Problem, the correct algorithm for doing so is this:

foreach(line; sourceCode)
    warning("This line could not be proven correct.");

Only warnings with a reasonably high signal-to-noise ratio should be implemented.

> Not necessarily. Reusing 1.3 from the original case, the following assertion passes:
>
> float f = 1.3;
> assert(f < 1.3);
>
> And considering we also have <= and >=, we may as well check all of the comparison operators.

Because of the inevitability of rounding errors, FP code generally cannot be allowed to depend upon precise equality comparisons to any value other than zero (even implicit ones as in your example). Warning about this makes sense:

float f = 1.3;
assert(f == 1.3);

Because the `==` makes it clear that the programmer's intent was to test for equality, and that will fail unexpectedly. On the other hand, with `<`, `>`, `<=`, and `>=`, the compiler should generally assume that they are being used correctly, in a way that is tolerant of small rounding errors. Doing otherwise would cause tons of false positives in competently written FP code.

Educating programmers who've never studied how to write correct FP code is too complex of a task to implement via compiler warnings. The warnings should be limited to cases that are either obviously wrong, or where the warning is likely to be a net positive even for FP experts.
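The "inevitability of rounding errors" point doesn't even need mixed precision: a single arithmetic step at one precision already breaks exact equality. The classic example, as a sketch in Python (whose floats are IEEE 754 doubles):

```python
# One rounding error per operation is enough to defeat ==.
x = 0.1 + 0.2
print(x)                     # 0.30000000000000004
print(x == 0.3)              # False: exact equality fails
print(abs(x - 0.3) < 1e-9)   # True: a tolerance-based check succeeds
```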
Re: Always false float comparisons
On 5/9/2016 11:37 AM, Xinok wrote: All of these scenarios are capable of producing "incorrect" results, are a source of discrete bugs (often corner cases that we failed to consider and test), and can be hard to detect. It's about time we stopped being stubborn and flagged these things as warnings. Even if they require a special compiler flag and are disabled by default, that's better than nothing. I've used a B+D language that does as you suggest (Wirth Pascal). It was highly unpleasant to use, as the code became littered with casts. Casts introduce their own set of bugs.
Re: Always false float comparisons
On 5/9/2016 11:51 AM, tsbockman wrote: On Monday, 9 May 2016 at 18:37:10 UTC, Xinok wrote: (4) While we're at it, let's also emit a warning when comparing signed and unsigned types. (4) is already planned; it's just taking *a lot* longer than anticipated to actually implement it: https://issues.dlang.org/show_bug.cgi?id=259 https://github.com/dlang/dmd/pull/1913 https://github.com/dlang/dmd/pull/5229 I oppose this change. You'd be better off not having unsigned types at all than this mess, which was Java's choice. But then there are more problems created.
Re: Always false float comparisons
On 5/9/2016 12:39 PM, tsbockman wrote: Educating programmers who've never studied how to write correct FP code is too complex of a task to implement via compiler warnings. The warnings should be limited to cases that are either obviously wrong, or where the warning is likely to be a net positive even for FP experts. I've seen a lot of proposals which try to hide the reality of how FP works. The cure is worse than the disease. The same goes for hiding signed/unsigned, and the autodecode mistake of pretending that code units aren't there.
Re: Always false float comparisons
On 5/9/2016 6:46 AM, Steven Schveighoffer wrote: I know this is a bit band-aid-ish, but if one is comparing literals to a float, why not treat the literal as the type being compared against? In other words, imply the 1.3f. This isn't integer-land where promotions do not change the outcome. Because it's yet another special case, and we know where those lead. For example, what if the 1.30 was the result of CTFE?
Re: Always false float comparisons
On Mon, 9 May 2016 02:10:19 -0700, Walter Bright wrote:
> Don Clugston pointed out in his DConf 2016 talk that:
>
> float f = 1.30;
> assert(f == 1.30);
>
> will always be false since 1.30 is not representable as a float. However,
>
> float f = 1.30;
> assert(f == cast(float)1.30);
>
> will be true.
>
> So, should the compiler emit a warning for the former case?

I'd say yes, but exclude the case where it can be statically verified that the comparison can yield true, because the constant can be losslessly converted to the type of 'f'.

By example, don't warn for these:
f == 1.0, f == -0.5, f == 3.625, f == 2UL^^60

But do warn for:
f == 1.30, f == 2UL^^60+1

As an extension of the existing "comparison is always false/true" check it could read "Comparison is always false: literal 1.30 is not representable as 'float'".

There is a whole bunch in this warning category:

byte b;
if (b == 1000) {}

"Comparison is always false: literal 1000 is not representable as 'byte'"

-- Marco
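Marco's "losslessly convertible" test is easy to state as a round-trip check. A Python sketch (not D): `survives_float32` is a made-up helper, and since Python floats are doubles, `2**24 + 1` (the first integer binary32 cannot hold) stands in for his `2UL^^60+1`, which a double cannot hold either:

```python
import struct

def survives_float32(x):
    """True if the double x converts to a 32-bit float and back unchanged,
    i.e. the constant is losslessly convertible in Marco's sense."""
    return struct.unpack("f", struct.pack("f", x))[0] == x

# Marco's "don't warn" constants are exactly representable:
for c in (1.0, -0.5, 3.625, float(2**60)):
    assert survives_float32(c)

# 1.30 is not representable, and neither is 2**24 + 1:
assert not survives_float32(1.30)
assert not survives_float32(float(2**24 + 1))
print("all checks pass")
```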
Re: Always false float comparisons
On Mon, 09 May 2016 15:56:21, Nordlöw wrote:
> On Monday, 9 May 2016 at 12:28:04 UTC, Walter Bright wrote:
> > On 5/9/2016 4:38 AM, Nordlöw wrote:
> >> Would that include comparison of variables only as well?
> > No.
>
> Why?

Because the float would be converted to double without loss of precision, just like uint == ulong is a valid comparison which can yield 'true' in 2^32 cases. float == 1.30 on the other hand is false in any case. You'd have to warn on _every_ comparison with a widening conversion to be consistent!

-- Marco
Re: Always false float comparisons
On 5/9/16 4:22 PM, Walter Bright wrote: On 5/9/2016 6:46 AM, Steven Schveighoffer wrote: I know this is a bit band-aid-ish, but if one is comparing literals to a float, why not treat the literal as the type being compared against? In other words, imply the 1.3f. This isn't integer-land where promotions do not change the outcome. Because it's yet another special case, and we know where those lead. For example, what if the 1.30 was the result of CTFE? This is true, it's a contrived example. -Steve
Re: Always false float comparisons
On Monday, 9 May 2016 at 20:16:59 UTC, Walter Bright wrote: On 5/9/2016 11:51 AM, tsbockman wrote: (4) is already planned; it's just taking *a lot* longer than anticipated to actually implement it: https://issues.dlang.org/show_bug.cgi?id=259 https://github.com/dlang/dmd/pull/1913 https://github.com/dlang/dmd/pull/5229 I oppose this change. You'd be better off not having unsigned types at all than this mess, which was Java's choice. But then there are more problems created. What mess? The actual fix for issue 259 is simple, elegant, and shouldn't require much code in the wild to be changed. The difficulties and delays have all been associated with the necessary improvements to VRP and constant folding, which are worthwhile in their own right, since they help the compiler generate faster code.
Re: Always false float comparisons
On Monday, 9 May 2016 at 20:20:00 UTC, Walter Bright wrote: On 5/9/2016 12:39 PM, tsbockman wrote: Educating programmers who've never studied how to write correct FP code is too complex of a task to implement via compiler warnings. The warnings should be limited to cases that are either obviously wrong, or where the warning is likely to be a net positive even for FP experts. I've seen a lot of proposals which try to hide the reality of how FP works. The cure is worse than the disease. The same goes for hiding signed/unsigned, and the autodecode mistake of pretending that code units aren't there. I completely agree that complexity that cannot be properly hidden should not be hidden. The underlying mechanisms of floating point is complexity that we shouldn't paper over. However, the peculiarities of language conventions w.r.t. floating point expressions doesn't quite fit that category.
Re: Always false float comparisons
On Monday, 9 May 2016 at 20:16:59 UTC, Walter Bright wrote: (4) is already planned; it's just taking *a lot* longer than anticipated to actually implement it: https://issues.dlang.org/show_bug.cgi?id=259 https://github.com/dlang/dmd/pull/1913 https://github.com/dlang/dmd/pull/5229 I oppose this change. You'd be better off not having unsigned types at all than this mess, which was Java's choice. But then there are more problems created. One other thing - according to the bug report discussion, the proposed solution was pre-approved both by Andrei Alexandrescu and by *YOU*. Proposing a solution, letting various people work on implementing it for three years, and then suddenly announcing that you "oppose this change" and calling the solution a "mess" with no explanation is a fantastic way to destroy all motivation for outside contributors.
Re: Always false float comparisons
On Monday, 9 May 2016 at 09:10:19 UTC, Walter Bright wrote: Don Clugston pointed out in his DConf 2016 talk that: float f = 1.30; assert(f == 1.30); will always be false since 1.30 is not representable as a float. However, float f = 1.30; assert(f == cast(float)1.30); will be true. So, should the compiler emit a warning for the former case?

I think it really depends on what the warning actually says. I think people have different expectations for what that warning would be.

When you say 1.30 is not representable as a float, when is the "not representable" enforced? Because it looks like the programmer just represented it in the assignment of the literal – but that's not where the warning would be, right? I assume so, because people need non-representable literals all the time, and this is the only way they can write them, which means it's a hole in the type system, right? There should be a decimal type to cover all these cases, like some databases have.

Would the warning say that you can't compare 1.30 to a float because 1.30 is not representable as a float? Or would it say that f was rounded upon assignment and is no longer 1.30?

Short of a decimal type, I think it would be nice to have a "float equality" operator that covered this whole class of cases, where floats that started their lives as non-representable literals, and floats that have been rounded with loss of precision, can be treated as equal if they're within something like .001% of each other (well, a percentage that can actually be represented as a float...). Basically, equality that covers the known mutational properties of FP arithmetic. There's no way to do this right now without ranges, right? I know that ~ is for concat. I saw ~= is an operator. What does that do? The Unicode ≈ would be nice for this.

I assume IEEE 754 or ISO 10967 don't cover this? I was just reading the latter (zip here: http://standards.iso.org/ittf/PubliclyAvailableStandards/c051317_ISO_IEC_10967-1_2012.zip)
Re: Always false float comparisons
On Monday, 9 May 2016 at 20:14:36 UTC, Walter Bright wrote: On 5/9/2016 11:37 AM, Xinok wrote: All of these scenarios are capable of producing "incorrect" results, are a source of discrete bugs (often corner cases that we failed to consider and test), and can be hard to detect. It's about time we stopped being stubborn and flagged these things as warnings. Even if they require a special compiler flag and are disabled by default, that's better than nothing. I've used a B+D language that does as you suggest (Wirth Pascal). It was highly unpleasant to use, as the code became littered with casts. Casts introduce their own set of bugs. Maybe it's a bad idea to enable these warnings by default but what's wrong with providing a compiler flag to perform these checks anyways? For example, GCC has a compiler flag to yield warnings for signed+unsigned comparisons but it's not even enabled with the -Wall flag, only by specifying -Wextra or -Wsign-compare.
Re: Always false float comparisons
On Monday, 9 May 2016 at 12:29:14 UTC, Temtaime wrote: Just get rid of the problem: remove == and != from floats. Suggestion for the diagnostic: issue a warning that points the user to approxEqual() https://dlang.org/phobos/std_math.html#.approxEqual
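Phobos's approxEqual does a tolerance-based comparison. As a hedged sketch of what such a diagnostic would steer users toward, here is the analogous idiom in Python, where `math.isclose` plays the role of approxEqual and `to_float32` is a made-up helper for binary32 rounding:

```python
import math
import struct

def to_float32(x):
    return struct.unpack("f", struct.pack("f", x))[0]

f = to_float32(1.30)
print(f == 1.30)                             # False: exact comparison
print(math.isclose(f, 1.30, rel_tol=1e-6))   # True: tolerant comparison
```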
Re: Always false float comparisons
On 9 May 2016 at 19:10, Walter Bright via Digitalmars-d wrote:
> Don Clugston pointed out in his DConf 2016 talk that:
>
> float f = 1.30;
> assert(f == 1.30);
>
> will always be false since 1.30 is not representable as a float. However,
>
> float f = 1.30;
> assert(f == cast(float)1.30);
>
> will be true.
>
> So, should the compiler emit a warning for the former case?

Perhaps float comparison should *always* be done at the lower precision? There's no meaningful way to perform a float/double comparison where the float is promoted, whereas demoting the double for the comparison will almost certainly yield the expected result.
Re: Always false float comparisons
On 10 May 2016 at 17:28, Manu wrote:
> On 9 May 2016 at 19:10, Walter Bright via Digitalmars-d wrote:
>> Don Clugston pointed out in his DConf 2016 talk that:
>>
>> float f = 1.30;
>> assert(f == 1.30);
>>
>> will always be false since 1.30 is not representable as a float. However,
>>
>> float f = 1.30;
>> assert(f == cast(float)1.30);
>>
>> will be true.
>>
>> So, should the compiler emit a warning for the former case?
>
> Perhaps float comparison should *always* be done at the lower
> precision? There's no meaningful way to perform a float/double
> comparison where the float is promoted, whereas demoting the double
> for the comparison will almost certainly yield the expected result.

Think of it like this; a float doesn't represent a precise point (it's an approximation by definition), so see the float as representing the interval from the absolute value it stores, and that + 1 mantissa bit. If you see floats that way, then the natural way to compare them is to demote to the lowest common precision, and it wouldn't be considered erroneous, or even warning-worthy; just documented behaviour.
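Manu's proposed semantics can be sketched as a comparison helper that first demotes both operands to the lower precision. This is a Python illustration of the *proposal*, not of how D behaves today; `to_float32` and `eq_demoted` are made-up names:

```python
import struct

def to_float32(x):
    return struct.unpack("f", struct.pack("f", x))[0]

def eq_demoted(a, b):
    """Equality after demoting both operands to binary32,
    i.e. comparing at the lowest common precision."""
    return to_float32(a) == to_float32(b)

f = to_float32(1.30)     # the float variable
d = 1.30                 # the double literal
print(f == d)            # False under today's promote-to-double rule
print(eq_demoted(f, d))  # True under the proposed demote rule
```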
Re: Always false float comparisons
On 10 May 2016 at 06:25, Marco Leise via Digitalmars-d wrote:
> On Mon, 9 May 2016 02:10:19 -0700, Walter Bright wrote:
>> Don Clugston pointed out in his DConf 2016 talk that:
>>
>> float f = 1.30;
>> assert(f == 1.30);
>>
>> will always be false since 1.30 is not representable as a float. However,
>>
>> float f = 1.30;
>> assert(f == cast(float)1.30);
>>
>> will be true.
>>
>> So, should the compiler emit a warning for the former case?
>
> I'd say yes, but exclude the case where it can be statically
> verified that the comparison can yield true, because the
> constant can be losslessly converted to the type of 'f'.
>
> By example, don't warn for these:
> f == 1.0, f == -0.5, f == 3.625, f == 2UL^^60
>
> But do warn for:
> f == 1.30, f == 2UL^^60+1
>
> As an extension of the existing "comparison is always
> false/true" check it could read "Comparison is always false:
> literal 1.30 is not representable as 'float'".
>
> There is a whole bunch in this warning category:
> byte b;
> if (b == 1000) {}
> "Comparison is always false: literal 1000 is not representable
> as 'byte'"
>
> -- Marco

This.
Re: Always false float comparisons
On Monday, 9 May 2016 at 19:39:52 UTC, tsbockman wrote: Educating programmers who've never studied how to write correct FP code is too complex of a task to implement via compiler warnings. The warnings should be limited to cases that are either obviously wrong, or where the warning is likely to be a net positive even for FP experts. Any warning message for this type of problem should mention the "What Every Computer Scientist Should Know About Floating-Point Arithmetic" paper (and perhaps give a standard public URL such as https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html at which the paper can be easily accessed).
Re: Always false float comparisons
On 5/9/2016 1:25 PM, Marco Leise wrote: There is a whole bunch in this warning category: byte b; if (b == 1000) {} "Comparison is always false: literal 1000 is not representable as 'byte'" You're right, we may be opening a can of worms with this.
Re: Always false float comparisons
On 5/9/2016 8:00 PM, Xinok wrote: Maybe it's a bad idea to enable these warnings by default but what's wrong with providing a compiler flag to perform these checks anyways? For example, GCC has a compiler flag to yield warnings for signed+unsigned comparisons but it's not even enabled with the -Wall flag, only by specifying -Wextra or -Wsign-compare. Warnings balkanize the language into endless dialects.
Re: Always false float comparisons
On 5/10/2016 12:31 AM, Manu via Digitalmars-d wrote: Think of it like this; a float doesn't represent a precise point (it's an approximation by definition), so see the float as representing the interval from the absolute value it stores, and that + 1 mantissa bit. If you see float's that way, then the natural way to compare them is to demote to the lowest common precision, and it wouldn't be considered erroneous, or even warning-worthy; just documented behaviour. Floating point behavior is so commonplace, I am wary of inventing new, unusual semantics for it.
Re: Always false float comparisons
On 2016-05-10 23:44, Walter Bright wrote: On 5/9/2016 1:25 PM, Marco Leise wrote: There is a whole bunch in this warning category: byte b; if (b == 1000) {} "Comparison is always false: literal 1000 is not representable as 'byte'" You're right, we may be opening a can of worms with this. Scala gives a warning/error (don't remember which) for "isInstanceOf" where it can prove at compile time it will never be true. That has helped me a couple of times. -- /Jacob Carlborg
Re: Always false float comparisons
On 11 May 2016 at 07:47, Walter Bright via Digitalmars-d wrote:
> On 5/10/2016 12:31 AM, Manu via Digitalmars-d wrote:
>>
>> Think of it like this; a float doesn't represent a precise point (it's
>> an approximation by definition), so see the float as representing the
>> interval from the absolute value it stores, and that + 1 mantissa bit.
>> If you see float's that way, then the natural way to compare them is
>> to demote to the lowest common precision, and it wouldn't be
>> considered erroneous, or even warning-worthy; just documented
>> behaviour.
>
> Floating point behavior is so commonplace, I am wary of inventing new,
> unusual semantics for it.

Is it unusual to demote to the lower common precision? I think it's the only reasonable solution. It's never reasonable to promote a float, since it has already suffered precision loss. It can't meaningfully be compared against anything higher precision than itself. What is the problem with this behaviour I suggest?

The reason I'm wary about emitting a warning is because people will encounter the warning *all the time*, and for a user who doesn't have comprehensive understanding of floating point (and probably many that do), the natural/intuitive thing to do would be to place an explicit cast of the lower precision value to the higher precision type, which is __exactly the wrong thing to do__. I don't think the warning improves the problem, it likely just causes people to emit the same incorrect code explicitly. Honestly, who would naturally respond to such a warning by demoting the higher precision type? I don't know that guy, other than those of us who have just watched Don's talk.
Re: Always false float comparisons
On Monday, 9 May 2016 at 20:16:59 UTC, Walter Bright wrote: I oppose this change. You'd be better off not having unsigned types at all than this mess, which was Java's choice. The language forces usage of unsigned types. Though in my experience it's relatively easy to fight back, including when interfacing with C, which uses unsigned types exclusively. But then there are more problems created. I've seen no problems from using signed types so far. The last prejudice left is the usage of ubyte[] for buffers. How often does one look into individual bytes in some abstract buffer?
Re: Always false float comparisons
On 5/11/2016 2:24 AM, Manu via Digitalmars-d wrote: Floating point behavior is so commonplace, I am wary of inventing new, unusual semantics for it. Is it unusual to demote to the lower common precision? Yes. I think it's the only reasonable solution. It may be, but it is unusual and therefore surprising behavior. What is the problem with this behaviour I suggest? Code will do one thing in C, and the same code will do something unexpectedly different in D. The reason I'm wary about emitting a warning is because people will encounter the warning *all the time*, and for a user who doesn't have comprehensive understanding of floating point (and probably many that do), the natural/intuitive thing to do would be to place an explicit cast of the lower precision value to the higher precision type, which is __exactly the wrong thing to do__. I don't think the warning improves the problem, it likely just causes people to emit the same incorrect code explicitly. The warning is intended for people who understand, as then they will figure out what they actually wanted and implement that. People who randomly and without comprehension insert casts hoping to make the compiler shut up cannot be helped.
Re: Always false float comparisons
On Tuesday, 10 May 2016 at 21:44:45 UTC, Walter Bright wrote: if (b == 1000) {} "Comparison is always false: literal 1000 is not representable as 'byte'" What's wrong with having this warning? You're right, we may be opening a can of worms with this.
Re: Always false float comparisons
On Thursday, 12 May 2016 at 09:22:02 UTC, Nordlöw wrote: "Comparison is always false: literal 1000 is not representable as 'byte'" What's wrong with having this warning? This is, IMO, just a more informative diagnostic than: "Statement at `foo()` is not reachable": in the following code: if (b == 1000) { foo(); }
Re: Always false float comparisons
On 12 May 2016 at 17:32, Walter Bright via Digitalmars-d wrote: > On 5/11/2016 2:24 AM, Manu via Digitalmars-d wrote: > >> The reason I'm wary about emitting a warning is because people will >> encounter the warning *all the time*, and for a user who doesn't have >> comprehensive understanding of floating point (and probably many that >> do), the natural/intuitive thing to do would be to place an explicit >> cast of the lower precision value to the higher precision type, which >> is __exactly the wrong thing to do__. >> I don't think the warning improves the problem, it likely just causes >> people to emit the same incorrect code explicitly. > > > The warning is intended for people who understand, as then they will figure > out what they actually wanted and implement that. People who randomly and > without comprehension insert casts hoping to make the compiler shut up > cannot be helped. But they can easily be helped by implementing behaviour that makes sense. If you're set on a warning, at least make the warning recommend down-casting the higher precision term to the lower precision?
Re: Always false float comparisons
On 5/12/16 3:32 AM, Walter Bright wrote: On 5/11/2016 2:24 AM, Manu via Digitalmars-d wrote: What is the problem with this behaviour I suggest? Code will do one thing in C, and the same code will do something unexpectedly different in D. Not taking one side or another on this, but due to D doing everything with reals, this is already the case. -Steve
Re: Always false float comparisons
On Thursday, 12 May 2016 at 13:03:58 UTC, Steven Schveighoffer wrote: Not taking one side or another on this, but due to D doing everything with reals, this is already the case. Mmmm. I don't want to open up another can of worms right now, but our x64 C++ code only emits SSE instructions at compile time (or AVX on the Xbox One). The only thing that attempts to use reals in our codebase is our D code.
Re: Always false float comparisons
On 5/12/16 10:05 AM, Ethan Watson wrote: On Thursday, 12 May 2016 at 13:03:58 UTC, Steven Schveighoffer wrote: Not taking one side or another on this, but due to D doing everything with reals, this is already the case. Mmmm. I don't want to open up another can of worms right now, but our x64 C++ code only emits SSE instructions at compile time (or AVX on the Xbox One). The only thing that attempts to use reals in our codebase is our D code. There was a question on the forums a while back about equivalent C++ code that didn't work in D. The answer turned out to be, you had to shoehorn everything into doubles in order to get the same answer. -Steve
Re: Always false float comparisons
On Thursday, 12 May 2016 at 14:29:01 UTC, Steven Schveighoffer wrote: There was a question on the forums a while back about equivalent C++ code that didn't work in D. The answer turned out to be, you had to shoehorn everything into doubles in order to get the same answer. I can certainly see that being the case, especially when dealing with SSE-based code. floats and doubles in XMM registers don't get calculated at 80-bit precision internally; their storage size dictates their calculation precision. Which has led MSVC to promote floats to doubles for CRT functions when it thinks it can get away with it (and in one instance the compiler forgot to convert back to float afterwards, so the lower 32 bits of a double were being treated as a float...). It's fun comparing assembly too. There's one particular function we have here that collapsed to something like 20-30 lines of SSE-based code (half that after I hand-optimised it with branchless SSE intrinsics and without a call to fmod). The same function in D resulted in a significantly larger amount of x87 code. I don't miss x87 at all. But this is getting OT.
Re: Always false float comparisons
On 5/12/2016 3:30 AM, Manu via Digitalmars-d wrote: But they can easily be helped by implementing behaviour that makes sense. One thing that's immutably true about computer floating point is that it does not make intuitive sense if your intuition is based on mathematics. It's a hopeless cause trying to bash it into 'making sense'. It's something one has to simply take the time and learn. This reminds me of all the discussions around trying to hide the fact that D strings are UTF-8 code units. The ultimate outcome of trying to make it "make sense" was the utter disaster of autodecoding. If you're set on a warning, at least make the warning recommend down-casting the higher precision term to the lower precision? Yes, of course. I believe error messages should suggest corrective action.
Re: Always false float comparisons
On 5/12/2016 6:03 AM, Steven Schveighoffer wrote: Not taking one side or another on this, but due to D doing everything with reals, this is already the case. Actually, C allows that behavior if I understood the spec correctly. D just makes things more explicit.
Re: Always false float comparisons
On Thursday, 12 May 2016 at 15:55:52 UTC, Walter Bright wrote: This reminds me of all the discussions around trying to hide the fact that D strings are UTF-8 code units. The ultimate outcome of trying to make it "make sense" was the utter disaster of autodecoding. Well maybe it was a disaster because the problem was only half solved. It looks like Perl 6 got it right: https://perl6advent.wordpress.com/2015/12/07/day-7-unicode-perl-6-and-you/
Re: Always false float comparisons
On 5/12/2016 9:15 AM, Guillaume Chatelet wrote: Well maybe it was a disaster because the problem was only half solved. It looks like Perl 6 got it right: https://perl6advent.wordpress.com/2015/12/07/day-7-unicode-perl-6-and-you/ Perl isn't a systems programming language. A systems language requires access to code units, invalid encodings, etc. Nor is Perl efficient. There are a lot of major efficiency gains by not autodecoding.
Re: Always false float comparisons
On 5/12/16 6:55 PM, Walter Bright wrote: This reminds me of all the discussions around trying to hide the fact that D strings are UTF-8 code units. The ultimate outcome of trying to make it "make sense" was the utter disaster of autodecoding. I am as unclear about the problems of autodecoding as I am about the necessity to remove curl. Whenever I ask I hear some arguments that work well emotionally but are scant on reason and engineering. Maybe it's time to rehash them? I just did so about curl, no solid argument seemed to come together. I'd be curious of a crisp list of grievances about autodecoding. -- Andrei
Re: Always false float comparisons
On Thu, May 12, 2016 at 09:20:05AM -0700, Walter Bright via Digitalmars-d wrote: [...] > There are a lot of major efficiency gains by not autodecoding. Any chance of killing autodecoding in Phobos in the foreseeable future? It's one of the things that I really liked about D in the beginning, but over time, I've come to realize more and more that it was a mistake. It's one of those things that only becomes clear in retrospect. T -- Ph.D. = Permanent head Damage
Re: Always false float comparisons
On Thursday, 12 May 2016 at 16:20:05 UTC, Walter Bright wrote: On 5/12/2016 9:15 AM, Guillaume Chatelet wrote: Well maybe it was a disaster because the problem was only half solved. It looks like Perl 6 got it right: https://perl6advent.wordpress.com/2015/12/07/day-7-unicode-perl-6-and-you/ Perl isn't a systems programming language. A systems language requires access to code units, invalid encodings, etc. Nor is Perl efficient. There are a lot of major efficiency gains by not autodecoding. [Sorry for the OT] I never claimed Perl was a systems programming language nor that it was efficient, just that their design looks more mature than ours. Also I think you missed this part of the article: "Of course, that’s all just for the default Str type. If you don’t want to work at a grapheme level, then you have several other string types to choose from: If you’re interested in working within a particular normalization, there’s the self-explanatory types of NFC, NFD, NFKC, and NFKD. If you just want to work with codepoints and not bother with normalization, there’s the Uni string type (which may be most appropriate in cases where you don’t want the NFC normalization that comes with normal Str, and keep text as-is). And if you want to work at the binary level, well, there’s always the Blob family of types :)." We basically have "Uni" in D, no normalized nor grapheme level.
Re: Always false float comparisons
On 5/12/16 12:00 PM, Walter Bright wrote: On 5/12/2016 6:03 AM, Steven Schveighoffer wrote: Not taking one side or another on this, but due to D doing everything with reals, this is already the case. Actually, C allows that behavior if I understood the spec correctly. D just makes things more explicit. This is the thread I was thinking about: https://forum.dlang.org/post/ugxmeiqsbitxxzoya...@forum.dlang.org Essentially, copy-pasted code from C results in different behavior in D because of the different floating point decisions made by the compilers. Your quote was: "[The problem is] Code will do one thing in C, and the same code will do something unexpectedly different in D." My response is that it already happens, so it may not be a convincing argument. I don't know the requirements or allowances of the C spec. All I know is the testable reality of the behavior on the same platform. -Steve
Re: Always false float comparisons
On 05/12/2016 06:29 PM, Andrei Alexandrescu wrote: I'd be curious of a crisp list of grievances about autodecoding. -- Andrei It emits code points (dchar) which is an awkward middle point between code units (char/wchar) and graphemes. Without any auto-decoding at all, every array T[] would be a random-access range of Ts as well. `.front` would be the same as `[0]`, `.length` would be the same as `.walkLength`, etc. That would make things less confusing for newbies, and more experienced programmers wouldn't accidentally mix the two abstraction levels. Of course, you'd have to be aware that a (w)char is not a character as perceived by humans, but a code unit. But auto-decoding to code points only shifts that problem: You have to be aware that a dchar is not a character either. Multiple dchars may form one visible character, one grapheme. For example, "\u00E4" and "a\u0308" encode the same grapheme: "ä". If char[], wchar[], dchar[] (and qualified variants) were ranges of graphemes, things would make the most sense for people who are not aware of delicate details of Unicode. You wouldn't accidentally cut code points or graphemes in half, `.walkLength` makes intuitive sense, etc. You could still accidentally use `.length` or `[0]`, though. So it still has some pitfalls.
Re: Always false float comparisons
On 5/12/16 12:29 PM, Andrei Alexandrescu wrote: On 5/12/16 6:55 PM, Walter Bright wrote: This reminds me of all the discussions around trying to hide the fact that D strings are UTF-8 code units. The ultimate outcome of trying to make it "make sense" was the utter disaster of autodecoding. I am as unclear about the problems of autodecoding as I am about the necessity to remove curl. Whenever I ask I hear some arguments that work well emotionally but are scant on reason and engineering. Maybe it's time to rehash them? I just did so about curl, no solid argument seemed to come together. I'd be curious of a crisp list of grievances about autodecoding. -- Andrei Autodecoding, IMO, is not the problem. The problem is hijacking an array to mean something other than an array. I ran into this the other day. In iopipe, I treat char[] buffers as actual buffers of code units. This makes sense, as I'm reading/writing text to buffers, and care very little about decoding. I wanted to test my system's ability to handle random-access ranges, and I'm using isRandomAccessRange || isNarrowString to get around this. As soon as I do chain(a, b) where a and b are narrow strings, this doesn't work, and I can't get it back. See my exception in the unit test: https://github.com/schveiguy/iopipe/blob/master/source/iopipe/traits.d#L91 If you want to avoid auto-decoding, you have to be very cautious about using Phobos features. -Steve
Re: Always false float comparisons
On 05/12/2016 07:12 PM, ag0aep6g wrote: [...] tl;dr: This is surprising to newbies, and a possible source of bugs for experienced programmers: writeln("ä".length); /* "2" */ writeln("ä"[0]); /* question mark in a diamond */ When people understand the above, they might still not understand this: writeln("length of 'a\u0308': ", "a\u0308".walkLength); /* "length of 'ä': 2" */ writeln("a\u0308".front); /* "a" */
Re: Always false float comparisons
On 5/12/2016 9:36 AM, Guillaume Chatelet wrote: just that their design looks more mature than ours. I don't think that can be inferred from a brief article. If you want to access D strings by various means, there's .byChar, .byWchar, .byDchar, .byCodeunit, etc. foreach can also pick off characters by various schemes.
Re: Always false float comparisons
On 5/12/2016 9:29 AM, Andrei Alexandrescu wrote: I am as unclear about the problems of autodecoding Started a new thread on that.
Re: Always false float comparisons
Am Mon, 9 May 2016 04:26:55 -0700 schrieb Walter Bright :

> > I wonder what's the difference between 1.30f and cast(float)1.30.
>
> There isn't one.

Oh yes, there is! Don't you love floating-point...

cast(float)1.30 rounds twice: first from a base-10 representation to a base-2 double value, and then again to a float. 1.30f converts directly to float. In some cases this does not yield the same value as converting the base-10 literal directly to a float!

Imagine this example (with the mantissa bit count reduced for illustration). The original base-10 rational number converted to base-2:

    10|1000|011010111011...

The bars mark the float and double mantissa precision limits: the 1st segment is the mantissa width of a float, the first two segments together are the mantissa width of a double, and the 3rd segment is the fraction used for rounding the base-10 literal to a double.

Conversion to double rounds down, since the fraction is < 0.5:

    10|1000   (double mantissa)

The proposed cast to float rounds down, too, because of the round-to-even rule for the exactly-0.5 case:

    10        (float mantissa)

But converting the base-10 literal directly to float rounds UP, since the fraction is then > 0.5:

    10|1000_011010111011...
    11        (float mantissa)

It happens when the 29 bits by which the double mantissa exceeds the float mantissa are 100…000, and both the bit in front of them (the float's last mantissa bit) and the bit after them are 0. The error would occur for ~0.00047% of numbers.

-- Marco
Re: Always false float comparisons
Am Thu, 12 May 2016 08:55:52 -0700 schrieb Walter Bright : > On 5/12/2016 3:30 AM, Manu via Digitalmars-d wrote: > > If you're set on a warning, at least make the warning recommend > > down-casting the higher precision term to the lower precision? > > Yes, of course. I believe error messages should suggest corrective action. Because of the aforementioned difference between cast(float)1.30 and 1.30f, the correct action for the original case is to suffix the literal with 'f'. That gives you the correct number to compare against. -- Marco
Re: Always false float comparisons
Am Thu, 12 May 2016 09:03:58 -0400 schrieb Steven Schveighoffer : > Not taking one side or another on this, but due to D doing everything > with reals, this is already the case. > > -Steve

As far as I have understood the situation:

- FPU instructions are inaccurate
- The FPU is typically set to highest precision (80-bit) to give accurate float/double results
- SSE instructions yield accurately rounded results
- The need for 80 bits is greatly reduced with SSE
- The FPU is deprecated on 64-bit x86
- Results of FPU and SSE math differ
- Compiler writers are discordant: GCC has a compiler switch for SSE or FPU, with SSE the default; DMD is FPU only
- Unless CTFE uses a soft-float implementation, then depending on the compiler and flags used to compile a D compiler, the resulting executable produces different CTFE floating-point results

-- Marco
Re: Always false float comparisons
On 5/12/2016 4:32 PM, Marco Leise wrote: - Unless CTFE uses soft-float implementation, depending on compiler and flags used to compile a D compiler, resulting executable produces different CTFE floating-point results I've actually been thinking of writing a 128 bit float emulator, and then using that in the compiler internals to do all FP computation with. It's not a panacea, as it won't change how things work in the back ends, nor will it change what happens at runtime, but it means the front end will give portable, consistent results.
Re: Always false float comparisons
On Friday, 13 May 2016 at 01:03:57 UTC, Walter Bright wrote: I've actually been thinking of writing a 128 bit float emulator, and then using that in the compiler internals to do all FP computation with. It's not a panacea, as it won't change how things work in the back ends, nor will it change what happens at runtime, but it means the front end will give portable, consistent results. And be 20x slower than hardware floats. Is it really worth it?
Re: Always false float comparisons
On Friday, 13 May 2016 at 03:18:05 UTC, Jack Stouffer wrote: On Friday, 13 May 2016 at 01:03:57 UTC, Walter Bright wrote: I've actually been thinking of writing a 128 bit float emulator, and then using that in the compiler internals to do all FP computation with. [...] And be 20x slower than hardware floats. Is it really worth it? Emulator is meant for computation during compilation only, so CTFE results are consistent across different compilers and compiler host hardware (IIUC). -- Alexander
Re: Always false float comparisons
On 13 May 2016 at 11:03, Walter Bright via Digitalmars-d wrote: > On 5/12/2016 4:32 PM, Marco Leise wrote: >> >> - Unless CTFE uses soft-float implementation, depending on >> compiler and flags used to compile a D compiler, resulting >> executable produces different CTFE floating-point results > > > I've actually been thinking of writing a 128 bit float emulator, and then > using that in the compiler internals to do all FP computation with. No. Do not. I've worked on systems where the compiler and the runtime don't share floating point precisions before, and it was a nightmare. One anecdote: the PS2 had a vector coprocessor; it ran reduced (24-bit, IIRC?) float precision, while code compiled for it used 32 bits in the compiler... to make it worse, the CPU also ran 32 bits. The result was that literals/constants, or float data fed from the CPU, didn't match data calculated by the vector unit at runtime (i.e., runtime computation of the same calculation that may have occurred at compile time to produce some constant didn't match). The result was severe cracking and visible/shimmering seams between triangles as sub-pixel alignment broke down. We struggled with this for years. It was practically impossible to solve, and mostly involved workarounds. I really just want D to use double throughout, like all the CPUs that run code today. This 80-bit real thing (only on x86 CPUs, though!) is a never-ending pain. > It's not a panacea, as it won't change how things work in the back ends, nor > will it change what happens at runtime, but it means the front end will give > portable, consistent results. This sounds like designing specifically for my problem from above, where the frontend is always different than the backend/runtime. Please have the frontend behave such that it operates on the precise datatype expressed by the type... the backend probably does this too, and the runtime certainly does; they all match.
Re: Always false float comparisons
On Friday, 13 May 2016 at 05:12:14 UTC, Manu wrote: No. Do not. I've worked on systems where the compiler and the runtime don't share floating point precisions before, and it was a nightmare. Use reproducible, cross-platform IEEE 754-2008 arithmetic and exact rational numbers. All other representations are just painful. Nothing wrong with supporting 16-, 32-, 64- and 128-bit, but stick to the reproducible standard. If people want "non-reproducible fast math", then they should specify it.
Re: Always false float comparisons
On 13 May 2016 at 07:12, Manu via Digitalmars-d wrote: > On 13 May 2016 at 11:03, Walter Bright via Digitalmars-d > wrote: >> On 5/12/2016 4:32 PM, Marco Leise wrote: >>> >>> - Unless CTFE uses soft-float implementation, depending on >>> compiler and flags used to compile a D compiler, resulting >>> executable produces different CTFE floating-point results >> >> >> I've actually been thinking of writing a 128 bit float emulator, and then >> using that in the compiler internals to do all FP computation with. > > No. Do not. > I've worked on systems where the compiler and the runtime don't share > floating point precisions before, and it was a nightmare. I have some bad news for you about CTFE then. This already happens in DMD even though float is not emulated. :-o
Re: Always false float comparisons
On 5/12/2016 8:18 PM, Jack Stouffer wrote: And be 20x slower than hardware floats. Is it really worth it? I seriously doubt the slowdown would be measurable, as the number of float ops the compiler performs is insignificant.
Re: Always false float comparisons
On 5/12/2016 10:12 PM, Manu via Digitalmars-d wrote: No. Do not. I've worked on systems where the compiler and the runtime don't share floating point precisions before, and it was a nightmare. One anecdote, the PS2 had a vector coprocessor; it ran reduced (24bit iirc?) float precision, code compiled for it used 32bits in the compiler... to make it worse, the CPU also ran 32bits. The result was, literals/constants, or float data fed from the CPU didn't match data calculated by the vector unit at runtime (ie, runtime computation of the same calculation that may have occurred at compile time to produce some constant didn't match). The result was severe cracking and visible/shimmering seams between triangles as sub-pixel alignment broke down. We struggled with this for years. It was practically impossible to solve, and mostly involved workarounds. I understand there are some cases where this is needed, I've proposed intrinsics for that. I really just want D to use double throughout, like all the cpu's that run code today. This 80bit real thing (only on x86 cpu's though!) is a never ending pain. It's 128 bits on other CPUs. This sounds like designing specifically for my problem from above, where the frontend is always different than the backend/runtime. Please have the frontend behave such that it operates on the precise datatype expressed by the type... the backend probably does this too, and runtime certainly does; they all match. Except this never happens anyway.
Re: Always false float comparisons
On 5/12/2016 4:06 PM, Marco Leise wrote: Am Mon, 9 May 2016 04:26:55 -0700 schrieb Walter Bright : I wonder what's the difference between 1.30f and cast(float)1.30. There isn't one. Oh yes, there is! Don't you love floating-point... cast(float)1.30 rounds twice, first from a base-10 representation to a base-2 double value and then again to a float. 1.30f directly converts to float. This is one reason why the compiler carries everything internally to 80 bit precision, even if they are typed as some other precision. It avoids the double rounding.
Re: Always false float comparisons
On 13.05.2016 21:25, Walter Bright wrote: On 5/12/2016 4:06 PM, Marco Leise wrote: Am Mon, 9 May 2016 04:26:55 -0700 schrieb Walter Bright : I wonder what's the difference between 1.30f and cast(float)1.30. There isn't one. Oh yes, there is! Don't you love floating-point... cast(float)1.30 rounds twice, first from a base-10 representation to a base-2 double value and then again to a float. 1.30f directly converts to float. This is one reason why the compiler carries everything internally to 80 bit precision, even if they are typed as some other precision. It avoids the double rounding. IMO the compiler should never be allowed to use a precision different from the one specified.
Re: Always false float comparisons
On Friday, 13 May 2016 at 18:16:29 UTC, Walter Bright wrote: Please have the frontend behave such that it operates on the precise datatype expressed by the type... the backend probably does this too, and runtime certainly does; they all match. Except this never happens anyway. It should in C++ with the right strict-settings, which makes the compiler use reproducible floating point operations. AFAIK it should work out even in modern JavaScript.
Re: Always false float comparisons
On 5/13/2016 12:48 PM, Timon Gehr wrote: IMO the compiler should never be allowed to use a precision different from the one specified. I take it you've never been bitten by accumulated errors :-) Reduced precision is only useful for storage formats and increasing speed. If a less accurate result is desired, your algorithm is wrong.
Re: Always false float comparisons
On 5/13/2016 1:57 PM, Ola Fosheim Grøstad wrote: It should in C++ with the right strict-settings, Consider what the C++ Standard says, not what the endless switches to tweak the compiler do.
Re: Always false float comparisons
On Friday, 13 May 2016 at 21:36:52 UTC, Walter Bright wrote: On 5/13/2016 1:57 PM, Ola Fosheim Grøstad wrote: It should in C++ with the right strict-settings, Consider what the C++ Standard says, not what the endless switches to tweak the compiler do. The C++ standard cannot even require IEEE754. Nobody relies only on what the C++ standard says in real projects. They rely on what the chosen compiler(s) on concrete platform(s) do.
Re: Always false float comparisons
On 5/13/2016 2:42 PM, Ola Fosheim Grøstad wrote: On Friday, 13 May 2016 at 21:36:52 UTC, Walter Bright wrote: On 5/13/2016 1:57 PM, Ola Fosheim Grøstad wrote: It should in C++ with the right strict-settings, Consider what the C++ Standard says, not what the endless switches to tweak the compiler do. The C++ standard cannot even require IEEE754. Nobody relies only on what the C++ standard says in real projects. They rely on what the chosen compiler(s) on concrete platform(s) do. Nevertheless, C++ is what the Standard says it is. If Brand X compiler does something else, you should call it "Brand X C++".
Re: Always false float comparisons
On 13.05.2016 23:35, Walter Bright wrote: On 5/13/2016 12:48 PM, Timon Gehr wrote: IMO the compiler should never be allowed to use a precision different from the one specified. I take it you've never been bitten by accumulated errors :-) ... If that was the case it would be because I explicitly ask for high precision if I need it. If the compiler using or not using a higher precision magically fixes an actual issue with accumulated errors, that means the correctness of the code is dependent on something hidden, that you are not aware of, and that could break any time, for example at a time when you really don't have time to track it down. Reduced precision is only useful for storage formats and increasing speed. If a less accurate result is desired, your algorithm is wrong. Nonsense. That might be true for your use cases. Others might actually depend on IEE 754 semantics in non-trivial ways. Higher precision for temporaries does not imply higher accuracy for the overall computation. E.g., correctness of double-double arithmetic is crucially dependent on correct rounding semantics for double: https://en.wikipedia.org/wiki/Quadruple-precision_floating-point_format#Double-double_arithmetic Also, it seems to me that for e.g. https://en.wikipedia.org/wiki/Kahan_summation_algorithm, the result can actually be made less precise by adding casts to higher precision and truncations back to lower precision at appropriate places in the code. And even if higher precision helps, what good is a "precision-boost" that e.g. disappears on 64-bit builds and then creates inconsistent results? Sometimes reproducibility/predictability is more important than maybe making fewer rounding errors sometimes. This includes reproducibility between CTFE and runtime. Just actually comply to the IEEE floating point standard when using their terminology. There are algorithms that are designed for it and that might stop working if the language does not comply. 
Then maybe add additional built-in types with a given storage size that additionally /guarantee/ a certain amount of additional scratch space when used for function-local computations.
Re: Always false float comparisons
On 14.05.2016 02:49, Timon Gehr wrote: result can actually be made less precise less accurate. I need to go to sleep.
Re: Always false float comparisons
On 14.05.2016 02:49, Timon Gehr wrote: IEE IEEE.
Re: Always false float comparisons
On 5/13/2016 5:49 PM, Timon Gehr wrote: Nonsense. That might be true for your use cases. Others might actually depend on IEE 754 semantics in non-trivial ways. Higher precision for temporaries does not imply higher accuracy for the overall computation. Of course it implies it. An anecdote: a colleague of mine was once doing a chained calculation. At every step, he rounded to 2 digits of precision after the decimal point, because 2 digits of precision was enough for anybody. I carried out the same calculation to the max precision of the calculator (10 digits). He simply could not understand why his result was off by a factor of 2, which was a couple hundred times his individual roundoff error. E.g., correctness of double-double arithmetic is crucially dependent on correct rounding semantics for double: https://en.wikipedia.org/wiki/Quadruple-precision_floating-point_format#Double-double_arithmetic Double-double has its own peculiar issues, and is not relevant to this discussion. Also, it seems to me that for e.g. https://en.wikipedia.org/wiki/Kahan_summation_algorithm, the result can actually be made less precise by adding casts to higher precision and truncations back to lower precision at appropriate places in the code. I don't see any support for your claim there. And even if higher precision helps, what good is a "precision-boost" that e.g. disappears on 64-bit builds and then creates inconsistent results? That's why I was thinking of putting in 128 bit floats for the compiler internals. Sometimes reproducibility/predictability is more important than maybe making fewer rounding errors sometimes. This includes reproducibility between CTFE and runtime. A more accurate answer should never cause your algorithm to fail. It's like putting better parts in your car causing the car to fail. Just actually comply to the IEEE floating point standard when using their terminology. 
There are algorithms that are designed for it and that might stop working if the language does not comply. Conjecture. I've written FP algorithms (from Cody+Waite, for example), and none of them degraded when using more precision. Consider that the 8087 has been operating at 80 bits of precision by default for 30 years. I've NEVER heard of anyone getting actual bad results from this. People have complained that test suites expecting less accurate results broke. They have complained about the speed of x87. And Intel has been trying to get rid of the x87 forever. Sometimes I wonder if there's a disinformation campaign about more accuracy being bad, because it smacks of nonsense. BTW, I once asked Prof Kahan about this. He flat out told me that the only reason to downgrade precision was if storage was tight or you needed it to run faster. I am not making this up.
Re: Always false float comparisons
On Saturday, 14 May 2016 at 01:26:18 UTC, Walter Bright wrote: BTW, I once asked Prof Kahan about this. He flat out told me that the only reason to downgrade precision was if storage was tight or you needed it to run faster. I am not making this up. He should have been aware of reproducibility, since people use fixed point to achieve it; if he wasn't, then shame on him. In Java all compile time constants are done using strict settings, and it provides a keyword «strictfp» to get strict behaviour for a particular class/function. In C++ template parameters cannot be floating point; you use std::ratio so you get an exact rational number instead. This is to avoid inaccuracy problems in the type system. In interval arithmetic you need to round the bound computations up and down correctly to get correct results. (It is ok for the interval to be larger than the real result, but the opposite is a disaster.) With reproducible arithmetic you can do advanced, accurate static analysis of programs using floating point code. With reproducible arithmetic you can sync nodes in a cluster based on "time" alone, saving exchanges of data in simulations. There are lots of reasons to default to well-defined floating point arithmetic.
Re: Always false float comparisons
On Saturday, 14 May 2016 at 01:26:18 UTC, Walter Bright wrote: An anecdote: a colleague of mine was once doing a chained calculation. At every step, he rounded to 2 digits of precision after the decimal point, because 2 digits of precision was enough for anybody. I carried out the same calculation to the max precision of the calculator (10 digits). He simply could not understand why his result was off by a factor of 2, which was a couple hundred times his individual roundoff error. I'm sympathetic to this. Some of my work deals with statistics, and you see people try to use formulas that are faster but less accurate, and it can really get you into trouble. Var(X) = E(X^2) - E(X)^2 is only true for real numbers, not floating point arithmetic. It can also lead to weird results when dealing with matrix inverses. I like the idea of a float type that is effectively the largest precision on your machine (the D real type). However, I could be convinced by the argument that you should have to opt in to this and that internal calculations should not implicitly use it, mainly because I'm sympathetic to the people who would prefer speed to precision. Not everybody needs all the precision all the time.
Re: Always false float comparisons
On Saturday, 14 May 2016 at 05:46:38 UTC, Ola Fosheim Grøstad wrote: In Java all compile time constants are done using strict settings and it provides a keyword «strictfp» to get strict behaviour for a particular class/function. In Java everything used to be strictfp (and there was no keyword), but the default was changed to non-strict arithmetic after a backlash from numeric programmers.
Re: Always false float comparisons
On Saturday, 14 May 2016 at 09:11:49 UTC, QAston wrote: On Saturday, 14 May 2016 at 05:46:38 UTC, Ola Fosheim Grøstad wrote: In Java all compile time constants are done using strict settings and it provides a keyword «strictfp» to get strict behaviour for a particular class/function. In Java everything used to be strictfp (and there was no keyword), but the default was changed to non-strict arithmetic after a backlash from numeric programmers. Java had a healthy default, but switched in order to not look so bad in comparison to C on current-day hardware. However, they retained the ability to get stricter floating point. At the end of the day there is literally no end to how far you can move down the line of implementation-defined floating point. Take a look at the ARM instruction set; it makes x86 look high level. You can even choose how many iterations you want for complex instructions (i.e. choose the approximation level for faster execution). However, these days IEEE 754-2008 is becoming available in hardware, and therefore one is better off choosing the most well-defined semantics for the regular case. It means fewer optimization opportunities unless you specify relaxed semantics, but that isn't such a bad trade-off as long as specifying relaxed semantics is easy.
Re: Always false float comparisons
On Saturday, 14 May 2016 at 01:26:18 UTC, Walter Bright wrote: Sometimes reproducibility/predictability is more important than maybe making fewer rounding errors sometimes. This includes reproducibility between CTFE and runtime. A more accurate answer should never cause your algorithm to fail. It's like putting better parts in your car causing the car to fail. This is all quite discouraging from a scientific programmer's point of view. Precision is important, more precision is good, but reproducibility and predictability are critical. Tables of constants that change value if I put a `static` in front of them? Floating point code that produces different results after a compiler upgrade / with different non-fp-related switches? Ew.
Re: Always false float comparisons
On 5/14/2016 3:16 AM, John Colvin wrote: This is all quite discouraging from a scientific programmer's point of view. Precision is important, more precision is good, but reproducibility and predictability are critical. I used to design and build digital electronics out of TTL chips. Over time, TTL chips got faster and faster. The rule was to design the circuit with a minimum signal propagation delay, but never a maximum. Therefore, putting in faster parts will never break the circuit. Engineering is full of things like this. It's sound engineering practice. I've never ever heard of a circuit requiring a resistor with 20% tolerance that would fail if a 10% tolerance one was put in, for another example. Tables of constants that change value if I put a `static` in front of them? Floating point code that produces different results after a compiler upgrade / with different non-fp-related switches? Ew. Floating point is not exact calculation. It just isn't. Designing an algorithm that relies on worse answers is absurd to my ears. Results should be tested to have a minimum number of correct bits in the answer, not a maximum number. This is, in fact, how std.math checks the results of the algorithms implemented in it, and how it should be done. This is not some weird crazy idea of mine; as I said, the x87 FPU in every x86 chip has been doing this for several decades.
Re: Always false float comparisons
On 5/13/2016 10:52 PM, jmh530 wrote: I like the idea of a float type that is effectively the largest precision on your machine (the D real type). However, I could be convinced by the argument that you should have to opt-in for this and that internal calculations should not implicitly use it. Mainly because I'm sympathetic to the people who would prefer speed to precision. Not everybody needs all the precision all the time. Speed matters in the generated target program, not in the compiler's internal floating point calculations, simply because the compiler does very, very few of them.
Re: Always false float comparisons
On 5/13/2016 10:46 PM, Ola Fosheim Grøstad wrote: On Saturday, 14 May 2016 at 01:26:18 UTC, Walter Bright wrote: BTW, I once asked Prof Kahan about this. He flat out told me that the only reason to downgrade precision was if storage was tight or you needed it to run faster. I am not making this up. He should have been aware of reproducibility since people use fixed point to achieve it, if he wasn't then shame on him. Kahan designed the x87 and wrote the IEEE 754 standard, so I'd do my homework before telling him he is wrong about basic floating point stuff. In Java all compile time constants are done using strict settings and it provides a keyword «strictfp» to get strict behaviour for a particular class/function. What happened with Java was interesting. The original spec required double arithmetic to be done with double precision. This wound up failing all over the place on x86 machines, which (as I explained) does temporaries to 80 bits. Forcing the x87 to use doubles for intermediate values caused Java to run much slower, and Sun was forced to back off on that requirement.
Re: Always false float comparisons
On Saturday, 14 May 2016 at 18:58:35 UTC, Walter Bright wrote: Kahan designed the x87 and wrote the IEEE 754 standard, so I'd do my homework before telling him he is wrong about basic floating point stuff. You don't have to tell me who Kahan is. I don't see the relevance. You are trying to appeal to authority. Stick to facts. :-)