On Wednesday, 18 May 2016 at 11:16:44 UTC, Joakim wrote:
Welcome to the wonderful world of C++! :D

More seriously, it is well-defined for that implementation, you did not raise the issue of the spec till now. In fact, you seemed not to care what the specs say.

Eh? All C/C++ compilers I have ever used coerce to single precision upon binding.

I care about what the spec says, why it says it, and how it is used when targeting the specific hardware.

Or more generally, when writing libraries I target the language spec, when writing applications I target the platform (language + compiler + hardware).

No, it has nothing to do with language semantics and everything to do with bad numerical programming.

No. You can rant all you want about bad numerical programming. But by your definition all DSP programmers are bad at numerics. Which _obviously_ is not true.

That is grasping at straws.

There is NO excuse for preventing programmers from writing reliable performant code that solves the problem at hand to the accuracy that is required by the application.

Because floating-point is itself fuzzy, in so many different ways. You are depending on exactly repeatable results with a numerical type that wasn't meant for it.

Floating point is not fuzzy. It is basically a random sample from an interval of potential solutions, an interval whose width increases with each computation. Unfortunately, you have to use interval arithmetic to get that interval; with regular floating point you lose the information about it. It is an approximation. IEEE 754-2008 makes it possible to compute that interval, btw.

Fuzzy is different. Fuzzy means that the range itself is the value. It does not represent an approximation. ;-)
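To make that concrete, here is a minimal sketch. The Interval type is something I made up for this post, not a library API, and rounding outward by one ulp per operation is a crude stand-in for the directed rounding a proper IEEE 754-2008 interval implementation would use:

import std.math : nextDown, nextUp;
import std.stdio;

// Toy interval type: tracks a conservative [lo, hi] enclosure of the
// true real-valued result. Real interval arithmetic would use directed
// rounding instead of the one-ulp outward rounding done here.
struct Interval
{
    double lo, hi;

    Interval opBinary(string op : "+")(Interval rhs) const
    {
        return Interval(nextDown(lo + rhs.lo), nextUp(hi + rhs.hi));
    }
}

void main()
{
    // Note: double(0.1) is already an approximation of decimal 0.1;
    // a real implementation would widen the input interval too.
    auto x = Interval(0.1, 0.1);
    auto acc = Interval(0.0, 0.0);
    foreach (i; 0 .. 1_000_000)
        acc = acc + x;
    writefln("enclosure after 1e6 additions: [%.17g, %.17g]", acc.lo, acc.hi);
    writefln("width: %g", acc.hi - acc.lo);
}

The printed width is exactly the information that plain floating point throws away after every operation.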


You keep saying this: where did anyone mention unit tests not running with the same precision till you just brought it up out of nowhere? The only prior mention was that compile-time calculation of constants that are then checked for bit-exact equality in the tests might have problems, but that's certainly not all tests and I've repeatedly pointed out you should never be checking for bit-exact equality.

I have stated time and time again that it is completely unacceptable if there is even a remote chance for anything in the unit test to evaluate at a higher precision than in the production code.

That is not a unit test, it is a sham.
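To sketch what I mean (hypothetical code; whether the two values actually differ depends entirely on what precision the compiler chooses for constant folding and intermediates, which is exactly the problem):

import std.stdio;

// Production code: intended to run strictly in single precision.
float mix(float a, float b) { return 0.5f * (a + b); }

unittest
{
    // If the compiler may evaluate the right-hand side (or its
    // constant-folded parts) at a higher intermediate precision than
    // mix() gets at run time, this test no longer exercises the
    // production semantics; it can pass or fail for reasons that have
    // nothing to do with mix().
    float a = 0.1f, b = 0.2f;
    float expected = 0.5f * (a + b);
    assert(mix(a, b) == expected);
}

void main() {}  // compile and run with: dmd -unittest -run file.d

The assert may well hold on any given setup; the complaint is that the language gives no guarantee that it tests the same arithmetic the shipped binary performs.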

The point is that what you consider reliable will be less accurate, sometimes much less.

Define "accurate". Accuracy has no meaning without a desired outcome to reference.

I care about 4 oscillators having the same phase. THAT MEANS: they all have to use the exact same constants.

If not, they _will_ drift and phase cancellation _will_ occur.

I care about adding and later removing the exact same constant. If they differ, (more) DC offset will build up.

It is all about getting below the right tolerance threshold while staying real-time. You can compensate by increasing the control rate (reducing the number of interpolated values).
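A minimal sketch of the oscillator point (the sample rate, frequency and the one-ulp mismatch are invented for illustration):

import std.math : nextUp, sin, PI;
import std.stdio;

void main()
{
    // Two oscillators that are supposed to stay phase-locked.
    // If their per-sample phase increments are not bit-identical
    // (say, one constant was folded at a different precision),
    // the accumulated phase difference grows with every sample.
    immutable float incA = 2.0f * cast(float) PI * 440.0f / 48_000.0f;
    immutable float incB = nextUp(incA);  // simulate a one-ulp mismatch

    float phaseA = 0.0f, phaseB = 0.0f;
    foreach (i; 0 .. 48_000 * 60)  // one minute at 48 kHz
    {
        phaseA += incA;
        phaseB += incB;
    }
    writefln("phase drift after 60 s: %g rad", phaseB - phaseA);
    writefln("residual after 'cancellation': %g", sin(phaseA) - sin(phaseB));
}

In real code the phase would be wrapped, but the drift itself is the point: the constants have to be bit-identical, not merely close.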


The point is that there are _always_ bounds, so you can never check for the same value. Almost any guessed bounds will be better than incorrectly checking for the bit-exact value.

No. Guessed bounds are not better. I have never said that anyone should _incorrectly_ do anything. I have said that they should _correctly_ understand what they are doing and that guessing bounds just leads to a worse problem:

Programs that intermittently fail in spectacular ways that are very difficult to debug.

You cannot even compute the bounds if the compiler can use higher precision. IEEE 754-2008 makes it possible to accurately compute bounds.

Not supporting that is _very_ bad.
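For instance (a sketch; it assumes the platform honours C99 fesetround and that the compiler performs each operation in the declared precision without folding or reordering, which is precisely what is at stake here):

import core.stdc.fenv;
import std.stdio;

// Run the same summation twice under directed rounding to obtain
// guaranteed lower and upper bounds on the true result, which is the
// IEEE 754-2008 way of computing bounds.
double total(const double[] xs)
{
    double s = 0.0;
    foreach (x; xs)
        s += x;
    return s;
}

void main()
{
    auto xs = new double[](1_000_000);
    xs[] = 0.1;

    fesetround(FE_DOWNWARD);
    double lo = total(xs);
    fesetround(FE_UPWARD);
    double hi = total(xs);
    fesetround(FE_TONEAREST);

    writefln("the true sum lies in [%.17g, %.17g]", lo, hi);
}

A compiler that silently promotes the accumulation to a higher precision computes bounds for some other computation than the one your declared types describe.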


should always be thought about. In the latter case, i.e. your f(x) example, it has nothing to do with error bounds, but that your f(x) is not only invalid at 2, but in a range around 2.

It isn't. For what typed number besides 2 is it invalid?

Zero is not the only number that screws up that calculation.

It is. And not only can it screw up the calculation, it can screw up the real-time properties of the algorithm on a specific FPU, which is even worse.

f(x) and what isn't, that has nothing to do with D. You want D to provide you a way to only check for 0.0, whereas my point is that there are many numbers in the neighborhood of 0.0 which will screw up your calculation, so really you should be using approxEqual.

What IEEE32 number besides 2 can cause problems?
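To be concrete (the exact f(x) from earlier in the thread isn't quoted here, so assume the archetypal form below):

import std.math : isInfinity, nextDown, nextUp;
import std.stdio;

// Assumed shape of the f(x) under discussion: singular at exactly x = 2.
float f(float x) { return 1.0f / (x - 2.0f); }

void main()
{
    // Only x == 2.0f makes the denominator zero. The nearest IEEE32
    // neighbours of 2 give a tiny but nonzero denominator, so the
    // result is large yet finite and perfectly representable.
    writefln("f(2.0f)           = %g", f(2.0f));            // inf
    writefln("f(nextUp(2.0f))   = %g", f(nextUp(2.0f)));    // ~4.2e6
    writefln("f(nextDown(2.0f)) = %g", f(nextDown(2.0f)));  // ~-8.4e6

    assert(isInfinity(f(2.0f)));
    assert(!isInfinity(f(nextUp(2.0f))));
    assert(!isInfinity(f(nextDown(2.0f))));
}

NaN aside, every representable IEEE32 value other than 2.0f yields a finite result; that is why the check belongs at exactly 2, not in a fuzzy neighbourhood of it.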

It isn't changing your model, you can always use a very small

But it is!

If your point is that you're modeling artificial worlds that have nothing to do with reality, you can always change your threshold around 0.0 to be much smaller, and who cares if it can't go all the way to zero, it's all artificial, right? :)

Why would I have a threshold around 2, when only 2 is causing problems at the hardware level?


If you're modeling the real world, any function that blows up and gives you bad data, blows up over a range, never a single point, because that's how measurement works.

If I am analyzing real-world data I _absolutely_ want all code paths to use the specified precision. I absolutely don't want to wonder whether some computations behave differently from others because of the compiler.

Now, providing 64-, 128- or 256-bit mantissas through software emulation is _perfectly_ sound. Computing on 10- and 24-bit mantissas as if they were 256 bits wide, without the programmer specifying it, is bad.

And yes, half precision has only 10 stored mantissa bits.

They will give you large numbers that can be represented in the computer, but do not work out to describe the real world, because such formulas are really invalid in a neighborhood of 2, not just at 2.

I don't understand what you mean. Even Inf as an outcome tells me something. Or in the case of a simulation, Inf might be completely valid input for the next stage.

I have not measured this speed myself so I can't say.

www.agner.org

The gap is of course even bigger for 32-bit floats: there we are at 10x-20x faster than 80-bit.

A lot of hand-waving about how more precision is worse, with no real example, which is what Walter keeps asking for.

I have provided plenty of examples.

You guys just don't want to listen. Because it is against the sect's religious beliefs that D falls short of expectations both on integers and floating point.

This keeps D at the hobby level as a language. It is a very deep-rooted cultural problem.

But that is ok. Just don't complain about people saying that D is not fit for production. Because they have several good reasons to say that.

