Denis Koroskin wrote:
On Fri, 13 Mar 2009 13:52:08 +0300, Don <nos...@nospam.com> wrote:
Consider this code:
double add(double x, double y) {
return x + y;
}
Is this code pure? Is it nothrow?
Technically, it isn't. If you change the floating-point rounding mode
on the processor, you get a different result for exactly the same
inputs. If you change the floating-point traps, it could throw a
floating-point overflow exception.
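For example (a minimal sketch, not from the original post, using std.math's FloatingPointControl as it exists in today's Phobos), switching the rounding mode makes the very same call return a different bit pattern:

import std.math : FloatingPointControl;
import std.stdio : writefln;

double add(double x, double y) { return x + y; }

void main()
{
    double a = 1.0;
    double b = 0x1p-60;            // too small to survive round-to-nearest

    writefln("%.20g", add(a, b));  // prints 1 (default: round-to-nearest)

    FloatingPointControl ctrl;
    ctrl.rounding = FloatingPointControl.roundUp;
    writefln("%.20g", add(a, b));  // prints 1.0000000000000002... (round up)
}   // ctrl's destructor restores the previous rounding mode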
One Draconian solution would be to say that this code is NOT pure.
This would mean that floating-point could not be used _at all_ in pure
code. Therefore, a template function like this could probably not be
pure:
T add(T)(T x, T y) { return x + y; }
And everything unravels from there.
Another solution would be to simply ignore the floating point flags,
and mark the relevant functions as pure anyway. That would be a shame
-- DMD goes to a lot of trouble to ensure that they remain valid
(which is one of the reasons why DMD's floating point code is so
slow). We'd be well behind C99 and C++ in support for IEEE
floating-point. I don't like that much.
--- A solution ---
Extend the parametrized module declaration to include something like
module(system, floatingpoint)
as well as
module(system).
This would indicate that the module is floating-point aware. Every
function in that module has two implicit inout parameters: the
floating-point status and control registers. This matters ONLY if the
compiler chooses to cache the result of a function in that module
which is marked as 'pure': in that case it must also check the
floating-point status and control registers whenever the function is
called from inside any floatingpoint module. (Most likely, the
compiler would simply not bother caching results of pure functions in
floating-point modules.) This ensures that purity is preserved, _and_
the advanced floating-point features remain available.
Functions inside a floating-point aware module would behave exactly as
they do in normal D.
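To make that concrete, here is a rough sketch of what such a module might look like (hypothetical: the module(floatingpoint) declaration is only the proposed syntax and is not accepted by any existing D compiler, and the module and function names are made up; the body uses std.math's FloatingPointControl):

// Proposed syntax only -- this does not compile with any current D compiler.
module(system, floatingpoint) interval;

import std.math : FloatingPointControl;

// Under the proposal this can be marked pure: the floating-point control and
// status registers are treated as implicit inout parameters of every function
// in this floating-point aware module.
pure double upperSum(double x, double y)
{
    FloatingPointControl ctrl;
    ctrl.rounding = FloatingPointControl.roundUp;  // deliberately uses an advanced feature
    return x + y;                                  // an upper bound on the exact sum
}   // ctrl restores the caller's rounding mode on scope exit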
And now comes the big win. The compiler knows that if a module is not
"floatingpoint", the status flags and rounding DO NOT MATTER. It can
assume the default floating-point environment (round-to-nearest, no
floating-point exceptions enabled). This allows the compiler to
optimize more aggressively. Of course, it is not required to do so.
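As one concrete illustration (a hedged sketch; the example is mine, not from the proposal): constant folding of floating-point expressions is only valid when the compiler knows which rounding mode will be in effect at run time, which is exactly the guarantee a non-floatingpoint module would give it.

import std.math : FloatingPointControl;
import std.stdio : writeln;

enum foldedThird = 1.0 / 3.0;        // folded at compile time, under round-to-nearest

void main(string[] args)
{
    FloatingPointControl ctrl;
    ctrl.rounding = FloatingPointControl.roundUp;

    double one = args.length;        // 1.0 at run time (no arguments), opaque to the folder
    double runtimeThird = one / 3.0; // evaluated under round-up

    writeln(foldedThird == runtimeThird);  // false: the two differ in the last bit
}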
Note: The compiler can actually cache calls to pure functions defined
in "floatingpoint" modules in the normal way, since even though the
functions care about advanced features, the code calling those
functions isn't interested. But I doubt the compiler would bother.
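For concreteness, a hedged sketch of the caching caveat described above (the hand-written memoisation is purely illustrative -- a compiler would do this internally, and the names are made up): inside a floatingpoint module, a cached result of a pure function may only be reused if the rounding mode is unchanged as well as the argument.

import core.stdc.fenv : fegetround;
import std.math : rint;

double cachedArg;
double cachedResult;
int    cachedMode;
bool   haveCache;

// rint() rounds to an integer using the current rounding mode, so its result
// depends on the FP control register as well as on its argument.
double memoisedRint(double x)
{
    const mode = fegetround();       // read the current rounding mode from the FPU
    if (haveCache && cachedArg == x && cachedMode == mode)
        return cachedResult;         // same argument AND same mode: safe to reuse
    cachedArg = x;
    cachedMode = mode;
    cachedResult = rint(x);
    haveCache = true;
    return cachedResult;
}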
This proposal is a little similar to the "Borneo" programming language
proposal for Java,
http://sonic.net/~jddarcy/Borneo/
which was made by one of William Kahan's students. He proposed
annotating every function, specifying which floating point exceptions
it may read or write. In my opinion, that's massive overkill -- it's
only in a few situations that anyone cares about this stuff; most of
the time, you don't. And even when you do care about it, it will only
be in a small number of modules.
Since DMD doesn't cache pure functions, it doesn't require any changes
to support this (other than the module statement).
BTW, module(floatingpoint) is just a suggestion. I'm sure someone
could come up with a better term. It could even be really verbose,
since it's hardly ever going to be used.
Don.
Does it mean that *any* pure function that involves floating-point
arithmetic will have to carry an additional status parameter?
If so, why make it explicit?
No. The status parameter is a register on the FPU. You don't have any
choice about it, it's always there. It costs nothing. The proposal is
about specifying a very limited set of circumstances where it is not
allowed to be corrupted. At present, it needs to be preserved
_everywhere_, and that's a big problem for pure functions.
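A small illustration of that (a sketch assuming std.math's ieeeFlags and resetIeeeFlags, which read and clear the hardware status word): nothing is passed as a parameter anywhere; the sticky flags simply live in the FPU.

import std.math : ieeeFlags, resetIeeeFlags;
import std.stdio : writeln;

void main(string[] args)
{
    resetIeeeFlags();                 // clear the sticky status flags
    double denom = args.length - 1;   // 0.0 when run with no arguments
    double q = 1.0 / denom;           // sets the divide-by-zero flag in hardware
    writeln(ieeeFlags.divByZero);     // true -- read straight from the status register
}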
I've been programming in C++ for years now and have never ever used
floating-point exceptions or customized rounding modes. They may be
useful in some cases, but I believe they are not frequently used features.
I agree, that's the whole point!
That's why I believe the following would be suitable for most programmers:
- whenever you enter a pure function, all floating-point settings get
saved to the stack and reset to defaults (round to nearest, no exceptions
enabled, etc.)
- floating-point settings get restored upon leaving the pure function
(normally or via exception)
That's MUCH more complicated than this proposal. And it's very slow.
Also, it doesn't work. It would mean that functions like exp() cannot be
pure -- and it's mostly those small functions where the rounding mode
actually matters.
- the user may change rounding modes explicitly inside a pure function, but
the changes won't be visible to the outer code (see previous point)
Comments?
It seems my proposal wasn't clear enough. In practice, there is NO
CHANGE compared to the way things are now. The ONLY difference is that
this proposal provides the guarantees we need to allow things like
sin(x) to be pure nothrow.
And it has the side-effect that it allows compiler writers a bit more
freedom.