The SPARC multiplier is maybe an order of magnitude slower than x86,
based on my experience testing hash algorithms. That's the only one I
can think of though.
Sent from my iPhone
On Mar 28, 2010, at 10:46 AM, Don Clugston <[email protected]>
wrote:
On 28 March 2010 17:35, Lars Tandle Kyllingstad <[email protected]>
wrote:
I've committed an update of my std.complex proposal:
http://github.com/kyllingstad/ltk/blob/master/ltk/complex.d
The current status is as follows:
- Changed mod() to abs(), like Don requested. I'll implement abs() with
std.math.hypot() as soon as bug 4023 is fixed.
Yup, that's an embarrassing one. I'll get it fixed ASAP.
- Added overloads for the exponentiation operator:
complex^^complex
complex^^real
complex^^integer, w/special cases for 0,1,2,3
- Arithmetic operations now work between different-width complex types, as
well as between complex and different FP and integer types.
- I tried to make the opOpAssign() functions return by ref, but got hit by
bug 2460. This really ought to be fixed now that operator overloading is
done with function templates, but if it's not a priority then I'll use the
workaround (i.e. write template(T) { ... }).
- I've used FPTemporary!T where appropriate.
Two questions:
I had a look at the implementation of std.numeric.FPTemporary, and noted
that it's just an alias for real, regardless of the specified type. (Also,
the std.math.hypot() function is just defined for the real type.) Can we be
sure this is always what the user wants? Won't using double be faster in
some cases, or is it so that calculations are done with 80-bit precision
anyway?
On x86, real is always better. On other architectures, the definition
of FPTemporary will be different.
As you may have seen, I've put in two different multiplication formulae. I
did this because I read that multiplication is slow on some architectures,
and the first formula has fewer multiplications. On my machine, however,
the second (and "standard") one is slightly faster. Do you know which
architectures this refers to, and are any of them relevant for D?
I don't think that would be true of any architectures made in the last
twenty years.
Worrying about that would be premature optimisation.
WRT both these last two points, array operations on complex types are
the place where optimisation really matters.
_______________________________________________
phobos mailing list
[email protected]
http://lists.puremagic.com/mailman/listinfo/phobos