Re: [Haskell-cafe] Sinus in Haskell

2007-11-11 Thread Henning Thielemann

On Sat, 10 Nov 2007 [EMAIL PROTECTED] wrote:

 Quoting [EMAIL PROTECTED]:

  Then, a *rational* approximation gives you the same precision with
  less coeffs. Nowadays the division is not sooo much more expensive
  than the multiplication, so the efficiency doesn't suffer much.

 It might not cost much in the way of time, but it might complicate the
 implementation a lot.  Using polynomials only probably uses a smaller
 number of transistors, dissipates less power, and for all I know might be
 easier to parallelise than the alternatives.

I guess the general answer to the question of the best algorithm is:
it depends ... If your architecture (at the hardware or machine-code level)
has no or only slow multiplication, you will use an algorithm that avoids
multiplications; the same goes for division and so on.


Re: [Haskell-cafe] Sinus in Haskell

2007-11-10 Thread Henning Thielemann

On Sat, 10 Nov 2007, Daniel Fischer wrote:

 Since you seem to know a lot about these things, out of curiosity, do you know
 how these functions are actually implemented? Do they use Taylor series or
 other techniques?

I think that for sin and cos the Taylor series are a good choice.  For
other functions like the square root, exponential, logarithm and inverse
trigonometric functions, the CORDIC algorithms are to be preferred. They are
similar to division, both in the idea and in the speed.

http://en.wikipedia.org/wiki/CORDIC

E.g.
   exp(x1+x2+x3+...+xn) = exp(x1) * exp(x2) * ... * exp(xn)
Now you choose x1, x2, ..., xn such that each exp(xi) is a number that allows
a simple multiplication:
   x1 = ln(1.1 bin)
   x2 = ln(1.01 bin)
   x3 = ln(1.001 bin)
   ...
Multiplying by exp(xi) is then just a shift and an addition. You only need to
precompute and store the xi.
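
For concreteness, here is a minimal Haskell sketch of this scheme.  It works
in Double rather than in fixed point, greedily consumes the precomputed
constants ln(1 + 2^-k), and only handles x in [0, ln 2); both restrictions
are simplifications for the sake of illustration.

  -- exp via shift-and-add: peel off constants c = ln(1 + 2^-k); each
  -- accepted constant multiplies the accumulator by (1 + 2^-k), which in
  -- fixed-point hardware is just a shift and an addition.
  shiftAddExp :: Double -> Double        -- intended for 0 <= x < ln 2
  shiftAddExp x0 = go x0 1 [1 .. 52]
    where
      go _ acc []       = acc
      go x acc (k : ks)
        | x >= c    = go (x - c) (acc * (1 + 2 ^^ negate k)) (k : ks)
        | otherwise = go x acc ks
        where c = log (1 + 2 ^^ negate k)   -- the precomputed table entry

  -- e.g. shiftAddExp 0.5 should agree closely with exp 0.5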


Re: [Haskell-cafe] Sinus in Haskell

2007-11-10 Thread Jules Bean

Brent Yorgey wrote:


More generally, this is due to the fact that floating-point numbers can 
only have finite precision, so a little bit of rounding error is 
inevitable when dealing with irrational numbers like pi.   This problem 
is in no way specific to Haskell.


But some systems always display to a slightly lower precision than they 
calculate. Some pocket calculators work like this, and I suspect some 
programming languages might. You can conceal the first few instances of 
rounding errors this way (until they get a bit bigger and punch through 
your reduced precision).
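
In Haskell you can get that pocket-calculator effect by formatting the output
yourself, e.g. with Numeric.showFFloat from base; this only changes the
display, the full Double is still computed underneath:

  import Numeric (showFFloat)

  display :: Double -> String
  display x = showFFloat (Just 10) x ""   -- show 10 decimal places only

  -- display (sin (1.0 * pi)) gives "0.0000000000"  (the ~1e-16 residue is hidden)
  -- display (sin (0.5 * pi)) gives "1.0000000000"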


Jules


Re: [Haskell-cafe] Sinus in Haskell

2007-11-10 Thread jerzy . karczmarczuk
Carl Witty writes: 


On Sat, 2007-11-10 at 01:29 +0100, Daniel Fischer wrote:
... do you know 
how these functions are actually implemented? Do they use Taylor 
series or other techniques?


I don't really know that much about it;
... It seems likely that this instruction (and library
implementations on architectures where sin is not built into the
processor) use Taylor series, but I don't know for sure.


== 

No, Gentlemen, nobody rational would use Taylor nowadays! It is lousy. 


First, Chebyshev approximations give better *uniform convergence*.
Then, a *rational* approximation gives you the same precision with
less coeffs. Nowadays the division is not sooo much more expensive
than the multiplication, so the efficiency doesn't suffer much. 
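
As a toy example of what such a rational fit looks like, here is the [3/2]
Padé approximant of sin around 0 in Haskell (this is not a tuned minimax
rational approximation, just the simplest closed-form one):

  -- sin x ~ (x - 7x^3/60) / (1 + x^2/20), with error O(x^7) near 0
  sinPade :: Double -> Double
  sinPade x = (x - 7 * x * x * x / 60) / (1 + x * x / 20)

  -- e.g. sinPade (pi/6) is about 0.499998, versus the exact 0.5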

Then, you have plenty of recursive formulae, for example: 

sin 3x = 3*sin x - 4*(sin x)^3 


which converges as it does, reducing x to x/3 at each step... There are more
complicated ones as well. Of course, for sufficiently small x use another
approximation, e.g. sin x = x.
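
For concreteness, a minimal Haskell sketch of that recursion (the 1e-6 cutoff
is just an illustrative choice, and the repeated reconstruction costs a bit of
accuracy compared to a library sin):

  -- reduce the argument by factors of 3 until sin x ~ x is good enough,
  -- then rebuild with sin 3y = 3*sin y - 4*(sin y)^3
  sinTriple :: Double -> Double
  sinTriple x
    | abs x < 1e-6 = x
    | otherwise    = let s = sinTriple (x / 3) in 3 * s - 4 * s * s * s

  -- e.g. sinTriple (pi / 2) agrees with 1.0 to roughly 12 decimal places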


Finally, you have CORDIC (Volder, 1959).
Original CORDIC may be used for tan(x), then sin=tan/sqrt(1+tan^2), and
the square root can be obtained by Newton, *real fast*. 
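
The Newton step for the square root is short enough to show in full; a plain
Double version, iterating x' = (x + a/x) / 2 until it stops moving:

  newtonSqrt :: Double -> Double
  newtonSqrt a
    | a <= 0    = 0                     -- sketch: only meant for a > 0
    | otherwise = go (max a 1)          -- crude starting guess
    where
      go x | abs (x' - x) <= 1e-15 * x' = x'
           | otherwise                  = go x'
        where x' = (x + a / x) / 2

  -- convergence is quadratic, so a handful of iterations suffice for Double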


CORDIC is explained everywhere, if you want to learn it. Start with
Wikipedia, of course... 

Jerzy Karczmarczuk 





Re: [Haskell-cafe] Sinus in Haskell

2007-11-10 Thread ajb

G'day all.

Quoting [EMAIL PROTECTED]:


== No, Gentlemen, nobody rational would use Taylor nowadays! It is
lousy.


This is correct.  Real implementations are far more likely to use a
minimax polynomial of some order.  However...


Then, a *rational* approximation gives you the same precision with
less coeffs. Nowadays the division is not sooo much more expensive
than the multiplication, so the efficiency doesn't suffer much.


It might not cost much in the way of time, but it might complicate the
implementation a lot.  Using polynomials only probably uses a smaller
number of transistors, dissipates less power, and for all I know might be
easier to parallelise than the alternatives.

In general, if you can implement FP operations without using microcode,
that's a big win.
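
For illustration, here is the shape of such a polynomial-only evaluation in
Haskell, written in Horner form the way a hardware or libm kernel would
evaluate it after range reduction.  The coefficients are just the Taylor ones,
used as stand-ins; a real implementation would use minimax-fitted
coefficients, which buy a smaller worst-case error for the same degree:

  sinPoly :: Double -> Double           -- intended for |x| <= pi/4 or so
  sinPoly x = x * (1 + x2 * (c3 + x2 * (c5 + x2 * c7)))
    where
      x2 = x * x
      c3 = -1 / 6        -- -1/3!
      c5 =  1 / 120      --  1/5!
      c7 = -1 / 5040     -- -1/7!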

Cheers,
Andrew Bromage


[Haskell-cafe] Sinus in Haskell

2007-11-09 Thread Hans van Thiel
Hello All,
Can anybody explain the results for 1.0, 2.0 and 3.0 times pi below?
GHCi yields the same results. I did search the Haskell report and my
text books, but to no avail. Thanks in advance,
Hans van Thiel

Hugs sin (0.0 * pi)
0.0
Hugs sin (0.5 * pi)
1.0
Hugs sin (1.0 * pi)
1.22460635382238e-16
Hugs sin (1.5 * pi)
-1.0
Hugs sin (2.0 * pi)
-2.44921270764475e-16
Hugs sin (2.5 * pi)
1.0
Hugs sin (3.0 * pi)
3.67381906146713e-16
Hugs 



Re: [Haskell-cafe] Sinus in Haskell

2007-11-09 Thread Bryan O'Sullivan

Hans van Thiel wrote:


Can anybody explain the results for 1.0, 2.0 and 3.0 times pi below?


It's due to rounding error in the platform's math library.  You'll see 
the same results in most other languages that call into libm.


b


Re: [Haskell-cafe] Sinus in Haskell

2007-11-09 Thread Brent Yorgey
On Nov 9, 2007 2:08 PM, Hans van Thiel [EMAIL PROTECTED] wrote:

 Hello All,
 Can anybody explain the results for 1.0, 2.0 and 3.0 times pi below?
 GHCi yields the same results. I did search the Haskell report and my
 text books, but to no avail. Thanks in advance,
 Hans van Thiel

 Hugs sin (0.0 * pi)
 0.0
 Hugs sin (0.5 * pi)
 1.0
 Hugs sin (1.0 * pi)
 1.22460635382238e-16
 Hugs sin (1.5 * pi)
 -1.0
 Hugs sin (2.0 * pi)
 -2.44921270764475e-16
 Hugs sin (2.5 * pi)
 1.0
 Hugs sin (3.0 * pi)
 3.67381906146713e-16
 Hugs


More generally, this is due to the fact that floating-point numbers can only
have finite precision, so a little bit of rounding error is inevitable when
dealing with irrational numbers like pi.   This problem is in no way
specific to Haskell.

-Brent


Re: [Haskell-cafe] Sinus in Haskell

2007-11-09 Thread Dan Piponi
On Nov 9, 2007 11:30 AM, Brent Yorgey [EMAIL PROTECTED] wrote:
 More generally, this is due to the fact that floating-point numbers can only
 have finite precision

This popped up on reddit recently:
http://blogs.sun.com/jag/entry/transcendental_meditation .
Interestingly, AMD apparently did figure out how to reduce modulo 2*pi
much more accurately, but doing the right thing broke too many
applications. So it's not quite as simple as "floating-point numbers
can only have finite precision"; the errors are worse than they need
to be. (But I haven't read the K5 paper, so this information is all
second hand.)
--
Dan


Re: [Haskell-cafe] Sinus in Haskell

2007-11-09 Thread Hans van Thiel
On Fri, 2007-11-09 at 14:30 -0500, Brent Yorgey wrote:
 
 On Nov 9, 2007 2:08 PM, Hans van Thiel [EMAIL PROTECTED] wrote:
 Hello All,
 Can anybody explain the results for 1.0, 2.0 and 3.0 times pi
 below?
 GHCi yields the same results. I did search the Haskell report
 and my
 text books, but to no avail. Thanks in advance,
 Hans van Thiel 
 
 Hugs sin (0.0 * pi)
 0.0
 Hugs sin (0.5 * pi)
 1.0
 Hugs sin (1.0 * pi)
 1.22460635382238e-16
 Hugs sin (1.5 * pi)
 -1.0
 Hugs sin (2.0 * pi)
 -2.44921270764475e-16
 Hugs sin ( 2.5 * pi)
 1.0
 Hugs sin (3.0 * pi)
 3.67381906146713e-16
 Hugs
 
 More generally, this is due to the fact that floating-point numbers
 can only have finite precision, so a little bit of rounding error is
 inevitable when dealing with irrational numbers like pi.   This
 problem is in no way specific to Haskell. 
 
 -Brent
 
All right, I'd have guessed that myself, if it hadn't been for the exact
computation results for 0, 0.5, 1.5 and 2.5 times pi. So the rounding
errors only manifest for 1.0, 2.0 and 3.0 times pi. But look at the
difference between sin (1.0 * pi) and sin (3.0 * pi). That's not just a
rounding error, but a factor of 3 difference... and sin (as well as cos) is
periodic modulo (2 * pi), right?

Regards,
Hans



Re: [Haskell-cafe] Sinus in Haskell

2007-11-09 Thread Daniel Fischer
Am Freitag, 9. November 2007 21:02 schrieb Hans van Thiel:
 On Fri, 2007-11-09 at 14:30 -0500, Brent Yorgey wrote:
  On Nov 9, 2007 2:08 PM, Hans van Thiel [EMAIL PROTECTED] wrote:
  Hello All,
  Can anybody explain the results for 1.0, 2.0 and 3.0 times pi
  below?
  GHCi yields the same results. I did search the Haskell report
  and my
  text books, but to no avail. Thanks in advance,
  Hans van Thiel
 
  Hugs sin (0.0 * pi)
  0.0
  Hugs sin (0.5 * pi)
  1.0
  Hugs sin (1.0 * pi)
  1.22460635382238e-16
  Hugs sin (1.5 * pi)
  -1.0
  Hugs sin (2.0 * pi)
  -2.44921270764475e-16
  Hugs sin ( 2.5 * pi)
  1.0
  Hugs sin (3.0 * pi)
  3.67381906146713e-16
  Hugs
 
  More generally, this is due to the fact that floating-point numbers
  can only have finite precision, so a little bit of rounding error is
  inevitable when dealing with irrational numbers like pi.   This
  problem is in no way specific to Haskell.
 
  -Brent

 All right, I'd have guessed that myself, if it hadn't been for the exact
 computation results for 0, 0.5, 1.5 and 2.5 times pi. So the rounding
 errors are only manifest for 1.0, 2.0 and 3.0 times pi. But look at the
 difference between sin (1.0 * pi) and sin (3.0 * pi). That's not a
 rounding error, but a factor 3 difference.. and sin (as well as cos) are
 modulo (2 * pi), right?

 Regards,
 Hans

The exact results for 0, 0.5*pi and 2.5*pi aren't surprising. Leaving out the 
more than obvious case 0, we have two cases with sin x == 1. Now what's the 
smallest positive Double epsilon such that show (1+epsilon) /= show 1.0?
I think on my box it's 2^^(-52), which is roughly 2.22e-16, but the calculated 
result is closer to 1 than that, so it is actually represented as 1.0, 
which by a lucky coincidence is the exact value.
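
For what it's worth, this is easy to probe in GHCi; on a standard IEEE-754
Double I would expect something like

  Prelude> 1 + 2^^(-52) == (1 :: Double)
  False
  Prelude> 1 + 2^^(-53) == (1 :: Double)
  True
  Prelude> show (1 + 2^^(-52) :: Double)
  "1.0000000000000002"

so anything within half an ulp of 1 simply collapses to 1.0, as above.
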
Regarding sin (1.0*pi) and sin (3.0*pi), I don't know how the sine is 
implemented; that they return nonzero values is expected, but that 
sin (3.0*pi) == 3.0 * sin pi actually holds surprises me, too.
Can anybody who knows more about the implementation of sine enlighten me?

Cheers,
Daniel


Re: [Haskell-cafe] Sinus in Haskell

2007-11-09 Thread Carl Witty
On Fri, 2007-11-09 at 21:34 +0100, Daniel Fischer wrote:
 Am Freitag, 9. November 2007 21:02 schrieb Hans van Thiel:
  On Fri, 2007-11-09 at 14:30 -0500, Brent Yorgey wrote:
   On Nov 9, 2007 2:08 PM, Hans van Thiel [EMAIL PROTECTED] wrote:
   Hello All,
   Can anybody explain the results for 1.0, 2.0 and 3.0 times pi
   below?
   GHCi yields the same results. I did search the Haskell report
   and my
   text books, but to no avail. Thanks in advance,
   Hans van Thiel
  
   Hugs sin (0.0 * pi)
   0.0
   Hugs sin (0.5 * pi)
   1.0
   Hugs sin (1.0 * pi)
   1.22460635382238e-16
   Hugs sin (1.5 * pi)
   -1.0
   Hugs sin (2.0 * pi)
   -2.44921270764475e-16
   Hugs sin ( 2.5 * pi)
   1.0
   Hugs sin (3.0 * pi)
   3.67381906146713e-16
   Hugs
  
   More generally, this is due to the fact that floating-point numbers
   can only have finite precision, so a little bit of rounding error is
   inevitable when dealing with irrational numbers like pi.   This
   problem is in no way specific to Haskell.
  
   -Brent
 
  All right, I'd have guessed that myself, if it hadn't been for the exact
  computation results for 0, 0.5, 1.5 and 2.5 times pi. So the rounding
  errors are only manifest for 1.0, 2.0 and 3.0 times pi. But look at the
  difference between sin (1.0 * pi) and sin (3.0 * pi). That's not a
  rounding error, but a factor 3 difference.. and sin (as well as cos) are
  modulo (2 * pi), right?
 
  Regards,
  Hans
 
 The exact results for 0, 0.5*pi and 2.5*pi aren't surprising. Leaving out the 
 more than obvious case 0, we have two cases with sin x == 1. Now what's the 
 smallest positive Double epsilon such that show (1+epsilon) /= show 1.0?
 I think, on my box it's 2^^(-52), which is roughly 2.22e-16, but the result 
 calculated is closer to 1 than that, so the result is actually represented as 
 1.0, which by a lucky coincidence is the exact value.
 Regarding sin (1.0*pi) and sin (3.0*pi), I don't know how the sine is 
 implemented,that they return nonzero values is expected, that actually 
 sin (3.0*pi) == 3.0*sin pi does surprise me, too.
 Can anybody knowing more about the implementation of sine enlighten me?

Actually, there are about 95 million floating-point values in the
vicinity of pi/2 such that the best possible floating-point
approximation of sin on those values is exactly 1.0 (this number is
2^(53/2), where 53 is the number of mantissa bits in an IEEE
double-precision float); so sin() would be expected to return exactly
1.0 for even an inaccurate version of pi/2.

I don't know how sin is implemented on your architecture (looks like
x86?); but I can guess at enough to explain these results.

The story hinges on 3 numbers: the exact real number pi, which cannot be
represented in floating-point; the best possible IEEE double-precision
approximation to pi, which I will call Prelude.pi; and a closer
approximation to pi, having perhaps 68 mantissa bits instead of 53, which I
will call approx_pi.

The value of sin(pi) is of course 0.  Since Prelude.pi is not equal to
pi, sin(Prelude.pi) is not 0; instead, it is approximately
1.22464679915e-16.  Since the derivative of sin near pi is almost
exactly -1, this is approximately (pi - Prelude.pi).  But that's not the
answer you're getting; instead, you're getting exactly (approx_pi -
Prelude.pi), where approx_pi is a 68-bit approximation to pi.

Then sin(3*Prelude.pi) should be approximately (3*pi - 3*Prelude.pi);
but you're getting exactly (3*approx_pi - 3*Prelude.pi), which is
3*(approx_pi - Prelude.pi), which is 3*sin(Prelude.pi).  (Note that
3*Prelude.pi can be computed exactly -- with no further rounding -- in
IEEE double-precision floating-point.)
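
A quick way to poke at this from Haskell (the exact digits depend on how the
platform's sin is implemented, so the output may differ from the Hugs numbers
quoted above):

  main :: IO ()
  main = do
    print (sin pi)                       -- approx_pi - Prelude.pi, ~1.22e-16
    print (sin (3 * pi))                 -- three times the previous value
    print (sin (3 * pi) == 3 * sin pi)   -- True on the systems in this thread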

The above essay was written after much experimentation using the MPFR
library for correctly-rounded arbitrary-precision floating point, as
exposed in the Sage computer algebra system.

Carl Witty




Re: [Haskell-cafe] Sinus in Haskell

2007-11-09 Thread Daniel Fischer
Am Samstag, 10. November 2007 00:36 schrieb Carl Witty:
 Actually, there are about 95 million floating-point values in the
 vicinity of pi/2 such that the best possible floating-point
 approximation of sin on those values is exactly 1.0 (this number is
 2^(53/2), where 53 is the number of mantissa bits in an IEEE
 double-precision float); so sin() would be expected to return exactly
 1.0 for even an inaccurate version of pi/2.

 I don't know how sin is implemented on your architecture (looks like
 x86?); but I can guess at enough to explain these results.

arch says i686

 The story hinges on 3 numbers: the exact real number pi, which cannot be
 represented in floating-point; the best possible IEEE double-precision
 approximation to pi, which I will call Prelude.pi; and a closer
 approximation to pi, having perhaps 68 mantissa bits instead of 53, which I
 will call approx_pi.

Ah ha!
So the library sine function uses a better approximation (I've heard about 
excess precision from 80-bit floating-point numbers, but didn't think of 
that). Then it's clear.

 The value of sin(pi) is of course 0.  Since Prelude.pi is not equal to
 pi, sin(Prelude.pi) is not 0; instead, it is approximately
 1.22464679915e-16.  Since the derivative of sin near pi is almost
 exactly -1, this is approximately (pi - Prelude.pi).  But that's not the
 answer you're getting; instead, you're getting exactly (approx_pi -
 Prelude.pi), where approx_pi is a 68-bit approximation to pi.

 Then sin(3*Prelude.pi) should be approximately (3*pi - 3*Prelude.pi);
 but you're getting exactly (3*approx_pi - 3*Prelude.pi), which is
 3*(approx_pi - Prelude.pi), which is 3*sin(Prelude.pi).  (Note that
 3*Prelude.pi can be computed exactly -- with no further rounding -- in
 IEEE double-precision floating-point.)

 The above essay was written after much experimentation using the MPFR
 library for correctly-rounded arbitrary-precision floating point, as
 exposed in the Sage computer algebra system.

 Carl Witty

Thanks a lot.

Since you seem to know a lot about these things, out of curiosity, do you know 
how these functions are actually implemented? Do they use Taylor series or 
other techniques?

Cheers,
Daniel



Re: [Haskell-cafe] Sinus in Haskell

2007-11-09 Thread Carl Witty
On Sat, 2007-11-10 at 01:29 +0100, Daniel Fischer wrote:
  The above essay was written after much experimentation using the MPFR
  library for correctly-rounded arbitrary-precision floating point, as
  exposed in the Sage computer algebra system.
 
  Carl Witty
 
 Thanks a lot.
 
 Since you seem to know a lot about these things, out of curiosity, do you know
 how these functions are actually implemented? Do they use Taylor series or
 other techniques?

I don't really know that much about it; I just understand that the
floating-point numbers are really rational numbers of a particular form,
and I know how to use MPFR (via Sage) to play with these rational
numbers.  The idea to try higher-precision approximations of pi as
approx_pi came from the link Dan Piponi posted, to
http://blogs.sun.com/jag/entry/transcendental_meditation .

Given that our numbers match the bad number from that blog post, I
assume that sin is actually implemented as the fsin machine
instruction. :)  It seems likely that this instruction (and library
implementations on architectures where sin is not built into the
processor) use Taylor series, but I don't know for sure.

Carl Witty

