Re: Sine and Cosine Accuracy

2005-05-31 Thread Scott Robert Ladd
chris jefferson wrote:
 I would like to say yes, I disagree that this should be true. By your
 argument, why isn't sin(pow(2.0,90.0)+1) == sin(6.153104..)? Also, how
 the heck do you intend to actually calculate that value? You can't just
 keep subtracting multiples of 2*pi from pow(2.0, 90.0) else nothing will
 happen, and if you choose to subtract some large multiple of 2*pi, your
 answer wouldn't end up accurate to anywhere near that many decimal
 places. Floating point numbers approximate real numbers, and at the size
 you are considering, the approximation contains values for which sin(x)
 takes all values in the range [-1,1].

Nonsense.

Show me an example where the following function should *not* print the
same values for a and b:

void same_sines(double x)
{
double a = sin(x);
double b = sin(fmod(x, 2.0 * PI));
printf("%20.15f,%20.15f\n", a, b);
}

Granted, -funsafe-math-optimizations *will* produce different values for
certain values, such as x = pow(2.0,90.0), on x87 hardware, but that is
an error in computing a, not a violation of principle.

..Scott


Re: Sine and Cosine Accuracy

2005-05-31 Thread Andrew Haley
Scott Robert Ladd writes:
  chris jefferson wrote:
   I would like to say yes, I disagree that this should be true. By your
   argument, why isn't sin(pow(2.0,90.0)+1) == sin(6.153104..)? Also, how
   the heck do you intend to actually calculate that value? You can't just
   keep subtracting multiples of 2*pi from pow(2.0, 90.0) else nothing will
   happen, and if you choose to subtract some large multiple of 2*pi, your
   answer wouldn't end up accurate to anywhere near that many decimal
   places. Floating point numbers approximate real numbers, and at the size
   you are considering, the approximation contains values for which sin(x)
   takes all values in the range [-1,1].
  
  Nonsense.
  
  Show me an example where the following function should *not* print the
  same values for a and b:
  
  void same_sines(double x)
  {
  double a = sin(x);
  double b = sin(fmod(x, 2.0 * PI));
  printf("%20.15f,%20.15f\n", a, b);
  }

Please!  No correct implementation of libm will print the same
result for these two values, because it is necessary to do the range
reduction in extended precision.

Andrew.


Re: Sine and Cosine Accuracy

2005-05-31 Thread Vincent Lefevre
On 2005-05-30 11:51:59 -0400, Scott Robert Ladd wrote:
 The fact that trigonometric functions can be extended beyond 2D geometry in
 no way invalidates their use in their original domain. I've written many
 2D and 3D applications over the years without need for a sine outside
 the range [0, 2*PI] (or [-PI, PI] in some cases). Some people live and
 die by one of those programs, and no one's died yet because I used
 -ffast-math in compiling it.

I wonder if compilers could use information for assertions.
For instance, instead of writing sin(x), you could write:

  sin((assert(x >= 0 && x <= 2 * pi), x))

possibly via a macro. IMHO, this would be better than using
switches such as -ffast-math, and you could mix small ranges
and large ranges in the same program.

-- 
Vincent Lefèvre [EMAIL PROTECTED] - Web: http://www.vinc17.org/
100% accessible validated (X)HTML - Blog: http://www.vinc17.org/blog/
Work: CR INRIA - computer arithmetic / SPACES project at LORIA


Re: Sine and Cosine Accuracy

2005-05-31 Thread Paul Koning
>>>>> "Geoffrey" == Geoffrey Keating [EMAIL PROTECTED] writes:

 Geoffrey> Paul Koning [EMAIL PROTECTED] writes:
  After some off-line exchanges with Dave Korn, it seems to me that
  part of the problem is that the documentation for
  -funsafe-math-optimizations is so vague as to have no discernable
  meaning.

 Geoffrey> I believe that (b) is intended to include:

 Geoffrey> ... - limited ranges of elementary functions

You mean limited range or limited domain?  The x87 discussion suggests
limiting the domain.  And limited how far?  Scott likes 0 to 2pi;
equally sensibly one might recommend -pi to +pi.

All these things may well make sense, but few or none of them are
implied by the text of the documentation.

 paul




Re: Sine and Cosine Accuracy

2005-05-31 Thread Geoff Keating


On 31/05/2005, at 6:34 AM, Paul Koning wrote:


>>>>> "Geoffrey" == Geoffrey Keating [EMAIL PROTECTED] writes:

 Geoffrey> Paul Koning [EMAIL PROTECTED] writes:

After some off-line exchanges with Dave Korn, it seems to me that
part of the problem is that the documentation for
-funsafe-math-optimizations is so vague as to have no discernable
meaning.

 Geoffrey> I believe that (b) is intended to include:

 Geoffrey> ... - limited ranges of elementary functions

You mean limited range or limited domain?  The x87 discussion suggests
limiting the domain.


Both.  (For instance, this option would also cover the case of an
exp() which refuses to return zero.)



  And limited how far?  Scott likes 0 to 2pi;
equally sensibly one might recommend -pi to +pi.


I guess another way to put it is that the results may become  
increasingly inaccurate for values away from zero or one (or  
whatever).  (Or they might just be very inaccurate to start with.)



All these things may well make sense, but few or none of them are
implied by the text of the documentation.


I think they're all covered by (b).

It is intentional that the documentation doesn't specify exactly how  
the results differ from IEEE.  The idea is that if you need to know  
such things, this is not the option for you.




Re: Sine and Cosine Accuracy

2005-05-30 Thread Vincent Lefevre
On 2005-05-29 01:33:43 -0600, Roger Sayle wrote:
 I apologise for coming into this argument late.  I'll admit that I
 haven't even caught up on the entire thread, but an interesting
 relevant article that may or may not have already been mentioned is:
 
 http://web.archive.org/web/20040409144725/http://www.naturalbridge.com/floatingpoint/intelfp.html

I mentioned it here:

Date: Fri, 27 May 2005 14:42:32 +0200
From: Vincent Lefevre [EMAIL PROTECTED]
To: gcc@gcc.gnu.org
Subject: Re: GCC and Floating-Point (A proposal)
Message-ID: [EMAIL PROTECTED]

 Admittedly on many IA-32 systems there's little difference between
 using FSIN vs calling the OS's libm's sin function, as glibc and
 microsoft's runtimes (for example) themselves use the x87 intrinsics.
 GCC, however, is not to know this and assumes that the user might
 provide a high-precision library, such as Lefevre's perfect 0.5ulp
 implementation.  [It's nice to see him join this argument! :)]

Well, I'm just one of the authors of MPFR. Concerning the runtime
libraries for math functions in IEEE double precision, that partly
provide correct rounding, I know:
  * IBM's MathLib, on which the glibc is based (for Athlon 64,
Opteron, PowerPC, Alpha and PA-RISC). Does rounding-to-nearest
only.
URL: ftp://www-126.ibm.com/pub/mathlib/
  * Arenaire's Crlibm.
URL: https://lipforge.ens-lyon.fr/projects/crlibm/
  * Sun's libmcr.
URL: http://www.sun.com/download/products.xml?id=41797765
  * MPFR does correct rounding in multiple precision, but a wrapper
could be written for the double precision (and possibly other
precisions for the *f and *l variants). Of course, this would be
quite slow as MPFR wasn't written for this kind of thing, but
some users may still be interested.

-- 
Vincent Lefèvre [EMAIL PROTECTED] - Web: http://www.vinc17.org/
100% accessible validated (X)HTML - Blog: http://www.vinc17.org/blog/
Work: CR INRIA - computer arithmetic / SPACES project at LORIA


Re: Sine and Cosine Accuracy

2005-05-30 Thread Bernhard R. Link
* Georg Bauhaus [EMAIL PROTECTED] [050529 20:53]:
 By real circle I mean a thing that is not obfuscated by the useful
 but strange ways in which things are redefined by mathematicians;
 cf. Halmos for some humor.

Sorry, but sin and cos are mathematical functions. If you want to invent
some predating functions, give them other names.

There might be some use in having computational functions not working
in every range (as all computation by computers is somehow limited),
but 0..2pi is definitely too limited. As a mathematician-to-be I
could not even imagine before this discussion that someone might limit
it that much.
Breaking things like sin(-x) or sin(x+y) will definitely hurt people,
because it is natural to expect these to work.

 And yes, I know that all the other stuff mentioned in this thread
 explains very well that there exist useful definitions of sine for real
 numbers outside (co)sine related ranges, and that these definitions
 are frequently used.

What are (co)sine related ranges, if I may ask? Have you any saner
definition than 'values in a range I personally like'?

Bernhard R. Link


Re: Sine and Cosine Accuracy

2005-05-30 Thread Marc Espie
On Sun, May 29, 2005 at 05:52:11PM -0400, Scott Robert Ladd wrote:
 (I expect Gabriel dos Rios to respond with something pithy here; please
 don't disappoint me!)

Funny, I don't expect any message from that signature.

Gabriel dos Reis, on the other hand, may have something to say...


Re: Sine and Cosine Accuracy

2005-05-30 Thread Scott Robert Ladd
Marc Espie wrote:
 Heck, I can plot trajectories on a sphere that do not follow great circles,
 and that extend over 360 degrees in longitude.  I don't see why I should be
 restricted from doing that.

Can you show me a circumstance where sin(x - 2 * pi) and sin(x + 2 * pi)
are not equal to sin(x)?

Using an earlier example in these threads, do you deny that
sin(pow(2.0,90.0)) == sin(5.15314063427653548) ==
sin(-1.130044672903051) -- assuming no use of
-funsafe-math-optimizations, of course?

Shall we mark all potentially troublesome optimizations as unsafe, and
chastise those who use them? Quite a few combinations of options can
cause specific applications to fail, and other programs to work very
well. Under such logic, we should replace -O3 with -Ounsafe, because
some programs break when compiled with -O3.

Since that's patently silly, perhaps we should be more concerned with
making useful choices and improvements to GCC.

 You can decide to restrict this stuff to plain old 2D geometry, and this would
 be fine for teaching in elementary school, but this makes absolutely 
 no sense with respect to any kind of modern mathematics.

The fact that trigonometric functions can be extended beyond 2D geometry in
no way invalidates their use in their original domain. I've written many
2D and 3D applications over the years without need for a sine outside
the range [0, 2*PI] (or [-PI, PI] in some cases). Some people live and
die by one of those programs, and no one's died yet because I used
-ffast-math in compiling it.

(I expect Gabriel dos Rios to respond with something pithy here; please
don't disappoint me!)

I keep saying that GCC can and should support the different needs of
different applications. What is wrong with that?

..Scott



Re: Sine and Cosine Accuracy

2005-05-30 Thread Scott Robert Ladd
Marc Espie wrote:
 Funny, I don't expect any message from that signature.
 
 Gabriel dos Reis, on the other hand, may have something to say...

A regrettable mistake, brought on by spending too many years in
Spanish-speaking areas, where rio is river.

..Scott


Re: Sine and Cosine Accuracy

2005-05-30 Thread Scott Robert Ladd
Bernhard R. Link wrote:
 Breaking things like sin(-x) or sin(x+y) will definitely hurt people,
 because it is natural to expect these to work.

Where in the name of [insert deity here] did I *ever* say I wanted to
break anything?

Just because something breaks *your* application doesn't mean I
shouldn't be able to use it for *my* application.

I asked for one very simple thing that in no way breaks any code for
anyone: a better breakdown of floating-point switches to better serve
diverse users.

..Scott


Re: Sine and Cosine Accuracy

2005-05-30 Thread Georg Bauhaus

Bernhard R. Link wrote:


Sorry, but sin and cos are mathematical functions.


The functions named sin and cos are mathematical functions in
mathematics, but almost never in GCC's world, almost never in the
mathematical sense: programs translated by GCC can compute them for
only finitely many inputs. You can use gcc to translate functions
typically named sin and cos, but the naming doesn't make them equal
to the mathematical functions of the same name. The choice of the
names sin and cos seems to be less than helpful, as this thread
demonstrates, but of course correcting them is too late, and off
topic. Knuth *has* chosen different names for MetaPost, BTW.

Programmers write calls to functions named sin and cos for
reasons of getting a result that is near what the mathematical
model (involving the same names sin and cos) would suggest.
The question is how and when GCC should enable a programmer to
trigger either library procedures, or procedures built
into the processor. There is no full mathematical trigonometry
inside the processor, and probably not in any T(n) < infty
library function. But there is reason to use either of them
depending on your application. Scott explains.

I do not want to offend mathematicians, but see, you say that this
and that is obvious when it isn't even obviously the best choice
in 2D and 3D programs that depend heavily on sin and cos.


-- Georg 


Re: Sine and Cosine Accuracy

2005-05-30 Thread Bernhard R. Link
* Georg Bauhaus [EMAIL PROTECTED] [050530 19:34]:
 Programmers write calls to functions named sin and cos for
 reasons of getting a result that is near what the mathematical
 model (involving the same names sin and cos) would suggest.
 Question is, how and when should GCC enable a programmer to
 trigger either library procedures, or procedures built
 into the processor. There is no full mathematical trigonometry
 inside the processor, and probably not in any T(n) < infty
 library function. But there is reason to use either of them
 depending on your application. Scott explains.

As I stated in my earlier mail, I'm not opposed to some
limitation of arguments (2^60 is a large number for me, when it is
correctly documented). What I'm arguing against is an argument
telling only [0,2\pi] is in any sense of the word a 'correct'
range for those functions, or in any way a sensible range for
computations of those. Code like
if (x+y < 2*pi) return sin(x+y); else return sin(x+y - 2*pi);
would really be usable to make me run around screaming, but
naming any range smaller than some [-50pi,100pi] valid could
really make me crazy...

Bernhard R. Link


Re: Sine and Cosine Accuracy

2005-05-30 Thread Georg Bauhaus

Bernhard R. Link wrote:


naming any range smaller than some [-50pi,100pi] valid could
really make me crazy...


No one is asking for sine to be restricted in this way.
Some are asking for the freedom to request this or that
kind of sine computation to be generated, because they know
that for *their* range of numbers, this or that kind of sine
computation does what they want.



Re: Sine and Cosine Accuracy

2005-05-29 Thread Roger Sayle

On Thu, 26 May 2005, Scott Robert Ladd wrote:
 I prefer breaking out the hardware intrinsics from
 -funsafe-math-optimizations, such that people can compile to use their
 hardware *without* the other transformations implicit in the current
 collective.

 If someone can explain how this hurts anything, please let me know.


I apologise for coming into this argument late.  I'll admit that I
haven't even caught up on the entire thread, but an interesting
relevant article that may or may not have already been mentioned is:

http://web.archive.org/web/20040409144725/http://www.naturalbridge.com/floatingpoint/intelfp.html

[My bookmark 404'd, so I had to use the wayback machine to find it!]

The crux of the issue is that only two GCC targets have ever supported
trigonometric functions in hardware; the x87 coprocessor on IA-32 systems
and the 68881 co-processor on m68k systems.  Of these two, GCC builtin
support has only ever been added for the i386 backend and as mentioned
in the article above the FSIN and FCOS functions produce results well
outside the 1ulp allowed by the relevant standards, even for arguments
in the range [0...2PI].  As such, the reason why hardware support for
these intrinsics is considered part of flag_unsafe_math_optimizations,
is that, for some applications, they are exactly that, unsafe.

Admittedly on many IA-32 systems there's little difference between
using FSIN vs calling the OS's libm's sin function, as glibc and
microsoft's runtimes (for example) themselves use the x87 intrinsics.
GCC, however, is not to know this and assumes that the user might
provide a high-precision library, such as Lefevre's perfect 0.5ulp
implementation.  [It's nice to see him join this argument! :)]


In this instance, unsafe math need only be different.  For example,
if the compile-time evaluation, the run-time library and the hardware
intrinsic could potentially return different results, even if the
change is to return a more accurate result, then it is potentially
unsafe.  Tests such as:

double x = 2.0;
if (sin(x) != sin(2.0))
  abort();

and

double (*fptr)(double) = sin;
if (sin(x) != fptr(x))
  abort();

should continue to work as expected.  This might also explain some
of your accuracy results.  Even if GCC can determine a way of
evaluating an expression that results in less loss of precision,
e.g. (x + 1.0) - 1.0 - x, it can't do so unless allowed to
be unsafe.

Sorry if all this has been mentioned before.  Your own results
show that you get different results using x87 hardware intrinsics,
and this alone classifies their use as unsafe in GCC terminology,
i.e. may potentially produce different results.

Roger
--



Re: Sine and Cosine Accuracy

2005-05-29 Thread Georg Bauhaus

Marc Espie wrote:

Sorry for chiming in after all this time, but I can't let this pass.

Scott, where on earth did you pick up your trig books ?


Sorry, too, but why on earth do modern-time mathematics scholars
think that sine and cosine are bound to have to do with an equally
modern notion of real numbers that clearly exceed what a circle
has to offer? What is a plain unit circle of a circumference that
exceeds 2π?
How can a real mathematical circle of the normal kind have
more than 360 non-fractional sections?
By real circle I mean a thing that is not obfuscated by the useful
but strange ways in which things are redefined by mathematicians;
cf. Halmos for some humor.

And yes, I know that all the other stuff mentioned in this thread
explains very well that there exist useful definitions of sine for real
numbers outside (co)sine related ranges, and that these definitions
are frequently used. Still, at what longitude does your trip around
the world start in Paris, at 2°20' or at 362°20', if you tell the story
to a seaman? Cutting a pizza at 2.0^90. Huh?!

Have a look into e.g. Mathematics for the Million by Lancelot
Hogben for an impression of how astounding works of architecture
have been done without those weird ways of extending angle-related
computations into arbitrarily inflated numbers, of which no one knows
how to distinguish one from the other in sine (what you have dared to
call obvious, when it is just one useful convention; apparently some
applications derive from different conventions, if I understand Scott's
remarks correctly).

Sure this might have little to do with ANSI C99 requirements on
floating-point computations, but then this thread teaches me that -ansi C
should be given up in favor of -pedantic Autoconf...



-- Georg 



Re: Sine and Cosine Accuracy

2005-05-29 Thread Gabriel Dos Reis
Georg Bauhaus [EMAIL PROTECTED] writes:

| Marc Espie wrote:
|  Sorry for chiming in after all this time, but I can't let this pass.
|  Scott, where on earth did you pick up your trig books ?
| 
| Sorry, too, but why on earth do modern-time mathematics scholars
| think that sine and cosine are bound to have to do with an equally
| modern notion of real numbers that clearly exceed what a circle
| has to offer?

It depends on the mathematical definitions you have for cosine and
sine.  Standard mathematics makes them functions whose domain
contains the real line -- traditional expositions may use power series
or differential equations (but that does not matter much).  The
relation to the circle is coincidental (happily!), not fundamental, which
is why they do not have narrow scope.  Ah, yes, it has nothing to do
with people being scholars.

-- Gaby



Re: Sine and Cosine Accuracy

2005-05-29 Thread Marc Espie
On Sun, May 29, 2005 at 08:59:00PM +0200, Georg Bauhaus wrote:
 Marc Espie wrote:
 Sorry for chiming in after all this time, but I can't let this pass.
 
 Scott, where on earth did you pick up your trig books ?
 
 Sorry, too, but why on earth do modern-time mathematics scholars
 think that sine and cosine are bound to have to do with an equally
 modern notion of real numbers that clearly exceed what a circle
 has to offer? What is a plain unit circle of a circumference that
 exceeds 2π?
 How can a real mathematical circle of the normal kind have
 more than 360 non-fractional sections?
 By real circle I mean a thing that is not obfuscated by the useful
 but strange ways in which things are redefined by mathematicians;
 cf. Halmos for some humor.

Err, because it all makes sense ? Because there is no reason to do stuff
from 0 to 360 instead of -180 to 180 ?

 And yes, I know that all the other stuff mentioned in this thread
 explains very well that there exist useful definitions of sine for real
 numbers outside (co)sine related ranges, and that these definitions
 are frequently used. Still, at what longitude does your trip around
 the world start in Paris, at 2°20' or at 362°20', if you tell the story
 to a seaman? Cutting a pizza at 2.0^90. Huh?!

At 0.0. Did you know that, before Greenwich, the meridian for the
origin of longitude was going through Paris ? Your idea would make some
sense if you talked about a latitude (well, even though the notion of
north pole is not THAT easy to define, and neither is the earth round).

Heck, I can plot trajectories on a sphere that do not follow great circles,
and that extend over 360 degrees in longitude.  I don't see why I should be
restricted from doing that.

 Have a look into e.g. Mathematics for the Million by Lancelot
 Hogben for an impression of how astounding works of architecture
 have been done without those weird ways of extending angle related
 computations into arbitrarily inflated numbers of which no one knows
 how to distinguish one from the other in sine (what you have dared to call
 obvious, when it is just one useful convention. Apparently some
 applications derive from different conventions if I understand Scott's
 remarks correctly).

There are some arbitrary convenient definitions in modern mathematics.
The angle units have been chosen so that the derivatives of sine/cosine are
obvious.  The definition of sine/cosine extends naturally to the whole
real axis, which gives a sense to mechanics, rotation speeds, complex functions
and everything that's been done in mathematics over the last four centuries
or so.

You can decide to restrict this stuff to plain old 2D geometry, and this would
be fine for teaching in elementary school, but this makes absolutely 
no sense with respect to any kind of modern mathematics.

Maybe playing with modern mathematical notions for years has obfuscated
my mind ? or maybe I just find those definitions to be really obvious and
intuitive.   Actually, I would find arbitrary boundaries to be unintuitive.

There is absolutely nothing magical wrt trigonometric functions, if I
compare them to any other kind of floating point arithmetic: as soon as
you try to map `real' numbers into approximations, you have to be VERY wary
if you don't want to lose all precision.  There's nothing special, nor
conventional about sine and cosine.

Again, if you want ARBITRARY conventions, then look at inverse trig functions,
or at logarithms. There you will find arbitrary discontinuities
that can't be avoided.


Re: Sine and Cosine Accuracy

2005-05-28 Thread Kai Henningsen
[EMAIL PROTECTED] (Scott Robert Ladd)  wrote on 26.05.05 in [EMAIL PROTECTED]:

 Paul Koning wrote:
  Scott> Yes, but within the defined mathematical ranges for sine and
  Scott> cosine -- [0, 2 * PI) -- the processor intrinsics are quite
  Scott> accurate.

 I *said* that such statements are outside the standard range of
 trigonometric identities. Writing sin(100) is not a matter of necessity,

Actually, no, you did not say that.

You *said* defined mathematical ranges. See above.

Which is just very, very wrong.

MfG Kai


Re: Sine and Cosine Accuracy

2005-05-28 Thread Kai Henningsen
[EMAIL PROTECTED] (Richard Henderson)  wrote on 26.05.05 in [EMAIL PROTECTED]:

 On Thu, May 26, 2005 at 10:34:14AM -0400, Scott Robert Ladd wrote:
  static const double range = PI; // * 2.0;
  static const double incr  = PI / 100.0;

 The trig insns fail with large numbers; an argument
 reduction loop is required with their use.

Are you claiming that [-PI ... PI] counts as large numbers?

MfG Kai


Re: Sine and Cosine Accuracy

2005-05-28 Thread Geoffrey Keating
Paul Koning [EMAIL PROTECTED] writes:

 After some off-line exchanges with Dave Korn, it seems to me that part
 of the problem is that the documentation for
 -funsafe-math-optimizations is so vague as to have no discernable
 meaning. 
 
 For example, does the wording of the documentation convey the
 limitation that one should only invoke math functions with a small
 range of arguments (say, -pi to +pi)?  I cannot see anything remotely
 resembling that limitation, but others can.
 
 Given that, I wonder how we can tell whether a particular proposed
 optimization governed by that flag is permissible.  Consider:
 
 `-funsafe-math-optimizations'
  Allow optimizations for floating-point arithmetic that (a) assume
  that arguments and results are valid and (b) may violate IEEE or
  ANSI standards.  
 
 What does (b) mean?  What if anything are its limitations?  Is
 returning 1.2e27 as the result for a sin() call authorized by (b)?  I
 would not have expected that, but I can't defend that expectation
 based on a literal reading of the text...

I believe that (b) is intended to include:

- adding extra precision or range unspecified by the program
- spurious overflow and/or underflow
- a limited general reduction in precision (no more than a few ulp)
- limited ranges of elementary functions
- complete loss of precision in unusual cases (where 'unusual' is not
  well-defined), for instance with complex numbers
- applying simplifications to expressions that would be allowable
  if precision and range were unlimited
- all these might vary between compiler versions, so results are not stable

and probably many other things that I don't remember right now.

The only real limitation on -funsafe-math-optimizations is that it
still has to build SPEC.


Re: Sine and Cosine Accuracy

2005-05-27 Thread Scott Robert Ladd
Menezes, Evandro wrote:
 Your code just tests every 3.6°, perhaps you won't trip over the
 problems...

Actually, it tested every 1.8°, but who wants to be picky. I've rerun
the test overnight at greater resolution, testing every
0.0018 degrees, and saw no change in the result.

..Scott


Re: Sine and Cosine Accuracy

2005-05-27 Thread Vincent Lefevre
On 2005-05-26 12:04:04 -0400, Scott Robert Ladd wrote:
 I've never quite understood the necessity for performing trig
 operations on excessively large values, but perhaps my problem
 domain hasn't included such applications.

This can happen in some numerical applications (the same expressions
are used for small and large values). An accurate value wouldn't
necessarily be meaningful, but some properties can be necessary or
at least useful, such as:

* Mathematical properties, e.g. |sin(x)| <= 1 and sin²x + cos²x = 1.

* Properties concerning the distribution of the values of sin(x) for
  large values of x.

* Reproducibility of the results across different platforms (may be
  useful for debugging purpose, in particular).

-- 
Vincent Lefèvre [EMAIL PROTECTED] - Web: http://www.vinc17.org/
100% accessible validated (X)HTML - Blog: http://www.vinc17.org/blog/
Work: CR INRIA - computer arithmetic / SPACES project at LORIA


Re: Sine and Cosine Accuracy

2005-05-27 Thread Vincent Lefevre
On 2005-05-26 16:33:00 -0500, Menezes, Evandro wrote:
 Keep in mind that x87 transcendentals are not the most accurate
 around, but all x86 processors from any manufacturer produce roughly
 the same results for any argument as the 8087 did way back when,
 even if the result is hundreds of ulps off...

BTW, a few years ago, I heard that some manufacturer had an
x86-compatible processor with math functions that were (slightly)
more accurate than Intel's, and users complained because they
didn't get the same results as with Intel processors, and thought
that the non-Intel one was buggy.

-- 
Vincent Lefèvre [EMAIL PROTECTED] - Web: http://www.vinc17.org/
100% accessible validated (X)HTML - Blog: http://www.vinc17.org/blog/
Work: CR INRIA - computer arithmetic / SPACES project at LORIA


Re: Sine and Cosine Accuracy

2005-05-27 Thread Vincent Lefevre
On 2005-05-27 15:36:51 +0200, Olivier Galibert wrote:
 If you're evaluating it at the floating point value 2^90 you're just
 evaluating a fancy prng.  Floating point values represent intervals,

They don't. Have you never heard of correlation?

-- 
Vincent Lefèvre [EMAIL PROTECTED] - Web: http://www.vinc17.org/
100% accessible validated (X)HTML - Blog: http://www.vinc17.org/blog/
Work: CR INRIA - computer arithmetic / SPACES project at LORIA


Re: Sine and Cosine Accuracy

2005-05-27 Thread Marcin Dalecki


On 2005-05-27, at 15:36, Olivier Galibert wrote:

Floating point values represent intervals,


This is mathematically wrong. The basic concept is that the
calculation domain given by floating point numbers is used
to *model* the real number calculus. Certain constraints apply, of course.
But there isn't any concept of representation here. Just a mapping.

and when the interval size is way bigger than 2pi any value in [-1,1]
is a perfectly acceptable answer for sin or cos.


???


RE: Sine and Cosine Accuracy

2005-05-27 Thread Menezes, Evandro
Scott, 

 Actually, it tested every 1.8°, but who wants to be picky. 
 I've rerun the test overnight at greater resolution, testing every
 0.0018 degrees, and saw no change in the result.

That's because the error is the same but symmetrical for sin and cos, so that, 
when you calculate the sum of their squares, one cancels the other out.

The lack of accuracy in x87 is well known: see 
http://www.gnu.org/software/libc/manual/html_node/Errors-in-Math-Functions.html#Errors-in-Math-Functions.

I modified your test slightly to return the number of error bits.  My results 
using GCC 4.0.0 on an Opteron system running SUSE 9.1 were:

x87sincos-32-fst
cumulative error: 4 (ulp)
 -5.10702591327572e-15 (decimal)
x87sincos-32-sse
cumulative error: 4 (ulp)
 -5.10702591327572e-15 (decimal)
x87sincos-32-x87
cumulative error: 2 (ulp)
  -1.4432899320127e-15 (decimal)
x87sincos-64-sse
cumulative error: 4 (ulp)
 -4.77395900588817e-15 (decimal)
x87sincos-64-x87
cumulative error: 2 (ulp)
 -1.22124532708767e-15 (decimal)

On an Athlon MP system running SUSE 9.1:

x87sincos-32-fst
cumulative error: 4 (ulp)
 -5.10702591327572e-15 (decimal)
x87sincos-32-x87
cumulative error: 2 (ulp)
  -1.4432899320127e-15 (decimal)

Now, adding -funsafe-math-optimizations, on the same systems:

x87sincos-32-fst
cumulative error: 4 (ulp)
 -5.10702591327572e-15 (decimal)
x87sincos-32-sse
cumulative error: 4 (ulp)
 -5.10702591327572e-15 (decimal)
x87sincos-32-x87
cumulative error: 0 (ulp)
+0 (decimal)
x87sincos-64-sse
cumulative error: 4 (ulp)
 -4.77395900588817e-15 (decimal)
x87sincos-64-x87
cumulative error: 0 (ulp)
+0 (decimal)

And:

x87sincos-32-fst
cumulative error: 4 (ulp)
 -5.10702591327572e-15 (decimal)
x87sincos-32-x87
cumulative error: 0 (ulp)
+0 (decimal)

Any perceived increase in accuracy in this test comes from intermediary 
calculations being done with 80 bits and because the errors in fsin are 
complementary to those in fcos.

HTH


-- 
___
Evandro Menezes        AMD        Austin, TX


unsafe.tar.bz2
Description: unsafe.tar.bz2


Re: Sine and Cosine Accuracy

2005-05-27 Thread Scott Robert Ladd
Evandro,

 Any perceived increase in accuracy in this test comes
 from intermediary calculations being done with 80 bits and
 because the errors in fsin are complementary to those in fcos.

I'm always willing to see my mistakes revealed, if it can be done so
eloquently and politely. Unlike some people in this thread, you've been
both eloquent and polite; thank you.

I still maintain that hardware fsin and fcos are valid and valuable for
certain classes of applications, and that we need better options and
documentation -- both of which I'm more than happy to work on. I look
forward to your future comments.

..Scott



RE: Sine and Cosine Accuracy

2005-05-27 Thread Gary Funck


 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of
 Menezes, Evandro
 Sent: Friday, May 27, 2005 1:55 PM
[...]
 
 That's because the error is the same but symmetrical for sin and 
 cos, so that, when you calculate the sum of their squares, one 
 cancels the other out.
 
 The lack of accuracy in x87 is well known: see 
 http://www.gnu.org/software/libc/manual/html_node/Errors-in-Math-F
 unctions.html#Errors-in-Math-Functions.

Ulrich Drepper used a different method to compute math function
accuracy, described here:
http://people.redhat.com/drepper/libm/index.html

It might be interesting to re-run the safe/unsafe/x87 tests using
his methodology.  His results offer comparisons on a number of
platforms, and the visual representation of the errors can offer
some insight into the behavior of the implementation.



Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Let's consider the accuracy of sine and cosine. I've run tests as
follows, using a program provided at the end of this message.

On the Opteron, using GCC 4.0.0 release, the command lines produce these
outputs:

-lm -O3 -march=k8 -funsafe-math-optimizations -mfpmath=387

  generates:
  fsincos

  cumulative accuracy:   60.830074998557684 (binary)
 18.311677213055471 (decimal)

-lm -O3 -march=k8 -mfpmath=387

  generates:
  call sin
  call cos

  cumulative accuracy:   49.415037499278846 (binary)
 14.875408524143376 (decimal)

-lm -O3 -march=k8 -funsafe-math-optimizations

  generates:
  call sin
  call cos

  cumulative accuracy:   47.476438043942984 (binary)
 14.291831938509427 (decimal)

-lm -O3 -march=k8

  generates:
  call sin
  call cos

  cumulative accuracy:   47.476438043942984 (binary)
 14.291831938509427 (decimal)

The default for Opteron is -mfpmath=sse; as has been discussed in other
threads, this may not be a good choice. I also note that using
-funsafe-math-optimizations (and thus the combined fsincos instruction)
*increases* accuracy.

On the Pentium4, using the same version of GCC, I get:

-lm -O3 -march=pentium4 -funsafe-math-optimizations

  cumulative accuracy:   63.000000000000000 (binary)
 18.964889726830815 (decimal)

-lm -O3 -march=pentium4

  cumulative accuracy:   49.299560281858909 (binary)
 14.840646417884166 (decimal)

-lm -O3 -march=pentium4 -funsafe-math-optimizations -mfpmath=sse

  cumulative accuracy:   47.476438043942984 (binary)
 14.291831938509427 (decimal)

The program used is below. I'm very open to suggestions about this
program, which is a subset of a larger accuracy benchmark I'm writing
(Subtilis).

#include <fenv.h>
#pragma STDC FENV_ACCESS ON
#include <float.h>
#include <math.h>
#include <stdio.h>
#include <stdbool.h>
#include <string.h>

static bool verbose = false;
#define PI 3.14159265358979323846

// Test floating point accuracy
inline double binary_accuracy(double x)
{
return -(log(fabs(x)) / log(2.0));
}

inline double decimal_accuracy(double x)
{
return -(log(fabs(x)) / log(10.0));
}

// accuracy of trigonometric functions
void trigtest()
{
static const double range = PI; // * 2.0;
static const double incr  = PI / 100.0;

if (verbose)
    printf("                   x           diff             accuracy\n");

double final = 1.0;
double x;

for (x = -range; x <= range; x += incr)
{
double s1  = sin(x);
double c1  = cos(x);
double one = s1 * s1 + c1 * c1;
double diff = one - 1.0;
final *= one;

double accuracy1 = binary_accuracy(diff);

if (verbose)
printf("%20.15f %14g %20.15f\n", x, diff, accuracy1);
}

final -= 1.0;

printf("\ncumulative accuracy: %20.15f (binary)\n",
   binary_accuracy(final));

printf("                     %20.15f (decimal)\n",
   decimal_accuracy(final));
}

// Entry point
int main(int argc, char ** argv)
{
int i;

// do we have verbose output?
if (argc > 1)
{
for (i = 1; i < argc; ++i)
{
if (!strcmp(argv[i], "-v"))
{
verbose = true;
break;
}
}
}


// run tests
trigtest();

// done
return 0;
}

..Scott


Re: Sine and Cosine Accuracy

2005-05-26 Thread Andrew Haley
Scott Robert Ladd writes:
  
  The program used is below. I'm very open to suggestions about this
  program, which is a subset of a larger accuracy benchmark I'm writing
  (Subtilis).

Try this:

public class trial
{
  static public void main (String[] argv)
  {
System.out.println(Math.sin(Math.pow(2.0, 90.0)));
  }
}

zapata:~ $ gcj trial.java --main=trial -ffast-math -O 
zapata:~ $ ./a.out 
1.2379400392853803E27
zapata:~ $ gcj trial.java --main=trial -ffast-math   
zapata:~ $ ./a.out 
-0.9044312486086016

Andrew.


Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Andrew Haley wrote:
 Try this:
 
 public class trial
 {
   static public void main (String[] argv)
   {
 System.out.println(Math.sin(Math.pow(2.0, 90.0)));
   }
 }
 
 zapata:~ $ gcj trial.java --main=trial -ffast-math -O 
 zapata:~ $ ./a.out 
 1.2379400392853803E27
 zapata:~ $ gcj trial.java --main=trial -ffast-math   
 zapata:~ $ ./a.out 
 -0.9044312486086016

You're comparing apples and oranges, since C (my code) and Java differ
in their definitions and implementations of floating-point.

I don't build gcj these days; however, when I have a moment later, I'll
build the latest GCC mainline from CVS -- with Java -- and see how it
reacts to my Java version of my benchmark. I also have a Fortran 95
version as well, so I guess I might as well try several languages, and
see what we get.

..Scott


Re: Sine and Cosine Accuracy

2005-05-26 Thread Paolo Carlini
Andrew Haley wrote

 zapata:~ $ gcj trial.java --main=trial -ffast-math -O
  ^^

Ok, maybe those people that are accusing the Free Software philosophy of
being akin to communism are wrong, but it looks like revolutionaries are
lurking around, at least... ;) ;)

Paolo.


Re: Sine and Cosine Accuracy

2005-05-26 Thread Andrew Haley
Scott Robert Ladd writes:
  Andrew Haley wrote:
   Try this:
   
   public class trial
   {
 static public void main (String[] argv)
 {
   System.out.println(Math.sin(Math.pow(2.0, 90.0)));
 }
   }
   
   zapata:~ $ gcj trial.java --main=trial -ffast-math -O 
   zapata:~ $ ./a.out 
   1.2379400392853803E27
   zapata:~ $ gcj trial.java --main=trial -ffast-math   
   zapata:~ $ ./a.out 
   -0.9044312486086016
  
  You're comparing apples and oranges, since C (my code) and Java differ
  in their definitions and implementations of floating-point.

So try it in C.   -ffast-math won't be any better.

#include <stdio.h>
#include <math.h>

int
main (int argc, char **argv)
{
  printf ("%g\n", sin (pow (2.0, 90.0)));
}

Andrew.


Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Richard Henderson wrote:
 On Thu, May 26, 2005 at 10:34:14AM -0400, Scott Robert Ladd wrote:
 
static const double range = PI; // * 2.0;
static const double incr  = PI / 100.0;
 
 
 The trig insns fail with large numbers; an argument
 reduction loop is required with their use.

Yes, but within the defined mathematical ranges for sine and cosine --
[0, 2 * PI) -- the processor intrinsics are quite accurate.

Now, I can see a problem in signal processing or similar applications,
where you're working with continuous values over a large range, but it
seems to me that a simple application of fmod (via FPREM) solves that
problem nicely.

I've never quite understood the necessity for performing trig operations
on excessively large values, but perhaps my problem domain hasn't
included such applications.

..Scott


Re: Sine and Cosine Accuracy

2005-05-26 Thread Paul Koning
 Scott == Scott Robert Ladd [EMAIL PROTECTED] writes:

 Scott Richard Henderson wrote:
  On Thu, May 26, 2005 at 10:34:14AM -0400, Scott Robert Ladd wrote:
  
  static const double range = PI; // * 2.0; static const double
  incr = PI / 100.0;
  
  
  The trig insns fail with large numbers; an argument reduction loop
  is required with their use.

 Scott Yes, but within the defined mathematical ranges for sine and
 Scott cosine -- [0, 2 * PI) -- the processor intrinsics are quite
 Scott accurate.

Huh?  Sine and cosine are mathematically defined for all finite
inputs. 

Yes, normally the first step is to reduce the arguments to a small
range around zero and then do the series expansion after that, because
the series expansion converges fastest near zero.  But sin(100) is
certainly a valid call, even if not a common one.

  paul



Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Paul Koning wrote:
  Scott Yes, but within the defined mathematical ranges for sine and
  Scott cosine -- [0, 2 * PI) -- the processor intrinsics are quite
  Scott accurate.
 
 Huh?  Sine and cosine are mathematically defined for all finite
 inputs. 

Defined, yes. However, I'm speaking as a mathematician in this case, not
a programmer. Pick up a trig book, and it will have a statement similar
to this one, taken from a text (Trigonometry Demystified, Gibilisco,
McGraw-Hill, 2003) randomly grabbed from the shelf next to me:

These trigonometric identities apply to angles in the *standard range*
of 0 rad <= theta < 2 * PI rad. Angles outside the standard range are
converted to values within the standard range by adding or subtracting
the appropriate multiple of 2 * PI rad. You might hear of an angle with
negative measurement or with a measure more than 2 * PI rad, but this
can always be converted...

I can assure you that other texts (of which I have several) make similar
statements.

 Yes, normally the first step is to reduce the arguments to a small
 range around zero and then do the series expansion after that, because
 the series expansion converges fastest near zero.  But sin(100) is
 certainly a valid call, even if not a common one.

I *said* that such statements are outside the standard range of
trigonometric identities. Writing sin(100) is not a matter of necessity,
nor should people using regular math be penalized in speed or accuracy
for extreme cases.

..Scott


RE: Sine and Cosine Accuracy

2005-05-26 Thread Dave Korn
Original Message
From: Scott Robert Ladd
Sent: 26 May 2005 17:32

 Paul Koning wrote:
  Scott Yes, but within the defined mathematical ranges for sine and
  Scott cosine -- [0, 2 * PI) -- the processor intrinsics are quite 
 Scott accurate. 
 
  Huh?  Sine and cosine are mathematically defined for all finite
 inputs.
 
 Defined, yes. However, I'm speaking as a mathematician in this case, not
  a programmer. Pick up a trig book, and it will have a statement similar
 to this one, taken from a text (Trigonometry Demystified, Gibilisco,
 McGraw-Hill, 2003) randomly grabbed from the shelf next to me:
 
 These trigonometric identities apply to angles in the *standard range*
  of 0 rad <= theta < 2 * PI rad. 

  It's difficult to tell from that quote, which lacks sufficient context,
but you *appear* at first glance to be conflating the fundamental
trigonometric *functions* with the trigonometric *identities* that are
generally built up from those functions.  That is to say, you appear to be
quoting a statement that says

 Identities such as
sin(x)^2 + cos(x)^2 === 1
  are only valid when 0 <= x <= 2*PI

and interpreting it to imply that 

   sin(x)
  is only valid when 0 <= x <= 2*PI

which, while it may or may not be true for other reasons, certainly is a
non-sequitur from the statement above.

  And in fact, and in any case, this is a perfect illustration of the point,
because what we're discussing here is *not* the behaviour of the
mathematical sine and cosine functions, but the behaviour of the C runtime
library functions sin(...) and cos(...), which are defined by the language
spec rather than by the strictures of mathematics.  And that spec makes *no*
restriction on what values you may supply as inputs, so gcc had better
implement sin and cos in a way that doesn't require the programmer to have
reduced the arguments beforehand, or it won't be ANSI compliant.

  Not only that, but if you don't use -funsafe-math-optimisations, gcc emits
libcalls to sin/cos functions, which I'll bet *do* reduce their arguments to
that range before doing the computation, (and which might indeed even be
clever enough to use the intrinsic, and can encapsulate the knowledge that
that intrinsic can only be used on arguments within a more limited range
than are valid for the C library function which they are being used to
implement).

  When you use -funsafe-math-optimisations, one of those optimisations is to
assume that you're not going to be using the full range of arguments that
POSIX/ANSI say is valid for the sin/cos functions, but that you're going to
be using values that are already folded into the range around zero, and so
it optimises away the libcall and the reduction with it and just uses the
intrinsic to implement the function.  But the intrinsic does not actually
implement the function as specified by ANSI, since it doesn't accept the
same range of inputs, and therefore it is *not* a suitable transformation to
ever apply except when the user has explicitly specified that they want to
live dangerously.  So in terms of your earlier suggestion:

quote
May I be so bold as to suggest that -funsafe-math-optimizations be
reduced in scope to perform exactly what its name implies:
transformations that may slightly alter the meaning of code. Then move
the use of hardware intrinsics to a new -fhardware-math switch.
quote

... I am obliged to point out that using the hardware intrinsics *IS* an
unsafe optimisation, at least in this case!

cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: Sine and Cosine Accuracy

2005-05-26 Thread David Daney

Dave Korn wrote:


 Identities such as
sin(x)^2 + cos(x)^2 === 1
  are only valid when 0 <= x <= 2*PI



It's been a while since I studied math, but isn't that particular 
identity is true for any x real or complex?


David Daney,



Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Dave Korn wrote:
   It's difficult to tell from that quote, which lacks sufficient context,
 but you *appear* at first glance to be conflating the fundamental
 trigonometric *functions* with the trigonometric *identities* that are
 generally built up from those functions.  That is to say, you appear to be
 quoting a statement that says

Perhaps I didn't say it as clearly as I should, but I do indeed know the
difference between the implementation and definition of the
trigonometric identities.

The tradeoff is between absolute adherence to the C standard and the
need to provide fast, accurate results for people who know their math.
What I see is a focus (in some areas like math) on complying with the
standard, to the exclusion of people who need speed. Both needs can be met.

 And in fact, and in any case, this is a perfect illustration of the point,
 because what we're discussing here is *not* the behaviour of the
 mathematical sine and cosine functions, but the behaviour of the C runtime
 library functions sin(...) and cos(...), which are defined by the language
 spec rather than by the strictures of mathematics.

The sin() and cos() functions, in theory, implement the behavior of the
mathematical sine and cosine identities, so the two can not be
completely divorced. I believe it is, at the very least, misleading to
claim that the hardware intrinsics are unsafe.

 And that spec makes *no*
 restriction on what values you may supply as inputs, so gcc had better
 implement sin and cos in a way that doesn't require the programmer to have
 reduced the arguments beforehand, or it won't be ANSI compliant.

I'm not asking that the default behavior of the compiler be non-ANSI;
I'm asking that we give non-pejorative options to people who know what
they are doing and need greater speed. The -funsafe-math-optimizations
encompasses more than hardware intrinsics, and I don't see why
separating the hardware intrinsics into their own option
(-fhardware-math) is unreasonable, for folk who want the intrinsics but
not the other transformations.

..Scott


Re: Sine and Cosine Accuracy

2005-05-26 Thread Paul Koning
 Kevin == Kevin Handy [EMAIL PROTECTED] writes:

 Kevin But, you are using a number in the range of 2^90, only have 64
 Kevin bits for storing the floating point representation, and some
 Kevin of that is needed for the exponent.

Fair enough, so with 64 bit floats you have no right to expect an
accurate answer for sin(2^90).  However, you DO have a right to expect
an answer in the range [-1,+1] rather than the 1.2e+27 that Richard
quoted.  I see no words in the description of
-funsafe-math-optimizations to lead me to expect such a result.

paul



Re: Sine and Cosine Accuracy

2005-05-26 Thread Morten Welinder
 Yes, but within the defined mathematical ranges for sine and cosine --
 [0, 2 * PI) -- the processor intrinsics are quite accurate.

If you were to look up a serious math book like Abramowitz & Stegun 1965
you would see a definition like

sin z = (exp(iz) - exp(-iz))/(2i)   [4.3.1]

for all complex numbers, thus in particular valid for z=x+0i for all real x.
If you wanted to stick to reals only, a serious math text would probably use
the series expansion around zero [4.3.65]

And there is the answer to your question: if you just think of sin
as something
with angles and triangles, then sin(2^90) makes very little sense.  But sin
occurs other places where there are no triangles in sight.  For example:

  Gamma(z)Gamma(1-z) = pi/sin(z pi)   [6.1.17]

or in series expansions of the cdf for the Student t distribution [26.7.4]

Morten


RE: Sine and Cosine Accuracy

2005-05-26 Thread Dave Korn
Original Message
From: David Daney
Sent: 26 May 2005 18:23

 Dave Korn wrote:
 
  Identities such as
 sin(x)^2 + cos(x)^2 === 1
    are only valid when 0 <= x <= 2*PI
 
 
 It's been a while since I studied math, but isn't that particular
 identity is true for any x real or complex?
 
 David Daney,


  Yes, that was solely an example of the difference between 'identities' and
'functions', for illustration, in case there was any ambiguity in the
language, but was not meant to be an example of an *actual* identity that
has a restriction on the valid range of inputs.  Sorry for not being
clearer.


cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Morten Welinder wrote:
 If you were to look up a serious math book like Abramowitz & Stegun 1965
 you would see a definition like
 
 sin z = (exp(iz) - exp(-iz))/(2i)   [4.3.1]

Very true. However, the processor doesn't implement intrinsics for
complex functions -- well, maybe some do, and I've never encountered them!

As such, I was sticking to a discussion specific to reals.


 And there is the answer to your question: if you just think of sin
 as something
 with angles and triangles, then sin(2^90) makes very little sense.  But sin
 occurs other places where there are no triangles in sight.

That's certainly true; the use of sine and cosine depend on the
application. I don't deny that many applications need to perform sin()
on any double value; however there are also many applications where you
*are* dealing with angles.

I recently wrote a GPS application where using the intrinsics improved
both accuracy and speed (the latter substantially), and using those
intrinsics was only unsafe because -funsafe-math-optimizations
includes other transformations.

I am simply lobbying for the separation of hardware intrinsics from
-funsafe-math-optimizations.

..Scott


Re: Sine and Cosine Accuracy

2005-05-26 Thread Paul Koning
 Scott == Scott Robert Ladd [EMAIL PROTECTED] writes:

 Scott Dave Korn wrote:
  It's difficult to tell from that quote, which lacks sufficient
  context, but you *appear* at first glance to be conflating the
  fundamental trigonometric *functions* with the trigonometric
  *identities* that are generally built up from those functions.
  That is to say, you appear to be quoting a statement that says

 Scott Perhaps I didn't say it as clearly as I should, but I do
 Scott indeed know the difference between the implementation and
 Scott definition of the trigonometric identifies.

 Scott The tradeoff is between absolute adherence to the C standard
 Scott and the need to provide fast, accurate results for people who
 Scott know their math. 

I'm really puzzled by that comment, partly because the text book quote
you gave doesn't match any math I ever learned.  Does knowing your
math translates to believing that trig functions should be applied
only to arguments in the range 0 to 2pi?  If so, I must object.

What *may* make sense is the creation of a new option (off by default)
that says you're allowed to assume that all calls to trig functions
have arguments in the range x..y.  Then the question to be answered
is what x and y should be.  A possible answer is 0 and 2pi; another
answer that some might prefer is -pi to +pi.  Or it might be -2pi to
+2pi to accommodate both preferences at essentially no cost.

 paul




RE: Sine and Cosine Accuracy

2005-05-26 Thread Dave Korn
Original Message
From: Scott Robert Ladd
Sent: 26 May 2005 18:36

 
 I am simply lobbying for the separation of hardware intrinsics from
 -funsafe-math-optimizations.

  Well, as long as they're under the control of a flag that also makes it
clear that they are *also* unsafe math optimisations, I wouldn't object.

  But you can't just replace a call to the ANSI C 'sin' function with an
invocation of the x87 fsin intrinsic, because they aren't the same, and the
intrinsic is non-ansi-compliant.


cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Paul Koning wrote:
 I'm really puzzled by that comment, partly because the text book quote
 you gave doesn't match any math I ever learned.  Does knowing your
 math translates to believing that trig functions should be applied
 only to arguments in the range 0 to 2pi?  If so, I must object.

I'll correct myself to say people who know their application. ;) Some
apps need sin() over all possible doubles, while other applications need
sin() over the range of angles.

 What *may* make sense is the creation of a new option (off by default)
 that says you're allowed to assume that all calls to trig functions
 have arguments in the range x..y.  Then the question to be answered
 is what x and y should be.  A possible answer is 0 and 2pi; another
 answer that some might prefer is -pi to +pi.  Or it might be -2pi to
 +2pi to accommodate both preferences at essentially no cost.

I prefer breaking out the hardware intrinsics from
-funsafe-math-optimizations, such that people can compile to use their
hardware *without* the other transformations implicit in the current
collective.

If someone can explain how this hurts anything, please let me know.

..Scott


Re: Sine and Cosine Accuracy

2005-05-26 Thread Paul Koning
After some off-line exchanges with Dave Korn, it seems to me that part
of the problem is that the documentation for
-funsafe-math-optimizations is so vague as to have no discernable
meaning. 

For example, does the wording of the documentation convey the
limitation that one should only invoke math functions with a small
range of arguments (say, -pi to +pi)?  I cannot see anything remotely
resembling that limitation, but others can.

Given that, I wonder how we can tell whether a particular proposed
optimization governed by that flag is permissible.  Consider:

`-funsafe-math-optimizations'
 Allow optimizations for floating-point arithmetic that (a) assume
 that arguments and results are valid and (b) may violate IEEE or
 ANSI standards.  

What does (b) mean?  What if anything are its limitations?  Is
returning 1.2e27 as the result for a sin() call authorized by (b)?  I
would not have expected that, but I can't defend that expectation
based on a literal reading of the text...

  paul



Re: Sine and Cosine Accuracy

2005-05-26 Thread Andrew Pinski


On May 26, 2005, at 2:12 PM, Paul Koning wrote:

What does (b) mean?  What if anything are its limitations?  Is
returning 1.2e27 as the result for a sin() call authorized by (b)?  I
would not have expected that, but I can't defend that expectation
based on a literal reading of the text...



b) means that (-a)*(b-c) can be changed to a*(c-b) and other
reassociation opportunities.

Thanks,
Andrew Pinski



RE: Sine and Cosine Accuracy

2005-05-26 Thread Dave Korn
Original Message
From: Scott Robert Ladd
Sent: 26 May 2005 19:09

 Dave Korn wrote:
   Well, as long as they're under the control of a flag that also makes it
 clear that they are *also* unsafe math optimisations, I wouldn't object.
 
 But they are *not* unsafe for *all* applications.

  Irrelevant; nor are many of the other things that are described by the
term unsafe.

  In fact they are often things that may be safe on one occasion, yet not on
another, even within one single application.  Referring to something as
unsafe doesn't mean it's *always* unsafe, but referring to it as safe (or
implying that it is by contrast with an option that names it as unsafe)
*does* mean that it is *always* safe.

 An ignorant user may not understand the ramifications of unsafe math
 -- however, the current documentation is quite vague as to why these
 optimizations are unsafe, and people thus become paranoid and avoid
 -ffast-math when it would be to their benefit.

  Until they get sqrt(-1.0) returning a value of +1.0 with no complaints, of
course

  But yes: the biggest problem here that I can see is inadequate
documentation.

 First and foremost, GCC should conform to standards. *However*, I see
 nothing wrong with providing additional capability for those who need
 it, without combining everything unsafe under one umbrella.

  That's exactly what I said up at the top.  Nothing wrong with having
multiple unsafe options, but they *are* all unsafe.

 But you can't just replace a call to the ANSI C 'sin' function with an
 invocation of the x87 fsin intrinsic, because they aren't the same, and
 the intrinsic is non-ansi-compliant.
 
 Nobody said they were.

  Then any optimisation flag that replaces one with the other is, QED,
unsafe.

  Of course, if you went and wrote a whole load of builtins, so that with
your new flag in effect sin (x) would translate into a code sequence that
first uses fmod to reduce the argument to the valid range for fsin, I would
no longer consider it unsafe.

cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Andrew Pinski wrote:
 b) means that (-a)*(b-c) can be changed to a*(c-b) and other reassociation
 opportunities.

This is precisely the sort of transformation that, in my opinion, should
be separate from the hardware intrinsics. I mentioned this specific case
earlier in the thread (I think; maybe it went to a private mail).

The documentation should quote you above, instead of being general and
vague (lots of mays, for example, in the current text).

Perhaps we need to have a clearer name for the option,
-funsafe-transformations, anyone? I may want to use a hardware
intrinsics, but not want those transformations.

..Scott



Re: Sine and Cosine Accuracy

2005-05-26 Thread Joseph S. Myers
On Thu, 26 May 2005, Paul Koning wrote:

  Kevin == Kevin Handy [EMAIL PROTECTED] writes:
 
  Kevin But, you are using a number in the range of 2^90, only have 64
  Kevin bits for storing the floating point representation, and some
  Kevin of that is needed for the exponent.
 
 Fair enough, so with 64 bit floats you have no right to expect an
 accurate answer for sin(2^90).  However, you DO have a right to expect
 an answer in the range [-1,+1] rather than the 1.2e+27 that Richard
 quoted.  I see no words in the description of
 -funsafe-math-optimizations to lead me to expect such a result.

When I discussed this question with Nick Maclaren a while back after a UK 
C Panel meeting, his view was that for most applications (a) the output 
should be close (within 1 or a few ulp) to the sine/cosine of a value 
close (within 1 or a few ulp) to the floating-point input and (b) sin^2 + 
cos^2 (of any input value) should equal 1 with high precision, but most 
applications (using floating-point values as approximations of 
unrepresentable real numbers) wouldn't care about the answer being close 
to the sine or cosine of the exact real number represented by the 
floating-point value when 1 ulp is on the order of 2pi or bigger.  This 
does of course disallow 1.2e+27 as a safe answer for sin or cos to give 
for any input.  (And a few applications may care for stronger degrees of 
accuracy.)

-- 
Joseph S. Myers   http://www.srcf.ucam.org/~jsm28/gcc/
[EMAIL PROTECTED] (personal mail)
[EMAIL PROTECTED] (CodeSourcery mail)
[EMAIL PROTECTED] (Bugzilla assignments and CCs)


Re: Sine and Cosine Accuracy

2005-05-26 Thread Gabriel Dos Reis
Scott Robert Ladd [EMAIL PROTECTED] writes:

| Richard Henderson wrote:
|  On Thu, May 26, 2005 at 10:34:14AM -0400, Scott Robert Ladd wrote:
|  
| static const double range = PI; // * 2.0;
| static const double incr  = PI / 100.0;
|  
|  
|  The trig insns fail with large numbers; an argument
|  reduction loop is required with their use.
| 
| Yes, but within the defined mathematical ranges for sine and cosine --
| [0, 2 * PI) -- 

this is what they call post-modern maths?

[...]

| I've never quite understood the necessity for performing trig operations
| on excessively large values, but perhaps my problem domain hasn't
| included such applications.

The world is flat; I never quite understood the necessity of spherical
trigonometry.

-- Gaby


Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Gabriel Dos Reis wrote:
 Scott Robert Ladd [EMAIL PROTECTED] writes:
 | I've never quite understood the necessity for performing trig operations
 | on excessively large values, but perhaps my problem domain hasn't
 | included such applications.
 
 The world is flat; I never quite understood the necessity of spherical
 trigonometry.

For many practical problems, the world can be considered flat. And I do
plenty of spherical geometry (GPS navigation) without requiring the sin
of 2**90. ;)

..Scott


Re: Sine and Cosine Accuracy

2005-05-26 Thread Richard Henderson
On Thu, May 26, 2005 at 12:04:04PM -0400, Scott Robert Ladd wrote:
 I've never quite understood the necessity for performing trig operations
 on excessively large values, but perhaps my problem domain hasn't
 included such applications.

Whether you think it necessary or not, the ISO C functions allow
such arguments, and we're not allowed to break that without cause.


r~


Re: Sine and Cosine Accuracy

2005-05-26 Thread Gabriel Dos Reis
Scott Robert Ladd [EMAIL PROTECTED] writes:

| Gabriel Dos Reis wrote:
|  Scott Robert Ladd [EMAIL PROTECTED] writes:
|  | I've never quite understood the necessity for performing trig operations
|  | on excessively large values, but perhaps my problem domain hasn't
|  | included such applications.
|  
|  The world is flat; I never quite understood the necessity of spherical
|  trigonometry.
| 
| For many practical problems, the world can be considered flat.

Wooho.

| And I do
| plenty of spherical geometry (GPS navigation) without requiring the sin
| of 2**90. ;)

Yeah, the problem with people who work only with angles is that they
tend to forget that sin (and friends) are defined as functions on
*numbers*, not just angles or whatever, and happen to appear in
approximations of functions as series (e.g. Fourier series) and therefore
those functions can be applied to things that are not just angles. 

-- Gaby


Re: Sine and Cosine Accuracy

2005-05-26 Thread Uros Bizjak

Hello!


Fair enough, so with 64 bit floats you have no right to expect an
accurate answer for sin(2^90).  However, you DO have a right to expect
an answer in the range [-1,+1] rather than the 1.2e+27 that Richard
quoted.  I see no words in the description of
-funsafe-math-optimizations to lead me to expect such a result.

 The source operand to the fsin, fcos and fsincos x87 insns must be within 
the range of +-2^63; otherwise the C2 flag is set in the FP status word to 
mark insufficient operand reduction. This limited operand range is the 
reason why fsin & friends are enabled only with 
-funsafe-math-optimizations.


 However, the argument to fsin can be reduced to an acceptable range by 
using the fmod builtin. Internally, this builtin is implemented as a very 
tight loop that checks for insufficient reduction, and it can reduce 
whatever finite value one wishes.


 Out of curiosity, where could sin(2^90) be needed? It looks like a rather 
big angle to me.


Uros.


Re: Sine and Cosine Accuracy

2005-05-26 Thread Paul Koning
 Uros == Uros Bizjak [EMAIL PROTECTED] writes:

 Uros Hello!
  Fair enough, so with 64 bit floats you have no right to expect an
  accurate answer for sin(2^90).  However, you DO have a right to
  expect an answer in the range [-1,+1] rather than the 1.2e+27 that
  Richard quoted.  I see no words in the description of
  -funsafe-math-optimizations to lead me to expect such a result.
  
 Uros The source operand to fsin, fcos and fsincos x87 insns must be
 Uros within the range of +-2^63, otherwise a C2 flag is set in FP
 Uros status word that marks insufficient operand reduction. Limited
 Uros operand range is the reason why fsin & friends are enabled
 Uros only with -funsafe-math-optimizations.

 Uros However, the argument to fsin can be reduced to an acceptable
 Uros range by using fmod builtin. Internally, this builtin is
 Uros implemented as a very tight loop that check for insufficient
 Uros reduction, and could reduce whatever finite value one wishes.

 Uros Out of curiosity, where could sin(2^90) be needed? It looks
 Uros rather big angle to me.

It looks that way to me too, but it's a perfectly valid argument to
the function as has been explained by several people.

Unless -funsafe-math-optimizations is *explicitly* documented to say
trig function arguments must be in the range x..y for meaningful
results I believe it is a bug to translate sin(x) to a call to the
x87 fsin primitive.  It needs to be wrapped with fmod (perhaps after a
range check for efficiency), otherwise you've drastically changed the
semantics of the function.

Personally I don't expect sin(2^90) to yield 1.2e27.  Yes, you can
argue that, pedantically, clause (b) in the doc for
-funsafe-math-optimizations permits this.  Then again, I could argue
that it also permits sin(x) to return 0 for all x.

 paul



Re: Sine and Cosine Accuracy

2005-05-26 Thread Gabriel Dos Reis
Uros Bizjak [EMAIL PROTECTED] writes:

[...]

|   Out of curiosity, where could sin(2^90) be needed? It looks rather
| big angle to me.

If it were an angle!  Not everything that is an argument to sin or cos
is an angle.  They are just functions!  Suppose you're evaluating an
approximation of a Fourier series expansion.

-- Gaby


Re: Sine and Cosine Accuracy

2005-05-26 Thread Steven Bosscher
On Friday 27 May 2005 00:26, Gabriel Dos Reis wrote:
 Uros Bizjak [EMAIL PROTECTED] writes:

 [...]

 |   Out of curiosity, where could sin(2^90) be needed? It looks rather
 | big angle to me.

 If it were an angle!  Not everything that is an argument to sin or cos
 is an angle.  They are just functions!  Suppose you're evaluating an
 approximation of a Fourier series expansion.

It would, in a way, still be a phase angle ;-)

Gr.
Steven


RE: Sine and Cosine Accuracy

2005-05-26 Thread Menezes, Evandro
Uros, 

   However, the argument to fsin can be reduced to an 
 acceptable range by using fmod builtin. Internally, this 
 builtin is implemented as a very tight loop that check for 
 insufficient reduction, and could reduce whatever finite 
 value one wishes.

Keep in mind that x87 transcendentals are not the most accurate around, but all 
x86 processors from any manufacturer produce roughly the same results for any 
argument as the 8087 did way back when, even if the result is hundreds of ulps 
off...


-- 
___
Evandro Menezes        AMD        Austin, TX



Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Richard Henderson wrote:
 On Thu, May 26, 2005 at 12:04:04PM -0400, Scott Robert Ladd wrote:
 
I've never quite understood the necessity for performing trig operations
on excessively large values, but perhaps my problem domain hasn't
included such applications.
 
 
 Whether you think it necessary or not, the ISO C functions allow
 such arguments, and we're not allowed to break that without cause.

Then, as someone else said, why doesn't the compiler enforce -ansi
and/or -pedantic by default? Or is ANSI purity only important in some
cases, but not others?

I do not and have not suggested changing the default behavior of the
compiler, and *have* suggested that it is not pedantic enough about
Standards.

*This* discussion is about improving -funsafe-math-optimizations to make
it more sensible and flexible.

For a wide variety of applications, the hardware intrinsics provide both
faster and more accurate results, when compared to the library
functions. However, I may *not* want other transformations implied by
-funsafe-math-optimizations. Therefore, it seems to me that GCC could
cleanly and simply implement an option to use hardware intrinsics (or a
highly-optimized but non-ANSI library) for those of us who want it.

No changes to default optimizations, no breaking of existing code, just
a new option (as in optional.)

How does that hurt you or anyone else? It's not as if GCC doesn't have a
few options already... ;)

I (and others) also note other compilers do a fine job of handling these
problems.

..Scott



Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Gabriel Dos Reis wrote:
 Yeah, the problem with people who work only with angles is that they
 tend to forget that sin (and friends) are defined as functions on
 *numbers*, not just angles or whatever, and happen to appear in
 approximations of functions as series (e.g. Fourier series) and therefore
 those functions can be applied to things that are not just angles. 

To paraphrase the above:

Yeah, the problem with people who only work with Fourier series is that
they tend to forget that sin (and friends) can be used in applications
with angles that fall in a limited range, where the hardware intrinsics
produce faster and more accurate results.

I've worked on some pretty fancy DSP code in the last years, and some
spherical trig stuff. Two different kinds of code with different needs.

..Scott




RE: Sine and Cosine Accuracy

2005-05-26 Thread Menezes, Evandro
Scott, 

 For a wide variety of applications, the hardware intrinsics 
 provide both faster and more accurate results, when compared 
 to the library functions.

This is not true.  Compare results on an x86 system with those on an x86_64 or 
ppc.  As I said before, shortcuts were taken in x87 that sacrificed accuracy, 
initially for the sake of speed and later of compatibility.

HTH


-- 
___
Evandro Menezes        AMD        Austin, TX



Re: Sine and Cosine Accuracy

2005-05-26 Thread Gabriel Dos Reis
Scott Robert Ladd [EMAIL PROTECTED] writes:

| Richard Henderson wrote:
|  On Thu, May 26, 2005 at 12:04:04PM -0400, Scott Robert Ladd wrote:
|  
| I've never quite understood the necessity for performing trig operations
| on excessively large values, but perhaps my problem domain hasn't
| included such applications.
|  
|  
|  Whether you think it necessary or not, the ISO C functions allow
|  such arguments, and we're not allowed to break that without cause.
| 
| Then, as someone else said, why doesn't the compiler enforce -ansi
| and/or -pedantic by default?

Care to submit a patch?

-- Gaby


Re: Sine and Cosine Accuracy

2005-05-26 Thread Gabriel Dos Reis
Scott Robert Ladd [EMAIL PROTECTED] writes:

| Gabriel Dos Reis wrote:
|  Yeah, the problem with people who work only with angles is that they
|  tend to forget that sin (and friends) are defined as functions on
|  *numbers*, not just angles or whatever, and happen to appear in
|  approximations of functions as series (e.g. Fourier series) and therefore
|  those functions can be applied to things that are not just angles. 
| 
| To paraphrase the above:
| 
| Yeah, the problem with people who only work with Fourier series is that
| they tend to forget that sin (and friends) can be used in applications
| with angles that fall in a limited range, where the hardware intrinsics
| produce faster and more accurate results.

That is a good try, but it fails in the context in which the original
statement was made.  Maybe it is a good time to check the thread and
the pattern of logic that the statement was pointing out?

-- Gaby


Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Menezes, Evandro wrote:
 This is not true.  Compare results on an x86 systems with those on an
 x86_64 or ppc.  As I said before, shortcuts were taken in x87 that
 sacrificed accuracy for the sake of speed initially and later of
 compatibility.

It *is* true for the case where the argument is in the range [0, 2*PI),
at least according to the tests I published earlier in this thread. If
you think there is something erroneous in the test code, I sincerely
would like to know.

..Scott


RE: Sine and Cosine Accuracy

2005-05-26 Thread Menezes, Evandro
Scott, 

  This is not true.  Compare results on an x86 systems with 
 those on an
  x86_64 or ppc.  As I said before, shortcuts were taken in x87 that 
  sacrificed accuracy for the sake of speed initially and later of 
  compatibility.
 
 It *is* true for the case where the argument is in the range 
 [0, 2*PI), at least according to the tests I published 
 earlier in this thread. If you think there is something 
 erroneous in the test code, I sincerely would like to know.

Your code just tests every 3.6°, so perhaps you won't trip over the problems...  

As I said, x87 can be off by hundreds of ulps, whereas the routines for x86_64 
which ship with SUSE are accurate to less than 1 ulp over their entire domain.

Besides, you're also comparing 80-bit calculations with 64-bit calculations, 
not only the accuracy of sin and cos.  Try using -ffloat-store along with 
-mfpmath=387 and see yet another set of results.  At the end of the day, which 
one do you trust?  I wouldn't trust my check balance to x87 microcode... ;-)

HTH


___
Evandro Menezes        Software Strategy & Alliance
512-602-9940AMD
[EMAIL PROTECTED]  Austin, TX



Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Gabriel Dos Reis wrote:
 Scott Robert Ladd [EMAIL PROTECTED] writes:
 | Then, as someone else said, why doesn't the compiler enforce -ansi
 | and/or -pedantic by default?
 
 Care to submit a patch?

Would a strictly ansi default be accepted on principle? Given the
existing code base of non-standard code, such a change may be unrealistic.

I'm willing to make the -ansi -pedantic patch, if I wouldn't be
wasting my time.

What about separating hardware intrinsics from
-funsafe-math-optimizations? I believe this would make everyone happy by
allowing people to use the compiler more effectively in different
circumstances.

..Scott


Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Menezes, Evandro wrote:
 Besides, you're also comparing 80-bit calculations with 64-bit
 calculations, not only the accuracy of sin and cos.  Try using
 -ffloat-store along with -mfpmath=387 and see yet another set of
 results.  At the end of the day, which one do you trust?  I wouldn't
 trust my check balance to x87 microcode... ;-)

I wouldn't trust my bank accounts to the x87 under any circumstances;
anyone doing exact math should be using fixed-point.

Different programs have different requirements. I don't understand why
GCC needs to be one-size fits all, when it could be *better* than the
competition by taking a broader and more flexible view.

..Scott



Re: Sine and Cosine Accuracy

2005-05-26 Thread Marcin Dalecki


On 2005-05-26, at 21:34, Scott Robert Ladd wrote:


For many practical problems, the world can be considered flat. And I do
plenty of spherical geometry (GPS navigation) without requiring the sin
of 2**90. ;)


Yes right. I guess your second name is ignorance.


Re: Sine and Cosine Accuracy

2005-05-26 Thread Marcin Dalecki


On 2005-05-27, at 00:00, Gabriel Dos Reis wrote:

Yeah, the problem with people who work only with angles is that they
tend to forget that sin (and friends) are defined as functions on
*numbers*,



The problem with people who work only with angles is that they are
without sin.




Re: Sine and Cosine Accuracy

2005-05-26 Thread Marcin Dalecki


On 2005-05-26, at 22:39, Gabriel Dos Reis wrote:


Scott Robert Ladd [EMAIL PROTECTED] writes:

| Richard Henderson wrote:
|  On Thu, May 26, 2005 at 10:34:14AM -0400, Scott Robert Ladd wrote:
| 
| static const double range = PI; // * 2.0;
| static const double incr  = PI / 100.0;
| 
| 
|  The trig insns fail with large numbers; an argument
|  reduction loop is required with their use.
|
| Yes, but within the defined mathematical ranges for sine and cosine --
| [0, 2 * PI) --

this is what they call post-modern maths?

[...]

| I've never quite understood the necessity for performing trig operations
| on excessively large values, but perhaps my problem domain hasn't
| included such applications.

The world is flat; I never quite understood the necessity of spherical
trigonometry.


I agree fully. And who was this Fourier anyway?