weird optimization in sin+cos, x86 backend

2012-02-03 Thread Konstantin Vladimirov
Hi, Consider minimal reproduction code: #include "math.h" #include "stdio.h" double __attribute__ ((noinline)) slip(double a) { return (cos(a) + sin(a)); } int main(void) { double a = 4.47460300787e+182; double slipped = slip(a); printf("slipped = %lf\n", slipped); return 0; } Compil

Re: weird optimization in sin+cos, x86 backend

2012-02-03 Thread Richard Guenther
On Fri, Feb 3, 2012 at 2:26 PM, Konstantin Vladimirov wrote: > Hi, > > Consider minimal reproduction code: > > #include "math.h" > #include "stdio.h" > > double __attribute__ ((noinline)) > slip(double a) > { >  return (cos(a) + sin(a)); > } > > int main(void) > { >  double a = 4.47460300787e+182;

Re: weird optimization in sin+cos, x86 backend

2012-02-03 Thread Michael Matz
Hi, On Fri, 3 Feb 2012, Richard Guenther wrote: > > int main(void) > > { > >  double a = 4.47460300787e+182; > > slipped = -1.141385 > > That is correct. > > > > slipped = -0.432436 > > That is obviously incorrect. How did you determine that one is correct and the other obviously incorrect? N

Re: weird optimization in sin+cos, x86 backend

2012-02-03 Thread Robert Dewar
On 2/3/2012 10:01 AM, Michael Matz wrote: No normal math library supports such an extreme range, even basic identities (like cos^2+sin^2=1) aren't retained with such inputs. I agree: the program is complete nonsense. It would be useful to know what the intent was. Ciao, Michael.

Re: weird optimization in sin+cos, x86 backend

2012-02-03 Thread Vincent Lefevre
On 2012-02-03 10:13:58 -0500, Robert Dewar wrote: > On 2/3/2012 10:01 AM, Michael Matz wrote: > >No normal math library supports such an extreme range, even basic > >identities (like cos^2+sin^2=1) aren't retained with such inputs. > > I agree: the program is complete nonsense. I disagree: there

Re: weird optimization in sin+cos, x86 backend

2012-02-03 Thread Robert Dewar
On 2/3/2012 10:28 AM, Vincent Lefevre wrote: On 2012-02-03 10:13:58 -0500, Robert Dewar wrote: On 2/3/2012 10:01 AM, Michael Matz wrote: No normal math library supports such an extreme range, even basic identities (like cos^2+sin^2=1) aren't retained with such inputs. I agree: the program is

Re: weird optimization in sin+cos, x86 backend

2012-02-03 Thread Vincent Lefevre
On 2012-02-03 10:33:58 -0500, Robert Dewar wrote: > On 2/3/2012 10:28 AM, Vincent Lefevre wrote: > >If the user requested such a computation, there should at least be > >some intent. Unless an option like -ffast-math is given, the result > >should be accurate. > > What is the basis for that claim?

Re: weird optimization in sin+cos, x86 backend

2012-02-03 Thread Michael Matz
Hi, On Fri, 3 Feb 2012, Vincent Lefevre wrote: > > >No normal math library supports such an extreme range, even basic > > >identities (like cos^2+sin^2=1) aren't retained with such inputs. > > > > I agree: the program is complete nonsense. > > I disagree: there may be cases where large inputs c

Re: weird optimization in sin+cos, x86 backend

2012-02-03 Thread Vincent Lefevre
On 2012-02-03 16:57:19 +0100, Michael Matz wrote: > > And it may be important that some identities (like cos^2+sin^2=1) be > > preserved. > > Well, you're not going to get this without much more work in sin/cos. If you use the glibc sin() and cos(), you already have this (possibly up to a few ulp

Re: weird optimization in sin+cos, x86 backend

2012-02-03 Thread Robert Dewar
On 2/3/2012 10:55 AM, Vincent Lefevre wrote: On 2012-02-03 10:33:58 -0500, Robert Dewar wrote: On 2/3/2012 10:28 AM, Vincent Lefevre wrote: If the user requested such a computation, there should at least be some intent. Unless an option like -ffast-math is given, the result should be accurate.

Re: weird optimization in sin+cos, x86 backend

2012-02-03 Thread Dominique Dhumieres
While I fail to see how the "correct value" of cos(4.47460300787e+182)+sin(4.47460300787e+182) can be defined in the 'double' world, cos^2(x)+sin^2(x)=1 and sin(2*x)=2*sin(x)*cos(x) seem to be verified (at least for this value) even if the actual values of sin and cos depend on the optimisation

Re: weird optimization in sin+cos, x86 backend

2012-02-03 Thread Michael Matz
Hi, On Fri, 3 Feb 2012, Vincent Lefevre wrote: > > > For the glibc, I've finally reported a bug here: > > > > > > http://sourceware.org/bugzilla/show_bug.cgi?id=13658 > > > > That is about 1.0e22, not the obscene 4.47460300787e+182 of the > > original poster. > > But 1.0e22 cannot be handle

Re: weird optimization in sin+cos, x86 backend

2012-02-03 Thread Michael Matz
On Fri, 3 Feb 2012, Dominique Dhumieres wrote: > Note that sqrt(2.0)*sin(4.47460300787e+182+pi/4) gives a different value > for the sum. In double: 4.47460300787e+182 + pi/4 == 4.47460300787e+182 Ciao, Michael.

Re: weird optimization in sin+cos, x86 backend

2012-02-03 Thread Konstantin Vladimirov
Hi, I agree that this case has no practical value. It was autogenerated among thousands of other tests and showed really strange results, so I decided to ask. I thought this value fits the double precision range and, according to the C standard, all double-precision arithmetic must be available for

Re: weird optimization in sin+cos, x86 backend

2012-02-03 Thread Robert Dewar
On 2/3/2012 1:12 PM, Konstantin Vladimirov wrote: Hi, I agree that this case has no practical value. It was autogenerated among thousands of other tests and showed really strange results, so I decided to ask. I thought this value fits the double precision range and, according to the C standard, all

Re: weird optimization in sin+cos, x86 backend

2012-02-03 Thread Toon Moene
On 02/03/2012 04:13 PM, Robert Dewar wrote: On 2/3/2012 10:01 AM, Michael Matz wrote: No normal math library supports such an extreme range, even basic identities (like cos^2+sin^2=1) aren't retained with such inputs. I agree: the program is complete nonsense. It would be useful to know wha

Re: weird optimization in sin+cos, x86 backend

2012-02-03 Thread Vincent Lefevre
On 2012-02-03 11:35:39 -0500, Robert Dewar wrote: > On 2/3/2012 10:55 AM, Vincent Lefevre wrote: > >On 2012-02-03 10:33:58 -0500, Robert Dewar wrote: > >>What is the basis for that claim? to me it seems useless to expect > >>anything from such absurd arguments. Can you cite a requirement to > >>the

Re: weird optimization in sin+cos, x86 backend

2012-02-03 Thread Vincent Lefevre
> > > > > > That is about 1.0e22, not the obscene 4.47460300787e+182 of the > > > original poster. > > > > But 1.0e22 cannot be handled correctly. > > I'm not sure what you're getting at. Yes, 1e22 isn't handled correctly, > but this

Re: weird optimization in sin+cos, x86 backend

2012-02-03 Thread Robert Dewar
On 2/3/2012 4:32 PM, Vincent Lefevre wrote: Yes, I do! The floating-point representation of this number This fact is not even necessarily correct because you don't know the intent of the programmer. In the program, double a = 4.47460300787e+182; could mean two things: 1. A number whic

Re: weird optimization in sin+cos, x86 backend

2012-02-03 Thread James Courtier-Dutton
On 3 February 2012 16:24, Vincent Lefevre wrote: > On 2012-02-03 16:57:19 +0100, Michael Matz wrote: >> > And it may be important that some identities (like cos^2+sin^2=1) be >> > preserved. >> >> Well, you're not going to get this without much more work in sin/cos. > > If you use the glibc sin()

Re: weird optimization in sin+cos, x86 backend

2012-02-03 Thread James Courtier-Dutton
On 3 February 2012 18:12, Konstantin Vladimirov wrote: > Hi, > > I agree that this case has no practical value. It was autogenerated > among thousands of other tests and showed really strange results, so > I decided to ask. I thought this value fits the double precision range > and, according to

Re: weird optimization in sin+cos, x86 backend

2012-02-03 Thread Vincent Lefevre
On 2012-02-03 17:40:05 +0100, Dominique Dhumieres wrote: > While I fail to see how the "correct value" of > cos(4.47460300787e+182)+sin(4.47460300787e+182) > can be defined in the 'double' world, cos^2(x)+sin^2(x)=1 and > sin(2*x)=2*sin(x)*cos(x) seems to be verified (at least for this value) > e

Re: weird optimization in sin+cos, x86 backend

2012-02-03 Thread Vincent Lefevre
On 2012-02-03 16:51:22 -0500, Robert Dewar wrote: > All machines that implement IEEE arithmetic :-) As we know only too well > from the universe of machines on which we implement GNAT, this is not > all machines :-) But I think that machines with no IEEE support will tend to disappear (and already

Re: weird optimization in sin+cos, x86 backend

2012-02-03 Thread Vincent Lefevre
On 2012-02-03 22:57:31 +, James Courtier-Dutton wrote: > On 3 February 2012 16:24, Vincent Lefevre wrote: > > But 1.0e22 cannot be handled correctly. > > Of course it can't. > You only have 52 bits of precision in double floating point numbers. Wrong. 53 bits of precision. And 10^22 is the l

Re: weird optimization in sin+cos, x86 backend

2012-02-04 Thread James Courtier-Dutton
On 4 February 2012 00:06, Vincent Lefevre wrote: > On 2012-02-03 17:40:05 +0100, Dominique Dhumieres wrote: >> While I fail to see how the "correct value" of >> cos(4.47460300787e+182)+sin(4.47460300787e+182) >> can be defined in the 'double' world, cos^2(x)+sin^2(x)=1 and >> sin(2*x)=2*sin(x)*cos

Re: weird optimization in sin+cos, x86 backend

2012-02-04 Thread Andreas Schwab
Vincent Lefevre writes: > Wrong. 53 bits of precision. And 10^22 is the last power of 10 > exactly representable in double precision (FYI, this example has > been chosen because of this property). But it is indistinguishable from 10^22+pi. So both -0.8522008497671888 and 0.8522008497671888 are

Re: weird optimization in sin+cos, x86 backend

2012-02-04 Thread Robert Dewar
On 2/4/2012 7:00 AM, Andreas Schwab wrote: Vincent Lefevre writes: Wrong. 53 bits of precision. And 10^22 is the last power of 10 exactly representable in double precision (FYI, this example has been chosen because of this property). But it is indistinguishable from 10^22+pi. So both -0.852

Re: weird optimization in sin+cos, x86 backend

2012-02-04 Thread Andreas Schwab
Robert Dewar writes: > But if you write a literal that can be represented exactly, then it is > perfectly reasonable to expect trig functions to give the proper > result, which is unambiguous in this case. How do you know that the number is exact? Andreas. -- Andreas Schwab, sch...@linux-m68k

Re: weird optimization in sin+cos, x86 backend

2012-02-04 Thread Robert Dewar
On 2/4/2012 9:09 AM, Andreas Schwab wrote: Robert Dewar writes: But if you write a literal that can be represented exactly, then it is perfectly reasonable to expect trig functions to give the proper result, which is unambiguous in this case. How do you know that the number is exact? Sorry

Re: weird optimization in sin+cos, x86 backend

2012-02-04 Thread Andreas Schwab
Robert Dewar writes: > On 2/4/2012 9:09 AM, Andreas Schwab wrote: >> Robert Dewar writes: >> >>> But if you write a literal that can be represented exactly, then it is >>> perfectly reasonable to expect trig functions to give the proper >>> result, which is unambiguous in this case. >> >> How do

Re: weird optimization in sin+cos, x86 backend

2012-02-04 Thread Robert Dewar
On 2/4/2012 9:57 AM, Andreas Schwab wrote: How can the sine function know which of the millions of numbers represented by 0x1.0f0cf064dd591p+73 are meant? Applying the sine to this interval covers the whole result domain of the function. The idea that an IEEE number necessarily represents an

Re: weird optimization in sin+cos, x86 backend

2012-02-04 Thread Dave Korn
On 04/02/2012 10:20, James Courtier-Dutton wrote: >> #include <math.h> >> #include <stdio.h> >> >> int main (void) >> { >> double x, c, s; >> volatile double v; >> >> x = 1.0e22; >> s = sin (x); >> printf ("sin(%.17g) = %.17g\n", x, s); >> >> v = x; >> x = v; >> c = cos (x); >> s = sin (x); >> printf ("s

Re: weird optimization in sin+cos, x86 backend

2012-02-05 Thread James Courtier-Dutton
Hi, I looked at this a bit closer. sin(1.0e22) is outside the +-2^63 range, so FPREM1 is used to bring it inside the range. So, I looked at FPREM1 a bit closer. #include <math.h> #include <stdio.h> int main (void) { long double x, r, m; x = 1.0e22; // x = 5.26300791462049950360708478127784; <- This is what t

Re: weird optimization in sin+cos, x86 backend

2012-02-05 Thread Tim Prince
On 02/05/2012 11:08 AM, James Courtier-Dutton wrote: Hi, I looked at this a bit closer. sin(1.0e22) is outside the +-2^63 range, so FPREM1 is used to bring it inside the range. So, I looked at FPREM1 a bit closer. #include <math.h> #include <stdio.h> int main (void) { long double x, r, m; x = 1.0e22; // x

Re: weird optimization in sin+cos, x86 backend

2012-02-05 Thread Vincent Lefevre
On 2012-02-04 13:00:45 +0100, Andreas Schwab wrote: > But it is indistinguishable from 10^22+pi. So both -0.8522008497671888 > and 0.8522008497671888 are correct results, or anything inbetween. No, 10^22 and 10^22+pi are different numbers. You are not following the IEEE 754 model, where each inpu

Re: weird optimization in sin+cos, x86 backend

2012-02-05 Thread Geert Bosch
On Feb 5, 2012, at 11:08, James Courtier-Dutton wrote: > But, r should be > 5.26300791462049950360708478127784... or > -1.020177392559086973318201985281... > according to wolfram alpha and most arbitrary maths libs I tried. > > I need to do a bit more digging, but this might point to a bug in th

Re: weird optimization in sin+cos, x86 backend

2012-02-05 Thread Dave Korn
On 05/02/2012 19:01, Vincent Lefevre wrote: > On 2012-02-04 13:00:45 +0100, Andreas Schwab wrote: >> But it is indistinguishable from 10^22+pi. So both -0.8522008497671888 >> and 0.8522008497671888 are correct results, or anything inbetween. > > No, 10^22 and 10^22+pi are different numbers. O

Re: weird optimization in sin+cos, x86 backend

2012-02-05 Thread Vincent Lefevre
On 2012-02-05 20:52:39 +, Dave Korn wrote: > On 05/02/2012 19:01, Vincent Lefevre wrote: > > On 2012-02-04 13:00:45 +0100, Andreas Schwab wrote: > >> But it is indistinguishable from 10^22+pi. So both -0.8522008497671888 > >> and 0.8522008497671888 are correct results, or anything inbetween. >

Re: weird optimization in sin+cos, x86 backend

2012-02-06 Thread Richard Guenther
On Sat, Feb 4, 2012 at 11:20 AM, James Courtier-Dutton wrote: > On 4 February 2012 00:06, Vincent Lefevre wrote: >> On 2012-02-03 17:40:05 +0100, Dominique Dhumieres wrote: >>> While I fail to see how the "correct value" of >>> cos(4.47460300787e+182)+sin(4.47460300787e+182) >>> can be defined in

Re: weird optimization in sin+cos, x86 backend

2012-02-06 Thread Vincent Lefevre
On 2012-02-06 12:54:09 +0100, Richard Guenther wrote: > Note that you are comparing a constant folded sin() result against > sincos() (or sin() and cos()). Use > > #include <math.h> > #include <stdio.h> > > int main (void) > { > double x, c, s; > volatile double v; > > x = 1.0e22; > v = x; > x = v; >

Re: weird optimization in sin+cos, x86 backend

2012-02-06 Thread Richard Guenther
On Mon, Feb 6, 2012 at 1:29 PM, Vincent Lefevre wrote: > On 2012-02-06 12:54:09 +0100, Richard Guenther wrote: >> Note that you are comparing a constant folded sin() result against >> sincos() (or sin() and cos()). Use >> >> #include <math.h> >> #include <stdio.h> >> >> int main (void) >> { >>   double x, c, s; >>

Re: weird optimization in sin+cos, x86 backend

2012-02-09 Thread James Courtier-Dutton
>> I'm not sure what you're getting at. Yes, 1e22 isn't handled correctly, >> but this thread was about 4.47460300787e+182 until you changed topics. > > No, the topic is: "weird optimization in sin+cos, x86 backend". > But actually, as said by Richard Guenther, t

Re: weird optimization in sin+cos, x86 backend

2012-02-09 Thread Andrew Haley
On 02/09/2012 10:20 AM, James Courtier-Dutton wrote: > From what I can see, on x86_64, the hardware fsin(x) is more accurate > than the hardware fsincos(x). > As you gradually increase the size of X from 0 to 10e22, fsincos(x) > diverges from the correct accurate value quicker than fsin(x) does. >

Re: weird optimization in sin+cos, x86 backend

2012-02-09 Thread Richard Guenther
On Thu, Feb 9, 2012 at 11:35 AM, Andrew Haley wrote: > On 02/09/2012 10:20 AM, James Courtier-Dutton wrote: >> From what I can see, on x86_64, the hardware fsin(x) is more accurate >> than the hardware fsincos(x). >> As you gradually increase the size of X from 0 to 10e22, fsincos(x) >> diverges f

Re: weird optimization in sin+cos, x86 backend

2012-02-09 Thread Tim Prince
On 2/9/2012 5:55 AM, Richard Guenther wrote: On Thu, Feb 9, 2012 at 11:35 AM, Andrew Haley wrote: On 02/09/2012 10:20 AM, James Courtier-Dutton wrote: From what I can see, on x86_64, the hardware fsin(x) is more accurate than the hardware fsincos(x). As you gradually increase the size of X fr

Re: weird optimization in sin+cos, x86 backend

2012-02-09 Thread Andrew Haley
On 02/09/2012 01:38 PM, Tim Prince wrote: > x87 built-ins should be a fair compromise between speed, code size, and > accuracy, for long double, on most CPUs. As Richard says, it's > certainly possible to do better in the context of SSE, but gcc doesn't > know anything about the quality of math

Re: weird optimization in sin+cos, x86 backend

2012-02-09 Thread James Courtier-Dutton
2012/2/9 Andrew Haley : > On 02/09/2012 01:38 PM, Tim Prince wrote: >> x87 built-ins should be a fair compromise between speed, code size, and >> accuracy, for long double, on most CPUs.  As Richard says, it's >> certainly possible to do better in the context of SSE, but gcc doesn't >> know anythin

Re: weird optimization in sin+cos, x86 backend

2012-02-09 Thread Andrew Haley
On 02/09/2012 02:51 PM, James Courtier-Dutton wrote: > 2012/2/9 Andrew Haley : >> On 02/09/2012 01:38 PM, Tim Prince wrote: >>> x87 built-ins should be a fair compromise between speed, code size, and >>> accuracy, for long double, on most CPUs. As Richard says, it's >>> certainly possible to do be

Re: weird optimization in sin+cos, x86 backend

2012-02-09 Thread Geert Bosch
On Feb 9, 2012, at 08:46, Andrew Haley wrote: > On 02/09/2012 01:38 PM, Tim Prince wrote: >> x87 built-ins should be a fair compromise between speed, code size, and >> accuracy, for long double, on most CPUs. As Richard says, it's >> certainly possible to do better in the context of SSE, but gcc

Re: weird optimization in sin+cos, x86 backend

2012-02-09 Thread Richard Guenther
On Thu, Feb 9, 2012 at 4:20 PM, Geert Bosch wrote: > > On Feb 9, 2012, at 08:46, Andrew Haley wrote: >> n 02/09/2012 01:38 PM, Tim Prince wrote: >>> x87 built-ins should be a fair compromise between speed, code size, and >>> accuracy, for long double, on most CPUs.  As Richard says, it's >>> certa

Re: weird optimization in sin+cos, x86 backend

2012-02-09 Thread Andrew Haley
On 02/09/2012 03:28 PM, Richard Guenther wrote: > So - do you have an idea what routines we can start off with to get > a full C99 set of routines for float, double and long double? The last > time I was exploring the idea again I was looking at the BSD libm. I'd start with INRIA's crlibm. Andre

Re: weird optimization in sin+cos, x86 backend

2012-02-09 Thread James Courtier-Dutton
On 9 February 2012 14:51, James Courtier-Dutton wrote: > 2012/2/9 Andrew Haley : >> On 02/09/2012 01:38 PM, Tim Prince wrote: >>> x87 built-ins should be a fair compromise between speed, code size, and >>> accuracy, for long double, on most CPUs.  As Richard says, it's >>> certainly possible to do

Re: weird optimization in sin+cos, x86 backend

2012-02-09 Thread Michael Matz
Hi, On Thu, 9 Feb 2012, Andrew Haley wrote: > On 02/09/2012 03:28 PM, Richard Guenther wrote: > > So - do you have an idea what routines we can start off with to get > > a full C99 set of routines for float, double and long double? The last > > time I was exploring the idea again I was looking a

Re: weird optimization in sin+cos, x86 backend

2012-02-09 Thread Andrew Haley
On 02/09/2012 03:56 PM, Michael Matz wrote: > Hi, > > On Thu, 9 Feb 2012, Andrew Haley wrote: > >> On 02/09/2012 03:28 PM, Richard Guenther wrote: >>> So - do you have an idea what routines we can start off with to get >>> a full C99 set of routines for float, double and long double? The last >>

Re: weird optimization in sin+cos, x86 backend

2012-02-09 Thread Andrew Haley
On 02/09/2012 03:55 PM, James Courtier-Dutton wrote: > Results for x86_64 > gcc -g -O0 -c -o sincos1.o sincos1.c > gcc -static -g -o sincos1 sincos1.o -lm > > ./sincos1 > sin = -8.52200849767188795e-01 (uses xmm register instructions) > sinl = 0.46261304076460176 (uses fprem and fsin)

Re: weird optimization in sin+cos, x86 backend

2012-02-09 Thread Richard Guenther
On Thu, Feb 9, 2012 at 4:57 PM, Andrew Haley wrote: > On 02/09/2012 03:56 PM, Michael Matz wrote: >> Hi, >> >> On Thu, 9 Feb 2012, Andrew Haley wrote: >> >>> On 02/09/2012 03:28 PM, Richard Guenther wrote: So - do you have an idea what routines we can start off with to get a full C99 set

Re: weird optimization in sin+cos, x86 backend

2012-02-09 Thread Andrew Haley
On 02/09/2012 03:59 PM, Richard Guenther wrote: > On Thu, Feb 9, 2012 at 4:57 PM, Andrew Haley wrote: >> On 02/09/2012 03:56 PM, Michael Matz wrote: >>> Hi, >>> >>> On Thu, 9 Feb 2012, Andrew Haley wrote: >>> On 02/09/2012 03:28 PM, Richard Guenther wrote: > So - do you have an idea what

Re: weird optimization in sin+cos, x86 backend

2012-02-09 Thread Michael Matz
Hi, On Thu, 9 Feb 2012, James Courtier-Dutton wrote: > Results when compiled for 32bit x86. > gcc -m32 -g -O0 -c -o sincos1.o sincos1.c > gcc -m32 -static -g -o sincos1 sincos1.o -lm > > ./sincos1 > sin = 4.62613040764601746e-01 > sinl = 0.46261304076460176 > sincos = 4.62613040764601746e-01 > s

Re: weird optimization in sin+cos, x86 backend

2012-02-09 Thread Michael Matz
Hi, On Thu, 9 Feb 2012, Andrew Haley wrote: > >>> So - do you have an idea what routines we can start off with to get > >>> a full C99 set of routines for float, double and long double? The > >>> last time I was exploring the idea again I was looking at the BSD > >>> libm. > >> > >> I'd start

Re: weird optimization in sin+cos, x86 backend

2012-02-09 Thread Joseph S. Myers
On Thu, 9 Feb 2012, Richard Guenther wrote: > > Given the fact that GCC already needs to know pretty much everything > > about these functions for optimizations and constant folding, and is > > in the best situation to choose specific implementations (-ffast-math > > or not, -frounding-math or not

Re: weird optimization in sin+cos, x86 backend

2012-02-09 Thread Joseph S. Myers
On Thu, 9 Feb 2012, Andrew Haley wrote: > Okay, but the crlibm algorithms could be extended to long > doubles and, presumably, floats. Where's Vincent Lefevre > when you need him? :-) The crlibm approach, involving exhaustive searches for worst cases for directed rounding, could as I understa

Re: weird optimization in sin+cos, x86 backend

2012-02-09 Thread Geert Bosch
On Feb 9, 2012, at 10:28, Richard Guenther wrote: > Yes, definitely! OTOH last time I added the toplevel libgcc-math directory > and populated it with sources from glibc RMS objected violently and I had > to remove it again. So we at least need to find a different source of > math routines to st

Re: weird optimization in sin+cos, x86 backend

2012-02-09 Thread Andrew Haley
On 02/09/2012 04:53 PM, Joseph S. Myers wrote: > My view is that we should have a "GNU libm" project whose purpose is not > to install a library directly but to provide functions for use in other > projects (much like gnulib, but the functions could presume that they were > being built with rece

Re: weird optimization in sin+cos, x86 backend

2012-02-09 Thread Joseph S. Myers
On Thu, 9 Feb 2012, Geert Bosch wrote: > While I think it would be great if there were a suitable > GNU libm project that we could directly use, this seems to only > make sense if this could be based on the current glibc math > library. As far as I understand, it is unlikely that we No, that's no

Re: weird optimization in sin+cos, x86 backend

2012-02-09 Thread Joseph S. Myers
On Thu, 9 Feb 2012, Andrew Haley wrote: > On 02/09/2012 04:53 PM, Joseph S. Myers wrote: > > My view is that we should have a "GNU libm" project whose purpose is not > > to install a library directly but to provide functions for use in other > > projects (much like gnulib, but the functions coul

Re: weird optimization in sin+cos, x86 backend

2012-02-09 Thread Andrew Haley
On 02/09/2012 06:00 PM, Joseph S. Myers wrote: > On Thu, 9 Feb 2012, Andrew Haley wrote: > >> On 02/09/2012 04:53 PM, Joseph S. Myers wrote: >>> My view is that we should have a "GNU libm" project whose purpose is not >>> to install a library directly but to provide functions for use in other >>

Re: weird optimization in sin+cos, x86 backend

2012-02-09 Thread Geert Bosch
On Feb 9, 2012, at 12:55, Joseph S. Myers wrote: > No, that's not the case. Rather, the point would be that both GCC's > library and glibc's end up being based on the new GNU project (which might > take some code from glibc and some from elsewhere - and quite possibly > write some from scratc

Re: weird optimization in sin+cos, x86 backend

2012-02-09 Thread Joseph S. Myers
On Thu, 9 Feb 2012, Andrew Haley wrote: > > No, the point of the separate project would be to be used by both glibc > > and GCC (and possibly other GNU projects such as GSL) - because > > cooperation among the various projects wanting such functions is the right > > way to do things. > > Well,

Re: weird optimization in sin+cos, x86 backend

2012-02-09 Thread Joseph S. Myers
On Thu, 9 Feb 2012, Geert Bosch wrote: > I don't agree having such a libm is the ultimate goal. It could be > a first step along the way, addressing correctness issues. This Indeed, I think having it as a first step makes sense - with subsequent development done in that context. > would be grea

Re: weird optimization in sin+cos, x86 backend

2012-02-10 Thread Richard Guenther
On Thu, Feb 9, 2012 at 8:16 PM, Geert Bosch wrote: > > On Feb 9, 2012, at 12:55, Joseph S. Myers wrote: > >> No, that's not the case.  Rather, the point would be that both GCC's >> library and glibc's end up being based on the new GNU project (which might >> take some code from glibc and some from

Re: weird optimization in sin+cos, x86 backend

2012-02-10 Thread Andrew Haley
On 02/10/2012 10:07 AM, Richard Guenther wrote: > > The issue with libm in glibc here is that Drepper absolutely does > not want new ABIs in libm - he believes that for example vectorized > routines do not belong there (nor the SSE calling-convention variants > for i686 I tried to push once). Tha

Re: weird optimization in sin+cos, x86 backend

2012-02-10 Thread James Courtier-Dutton
On 10 February 2012 10:42, Andrew Haley wrote: > On 02/10/2012 10:07 AM, Richard Guenther wrote: >> >> The issue with libm in glibc here is that Drepper absolutely does >> not want new ABIs in libm - he believes that for example vectorized >> routines do not belong there (nor the SSE calling-conve

Re: weird optimization in sin+cos, x86 backend

2012-02-10 Thread Joseph S. Myers
On Fri, 10 Feb 2012, Richard Guenther wrote: > I don't buy the argument that inlining math routines (apart from those > we already handle) would improve performance. What will improve > performance is to have separate entry points to the routines > to skip errno handling, NaN/Inf checking or roun

Re: weird optimization in sin+cos, x86 backend

2012-02-10 Thread Andrew Haley
On 02/10/2012 01:30 PM, James Courtier-Dutton wrote: > On 10 February 2012 10:42, Andrew Haley wrote: > > I think a starting point would be at least documenting correctly the > accuracy of the current libm, because what is currently in the > documents is obviously wrong. > It certainly does not d

Re: weird optimization in sin+cos, x86 backend

2012-02-10 Thread James Courtier-Dutton
On 10 February 2012 14:05, Andrew Haley wrote: > On 02/10/2012 01:30 PM, James Courtier-Dutton wrote: >> On 10 February 2012 10:42, Andrew Haley wrote: >> >> I think a starting point would be at least documenting correctly the >> accuracy of the current libm, because what is currently in the >> d

Re: weird optimization in sin+cos, x86 backend

2012-02-10 Thread Andrew Haley
On 02/10/2012 02:24 PM, James Courtier-Dutton wrote: > On 10 February 2012 14:05, Andrew Haley wrote: >> On 02/10/2012 01:30 PM, James Courtier-Dutton wrote: >>> On 10 February 2012 10:42, Andrew Haley wrote: >>> >>> I think a starting point would be at least documenting correctly the >>> accurac

Re: weird optimization in sin+cos, x86 backend

2012-02-10 Thread James Courtier-Dutton
On 10 February 2012 14:36, Andrew Haley wrote: > On 02/10/2012 02:24 PM, James Courtier-Dutton wrote: >> On 10 February 2012 14:05, Andrew Haley wrote: >>> On 02/10/2012 01:30 PM, James Courtier-Dutton wrote: On 10 February 2012 10:42, Andrew Haley wrote: I think a starting point

Re: weird optimization in sin+cos, x86 backend

2012-02-10 Thread Geert Bosch
On Feb 9, 2012, at 15:33, Joseph S. Myers wrote: > For a few, yes, inline support (such as already exists for some functions > on some targets) makes sense. But for some more complicated cases it > seems plausible that LTO information in a library might be an appropriate > way of inlining whil

Re: weird optimization in sin+cos, x86 backend

2012-02-10 Thread Geert Bosch
On Feb 10, 2012, at 05:07, Richard Guenther wrote: > On Thu, Feb 9, 2012 at 8:16 PM, Geert Bosch wrote: >> I don't agree having such a libm is the ultimate goal. It could be >> a first step along the way, addressing correctness issues. This >> would be great progress, but does not remove the nee

Re: weird optimization in sin+cos, x86 backend

2012-02-10 Thread Joseph S. Myers
On Fri, 10 Feb 2012, Geert Bosch wrote: > On Feb 9, 2012, at 15:33, Joseph S. Myers wrote: > > For a few, yes, inline support (such as already exists for some functions > > on some targets) makes sense. But for some more complicated cases it > > seems plausible that LTO information in a library

Re: weird optimization in sin+cos, x86 backend

2012-02-10 Thread Paweł Sikora
On Friday 10 of February 2012 13:30:25 James Courtier-Dutton wrote: > On 10 February 2012 10:42, Andrew Haley wrote: > > On 02/10/2012 10:07 AM, Richard Guenther wrote: > >> > >> The issue with libm in glibc here is that Drepper absolutely does > >> not want new ABIs in libm - he believes that for

Re: weird optimization in sin+cos, x86 backend

2012-02-10 Thread Joseph S. Myers
On Fri, 10 Feb 2012, Geert Bosch wrote: > Right. I even understand where he is coming from. Adding new interfaces > is indeed a big deal as they'll pretty much have to stay around forever. And: even if the interface is a known, public, standard, stable interface, glibc may still not be the right

Re: weird optimization in sin+cos, x86 backend

2012-02-10 Thread Andrew Haley
On 02/10/2012 05:31 PM, Paweł Sikora wrote: > it would be also nice to see functions for reducing argument range in public > api. > finally the end-user can use e.g. sin(reduce(x)) to get the best precision > with some declared cpu overhead. Hmm. I'm not sure this is such a terrific idea: each f

Re: weird optimization in sin+cos, x86 backend

2012-02-10 Thread Paweł Sikora
On Friday 10 of February 2012 17:41:49 Andrew Haley wrote: > On 02/10/2012 05:31 PM, Paweł Sikora wrote: > > it would be also nice to see functions for reducing argument range in > > public api. > > finally the end-user can use e.g. sin(reduce(x)) to get the best precision > > with some declared c

Re: weird optimization in sin+cos, x86 backend

2012-02-10 Thread Jakub Jelinek
On Thu, Feb 09, 2012 at 04:59:55PM +0100, Richard Guenther wrote: > On Thu, Feb 9, 2012 at 4:57 PM, Andrew Haley wrote: > > On 02/09/2012 03:56 PM, Michael Matz wrote: > >> On Thu, 9 Feb 2012, Andrew Haley wrote: > >> > >>> On 02/09/2012 03:28 PM, Richard Guenther wrote: > So - do you have an

Fwd: weird optimization in sin+cos, x86 backend

2012-02-12 Thread Janne Blomqvist
Richard Guenther wrote: > > I don't buy the argument that inlining math routines (apart from those we > already handle) would improve performance. What will improve performance is > to have separate entry points to the routines to skip errno handling, NaN/Inf > checking or rounding mode selecti

Re: weird optimization in sin+cos, x86 backend

2012-02-13 Thread Richard Guenther
On Fri, Feb 10, 2012 at 5:25 PM, Geert Bosch wrote: > > On Feb 10, 2012, at 05:07, Richard Guenther wrote: > >> On Thu, Feb 9, 2012 at 8:16 PM, Geert Bosch wrote: >>> I don't agree having such a libm is the ultimate goal. It could be >>> a first step along the way, addressing correctness issues.

Re: weird optimization in sin+cos, x86 backend

2012-02-13 Thread Vincent Lefevre
On 2012-02-09 16:01:48 +0000, Andrew Haley wrote: > On 02/09/2012 03:59 PM, Richard Guenther wrote: > > On Thu, Feb 9, 2012 at 4:57 PM, Andrew Haley wrote: > >> On 02/09/2012 03:56 PM, Michael Matz wrote: > >>> Hi, > >>> > >>> On Thu, 9 Feb 2012, Andrew Haley wrote: > >>> > On 02/09/2012 03:2

Re: weird optimization in sin+cos, x86 backend

2012-02-13 Thread Andrew Haley
On 02/13/2012 01:11 PM, Vincent Lefevre wrote: > On 2012-02-09 16:01:48 +0000, Andrew Haley wrote: >> On 02/09/2012 03:59 PM, Richard Guenther wrote: >>> Maybe. Nothing would prevent us from composing from multiple sources >>> of course. crlibm also only provides double precision routines. >> >>

Re: weird optimization in sin+cos, x86 backend

2012-02-13 Thread Richard Guenther
On Mon, Feb 13, 2012 at 2:32 PM, Andrew Haley wrote: > On 02/13/2012 01:11 PM, Vincent Lefevre wrote: >> On 2012-02-09 16:01:48 +0000, Andrew Haley wrote: >>> On 02/09/2012 03:59 PM, Richard Guenther wrote: > Maybe.  Nothing would prevent us from composing from multiple sources of course

Re: weird optimization in sin+cos, x86 backend

2012-02-13 Thread Vincent Lefevre
On 2012-02-09 17:18:25 +0000, Joseph S. Myers wrote: > The crlibm approach, involving exhaustive searches for worst cases for > directed rounding, could as I understand it work for functions of one > float, double or 80-bit long double argument, but I think the exhaustive > searches are still in

Re: weird optimization in sin+cos, x86 backend

2012-02-13 Thread Vincent Lefevre
On 2012-02-09 12:36:01 -0500, Geert Bosch wrote: > I think it would make sense to have a check list of properties, and > use configure-based tests to categorize implementations. These tests > would be added as we go along. > > Criteria: > > [ ] Conforms to C99 for exceptional values > (acc

Re: weird optimization in sin+cos, x86 backend

2012-02-13 Thread Joseph S. Myers
On Mon, 13 Feb 2012, Vincent Lefevre wrote: > Also note that CRlibm supports the 4 rounding modes, while the > IBM Accurate Mathematical Library currently used in glibc behaves > erratically (e.g. can even crash) on directed rounding modes. FWIW the proposed ISO C bindings to IEEE 754-2008 (still

Re: weird optimization in sin+cos, x86 backend

2012-02-13 Thread Jakub Jelinek
On Mon, Feb 13, 2012 at 02:48:05PM +0100, Richard Guenther wrote: > > I think there is some consensus that crlibm is a great place to start > > for correctly-rounded elementary functions.  I think we'd need, or at > > least greatly appreciate, some help from your team. > > I agree. If crlibm can

Re: weird optimization in sin+cos, x86 backend

2012-02-13 Thread Richard Guenther
On Mon, Feb 13, 2012 at 3:32 PM, Jakub Jelinek wrote: > On Mon, Feb 13, 2012 at 02:48:05PM +0100, Richard Guenther wrote: >> > I think there is some consensus that crlibm is a great place to start >> > for correctly-rounded elementary functions.  I think we'd need, or at >> > least greatly appreci

Re: weird optimization in sin+cos, x86 backend

2012-02-13 Thread Vincent Lefevre
On 2012-02-09 15:49:37 +0000, Andrew Haley wrote: > I'd start with INRIA's crlibm. One point I'd like to correct. GNU MPFR has mainly (> 95%) been developed by researchers and engineers paid by INRIA. But this is not the case of CRlibm. I don't know its copyright status (apparently, mainly ENS Lyon,


Re: weird optimization in sin+cos, x86 backend

2012-02-13 Thread Vincent Lefevre
On 2012-02-10 17:41:49 +0000, Andrew Haley wrote: > On 02/10/2012 05:31 PM, Paweł Sikora wrote: > > it would be also nice to see functions for reducing argument range in > > public api. > > finally the end-user can use e.g. sin(reduce(x)) to get the best precision > > with some declared cpu overhe

Re: weird optimization in sin+cos, x86 backend

2012-02-13 Thread Joseph S. Myers
On Mon, 13 Feb 2012, Jakub Jelinek wrote: > Furthermore, crlibm_init changes the i?86/x86_64 rounding mode globally, > that is not appropriate for a general purpose math library, there you either > need to cope with extended precision, or rely on SSE/SSE2 for float/double, > or change the rounding

Re: weird optimization in sin+cos, x86 backend

2012-02-13 Thread Geert Bosch
> On 2012-02-09 12:36:01 -0500, Geert Bosch wrote: >> I think it would make sense to have a check list of properties, and >> use configure-based tests to categorize implementations. These tests >> would be added as we go along. >> >> Criteria: >> >> [ ] Conforms to C99 for exceptional values >

Re: weird optimization in sin+cos, x86 backend

2012-02-14 Thread Andrew Haley
On 02/13/2012 08:00 PM, Geert Bosch wrote: > GNU Linux is quite good, but has issues with the "pow" function for > large exponents, even in current versions Really? Even on 64-bit? I know this is a problem for the 32-bit legacy architecture, but I thought the 64-bit pow() was OK. Andrew.
