Re: i387 control word register definition is missing

2005-05-26 Thread Uros Bizjak
Quoting Jan Hubicka [EMAIL PROTECTED]:

 If you make FPCTR/MXCSR real registers, you will need to add a use to all
 the arithmetic and move patterns, which would consume quite some memory and
 confuse the optimizers.  I think you can do better by simply using volatile
 unspecs inserted by the LCM pass (this would limit scheduling, but I don't
 think it is that big a deal).

  Ouch... I wrongly assumed that rounding bits affect only (int)<-(float)
patterns - thanks for clearing this up for me! (Perhaps adding a "nearest"
i387_cw attribute to arithmetic/move patterns could be used to switch back to
default rounding?)

  Unfortunately, in the first testcase, fldcw is not moved out of the loop,
  because fix_trunc<mode>_i387_2 is split after the gcse-after-reload pass
  (Is this intentional for the gcse-after-reload pass?)
 
 It is intentional for the reload pass.  I guess gcse might be run after
 splitting, but I'm not sure what the interferences are.

  I have added a split_all_insns call before gcse_after_reload_main in
passes.c. To my surprise, it didn't break anything, but it also didn't get
fldcw out of the loop.

Uros.


Re: Ada Status in mainline

2005-05-26 Thread Andreas Jaeger
Diego Novillo [EMAIL PROTECTED] writes:

 On Wed, May 25, 2005 at 03:37:29PM -0600, Jeffrey A Law wrote:

 So, if I wanted to be able to bootstrap Ada, what do I need
 to do?  Disable VRP?

 Applying the patches in the PRs I mentioned.  If that doesn't
 work, try with VRP disabled.

Does not work for me on powerpc64-linux-gnu, the compiler fails to
build with:

/aj-cvs/gcc/gcc/ada/atree.adb: In function Atree._Elabb:
/aj-cvs/gcc/gcc/ada/atree.adb:51: error: invariant not recomputed when 
ADDR_EXPR changed
C.3356D.19258;

/aj-cvs/gcc/gcc/ada/atree.adb:51: error: invariant not recomputed when 
ADDR_EXPR changed
C.3357D.19259;



Andreas
-- 
 Andreas Jaeger, [EMAIL PROTECTED], http://www.suse.de/~aj
  SUSE Linux Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: Compiling GCC with g++: a report

2005-05-26 Thread Gabriel Dos Reis
Kaveh R. Ghazi [EMAIL PROTECTED] writes:

|Now we have e.g. XNEW* and all we need is a new -W* flag to catch
|things like using C++ keywords and it should be fairly automatic to
|keep incompatibilities out of the sources.
|   
|   Why not this?
|   
|   #ifndef __cplusplus
|   #pragma GCC poison class template new . . .
|   #endif
| 
| That's limited.  A new -W flag could catch not only this, but also
| other problems like naked void* -> FOO* conversions.  E.g. IIRC, the
| -Wtraditional flag eventually caught over a dozen different problems.
| Over time this new warning flag for c/c++ intersection could be
| similarly refined.
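
For reference, a minimal sketch of the poison approach quoted above; the
struct and member names are invented for illustration. Compiled as C, the
member declaration is rejected because the identifier has been poisoned:

#ifndef __cplusplus
#pragma GCC poison class template
#endif

struct node
{
    int class;   /* a valid identifier in C, but poisoned: gcc errors out here */
};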

This is now

 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21759

-- Gaby


Re: GCC and Floating-Point

2005-05-26 Thread Vincent Lefevre
On 2005-05-25 19:27:21 +0200, Allan Sandfeld Jensen wrote:
 Yes. I still don't understand why gcc doesn't do -ffast-math by
 default like all other compilers.

No! And I really don't think that other compilers do that.
It would be very bad, would not conform to the C standard[*]
and would make lots of code fail.

[*] See for instance:

   5.1.2.3  Program execution
[...]
   [#14] EXAMPLE 5 Rearrangement for floating-point expressions
   is  often  restricted because of limitations in precision as
   well as range.  The implementation  cannot  generally  apply
   the   mathematical   associative   rules   for  addition  or
   multiplication,  nor  the  distributive  rule,  because   of
   roundoff   error,  even  in  the  absence  of  overflow  and
   underflow.   Likewise,  implementations   cannot   generally
   replace decimal constants in order to rearrange expressions.
   In  the  following  fragment,  rearrangements  suggested  by
   mathematical rules for real numbers are often not valid (see
   F.8).

   double x, y, z;
   /* ... */
   x = (x * y) * z;  // not equivalent to x *= y * z;
   z = (x - y) + y ; // not equivalent to z = x;
   z = x + x * y;// not equivalent to z = x * (1.0 + y);
   y = x / 5.0;  // not equivalent to y = x * 0.2;
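
As a concrete, compilable illustration of two of those lines (the values
below are arbitrary, chosen only so the differences show up with IEEE
doubles):

#include <stdio.h>

int main(void)
{
    double x = 1.0, y = 1.0e100, z;

    z = (x - y) + y;                             /* not equivalent to z = x */
    printf("(x - y) + y = %g   x = %g\n", z, x); /* prints 0 and 1 */

    double a = 1.0e308, b = 1.0e10, c = 1.0e-10;
    printf("(a * b) * c = %g   a * (b * c) = %g\n",
           (a * b) * c, a * (b * c));            /* inf versus 1e+308 */
    return 0;
}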

 The people who need perfect standard behavior are a lot fewer than
 all the packagers who don't understand which optimization flags
 gcc should _always_ be called with.

Standard should be the default.

(Is this a troll or what?)

-- 
Vincent Lefèvre [EMAIL PROTECTED] - Web: http://www.vinc17.org/
100% accessible validated (X)HTML - Blog: http://www.vinc17.org/blog/
Work: CR INRIA - computer arithmetic / SPACES project at LORIA


Re: GCC and Floating-Point (A proposal)

2005-05-26 Thread Scott Robert Ladd
Allan Sandfeld Jensen wrote:
Yes. I still don't understand why gcc doesn't do -ffast-math by
default like all other compilers.

Vincent Lefevre wrote:
 No! And I really don't think that other compilers do that.
 It would be very bad, would not conform to the C standard[*]
 and would make lots of code fail.

Perhaps what needs to be changed is the definition of -ffast-math
itself. Some people (myself included) view it from the standpoint of
using the full capabilities of our processors' hardware intrinsics;
however, -ffast-math *also* implies the rearrangement of code that
violates Standard behavior. Thus it does two things that perhaps should
not be combined.

To be more pointed, it is -funsafe-math-optimizations (implied by
-ffast-math) that is in need of adjustment.

May I be so bold as to suggest that -funsafe-math-optimizations be
reduced in scope to perform exactly what its name implies:
transformations that may slightly alter the meaning of code. Then move
the use of hardware intrinsics to a new -fhardware-math switch.

Does anyone object if I experiment a bit with this modification? Or am I
completely wrong in my understanding?

..Scott


Re: GCC and Floating-Point (A proposal)

2005-05-26 Thread Richard Guenther
On 5/26/05, Scott Robert Ladd [EMAIL PROTECTED] wrote:
 Allan Sandfeld Jensen wrote:
 Yes. I still don't understand why gcc doesn't do -ffast-math by
 default like all other compilers.
 
 Vincent Lefevre wrote:
  No! And I really don't think that other compilers do that.
  It would be very bad, would not conform to the C standard[*]
  and would make lots of code fail.
 
 Perhaps what needs to be changed is the definition of -ffast-math
 itself. Some people (myself included) view it from the standpoint of
 using the full capabilities of our processors' hardware intrinsics;
 however, -ffast-math *also* implies the rearrangement of code that
 violates Standard behavior. Thus it does two things that perhaps should
 not be combined.
 
 To be more pointed, it is -funsafe-math-optimizations (implied by
 -ffast-math) that is in need of adjustment.
 
 May I be so bold as to suggest that -funsafe-math-optimizations be
 reduced in scope to perform exactly what its name implies:
 transformations that may slightly alter the meaning of code. Then move
 the use of hardware intrinsics to a new -fhardware-math switch.

I think the other options implied by -ffast-math apart from
-funsafe-math-optimizations should (and do?) enable the use of
hardware intrinsics already.  It's only that some of the optimizations
guarded by -funsafe-math-optimizations could be applied in general.
A good start may be to enumerate the transformations done on a
Wiki page and list the flags they are guarded with.

Richard.


Re: GCC and Floating-Point

2005-05-26 Thread Daniel Berlin



On Thu, 26 May 2005, Vincent Lefevre wrote:


On 2005-05-25 19:27:21 +0200, Allan Sandfeld Jensen wrote:

Yes. I still don't understand why gcc doesn't do -ffast-math by
default like all other compilers.


No! And I really don't think that other compilers do that.


Have you looked, or are you just guessing?

I know for a fact that XLC does it at -O3+, and unless I'm
misremembering, icc does it at -O2+.


Both require flags to turn the behavior *off* at those opt levels.

XLC will give you a warning when it sees itself making an optimization 
that may affect precision, saying that if you don't want this to happen, 
to use a flag.




Re: GCC and Floating-Point (A proposal)

2005-05-26 Thread Scott Robert Ladd
Scott Robert Ladd [EMAIL PROTECTED] wrote:
May I be so bold as to suggest that -funsafe-math-optimizations be
reduced in scope to perform exactly what its name implies:
transformations that may slightly alter the meaning of code. Then move
the use of hardware intrinsics to a new -fhardware-math switch.

Richard Guenther wrote:
 I think the other options implied by -ffast-math apart from
 -funsafe-math-optimizations should (and do?) enable the use of
 hardware intrinsics already.  It's only that some of the optimizations
 guarded by -funsafe-math-optimizations could be applied in general.
 A good start may be to enumerate the transformations done on a
 Wiki page and list the flags they are guarded with.

Unless I've missed something obvious, -funsafe-math-optimizations alone
enables most hardware floating-point intrinsics -- on x86_64 and x86, at
least --. For example, consider a simple line of code that takes the
sine of a constant:

x = sin(1.0);

On the Pentium 4, with GCC 4.0, various command lines produced the
following code:

gcc -S -O3 -march=pentium4

movl    $1072693248, 4(%esp)
call    sin
fstpl   4(%esp)

gcc -S -O3 -march=pentium4 -D__NO_MATH_INLINES

movl    $1072693248, 4(%esp)
call    sin
fstpl   4(%esp)

gcc -S -O3 -march=pentium4 -funsafe-math-optimizations

fld1
fsin
fstpl   4(%esp)

gcc -S -O3 -march=pentium4 -funsafe-math-optimizations \
  -D__NO_MATH_INLINES

fld1
fsin
fstpl   4(%esp)

As you can see, it is -funsafe-math-optimizations alone that determines
the use of hardware intrinsics, on the P4 at least.

As a side note, GCC 4.0 on the Opteron produces the same result with all
four command-line variations:

gcc -S -O3 -march=k8
movlpd  .LC2(%rip), %xmm0
call    sin

gcc -S -O3 -march=k8 -D__NO_MATH_INLINES
movlpd  .LC2(%rip), %xmm0
call    sin

gcc -S -O3 -march=k8 -funsafe-math-optimizations
movlpd  .LC2(%rip), %xmm0
call    sin

gcc -S -O3 -march=k8 -funsafe-math-optimizations -D__NO_MATH_INLINES
movlpd  .LC2(%rip), %xmm0
call    sin


..Scott


Re: GCC and Floating-Point (A proposal)

2005-05-26 Thread Paul Brook
On Thursday 26 May 2005 14:25, Scott Robert Ladd wrote:
 Scott Robert Ladd [EMAIL PROTECTED] wrote:
 May I be so bold as to suggest that -funsafe-math-optimizations be
 reduced in scope to perform exactly what its name implies:
 transformations that may slightly alter the meaning of code. Then move
 the use of hardware intrinsics to a new -fhardware-math switch.

 Richard Guenther wrote:
  I think the other options implied by -ffast-math apart from
  -funsafe-math-optimizations should (and do?) enable the use of
  hardware intrinsics already.  It's only that some of the optimizations
  guarded by -funsafe-math-optimizations could be applied in general.
  A good start may be to enumerate the transformations done on a
  Wiki page and list the flags they are guarded with.

 Unless I've missed something obvious, -funsafe-math-optimizations alone
 enables most hardware floating-point intrinsics -- on x86_64 and x86, at
 least --. For example, consider a simple line of code that takes the
 sine of a constant:

I thought the x86 sin/cos intrinsics were unsafe, i.e. they don't give accurate
results in all cases.

Paul


Re: GCC and Floating-Point (A proposal)

2005-05-26 Thread Gabriel Dos Reis
Paul Brook [EMAIL PROTECTED] writes:

| On Thursday 26 May 2005 14:25, Scott Robert Ladd wrote:
|  Scott Robert Ladd [EMAIL PROTECTED] wrote:
|  May I be so bold as to suggest that -funsafe-math-optimizations be
|  reduced in scope to perform exactly what its name implies:
|  transformations that may slightly alter the meaning of code. Then move
|  the use of hardware intrinsics to a new -fhardware-math switch.
| 
|  Richard Guenther wrote:
|   I think the other options implied by -ffast-math apart from
|   -funsafe-math-optimizations should (and do?) enable the use of
|   hardware intrinsics already.  It's only that some of the optimizations
|   guarded by -funsafe-math-optimizations could be applied in general.
|   A good start may be to enumerate the transformations done on a
|   Wiki page and list the flags they are guarded with.
| 
|  Unless I've missed something obvious, -funsafe-math-optimizations alone
|  enables most hardware floating-point intrinsics -- on x86_64 and x86, at
|  least --. For example, consider a simple line of code that takes the
|  sine of a constant:
| 
| I thought the x86 sin/cos intrinsics were unsafe, i.e. they don't
| give accurate results in all cases.

Indeed.

-- Gaby


Re: GCC and Floating-Point (A proposal)

2005-05-26 Thread Scott Robert Ladd
Paul Brook wrote:
 I thought the x86 sin/cos intrinsics were unsafe, i.e. they don't give
 accurate results in all cases.

If memory serves, Intel's fsin (for example) has an error > 1 ulp for
values close to multiples of pi (2pi, for example).

Now, I'm not certain this is true for the K8 and later Pentiums. Looks
like I need to run another round of tests. ;)

..Scott



Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Let's consider the accuracy of sine and cosine. I've run tests as
follows, using a program provided at the end of this message.

On the Opteron, using GCC 4.0.0 release, the command lines produce these
outputs:

-lm -O3 -march=k8 -funsafe-math-optimizations -mfpmath=387

  generates:
  fsincos

  cumulative accuracy:   60.830074998557684 (binary)
 18.311677213055471 (decimal)

-lm -O3 -march=k8 -mfpmath=387

  generates:
  call sin
  call cos

  cumulative accuracy:   49.415037499278846 (binary)
 14.875408524143376 (decimal)

-lm -O3 -march=k8 -funsafe-math-optimizations

  generates:
  call sin
  call cos

  cumulative accuracy:   47.476438043942984 (binary)
 14.291831938509427 (decimal)

-lm -O3 -march=k8

  generates:
  call sin
  call cos

  cumulative accuracy:   47.476438043942984 (binary)
 14.291831938509427 (decimal)

The default for Opteron is -mfpmath=sse; as has been discussed in other
threads, this may not be a good choice. I also note that using
-funsafe-math-optimizations (and thus the combined fsincos instruction)
*increases* accuracy.

On the Pentium4, using the same version of GCC, I get:

-lm -O3 -march=pentium4 -funsafe-math-optimizations

  cumulative accuracy:   63.000 (binary)
 18.964889726830815 (decimal)

-lm -O3 -march=pentium4

  cumulative accuracy:   49.299560281858909 (binary)
 14.840646417884166 (decimal)

-lm -O3 -march=pentium4 -funsafe-math-optimizations -mfpmath=sse

  cumulative accuracy:   47.476438043942984 (binary)
 14.291831938509427 (decimal)

The program used is below. I'm very open to suggestions about this
program, which is a subset of a larger accuracy benchmark I'm writing
(Subtilis).

#include <fenv.h>
#pragma STDC FENV_ACCESS ON
#include <float.h>
#include <math.h>
#include <stdio.h>
#include <stdbool.h>
#include <string.h>

static bool verbose = false;
#define PI 3.14159265358979323846

// Test floating point accuracy
inline double binary_accuracy(double x)
{
return -(log(fabs(x)) / log(2.0));
}

inline double decimal_accuracy(double x)
{
return -(log(fabs(x)) / log(10.0));
}

// accuracy of trigonometric functions
void trigtest()
{
static const double range = PI; // * 2.0;
static const double incr  = PI / 100.0;

if (verbose)
    printf("         x           diff       accuracy\n");

double final = 1.0;
double x;

for (x = -range; x <= range; x += incr)
{
double s1  = sin(x);
double c1  = cos(x);
double one = s1 * s1 + c1 * c1;
double diff = one - 1.0;
final *= one;

double accuracy1 = binary_accuracy(diff);

if (verbose)
printf("%20.15f %14g %20.15f\n", x, diff, accuracy1);
}

final -= 1.0;

printf("\ncumulative accuracy: %20.15f (binary)\n",
   binary_accuracy(final));

printf("                     %20.15f (decimal)\n",
   decimal_accuracy(final));
}

// Entry point
int main(int argc, char ** argv)
{
int i;

// do we have verbose output?
if (argc > 1)
{
for (i = 1; i < argc; ++i)
{
if (!strcmp(argv[i], "-v"))
{
verbose = true;
break;
}
}
}


// run tests
trigtest();

// done
return 0;
}

..Scott


Re: Sine and Cosine Accuracy

2005-05-26 Thread Andrew Haley
Scott Robert Ladd writes:
  
  The program used is below. I'm very open to suggestions about this
  program, which is a subset of a larger accuracy benchmark I'm writing
  (Subtilis).

Try this:

public class trial
{
  static public void main (String[] argv)
  {
System.out.println(Math.sin(Math.pow(2.0, 90.0)));
  }
}

zapata:~ $ gcj trial.java --main=trial -ffast-math -O 
zapata:~ $ ./a.out 
1.2379400392853803E27
zapata:~ $ gcj trial.java --main=trial -ffast-math   
zapata:~ $ ./a.out 
-0.9044312486086016

Andrew.


Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Andrew Haley wrote:
 Try this:
 
 public class trial
 {
   static public void main (String[] argv)
   {
 System.out.println(Math.sin(Math.pow(2.0, 90.0)));
   }
 }
 
 zapata:~ $ gcj trial.java --main=trial -ffast-math -O 
 zapata:~ $ ./a.out 
 1.2379400392853803E27
 zapata:~ $ gcj trial.java --main=trial -ffast-math   
 zapata:~ $ ./a.out 
 -0.9044312486086016

You're comparing apples and oranges, since C (my code) and Java differ
in their definitions and implementations of floating-point.

I don't build gcj these days; however, when I have a moment later, I'll
build the latest GCC mainline from CVS -- with Java -- and see how it
reacts to my Java version of my benchmark. I also have a Fortran 95
version as well, so I guess I might as well try several languages, and
see what we get.

..Scott


Re: Sine and Cosine Accuracy

2005-05-26 Thread Paolo Carlini
Andrew Haley wrote:

 zapata:~ $ gcj trial.java --main=trial -ffast-math -O
  ^^

Ok, maybe those people that are accusing the Free Software philosophy of
being akin to communism are wrong, but it looks like revolutionaries are
lurking around, at least... ;) ;)

Paolo.


Re: Sine and Cosine Accuracy

2005-05-26 Thread Andrew Haley
Scott Robert Ladd writes:
  Andrew Haley wrote:
   Try this:
   
   public class trial
   {
 static public void main (String[] argv)
 {
   System.out.println(Math.sin(Math.pow(2.0, 90.0)));
 }
   }
   
   zapata:~ $ gcj trial.java --main=trial -ffast-math -O 
   zapata:~ $ ./a.out 
   1.2379400392853803E27
   zapata:~ $ gcj trial.java --main=trial -ffast-math   
   zapata:~ $ ./a.out 
   -0.9044312486086016
  
  You're comparing apples and oranges, since C (my code) and Java differ
  in their definitions and implementations of floating-point.

So try it in C.   -ffast-math won't be any better.

#include <stdio.h>
#include <math.h>

int
main (int argc, char **argv)
{
  printf ("%g\n", sin (pow (2.0, 90.0)));
  return 0;
}

Andrew.


Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Richard Henderson wrote:
 On Thu, May 26, 2005 at 10:34:14AM -0400, Scott Robert Ladd wrote:
 
static const double range = PI; // * 2.0;
static const double incr  = PI / 100.0;
 
 
 The trig insns fail with large numbers; an argument
 reduction loop is required with their use.

Yes, but within the defined mathematical ranges for sine and cosine --
[0, 2 * PI) -- the processor intrinsics are quite accurate.

Now, I can see a problem in signal processing or similar applications,
where you're working with continuous values over a large range, but it
seems to me that a simple application of fmod (via FPREM) solves that
problem nicely.
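
For illustration only, a minimal sketch of that kind of reduction in C; the
helper name is invented, and this is not how GCC or the C library actually
implement sin().  Note also that reducing modulo a rounded value of 2*PI is
itself a source of error for huge arguments:

#include <math.h>

static double sin_reduced(double x)
{
    const double two_pi = 6.283185307179586476925286766559;
    double r = fmod(x, two_pi);          /* fmod keeps the sign of x */
    if (r < 0.0)
        r += two_pi;                     /* shift into [0, 2*PI) */
    return sin(r);
}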

I've never quite understood the necessity for performing trig operations
on excessively large values, but perhaps my problem domain hasn't
included such applications.

..Scott


Re: Sine and Cosine Accuracy

2005-05-26 Thread Paul Koning
 Scott == Scott Robert Ladd [EMAIL PROTECTED] writes:

 Scott Richard Henderson wrote:
  On Thu, May 26, 2005 at 10:34:14AM -0400, Scott Robert Ladd wrote:
  
  static const double range = PI; // * 2.0; static const double
  incr = PI / 100.0;
  
  
  The trig insns fail with large numbers; an argument reduction loop
  is required with their use.

 Scott Yes, but within the defined mathematical ranges for sine and
 Scott cosine -- [0, 2 * PI) -- the processor intrinsics are quite
 Scott accurate.

Huh?  Sine and cosine are mathematically defined for all finite
inputs. 

Yes, normally the first step is to reduce the arguments to a small
range around zero and then do the series expansion after that, because
the series expansion converges fastest near zero.  But sin(100) is
certainly a valid call, even if not a common one.

  paul



Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Paul Koning wrote:
  Scott Yes, but within the defined mathematical ranges for sine and
  Scott cosine -- [0, 2 * PI) -- the processor intrinsics are quite
  Scott accurate.
 
 Huh?  Sine and cosine are mathematically defined for all finite
 inputs. 

Defined, yes. However, I'm speaking as a mathematician in this case, not
a programmer. Pick up a trig book, and it will have a statement similar
to this one, taken from a text (Trigonometry Demystified, Gibilisco,
McGraw-Hill, 2003) randomly grabbed from the shelf next to me:

These trigonometric identities apply to angles in the *standard range*
of 0 rad <= theta < 2 * PI rad. Angles outside the standard range are
converted to values within the standard range by adding or subtracting
the appropriate multiple of 2 * PI rad. You might hear of an angle with
negative measurement or with a measure more than 2 * PI rad, but this
can always be converted...

I can assure you that other texts (of which I have several) make similar
statements.

 Yes, normally the first step is to reduce the arguments to a small
 range around zero and then do the series expansion after that, because
 the series expansion converges fastest near zero.  But sin(100) is
 certainly a valid call, even if not a common one.

I *said* that such statements are outside the standard range of
trigonometric identities. Writing sin(100) is not a matter of necessity,
nor should people using regular math be penalized in speed or accuracy
for extreme cases.

..Scott


RE: Sine and Cosine Accuracy

2005-05-26 Thread Dave Korn
Original Message
From: Scott Robert Ladd
Sent: 26 May 2005 17:32

 Paul Koning wrote:
  Scott Yes, but within the defined mathematical ranges for sine and
  Scott cosine -- [0, 2 * PI) -- the processor intrinsics are quite 
 Scott accurate. 
 
 Huh?  Sine and cosine are mathematically defined for all finite
 inputs.
 
 Defined, yes. However, I'm speaking as a mathematician in this case, not
 a programmer. Pick up a trig book, and it will have a statement similar
 to this one, taken from a text (Trigonometry Demystified, Gibilisco,
 McGraw-Hill, 2003) randomly grabbed from the shelf next to me:
 
 These trigonometric identities apply to angles in the *standard range*
 of 0 rad <= theta < 2 * PI rad.

  It's difficult to tell from that quote, which lacks sufficient context,
but you *appear* at first glance to be conflating the fundamental
trigonometric *functions* with the trigonometric *identities* that are
generally built up from those functions.  That is to say, you appear to be
quoting a statement that says

 Identities such as
sin(x)^2 + cos(x)^2 === 1
  are only valid when 0 <= x <= 2*PI

and interpreting it to imply that 

   sin(x)
  is only valid when 0 <= x <= 2*PI

which, while it may or may not be true for other reasons, certainly is a
non-sequitur from the statement above.

  And in fact, and in any case, this is a perfect illustration of the point,
because what we're discussing here is *not* the behaviour of the
mathematical sine and cosine functions, but the behaviour of the C runtime
library functions sin(...) and cos(...), which are defined by the language
spec rather than by the strictures of mathematics.  And that spec makes *no*
restriction on what values you may supply as inputs, so gcc had better
implement sin and cos in a way that doesn't require the programmer to have
reduced the arguments beforehand, or it won't be ANSI compliant.

  Not only that, but if you don't use -funsafe-math-optimisations, gcc emits
libcalls to sin/cos functions, which I'll bet *do* reduce their arguments to
that range before doing the computation, (and which might indeed even be
clever enough to use the intrinsic, and can encapsulate the knowledge that
that intrinsic can only be used on arguments within a more limited range
than are valid for the C library function which they are being used to
implement).

  When you use -funsafe-math-optimisations, one of those optimisations is to
assume that you're not going to be using the full range of arguments that
POSIX/ANSI say is valid for the sin/cos functions, but that you're going to
be using values that are already folded into the range around zero, and so
it optimises away the libcall and the reduction with it and just uses the
intrinsic to implement the function.  But the intrinsic does not actually
implement the function as specified by ANSI, since it doesn't accept the
same range of inputs, and therefore it is *not* a suitable transformation to
ever apply except when the user has explicitly specified that they want to
live dangerously.  So in terms of your earlier suggestion:

<quote>
May I be so bold as to suggest that -funsafe-math-optimizations be
reduced in scope to perform exactly what its name implies:
transformations that may slightly alter the meaning of code. Then move
the use of hardware intrinsics to a new -fhardware-math switch.
</quote>

... I am obliged to point out that using the hardware intrinsics *IS* an
unsafe optimisation, at least in this case!

cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: Sine and Cosine Accuracy

2005-05-26 Thread David Daney

Dave Korn wrote:


 Identities such as
sin(x)^2 + cos(x)^2 === 1
  are only valid when 0 <= x <= 2*PI



It's been a while since I studied math, but isn't that particular 
identity true for any x, real or complex?


David Daney,



Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Dave Korn wrote:
   It's difficult to tell from that quote, which lacks sufficient context,
 but you *appear* at first glance  to be conflating the fundamental
 trigonometric *functions* with the trigonometric *identities* that are
 generally built up from those functions.  That is to say, you appear to be
 quoting a statement that says

Perhaps I didn't say it as clearly as I should, but I do indeed know the
difference between the implementation and definition of the
trigonometric identities.

The tradeoff is between absolute adherence to the C standard and the
need to provide fast, accurate results for people who know their math.
What I see is a focus (in some areas like math) on complying with the
standard, to the exclusion of people who need speed. Both needs can be met.

 And in fact, and in any case, this is a perfect illustration of the point,
 because what we're discussing here is *not* the behaviour of the
 mathematical sine and cosine functions, but the behaviour of the C runtime
 library functions sin(...) and cos(...), which are defined by the language
 spec rather than by the strictures of mathematics.

The sin() and cos() functions, in theory, implement the behavior of the
mathematical sine and cosine identities, so the two can not be
completely divorced. I believe it is, at the very least, misleading to
claim that the hardware intrinsics are unsafe.

 And that spec makes *no*
 restriction on what values you may supply as inputs, so gcc had better
 implement sin and cos in a way that doesn't require the programmer to have
 reduced the arguments beforehand, or it won't be ANSI compliant.

I'm not asking that the default behavior of the compiler be non-ANSI;
I'm asking that we give non-pejorative options to people who know what
they are doing and need greater speed. The -funsafe-math-optimizations
encompasses more than hardware intrinsics, and I don't see why
separating the hardware intrinsics into their own option
(-fhardware-math) is unreasonable, for folk who want the intrinsics but
not the other transformations.

..Scott


Re: Sine and Cosine Accuracy

2005-05-26 Thread Paul Koning
 Kevin == Kevin Handy [EMAIL PROTECTED] writes:

 Kevin But, you are using a number in the range of 2^90, only have 64
 Kevin bits for storing the floating point representation, and some
 Kevin of that is needed for the exponent.

Fair enough, so with 64 bit floats you have no right to expect an
accurate answer for sin(2^90).  However, you DO have a right to expect
an answer in the range [-1,+1] rather than the 1.2e+27 that Richard
quoted.  I see no words in the description of
-funsafe-math-optimizations to lead me to expect such a result.

paul



Re: Sine and Cosine Accuracy

2005-05-26 Thread Morten Welinder
 Yes, but within the defined mathematical ranges for sine and cosine --
 [0, 2 * PI) -- the processor intrinsics are quite accurate.

If you were to look up a serious math book like Abramowitz & Stegun 1965
you would see a definition like

sin z = (exp(iz) - exp(-iz)) / (2i)   [4.3.1]

for all complex numbers, thus in particular valid for z=x+0i for all real x.
If you wanted to stick to reals only, a serious math text would probably use
the series expansion around zero [4.3.65]

And there is the answer to your question: if you just think of sin
as something
with angles and triangles, then sin(2^90) makes very little sense.  But sin
occurs other places where there are no triangles in sight.  For example:

  Gamma(z)Gamma(1-z) = pi/sin(z pi)   [6.1.17]

or in series expansions of the cdf for the Student t distribution [26.7.4]
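
For what it is worth, a small C99 check of definition 4.3.1 at an arbitrary
real point (compile with -std=c99 and link with -lm; the value 2.5 means
nothing in particular):

#include <complex.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 2.5;
    double complex s = (cexp(I * x) - cexp(-I * x)) / (2.0 * I);
    printf("(exp(ix)-exp(-ix))/(2i) = %.17g\n", creal(s));
    printf("sin(x)                  = %.17g\n", sin(x));
    return 0;
}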

Morten


GCC 3.3.6 has been released

2005-05-26 Thread Gabriel Dos Reis

I'm pleased to announce that GCC 3.3.6 has been released. 

  This version is a minor release, fixing regressions in GCC 3.3.5
with respect to previous versions of GCC.  It can be downloaded from
the FTP servers listed here

  http://www.gnu.org/order/ftp.html


  The list of changes is available at

   http://gcc.gnu.org/gcc-3.3/changes.html


  This release is the last from the 3.3.x series.


  Many thanks to the huge GCC community who contributed to the
completion of this release.

-- 
Gabriel Dos Reis
 [EMAIL PROTECTED]
Texas A&M University -- Department of Computer Science
301, Bright Building -- College Station, TX 77843-3112


RE: Sine and Cosine Accuracy

2005-05-26 Thread Dave Korn
Original Message
From: David Daney
Sent: 26 May 2005 18:23

 Dave Korn wrote:
 
  Identities such as
 sin(x)^2 + cos(x)^2 === 1
    are only valid when 0 <= x <= 2*PI
 
 
 It's been a while since I studied math, but isn't that particular
  identity true for any x, real or complex?
 
 David Daney,


  Yes, that was solely an example of the difference between 'identities' and
'functions', for illustration, in case there was any ambiguity in the
language, but was not meant to be an example of an *actual* identity that
has a restriction on the valid range of inputs.  Sorry for not being
clearer.


cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Morten Welinder wrote:
  If you were to look up a serious math book like Abramowitz & Stegun 1965
 you would see a definition like
 
  sin z = (exp(iz) - exp(-iz)) / (2i)   [4.3.1]

Very true. However, the processor doesn't implement intrinsics for
complex functions -- well, maybe some do, and I've never encountered them!

As such, I was sticking to a discussion specific to reals.


 And there is the answer to your question: if you just think of sin
 as something
 with angles and triangles, then sin(2^90) makes very little sense.  But sin
 occurs other places where there are no triangles in sight.

That's certainly true; the use of sine and cosine depends on the
application. I don't deny that many applications need to perform sin()
on any double value; however there are also many applications where you
*are* dealing with angles.

I recently wrote a GPS application where using the intrinsics improved
both accuracy and speed (the latter substantially), and using those
intrinsics was only unsafe because -funsafe-math-optimizations
includes other transformations.

I am simply lobbying for the separation of hardware intrinsics from
-funsafe-math-optimizations.

..Scott


Re: Sine and Cosine Accuracy

2005-05-26 Thread Paul Koning
 Scott == Scott Robert Ladd [EMAIL PROTECTED] writes:

 Scott Dave Korn wrote:
  It's difficult to tell from that quote, which lacks sufficient
  context, but you *appear* at first glance to be conflating the
  fundamental trignometric *functions* with the trignometric
  *identities* that are generally built up from those functions.
  That is to say, you appear to be quoting a statement that says

 Scott Perhaps I didn't say it as clearly as I should, but I do
 Scott indeed know the difference between the implementation and
 Scott definition of the trigonometric identities.

 Scott The tradeoff is between absolute adherence to the C standard
 Scott and the need to provide fast, accurate results for people who
 Scott know their math. 

I'm really puzzled by that comment, partly because the textbook quote
you gave doesn't match any math I ever learned.  Does knowing your
math translate to believing that trig functions should be applied
only to arguments in the range 0 to 2pi?  If so, I must object.

What *may* make sense is the creation of a new option (off by default)
that says you're allowed to assume that all calls to trig functions
have arguments in the range x..y.  Then the question to be answered
is what x and y should be.  A possible answer is 0 and 2pi; another
answer that some might prefer is -pi to +pi.  Or it might be -2pi to
+2pi to accommodate both preferences at essentially no cost.

 paul




RE: Sine and Cosine Accuracy

2005-05-26 Thread Dave Korn
Original Message
From: Scott Robert Ladd
Sent: 26 May 2005 18:36

 
 I am simply lobbying for the separation of hardware intrinsics from
 -funsafe-math-optimizations.

  Well, as long as they're under the control of a flag that also makes it
clear that they are *also* unsafe math optimisations, I wouldn't object.

  But you can't just replace a call to the ANSI C 'sin' function with an
invocation of the x87 fsin intrinsic, because they aren't the same, and the
intrinsic is non-ansi-compliant.


cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Paul Koning wrote:
 I'm really puzzled by that comment, partly because the textbook quote
 you gave doesn't match any math I ever learned.  Does knowing your
 math translate to believing that trig functions should be applied
 only to arguments in the range 0 to 2pi?  If so, I must object.

I'll correct myself to say people who know their application. ;) Some
apps need sin() over all possible doubles, while other applications need
sin() over the range of angles.

 What *may* make sense is the creation of a new option (off by default)
 that says you're allowed to assume that all calls to trig functions
 have arguments in the range x..y.  Then the question to be answered
 is what x and y should be.  A possible answer is 0 and 2pi; another
 answer that some might prefer is -pi to +pi.  Or it might be -2pi to
 +2pi to accommodate both preferences at essentially no cost.

I prefer breaking out the hardware intrinsics from
-funsafe-math-optimizations, such that people can compile to use their
hardware *without* the other transformations implicit in the current
collective.

If someone can explain how this hurts anything, please let me know.

..Scott


Re: Sine and Cosine Accuracy

2005-05-26 Thread Paul Koning
After some off-line exchanges with Dave Korn, it seems to me that part
of the problem is that the documentation for
-funsafe-math-optimizations is so vague as to have no discernable
meaning. 

For example, does the wording of the documentation convey the
limitation that one should only invoke math functions with a small
range of arguments (say, -pi to +pi)?  I cannot see anything remotely
resembling that limitation, but others can.

Given that, I wonder how we can tell whether a particular proposed
optimization governed by that flag is permissible.  Consider:

`-funsafe-math-optimizations'
 Allow optimizations for floating-point arithmetic that (a) assume
 that arguments and results are valid and (b) may violate IEEE or
 ANSI standards.  

What does (b) mean?  What if anything are its limitations?  Is
returning 1.2e27 as the result for a sin() call authorized by (b)?  I
would not have expected that, but I can't defend that expectation
based on a literal reading of the text...

  paul



Re: Sine and Cosine Accuracy

2005-05-26 Thread Andrew Pinski


On May 26, 2005, at 2:12 PM, Paul Koning wrote:

What does (b) mean?  What if anything are its limitations?  Is
returning 1.2e27 as the result for a sin() call authorized by (b)?  I
would not have expected that, but I can't defend that expectation
based on a literal reading of the text...



b) means that (-a)*(b-c) can be changed to a*(c-b) and other reassociation
opportunities.
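
A small illustration of why such rewrites are fenced off (values arbitrary;
on a typical IEEE/glibc setup the first line prints -0 and 0, the second
0.60000000000000009 and 0.59999999999999998):

#include <stdio.h>

int main(void)
{
    /* For this particular rewrite the magnitudes agree, but the sign of a
       zero result can change... */
    double a = 0.0, b = 1.0, c = 1.0;
    printf("(-a)*(b-c) = %g   a*(c-b) = %g\n", (-a) * (b - c), a * (c - b));

    /* ...and more general reassociation of sums changes rounding. */
    double p = 0.1, q = 0.2, r = 0.3;
    printf("(p+q)+r = %.17g   p+(q+r) = %.17g\n", (p + q) + r, p + (q + r));
    return 0;
}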

Thanks,
Andrew Pinski



RE: Sine and Cosine Accuracy

2005-05-26 Thread Dave Korn
Original Message
From: Scott Robert Ladd
Sent: 26 May 2005 19:09

 Dave Korn wrote:
   Well, as long as they're under the control of a flag that also makes it
 clear that they are *also* unsafe math optimisations, I wouldn't object.
 
 But they are *not* unsafe for *all* applications.

  Irrelevant; nor are many of the other things that are described by the
term unsafe.

  In fact they are often things that may be safe on one occasion, yet not on
another, even within one single application.  Referring to something as
unsafe doesn't mean it's *always* unsafe, but referring to it as safe (or
implying that it is by contrast with an option that names it as unsafe)
*does* mean that it is *always* safe.

 An ignorant user may not understand the ramifications of unsafe math
 -- however, the current documentation is quite vague as to why these
 optimizations are unsafe, and people thus become paranoid and avoid
 -ffast-math when it would be to their benefit.

  Until they get sqrt(-1.0) returning a value of +1.0 with no complaints, of
course...

  But yes: the biggest problem here that I can see is inadequate
documentation.

 First and foremost, GCC should conform to standards. *However*, I see
 nothing wrong with providing additional capability for those who need
 it, without combining everything unsafe under one umbrella.

  That's exactly what I said up at the top.  Nothing wrong with having
multiple unsafe options, but they *are* all unsafe.

 But you can't just replace a call to the ANSI C 'sin' function with an
 invocation of the x87 fsin intrinsic, because they aren't the same, and
 the intrinsic is non-ansi-compliant.
 
 Nobody said they were.

  Then any optimisation flag that replaces one with the other is, QED,
unsafe.

  Of course, if you went and wrote a whole load of builtins, so that with
your new flag in effect sin (x) would translate into a code sequence that
first uses fmod to reduce the argument to the valid range for fsin, I would
no longer consider it unsafe.

cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Andrew Pinski wrote:
 b) means that (-a)*(b-c) can be changed to a*(c-b) and other reassociation
 opportunities.

This is precisely the sort of transformation that, in my opinion, should
be separate from the hardware intrinsics. I mentioned this specific case
earlier in the thread (I think; maybe it went to a private mail).

The documentation should quote you above, instead of being general and
vague (lots of "may"s, for example, in the current text).

Perhaps we need to have a clearer name for the option,
-funsafe-transformations, anyone? I may want to use hardware
intrinsics, but not those transformations.

..Scott



Re: Sine and Cosine Accuracy

2005-05-26 Thread Joseph S. Myers
On Thu, 26 May 2005, Paul Koning wrote:

  Kevin == Kevin Handy [EMAIL PROTECTED] writes:
 
  Kevin But, you are using a number in the range of 2^90, only have 64
  Kevin bits for storing the floating point representation, and some
  Kevin of that is needed for the exponent.
 
 Fair enough, so with 64 bit floats you have no right to expect an
 accurate answer for sin(2^90).  However, you DO have a right to expect
 an answer in the range [-1,+1] rather than the 1.2e+27 that Richard
 quoted.  I see no words in the description of
 -funsafe-math-optimizations to lead me to expect such a result.

When I discussed this question with Nick Maclaren a while back after a UK 
C Panel meeting, his view was that for most applications (a) the output 
should be close (within 1 or a few ulp) to the sine/cosine of a value 
close (within 1 or a few ulp) to the floating-point input and (b) sin^2 + 
cos^2 (of any input value) should equal 1 with high precision, but most 
applications (using floating-point values as approximations of 
unrepresentable real numbers) wouldn't care about the answer being close 
to the sine or cosine of the exact real number represented by the 
floating-point value when 1ulp is on the order of 2pi or bigger.  This 
does of course disallow 1.2e+27 as a safe answer for sin or cos to give 
for any input.  (And a few applications may care for stronger degrees of 
accuracy.)
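
A tiny sketch of those two checks at the 2^90 input from earlier in the
thread (link with -lm; with a libm that does full argument reduction both
checks come out fine):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = ldexp(1.0, 90);                    /* 2^90 */
    double s = sin(x), c = cos(x);
    printf("sin(2^90)         = %.17g  (within [-1,1]: %s)\n",
           s, fabs(s) <= 1.0 ? "yes" : "no");
    printf("sin^2 + cos^2 - 1 = %g\n", s * s + c * c - 1.0);
    return 0;
}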

-- 
Joseph S. Myers   http://www.srcf.ucam.org/~jsm28/gcc/
[EMAIL PROTECTED] (personal mail)
[EMAIL PROTECTED] (CodeSourcery mail)
[EMAIL PROTECTED] (Bugzilla assignments and CCs)


Re: GCC and Floating-Point

2005-05-26 Thread Allan Sandfeld Jensen
On Thursday 26 May 2005 10:15, Vincent Lefevre wrote:
 On 2005-05-25 19:27:21 +0200, Allan Sandfeld Jensen wrote:
  Yes. I still don't understand why gcc doesn't do -ffast-math by
  default like all other compilers.

 No! And I really don't think that other compilers do that.

I can't speak of all compilers, only the ones I've tried. ICC enables it 
always, Sun CC, Dec CXX, and HP CC at certain levels of optimizations 
(equivalent to -O2). 

Basically any compiler that cares about benchmarks has it enabled by default.

Many of them however have multiple levels of relaxed floating point. The 
lowest levels will try to be as accurate as possible, while the higher will 
loosen the accuracy and just try to be as fast as possible.


   The people who need perfect standard behavior are a lot fewer than
   all the packagers who don't understand which optimization flags
   gcc should _always_ be called with.

 Standard should be the default.

 (Is this a troll or what?)

So why isn't -ansi or -pedantic the default?



`Allan



Re: GCC and Floating-Point

2005-05-26 Thread Scott Robert Ladd
Allan Sandfeld Jensen wrote:
 Basically any compiler that cares about benchmarks has it enabled by default.
 
 Many of them however have multiple levels of relaxed floating point. The 
 lowest levels will try to be as accurate as possible, while the higher will 
 loosen the accuracy and just try to be as fast as possible.

Perhaps we need something along these lines:

When -ansi or -pedantic is used, the compiler should disallow anything
unsafe that may break compliance, warning if someone uses a paradox
like -ansi -funsafe-math-optimizations.

As has been pointed out elsewhere in this thread,
-funsafe-math-optimizations implies too many different things, and is
vaguely documented. I'd like to see varying levels of floating-point
optimization, including an option that uses an internal library
optimized for both speed and correctness, which are not mutually exclusive.

..Scott



Re: Sine and Cosine Accuracy

2005-05-26 Thread Gabriel Dos Reis
Scott Robert Ladd [EMAIL PROTECTED] writes:

| Richard Henderson wrote:
|  On Thu, May 26, 2005 at 10:34:14AM -0400, Scott Robert Ladd wrote:
|  
| static const double range = PI; // * 2.0;
| static const double incr  = PI / 100.0;
|  
|  
|  The trig insns fail with large numbers; an argument
|  reduction loop is required with their use.
| 
| Yes, but within the defined mathematical ranges for sine and cosine --
| [0, 2 * PI) -- 

this is what they call post-modern maths?

[...]

| I've never quite understood the necessity for performing trig operations
| on excessively large values, but perhaps my problem domain hasn't
| included such applications.

The world is flat; I never quite understood the necessity of spherical
trigonometry.

-- Gaby


Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Gabriel Dos Reis wrote:
 Scott Robert Ladd [EMAIL PROTECTED] writes:
 | I've never quite understood the necessity for performing trig operations
 | on excessively large values, but perhaps my problem domain hasn't
 | included such applications.
 
 The world is flat; I never quite understood the necessity of spherical
 trigonometry.

For many practical problems, the world can be considered flat. And I do
plenty of spherical geometry (GPS navigation) without requiring the sin
of 2**90. ;)

..Scott


Re: Sine and Cosine Accuracy

2005-05-26 Thread Richard Henderson
On Thu, May 26, 2005 at 12:04:04PM -0400, Scott Robert Ladd wrote:
 I've never quite understood the necessity for performing trig operations
 on excessively large values, but perhaps my problem domain hasn't
 included such applications.

Whether you think it necessary or not, the ISO C functions allow
such arguments, and we're not allowed to break that without cause.


r~


Re: Sine and Cosine Accuracy

2005-05-26 Thread Gabriel Dos Reis
Scott Robert Ladd [EMAIL PROTECTED] writes:

| Gabriel Dos Reis wrote:
|  Scott Robert Ladd [EMAIL PROTECTED] writes:
|  | I've never quite understood the necessity for performing trig operations
|  | on excessively large values, but perhaps my problem domain hasn't
|  | included such applications.
|  
|  The world is flat; I never quite understood the necessity of spherical
|  trigonometry.
| 
| For many practical problems, the world can be considered flat.

Wooho.

| And I do
| plenty of spherical geometry (GPS navigation) without requiring the sin
| of 2**90. ;)

Yeah, the problem with people who work only with angles is that they
tend to forget that sin (and friends) are defined as functions on
*numbers*, not just angles or whatever, and happen to appear in
approximations of functions as series (e.g. Fourier series) and therefore
those functions can be applied to things that are not just angles. 

-- Gaby


Re: Sine and Cosine Accuracy

2005-05-26 Thread Uros Bizjak

Hello!


Fair enough, so with 64 bit floats you have no right to expect an
accurate answer for sin(2^90).  However, you DO have a right to expect
an answer in the range [-1,+1] rather than the 1.2e+27 that Richard
quoted.  I see no words in the description of
-funsafe-math-optimizations to lead me to expect such a result.

 The source operand to fsin, fcos and fsincos x87 insns must be within 
the range of +-2^63, otherwise the C2 flag is set in the FP status word,
marking insufficient operand reduction. The limited operand range is the
reason why fsin & friends are enabled only with
-funsafe-math-optimizations.


 However, the argument to fsin can be reduced to an acceptable range by
using the fmod builtin. Internally, this builtin is implemented as a very
tight loop that checks for insufficient reduction, and can reduce
whatever finite value one wishes.
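
Purely for illustration, a minimal GCC inline-asm sketch (x86 with an x87
FPU only; this is not what the compiler or libm emit) that calls the bare
fsin and reads the C2 bit:

#include <stdio.h>

static double raw_fsin(double x, int *unreduced)
{
    double r;
    unsigned short sw;
    __asm__ ("fsin\n\t"
             "fnstsw %1"
             : "=t" (r), "=m" (sw)
             : "0" (x));
    *unreduced = (sw >> 10) & 1;   /* C2 is bit 10 of the FPU status word */
    return r;
}

int main(void)
{
    int c2;
    double y = raw_fsin(0x1p90, &c2);    /* 2^90, outside fsin's +-2^63 */
    printf("raw fsin(2^90) = %g, C2 = %d\n", y, c2);
    return 0;
}

With an out-of-range operand fsin leaves the value unchanged, which is where
the 1.2379...e27 result quoted earlier in the thread comes from.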


 Out of curiosity, where could sin(2^90) be needed? It looks like a rather
big angle to me.


Uros.


Re: Sine and Cosine Accuracy

2005-05-26 Thread Paul Koning
 Uros == Uros Bizjak [EMAIL PROTECTED] writes:

 Uros Hello!
  Fair enough, so with 64 bit floats you have no right to expect an
  accurate answer for sin(2^90).  However, you DO have a right to
  expect an answer in the range [-1,+1] rather than the 1.2e+27 that
  Richard quoted.  I see no words in the description of
  -funsafe-math-optimizations to lead me to expect such a result.
  
 Uros The source operand to fsin, fcos and fsincos x87 insns must be
 Uros within the range of +-2^63, otherwise a C2 flag is set in FP
 Uros status word that marks insufficient operand reduction. Limited
 Uros operand range is the reason why fsin & friends are enabled
 Uros only with -funsafe-math-optimizations.

 Uros However, the argument to fsin can be reduced to an acceptable
 Uros range by using fmod builtin. Internally, this builtin is
 Uros implemented as a very tight loop that checks for insufficient
 Uros reduction, and could reduce whatever finite value one wishes.

 Uros Out of curiosity, where could sin(2^90) be needed? It looks
 Uros like a rather big angle to me.

It looks that way to me too, but it's a perfectly valid argument to
the function as has been explained by several people.

Unless -funsafe-math-optimizations is *explicitly* documented to say
trig function arguments must be in the range x..y for meaningful
results I believe it is a bug to translate sin(x) to a call to the
x87 fsin primitive.  It needs to be wrapped with fmod (perhaps after a
range check for efficiency), otherwise you've drastically changed the
semantics of the function.

Personally I don't expect sin(2^90) to yield 1.2e27.  Yes, you can
argue that, pedantically, clause (b) in the doc for
-funsafe-math-optimizations permits this.  Then again, I could argue
that it also permits sin(x) to return 0 for all x.

 paul



Re: Sine and Cosine Accuracy

2005-05-26 Thread Gabriel Dos Reis
Uros Bizjak [EMAIL PROTECTED] writes:

[...]

|   Out of curiosity, where could sin(2^90) be needed? It looks rather
| big angle to me.

If it was an angle!  Not everything that is an argument to sin or cos
is an angle.  They are just functions!  Suppose you're evaluating an
approximation of a Fourier series expansion.

-- Gaby


Re: Sine and Cosine Accuracy

2005-05-26 Thread Steven Bosscher
On Friday 27 May 2005 00:26, Gabriel Dos Reis wrote:
 Uros Bizjak [EMAIL PROTECTED] writes:

 [...]

 |   Out of curiosity, where could sin(2^90) be needed? It looks rather
 | big angle to me.

  If it was an angle!  Not everything that is an argument to sin or cos
 is an angle.  They are just functions!  Suppose you're evaluating an
  approximation of a Fourier series expansion.

It would, in a way, still be a phase angle ;-)

Gr.
Steven


RE: Sine and Cosine Accuracy

2005-05-26 Thread Menezes, Evandro
Uros, 

   However, the argument to fsin can be reduced to an
 acceptable range by using the fmod builtin. Internally, this
 builtin is implemented as a very tight loop that checks for
 insufficient reduction, and can reduce whatever finite
 value one wishes.

Keep in mind that x87 transcendentals are not the most accurate around, but all 
x86 processors from any manufacturer produce roughly the same results for any 
argument as the 8087 did way back when, even if the result is hundreds of ulps 
off...


-- 
___
Evandro Menezes    AMD    Austin, TX



Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Richard Henderson wrote:
 On Thu, May 26, 2005 at 12:04:04PM -0400, Scott Robert Ladd wrote:
 
I've never quite understood the necessity for performing trig operations
on excessively large values, but perhaps my problem domain hasn't
included such applications.
 
 
 Whether you think it necessary or not, the ISO C functions allow
 such arguments, and we're not allowed to break that without cause.

Then, as someone else said, why doesn't the compiler enforce -ansi
and/or -pedantic by default? Or is ANSI purity only important in some
cases, but not others?

I do not and have not suggested changing the default behavior of the
compiler, and *have* suggested that it is not pedantic enough about
Standards.

*This* discussion is about improving -funsafe-math-optimizations to make
it more sensible and flexible.

For a wide variety of applications, the hardware intrinsics provide both
faster and more accurate results, when compared to the library
functions. However, I may *not* want other transformations implied by
-funsafe-math-optimizations. Therefore, it seems to me that GCC could
cleanly and simply implement an option to use hardware intrinsics (or a
highly-optimized but non-ANSI library) for those of us who want it.

No changes to default optimizations, no breaking of existing code, just
a new option (as in optional.)

How does that hurt you or anyone else? It's not as if GCC doesn't have a
few options already... ;)

I (and others) also note other compilers do a fine job of handling these
problems.

..Scott



Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Gabriel Dos Reis wrote:
 Yeah, the problem with people who work only with angles is that they
 tend to forget that sin (and friends) are defined as functions on
 *numbers*, not just angles or whatever, and happen to appear in
 approximations of functions as series (e.g. Fourier series) and therefore
 those functions can be applied to things that are not just angles. 

To paraphrase the above:

Yeah, the problem with people who only work with Fourier series is that
they tend to forget that sin (and friends) can be used in applications
with angles that fall in a limited range, where the hardware intrinsics
produce faster and more accurate results.

I've worked on some pretty fancy DSP code in the last years, and some
spherical trig stuff. Two different kinds of code with different needs.

..Scott




RE: Sine and Cosine Accuracy

2005-05-26 Thread Menezes, Evandro
Scott, 

 For a wide variety of applications, the hardware intrinsics 
 provide both faster and more accurate results, when compared 
 to the library functions.

This is not true.  Compare results on an x86 system with those on an x86_64 or
ppc.  As I said before, shortcuts were taken in x87 that sacrificed accuracy 
for the sake of speed initially and later of compatibility.

HTH


-- 
___
Evandro Menezes    AMD    Austin, TX



Re: Sine and Cosine Accuracy

2005-05-26 Thread Gabriel Dos Reis
Scott Robert Ladd [EMAIL PROTECTED] writes:

| Richard Henderson wrote:
|  On Thu, May 26, 2005 at 12:04:04PM -0400, Scott Robert Ladd wrote:
|  
| I've never quite understood the necessity for performing trig operations
| on excessively large values, but perhaps my problem domain hasn't
| included such applications.
|  
|  
|  Whether you think it necessary or not, the ISO C functions allow
|  such arguments, and we're not allowed to break that without cause.
| 
| Then, as someone else said, why doesn't the compiler enforce -ansi
| and/or -pedantic by default?

Care to submit a patch?

-- Gaby


gcc-4.0-20050526 is now available

2005-05-26 Thread gccadmin
Snapshot gcc-4.0-20050526 is now available on
  ftp://gcc.gnu.org/pub/gcc/snapshots/4.0-20050526/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 4.0 CVS branch
with the following options: -rgcc-ss-4_0-20050526 

You'll find:

gcc-4.0-20050526.tar.bz2  Complete GCC (includes all of below)

gcc-core-4.0-20050526.tar.bz2 C front end and core compiler

gcc-ada-4.0-20050526.tar.bz2  Ada front end and runtime

gcc-fortran-4.0-20050526.tar.bz2  Fortran front end and runtime

gcc-g++-4.0-20050526.tar.bz2  C++ front end and runtime

gcc-java-4.0-20050526.tar.bz2 Java front end and runtime

gcc-objc-4.0-20050526.tar.bz2 Objective-C front end and runtime

gcc-testsuite-4.0-20050526.tar.bz2  The GCC testsuite

Diffs from 4.0-20050521 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.0
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.


Re: Sine and Cosine Accuracy

2005-05-26 Thread Gabriel Dos Reis
Scott Robert Ladd [EMAIL PROTECTED] writes:

| Gabriel Dos Reis wrote:
|  Yeah, the problem with people who work only with angles is that they
|  tend to forget that sin (and friends) are defined as functions on
|  *numbers*, not just angles or whatever, and happen to appear in
|  approximations of functions as series (e.g. Fourier series) and therefore
|  those functions can be applied to things that are not just angles. 
| 
| To paraphrase the above:
| 
| Yeah, the problem with people who only work with Fourier series is that
| they tend to forget that sin (and friends) can be used in applications
| with angles that fall in a limited range, where the hardware intrinsics
| produce faster and more accurate results.

That is a good try, but it fails in the context in which the original
statement was made.  Maybe it is a good time to check the thread and
the pattern of logic that statement was pointing out?

-- Gaby


Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Menezes, Evandro wrote:
 This is not true.  Compare results on an x86 system with those on an
 x86_64 or ppc.  As I said before, shortcuts were taken in x87 that
 sacrificed accuracy for the sake of speed initially and later of
 compatibility.

It *is* true for the case where the argument is in the range [0, 2*PI),
at least according to the tests I published earlier in this thread. If
you think there is something erroneous in the test code, I sincerely
would like to know.

..Scott


RE: Sine and Cosine Accuracy

2005-05-26 Thread Menezes, Evandro
Scott, 

  This is not true.  Compare results on an x86 systems with 
 those on an
  x86_64 or ppc.  As I said before, shortcuts were taken in x87 that 
  sacrificed accuracy for the sake of speed initially and later of 
  compatibility.
 
 It *is* true for the case where the argument is in the range 
 [0, 2*PI), at least according to the tests I published 
 earlier in this thread. If you think there is something 
 erroneous in the test code, I sincerely would like to know.

Your code just tests every 3.6°, so perhaps you won't trip over the problems...

As I said, x87 can be off by hundreds of ulps, whereas the routines for x86_64 
which ship with SUSE are accurate to less than 1 ulp over their entire domain.

Besides, you're also comparing 80-bit calculations with 64-bit calculations, 
not only the accuracy of sin and cos.  Try using -ffloat-store along with 
-mfpmath=387 and see yet another set of results.  At the end of the day, which 
one do you trust?  I wouldn't trust my check balance to x87 microcode... ;-)

HTH


___
Evandro Menezes        Software Strategy & Alliance
512-602-9940           AMD
[EMAIL PROTECTED]  Austin, TX
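
As a rough illustration of the sampling-density point, here is a sketch (not
code from this thread) that compares double sin() against a long double
reference at a step far finer than every 3.6 degrees and records the worst
relative error seen.  It assumes a C99-style libm that provides sinl(); the
reference is, of course, only as trustworthy as sinl() itself.

/* Sketch: dense accuracy sweep of sin() over (0, 2*pi).  */
#include <math.h>
#include <stdio.h>

int
main (void)
{
  const double pi = 3.14159265358979323846;
  const double incr = 1.0e-4;          /* far finer than pi / 100 */
  double worst_x = 0.0;
  long double worst_err = 0.0L;

  for (double x = incr; x < 2.0 * pi; x += incr)
    {
      long double ref = sinl ((long double) x);     /* reference value */
      long double err = fabsl ((long double) sin (x) - ref);

      if (ref != 0.0L)
        err /= fabsl (ref);                         /* relative error */

      if (err > worst_err)
        {
          worst_err = err;
          worst_x = x;
        }
    }

  printf ("worst relative error %.3Le at x = %.17g\n", worst_err, worst_x);
  return 0;
}

Building it once with plain -O2 and once with the options that currently
enable the x87 intrinsic expansion (e.g. -funsafe-math-optimizations with
-mfpmath=387) makes the comparison being argued about here concrete.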



Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Gabriel Dos Reis wrote:
 Scott Robert Ladd [EMAIL PROTECTED] writes:
 | Then, as someone else said, why doesn't the compiler enforce -ansi
 | and/or -pedantic by default?
 
 Care submitting a patch?

Would a strictly ANSI default be accepted in principle? Given the
existing code base of non-standard code, such a change may be unrealistic.

I'm willing to make the -ansi -pedantic patch, if I wouldn't be
wasting my time.

What about separating hardware intrinsics from
-funsafe-math-optimizations? I believe this would make everyone happy by
allowing people to use the compiler more effectively in different
circumstances.

..Scott


Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Menezes, Evandro wrote:
 Besides, you're also comparing 80-bit calculations with 64-bit
 calculations, not only the accuracy of sin and cos.  Try using
 -ffloat-store along with -mfpmath=387 and see yet another set of
 results.  At the end of the day, which one do you trust?  I wouldn't
 trust my check balance to x87 microcode... ;-)

I wouldn't trust my bank accounts to the x87 under any circumstances;
anyone doing exact math should be using fixed-point.

Different programs have different requirements. I don't understand why
GCC needs to be one-size-fits-all, when it could be *better* than the
competition by taking a broader and more flexible view.

..Scott



Re: Compiling GCC with g++: a report

2005-05-26 Thread Marcin Dalecki


On 2005-05-23, at 08:15, Gabriel Dos Reis wrote:



Sixth, there is a real mess about name spaces.  It is true that
every C programmer knows the rule saying tags inhabit a different name
space than variables or functions.  However, all the C coding standards
I've read so far usually suggest

   typedef struct foo foo;

but *not*

   typedef struct foo *foo;

i.e. bringing the tag-name into the normal name space to name the
structure or enumeration type is OK, but not naming a different type!  The
latter practice will be flagged by a C++ compiler.  I guess we may
need some discussion about the naming of structures (POSIX reserves
anything ending with _t, so we might want to choose something so
that we don't run into problems.  However, I do not expect this issue
to dominate the discussion :-))



In 80% of the cases you are talking about, the GCC source code already
follows the semi-convention of appending _s to the parent type.
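
As a side note for readers following the quoted point, a minimal
illustration, not taken from the thread: the active lines below are accepted
by both a C and a C++ compiler, while the form shown in the comment is valid
C but rejected by g++.

struct foo { int x; };

typedef struct foo foo;        /* OK in C and in C++ */

/* The alternative convention,

       typedef struct foo *foo;

   is also valid C (tags and ordinary identifiers live in separate name
   spaces) but a C++ compiler reports a conflicting declaration, because
   there the class name foo already inhabits the ordinary scope.  */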



Re: Compiling GCC with g++: a report

2005-05-26 Thread Marcin Dalecki


On 2005-05-24, at 09:09, Zack Weinberg wrote:


Gabriel Dos Reis [EMAIL PROTECTED] writes:

[dropping most of the message - if I haven't responded, assume I don't
agree but I also don't care enough to continue the argument.  Also,
rearranging paragraphs a bit so as not to have to repeat myself]



with the explicit call to malloc + explicit specification of sizeof,
I've found a number of wrong codes -- while replacing the existing
xmalloc/xcallo with XNEWVEC and friends (see previous patches and
messages) in libiberty, not counting the happy confusion about
xcalloc() in the current GCC codes.  Those are bugs we do not have
with the XNEWVEC and friends.  Not only, we do get readable code, we
also get right codes.


...


I don't think so.  These patches make it possible to compile the
source code with a C++ compiler.  We gain better checking by doing
that.



Have you found any places where the bugs you found could have resulted
in user-visible incorrect behavior (of any kind)?

If you have, I will drop all of my objections.


You could look at the linkage issues for darwin I have found several  
months

ago. They where *real*.


Re: Compiling GCC with g++: a report

2005-05-26 Thread Marcin Dalecki


On 2005-05-24, at 06:00, Andrew Pinski wrote:



On May 24, 2005, at 12:01 AM, Zack Weinberg wrote:


Use of bare 'inline' is just plain wrong in our source code; this has
nothing to do with C++, no two C compilers implement bare 'inline'
alike.  Patches to add 'static' to such functions (AND MAKING NO OTHER
CHANGES) are preapproved, post-slush.


That will not work for the cases where the bare 'inline' is used
because the functions are also external in this case.  Now this is where
C99 and C++ differ in what a bare 'inline' means, so I have no idea what
to do, except for removing the 'inline' in the first place.


This actually applies only to two functions from libiberty:

 /* Return the current size of given hash table. */
-inline size_t
-htab_size (htab)
- htab_t htab;
+size_t
+htab_size (htab_t htab)
{
   return htab->size;
}
/* Return the current number of elements in given hash table. */
-inline size_t
-htab_elements (htab)
- htab_t htab;
+size_t
+htab_elements (htab_t htab)
{
   return htab->n_elements - htab->n_deleted;
}

It could be resolved easily by moving those wrappers into a header
and making them static inline there. Actually this could improve the
GCC code overall a bit.
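
A sketch of that suggestion, not the actual libiberty change: keep the
one-line accessors in the header as static inline.  The field names follow
hashtab.h, but the struct is abbreviated here for illustration.

#include <stddef.h>

struct htab
{
  size_t size;
  size_t n_elements;
  size_t n_deleted;
  /* ... remaining members elided ...  */
};
typedef struct htab *htab_t;

/* Return the current size of the given hash table.  */
static inline size_t
htab_size (htab_t htab)
{
  return htab->size;
}

/* Return the current number of elements in the given hash table.  */
static inline size_t
htab_elements (htab_t htab)
{
  return htab->n_elements - htab->n_deleted;
}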



Re: Compiling GCC with g++: a report

2005-05-26 Thread Marcin Dalecki


On 2005-05-24, at 18:06, Diego Novillo wrote:


On Mon, May 23, 2005 at 01:15:17AM -0500, Gabriel Dos Reis wrote:



So, if various components maintainers (e.g. C and C++, middle-end,
ports, etc.)  are willing to help quickly reviewing patches we can
have this done for this week (assuming mainline is unslushed soon).
And, of course, everybody can help :-)



If the final goal is to allow GCC components to be implemented in
C++, then I am all in favour of this project.  I'm pretty sick of
all this monkeying around we do with macros to make up for the
lack of abstraction.


Amen. GCC cries and woes through struct tree for polymorphism.



Re: Compiling GCC with g++: a report

2005-05-26 Thread Marcin Dalecki


On 2005-05-25, at 08:06, Christoph Hellwig wrote:


On Tue, May 24, 2005 at 05:14:42PM -0700, Zack Weinberg wrote:

I'm not sure what the above may imply for your ongoing
discussion, though...




Well, if I were running the show, the 'clock' would only start running
when there was consensus among the libstdc++ developers that the soname
would not be bumped again - that henceforth libstdc++ was committed to
binary compatibility as good as glibc's.  Or better, if y'all can manage
it.  It doesn't sound like we're there yet, to me.



Why can't libstdc++ use symbol versioning?  glibc has maintained the soname
and binary compatibility despite changing fundamental types like FILE


Please stop spreading rumors:

1. libgcc changes with each compiler release. glibc is loving libgcc. Ergo:
   glibc has not maintained the soname and binary compatibility.

2. The linker tricks glibc plays to accomplish this
   are not portable and not applicable to C++ code.

3. Threads are the death of glibc backward compatibility.


Re: Sine and Cosine Accuracy

2005-05-26 Thread Marcin Dalecki


On 2005-05-26, at 21:34, Scott Robert Ladd wrote:


For many practical problems, the world can be considered flat. And I do
plenty of spherical geometry (GPS navigation) without requiring the sin
of 2**90. ;)


Yes right. I guess your second name is ignorance.


Re: Sine and Cosine Accuracy

2005-05-26 Thread Marcin Dalecki


On 2005-05-27, at 00:00, Gabriel Dos Reis wrote:

Yeah, the problem with people who work only with angles is that they
tend to forget that sin (and friends) are defined as functions on
*numbers*,



The problem with people who work only with angles is that they are  
without sin.




Re: Sine and Cosine Accuracy

2005-05-26 Thread Marcin Dalecki


On 2005-05-26, at 22:39, Gabriel Dos Reis wrote:


Scott Robert Ladd [EMAIL PROTECTED] writes:

| Richard Henderson wrote:
|  On Thu, May 26, 2005 at 10:34:14AM -0400, Scott Robert Ladd wrote:
| 
| static const double range = PI; // * 2.0;
| static const double incr  = PI / 100.0;
| 
| 
|  The trig insns fail with large numbers; an argument
|  reduction loop is required with their use.
|
| Yes, but within the defined mathematical ranges for sine and cosine --
| [0, 2 * PI) --

this is what they call post-modern maths?

[...]

| I've never quite understood the necessity for performing trig operations
| on excessively large values, but perhaps my problem domain hasn't
| included such applications.

The world is flat; I never quite understood the necessity of spherical
trigonometry.


I agree fully. And who was this Fourier anyway?


help, cvs screwed up

2005-05-26 Thread Mike Stump
I did a checkin using ../ in one of the files and cvs screwed up.
The ChangeLog file came out ok, but all the others were created
someplace else.  I'm thinking those ,v files should just be rm'ed off
the server... but I would rather someone else do that.  Thanks.


I was in gcc/testsuite/objc.dg at the time.

mrs $ cvs ci ../ChangeLog $f
Checking in ../ChangeLog;
/cvs/gcc/gcc/gcc/testsuite/ChangeLog,v  --  ChangeLog
new revision: 1.5540; previous revision: 1.5539
done
RCS file: /cvs/gcc/comp-types-8.m,v
done
Checking in comp-types-8.m;
/cvs/gcc/comp-types-8.m,v  --  comp-types-8.m
initial revision: 1.1
done
RCS file: /cvs/gcc/encode-6.m,v
done
Checking in encode-6.m;
/cvs/gcc/encode-6.m,v  --  encode-6.m
initial revision: 1.1
done
RCS file: /cvs/gcc/extra-semi.m,v
done
Checking in extra-semi.m;
/cvs/gcc/extra-semi.m,v  --  extra-semi.m
initial revision: 1.1
done
RCS file: /cvs/gcc/fix-and-continue-2.m,v
done
Checking in fix-and-continue-2.m;
/cvs/gcc/fix-and-continue-2.m,v  --  fix-and-continue-2.m
initial revision: 1.1
done
RCS file: /cvs/gcc/isa-field-1.m,v
done
Checking in isa-field-1.m;
/cvs/gcc/isa-field-1.m,v  --  isa-field-1.m
initial revision: 1.1
done
RCS file: /cvs/gcc/lookup-1.m,v
done
Checking in lookup-1.m;
/cvs/gcc/lookup-1.m,v  --  lookup-1.m
initial revision: 1.1
done
RCS file: /cvs/gcc/method-15.m,v
done
Checking in method-15.m;
/cvs/gcc/method-15.m,v  --  method-15.m
initial revision: 1.1
done
RCS file: /cvs/gcc/method-16.m,v
done
Checking in method-16.m;
/cvs/gcc/method-16.m,v  --  method-16.m
initial revision: 1.1
done
RCS file: /cvs/gcc/method-17.m,v
done
Checking in method-17.m;
/cvs/gcc/method-17.m,v  --  method-17.m
initial revision: 1.1
done
RCS file: /cvs/gcc/method-18.m,v
done
Checking in method-18.m;
/cvs/gcc/method-18.m,v  --  method-18.m
initial revision: 1.1
done
RCS file: /cvs/gcc/method-19.m,v
done
Checking in method-19.m;
/cvs/gcc/method-19.m,v  --  method-19.m
initial revision: 1.1
done
RCS file: /cvs/gcc/next-runtime-1.m,v
done
Checking in next-runtime-1.m;
/cvs/gcc/next-runtime-1.m,v  --  next-runtime-1.m
initial revision: 1.1
done
RCS file: /cvs/gcc/no-extra-load.m,v
done
Checking in no-extra-load.m;
/cvs/gcc/no-extra-load.m,v  --  no-extra-load.m
initial revision: 1.1
done
RCS file: /cvs/gcc/pragma-1.m,v
done
Checking in pragma-1.m;
/cvs/gcc/pragma-1.m,v  --  pragma-1.m
initial revision: 1.1
done
RCS file: /cvs/gcc/stubify-1.m,v
done
Checking in stubify-1.m;
/cvs/gcc/stubify-1.m,v  --  stubify-1.m
initial revision: 1.1
done
RCS file: /cvs/gcc/stubify-2.m,v
done
Checking in stubify-2.m;
/cvs/gcc/stubify-2.m,v  --  stubify-2.m
initial revision: 1.1
done
RCS file: /cvs/gcc/super-class-4.m,v
done
Checking in super-class-4.m;
/cvs/gcc/super-class-4.m,v  --  super-class-4.m
initial revision: 1.1
done
RCS file: /cvs/gcc/super-dealloc-1.m,v
done
Checking in super-dealloc-1.m;
/cvs/gcc/super-dealloc-1.m,v  --  super-dealloc-1.m
initial revision: 1.1
done
RCS file: /cvs/gcc/super-dealloc-2.m,v
done
Checking in super-dealloc-2.m;
/cvs/gcc/super-dealloc-2.m,v  --  super-dealloc-2.m
initial revision: 1.1
done
RCS file: /cvs/gcc/try-catch-6.m,v
done
Checking in try-catch-6.m;
/cvs/gcc/try-catch-6.m,v  --  try-catch-6.m
initial revision: 1.1
done
RCS file: /cvs/gcc/try-catch-7.m,v
done
Checking in try-catch-7.m;
/cvs/gcc/try-catch-7.m,v  --  try-catch-7.m
initial revision: 1.1
done
RCS file: /cvs/gcc/try-catch-8.m,v
done
Checking in try-catch-8.m;
/cvs/gcc/try-catch-8.m,v  --  try-catch-8.m
initial revision: 1.1
done
mrs $ echo $f
comp-types-8.m encode-6.m extra-semi.m fix-and-continue-2.m isa-field-1.m
lookup-1.m method-15.m method-16.m method-17.m method-18.m method-19.m
next-runtime-1.m no-extra-load.m pragma-1.m stubify-1.m stubify-2.m
super-class-4.m super-dealloc-1.m super-dealloc-2.m try-catch-6.m
try-catch-7.m try-catch-8.m




Re: help, cvs screwed up

2005-05-26 Thread Ian Lance Taylor
Mike Stump [EMAIL PROTECTED] writes:

 I did a checkin using ../ in one of the files and cvs screwed up.
 The ChangeLog file came out ok, but, all the others were created
 someplace else.  I'm thinking those ,v files should just be rmed off
 the server...  but, would rather someone else do that.  Thanks.

I have removed these files from the server.

Ian


Re: help, cvs screwed up

2005-05-26 Thread Mike Stump

On May 26, 2005, at 8:47 PM, Ian Lance Taylor wrote:

I have removed these files from the server.


Much thanks.



[Bug java/9861] method name mangling ignores return type

2005-05-26 Thread rmathew at gcc dot gnu dot org

--- Additional Comments From rmathew at gcc dot gnu dot org  2005-05-26 
06:08 ---
Some useful tips can be found here:

  http://gcc.gnu.org/ml/java-patches/2005-q2/msg00558.html

-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9861


[Bug middle-end/21709] [3.4 regression] ICE on compile-time complex NaN

2005-05-26 Thread roger at eyesopen dot com

--- Additional Comments From roger at eyesopen dot com  2005-05-26 06:11 
---
This should now be fixed on the gcc-3_4-branch (and the same patch has been
applied to mainline to prevent this ever causing problems in future).


-- 
   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution||FIXED


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21709


[Bug fortran/17283] UNPACK issues

2005-05-26 Thread cvs-commit at gcc dot gnu dot org

--- Additional Comments From cvs-commit at gcc dot gnu dot org  2005-05-26 
06:26 ---
Subject: Bug 17283

CVSROOT:/cvs/gcc
Module name:gcc
Changes by: [EMAIL PROTECTED]   2005-05-26 06:26:18

Modified files:
libgfortran: ChangeLog 
libgfortran/intrinsics: unpack_generic.c 
gcc/testsuite  : ChangeLog 
gcc/testsuite/gfortran.fortran-torture/execute: 
intrinsic_unpack.f90 

Log message:
2005-05-26  Thomas Koenig  [EMAIL PROTECTED]

PR libfortran/17283
* gfortran.fortran-torture/execute/intrinsic_unpack.f90:
Test callee-allocated memory with write statements.

2005-05-26  Thomas Koenig  [EMAIL PROTECTED]

PR libfortran/17283
* intrinsics/unpack_generic.c:  Fix name of routine
on top.  Update copyright years.
(unpack1):  Remove const from return array descriptor.
rs:  New variable, for calculating return sizes.
Populate return array descriptor if ret->data is NULL.

Patches:
http://gcc.gnu.org/cgi-bin/cvsweb.cgi/gcc/libgfortran/ChangeLog.diff?cvsroot=gcc&r1=1.228&r2=1.229
http://gcc.gnu.org/cgi-bin/cvsweb.cgi/gcc/libgfortran/intrinsics/unpack_generic.c.diff?cvsroot=gcc&r1=1.6&r2=1.7
http://gcc.gnu.org/cgi-bin/cvsweb.cgi/gcc/gcc/testsuite/ChangeLog.diff?cvsroot=gcc&r1=1.5529&r2=1.5530
http://gcc.gnu.org/cgi-bin/cvsweb.cgi/gcc/gcc/testsuite/gfortran.fortran-torture/execute/intrinsic_unpack.f90.diff?cvsroot=gcc&r1=1.2&r2=1.3



-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17283


[Bug c++/21763] New: Fails to find inherited protected member

2005-05-26 Thread igodard at pacbell dot net
This does not appear to be the usual problem where an ambiguous name is 
reported 
as undefined. The protected data member is unambiguous after being publicly 
inherited, but is still reported as undefined.

-- 
   Summary: Fails to find inherited protected member
   Product: gcc
   Version: 3.4.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P2
 Component: c++
AssignedTo: unassigned at gcc dot gnu dot org
ReportedBy: igodard at pacbell dot net
CC: gcc-bugs at gcc dot gnu dot org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21763


[Bug c++/21763] Fails to find inherited protected member

2005-05-26 Thread igodard at pacbell dot net

--- Additional Comments From igodard at pacbell dot net  2005-05-26 06:27 
---
Created an attachment (id=8969)
 --> (http://gcc.gnu.org/bugzilla/attachment.cgi?id=8969&action=view)
compiler output


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21763


[Bug c++/21763] Fails to find inherited protected member

2005-05-26 Thread igodard at pacbell dot net

--- Additional Comments From igodard at pacbell dot net  2005-05-26 06:28 
---
Created an attachment (id=8970)
 --> (http://gcc.gnu.org/bugzilla/attachment.cgi?id=8970&action=view)
source code (compressed)


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21763


[Bug fortran/17283] UNPACK issues

2005-05-26 Thread cvs-commit at gcc dot gnu dot org

--- Additional Comments From cvs-commit at gcc dot gnu dot org  2005-05-26 
06:40 ---
Subject: Bug 17283

CVSROOT:/cvs/gcc
Module name:gcc
Branch: gcc-4_0-branch
Changes by: [EMAIL PROTECTED]   2005-05-26 06:40:42

Modified files:
libgfortran: ChangeLog 
libgfortran/intrinsics: unpack_generic.c 
gcc/testsuite  : ChangeLog 
gcc/testsuite/gfortran.fortran-torture/execute: 
intrinsic_unpack.f90 

Log message:
2005-05-26  Thomas Koenig  [EMAIL PROTECTED]

PR libfortran/17283
* gfortran.fortran-torture/execute/intrinsic_unpack.f90:
Test callee-allocated memory with write statements.

2005-05-26  Thomas Koenig  [EMAIL PROTECTED]

PR libfortran/17283
* intrinsics/unpack_generic.c:  Fix name of routine
on top.  Update copyright years.
(unpack1):  Remove const from return array descriptor.
rs:  New variable, for calculating return sizes.
Populate return array descriptor if ret->data is NULL.

Patches:
http://gcc.gnu.org/cgi-bin/cvsweb.cgi/gcc/libgfortran/ChangeLog.diff?cvsroot=gcc&only_with_tag=gcc-4_0-branch&r1=1.163.2.41&r2=1.163.2.42
http://gcc.gnu.org/cgi-bin/cvsweb.cgi/gcc/libgfortran/intrinsics/unpack_generic.c.diff?cvsroot=gcc&only_with_tag=gcc-4_0-branch&r1=1.6&r2=1.6.12.1
http://gcc.gnu.org/cgi-bin/cvsweb.cgi/gcc/gcc/testsuite/ChangeLog.diff?cvsroot=gcc&only_with_tag=gcc-4_0-branch&r1=1.5084.2.198&r2=1.5084.2.199
http://gcc.gnu.org/cgi-bin/cvsweb.cgi/gcc/gcc/testsuite/gfortran.fortran-torture/execute/intrinsic_unpack.f90.diff?cvsroot=gcc&only_with_tag=gcc-4_0-branch&r1=1.2&r2=1.2.46.1



-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17283


[Bug target/21761] [4.1 Regression] mainline gcc causing internal compiler error.

2005-05-26 Thread geoffk at gcc dot gnu dot org

--- Additional Comments From geoffk at gcc dot gnu dot org  2005-05-26 
06:50 ---
This should fix it:

*** rs6000.md.~1.367.~  Sat May 14 22:06:45 2005
--- rs6000.md   Wed May 25 23:48:56 2005
***************
*** 1672,1678 ****
 (const_int 0)))
  (set (match_operand:P 0 "gpc_reg_operand" "")
 (neg:P (match_dup 1)))]
!   "TARGET_32BIT && reload_completed"
 [(set (match_dup 0)
 (neg:P (match_dup 1)))
  (set (match_dup 2)
--- 1672,1678 ----
 (const_int 0)))
  (set (match_operand:P 0 "gpc_reg_operand" "")
 (neg:P (match_dup 1)))]
!   "reload_completed"
 [(set (match_dup 0)
 (neg:P (match_dup 1)))
  (set (match_dup 2)

-- 
   What|Removed |Added

 AssignedTo|unassigned at gcc dot gnu   |geoffk at gcc dot gnu dot
   |dot org |org
 Status|NEW |ASSIGNED
   Last reconfirmed|2005-05-26 01:15:24 |2005-05-26 06:50:58
   date||


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21761


[Bug c++/21764] New: visibility attributes on namespace scope

2005-05-26 Thread bkoz at gcc dot gnu dot org
As per the commentary in 

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19664.

What is desired is to add the visibility attributes to namespace scope, in
addition to class scope.

So, things like:

namespace std __attribute__ ((visibility ("default") ));
{
  class foo { ... };
}

when compiled with -fvisibility=hidden would hide all foo symbols, including
ctor, dtor, etc.
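
For comparison, a sketch (not from this report) of the per-declaration markup
that is already possible and that a namespace-scope attribute would
abbreviate.  The macro, namespace and function names are hypothetical; only
the existing visibility attribute on declarations and the
-fvisibility=hidden option are assumed.

#define MYLIB_EXPORT __attribute__ ((visibility ("default")))

namespace mylib                       /* hypothetical library namespace */
{
  class foo
  {
  public:
    MYLIB_EXPORT void run ();         /* stays exported under -fvisibility=hidden */
    void helper ();                   /* becomes hidden */
  };

  MYLIB_EXPORT int api_entry (int);   /* exported free function */
}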

-- 
   Summary: visibility attributes on namespace scope
   Product: gcc
   Version: 4.0.0
Status: UNCONFIRMED
  Severity: enhancement
  Priority: P2
 Component: c++
AssignedTo: unassigned at gcc dot gnu dot org
ReportedBy: bkoz at gcc dot gnu dot org
CC: gcc-bugs at gcc dot gnu dot org
 GCC build triplet: i686-pc-linux-gnu
  GCC host triplet: i686-pc-linux-gnu
GCC target triplet: i686-pc-linux-gnu


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21764


[Bug java/19870] gcj -C doesn't generate accessors for private members across nested class boundaries

2005-05-26 Thread rmathew at gcc dot gnu dot org

--- Additional Comments From rmathew at gcc dot gnu dot org  2005-05-26 
07:31 ---
I have now submitted a patch for fixing this bug:

  http://gcc.gnu.org/ml/java-patches/2005-q2/msg00570.html

-- 
   What|Removed |Added

   Keywords||patch


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19870


[Bug target/21716] [3.4/4.0/4.1 Regression] ICE in reg-stack.c's swap_rtx_condition

2005-05-26 Thread cvs-commit at gcc dot gnu dot org

--- Additional Comments From cvs-commit at gcc dot gnu dot org  2005-05-26 
08:07 ---
Subject: Bug 21716

CVSROOT:/cvs/gcc
Module name:gcc
Changes by: [EMAIL PROTECTED]   2005-05-26 08:07:36

Modified files:
gcc: ChangeLog reg-stack.c 

Log message:
PR target/21716
* reg-stack.c (swap_rtx_condition): Don't crash if %ax user was not
found in the basic block and last insn in the basic block is not
INSN_P.  Remove explicit unspec numbers that are no longer valid
from comments.

Patches:
http://gcc.gnu.org/cgi-bin/cvsweb.cgi/gcc/gcc/ChangeLog.diff?cvsroot=gccr1=2.8907r2=2.8908
http://gcc.gnu.org/cgi-bin/cvsweb.cgi/gcc/gcc/reg-stack.c.diff?cvsroot=gccr1=1.177r2=1.178



-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21716


[Bug tree-optimization/21765] New: -ftree-vrp is undocumented.

2005-05-26 Thread kazu at cs dot umass dot edu
 

-- 
   Summary: -ftree-vrp is undocumented.
   Product: gcc
   Version: unknown
Status: UNCONFIRMED
  Keywords: documentation
  Severity: normal
  Priority: P2
 Component: tree-optimization
AssignedTo: unassigned at gcc dot gnu dot org
ReportedBy: kazu at cs dot umass dot edu
CC: gcc-bugs at gcc dot gnu dot org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21765


[Bug fortran/21730] Character length incorrect.

2005-05-26 Thread fxcoudert at gcc dot gnu dot org

--- Additional Comments From fxcoudert at gcc dot gnu dot org  2005-05-26 
08:31 ---
Confirmed. Parse tree output for slightly modified testcase:

$ cat a.f
  character*2 a
  character*4 b
  character*4 c
  parameter(a="12")
  parameter (b = a)
  c = a
  write (*, '("#",A,"#")') b
  write (*, '("#",A,"#")') c
  end
$ gfortran a.f -fdump-parse-tree

Namespace: A-H: (REAL 4) I-N: (INTEGER 4) O-Z: (REAL 4)
symtree: b  Ambig 0
symbol b (CHARACTER 4)(PARAMETER UNKNOWN-INTENT UNKNOWN-ACCESS 
UNKNOWN-PROC)
value: '12'

symtree: a  Ambig 0
symbol a (CHARACTER 2)(PARAMETER UNKNOWN-INTENT UNKNOWN-ACCESS 
UNKNOWN-PROC)
value: '12'

symtree: c  Ambig 0
symbol c (CHARACTER 4)(VARIABLE UNKNOWN-INTENT UNKNOWN-ACCESS 
UNKNOWN-PROC)


  ASSIGN c '12'
  WRITE UNIT=6 FMT='("#",A,"#")'
  TRANSFER '12'
  DT_END
  WRITE UNIT=6 FMT='("#",A,"#")'
  TRANSFER c
  DT_END

$ ./a.out 
#12#
#12  #


-- 
   What|Removed |Added

 Status|UNCONFIRMED |NEW
 Ever Confirmed||1
   Last reconfirmed|-00-00 00:00:00 |2005-05-26 08:31:17
   date||


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21730


[Bug target/21716] [3.4/4.0/4.1 Regression] ICE in reg-stack.c's swap_rtx_condition

2005-05-26 Thread cvs-commit at gcc dot gnu dot org

--- Additional Comments From cvs-commit at gcc dot gnu dot org  2005-05-26 
08:55 ---
Subject: Bug 21716

CVSROOT:/cvs/gcc
Module name:gcc
Branch: gcc-4_0-branch
Changes by: [EMAIL PROTECTED]   2005-05-26 08:55:07

Modified files:
gcc: ChangeLog reg-stack.c 

Log message:
PR target/21716
* reg-stack.c (swap_rtx_condition): Don't crash if %ax user was not
found in the basic block and last insn in the basic block is not
INSN_P.  Remove explicit unspec numbers that are no longer valid
from comments.

Patches:
http://gcc.gnu.org/cgi-bin/cvsweb.cgi/gcc/gcc/ChangeLog.diff?cvsroot=gcconly_with_tag=gcc-4_0-branchr1=2.7592.2.263r2=2.7592.2.264
http://gcc.gnu.org/cgi-bin/cvsweb.cgi/gcc/gcc/reg-stack.c.diff?cvsroot=gcconly_with_tag=gcc-4_0-branchr1=1.171r2=1.171.10.1



-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21716


[Bug target/21716] [3.4/4.0/4.1 Regression] ICE in reg-stack.c's swap_rtx_condition

2005-05-26 Thread cvs-commit at gcc dot gnu dot org

--- Additional Comments From cvs-commit at gcc dot gnu dot org  2005-05-26 
09:05 ---
Subject: Bug 21716

CVSROOT:/cvs/gcc
Module name:gcc
Branch: gcc-3_4-branch
Changes by: [EMAIL PROTECTED]   2005-05-26 09:05:05

Modified files:
gcc: ChangeLog reg-stack.c 

Log message:
PR target/21716
* reg-stack.c (swap_rtx_condition): Don't crash if %ax user was not
found in the basic block and last insn in the basic block is not
INSN_P.  Remove explicit unspec numbers that are no longer valid
from comments.

Patches:
http://gcc.gnu.org/cgi-bin/cvsweb.cgi/gcc/gcc/ChangeLog.diff?cvsroot=gcconly_with_tag=gcc-3_4-branchr1=2.2326.2.871r2=2.2326.2.872
http://gcc.gnu.org/cgi-bin/cvsweb.cgi/gcc/gcc/reg-stack.c.diff?cvsroot=gcconly_with_tag=gcc-3_4-branchr1=1.140.4.2r2=1.140.4.3



-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21716


[Bug fortran/17283] UNPACK issues

2005-05-26 Thread tkoenig at gcc dot gnu dot org

--- Additional Comments From tkoenig at gcc dot gnu dot org  2005-05-26 
09:41 ---
A scalar mask is invalid for unpack, so the error message
is correct.

The memory allocation issue has been fixed for 4.0 and mainline.

Closing this bug.

-- 
   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution||FIXED


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17283


[Bug c++/21764] visibility attributes on namespace scope

2005-05-26 Thread giovannibajo at libero dot it

--- Additional Comments From giovannibajo at libero dot it  2005-05-26 
10:18 ---
Please, explicitly specify how you would like this to work with nested 
namespaces.

Also, whoever implements this should also make sure it works correctly with 
namespaces:

namespace N __attribute__((visibility ("hidden")))
{
   template <class T>
   struct B
   {
     B() {}
     ~B() {}
     void foo(void) {}
   };

   template <>
   struct B<int>;
}

template <>
struct ::N::B<int>
{
   void bar(void) {}   // still hidden!
}


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21764


[Bug tree-optimization/21639] poisoned ggc memory used for -ftree-vectorize

2005-05-26 Thread dorit at il dot ibm dot com

--- Additional Comments From dorit at il dot ibm dot com  2005-05-26 12:02 
---
patch: http://gcc.gnu.org/ml/gcc-patches/2005-05/msg02477.html



-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21639


[Bug target/21716] [3.4/4.0/4.1 Regression] ICE in reg-stack.c's swap_rtx_condition

2005-05-26 Thread pinskia at gcc dot gnu dot org

--- Additional Comments From pinskia at gcc dot gnu dot org  2005-05-26 
12:05 ---
Fixed.

-- 
   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution||FIXED
   Target Milestone|--- |3.4.5


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21716


[Bug tree-optimization/21639] [4.1 Regression] poisoned ggc memory used for -ftree-vectorize

2005-05-26 Thread pinskia at gcc dot gnu dot org


-- 
   What|Removed |Added

URL||http://gcc.gnu.org/ml/gcc-
   ||patches/2005-
   ||05/msg02477.html
   Keywords||ice-on-valid-code, patch
Summary|poisoned ggc memory used for|[4.1 Regression] poisoned
   |-ftree-vectorize|ggc memory used for -ftree-
   ||vectorize
   Target Milestone|--- |4.1.0


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21639


[Bug tree-optimization/21765] -ftree-vrp is undocumented.

2005-05-26 Thread pinskia at gcc dot gnu dot org

--- Additional Comments From pinskia at gcc dot gnu dot org  2005-05-26 
12:13 ---
Confirmed.

-- 
   What|Removed |Added

 CC||pinskia at gcc dot gnu dot
   ||org
 Status|UNCONFIRMED |NEW
 Ever Confirmed||1
   Last reconfirmed|-00-00 00:00:00 |2005-05-26 12:13:53
   date||


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21765


[Bug c++/21762] [4.0/4.1 Regression] void return with pointer to member function to undefined class

2005-05-26 Thread pinskia at gcc dot gnu dot org

--- Additional Comments From pinskia at gcc dot gnu dot org  2005-05-26 
12:18 ---
Confirmed, very much related to PR 21614.  Changing the undefined class to a 
defined class makes the 
code work.

-- 
   What|Removed |Added

 CC||pinskia at gcc dot gnu dot
   ||org
  BugsThisDependsOn||21614
 Status|UNCONFIRMED |NEW
 Ever Confirmed||1
   Keywords||rejects-valid
   Last reconfirmed|-00-00 00:00:00 |2005-05-26 12:18:23
   date||
Summary|Regression: Void functions  |[4.0/4.1 Regression] void
   |can't return invocation of  |return with pointer to
   |pointers to void member |member function to undefined
   |functions   |class
   Target Milestone|--- |4.0.1


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21762


[Bug c++/21763] Fails to find inherited protected member

2005-05-26 Thread pinskia at gcc dot gnu dot org


-- 
   What|Removed |Added

   Attachment #8969|application/octet-stream|text/plain
  mime type||


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21763


[Bug c++/21763] Fails to find inherited protected member

2005-05-26 Thread pinskia at gcc dot gnu dot org

--- Additional Comments From pinskia at gcc dot gnu dot org  2005-05-26 
12:23 ---
Please read the changes page for 3.4.0 about dependent names.

-- 
   What|Removed |Added

 Status|UNCONFIRMED |RESOLVED
 Resolution||INVALID


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21763


[Bug libstdc++/20150] allocate(0) consistency checks

2005-05-26 Thread pinskia at gcc dot gnu dot org


-- 
   What|Removed |Added

 Status|UNCONFIRMED |NEW
 Ever Confirmed||1
   Last reconfirmed|-00-00 00:00:00 |2005-05-26 12:36:30
   date||


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=20150


[Bug middle-end/21706] MAXPATHLEN usage in [gcc]/gcc/tlink.c

2005-05-26 Thread ams at gnu dot org

--- Additional Comments From ams at gnu dot org  2005-05-26 12:59 ---
(In reply to comment #3)
 Like most POSIX limits PATH_MAX may not be defined if the actual limit is not 
  
 fixed.  

Correct, and GNU doesn't have such a limit for the length of filenames, the
number of arguments passed to a program or the length of a hostname.  And
probably a whole bunch of other things that have slipped my mind right now.
All of this is perfectly compliant with POSIX.
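
As an aside, a minimal sketch, not tlink.c's actual code, of the usual
portable pattern when PATH_MAX may be undefined: grow a heap buffer instead
of declaring a fixed-size array (getcwd() is used here purely as an example).

#include <errno.h>
#include <limits.h>
#include <stdlib.h>
#include <unistd.h>

static char *
current_dir_name (void)
{
#ifdef PATH_MAX
  size_t len = PATH_MAX;              /* fine where the limit exists */
#else
  size_t len = 1024;                  /* arbitrary starting guess */
#endif
  char *buf = (char *) malloc (len);

  while (buf && !getcwd (buf, len))
    {
      if (errno != ERANGE)            /* a real error, not "buffer too small" */
        {
          free (buf);
          return NULL;
        }
      free (buf);
      len *= 2;                       /* double the buffer and retry */
      buf = (char *) malloc (len);
    }
  return buf;
}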


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21706


[Bug bootstrap/21766] New: bootstrap failure

2005-05-26 Thread dims at yahoo dot com
/cygdrive/c/sf/gcc-build/./gcc/xgcc -B/cygdrive/c/sf/gcc-build/./gcc/
-B/usr/local/i686-pc-cygwin/bin/ -B/usr/local/i686-pc-cygwin/lib/ -isystem
/usr/local/i686-pc-cygwin/include -isystem /usr/local/i686-pc-cygwin/sys-include
-c -DHAVE_CONFIG_H -O2 -g -O2  -I. -I../../../gcc/libiberty/../include  -W -Wall
-pedantic -Wwrite-strings -Wstrict-prototypes ../../../gcc/libiberty/regex.c -o
regex.o
../../../gcc/libiberty/regex.c:128: warning: function declaration isn't a 
prototype
../../../gcc/libiberty/regex.c:128: warning: conflicting types for built-in
function 'malloc'
../../../gcc/libiberty/regex.c:129: warning: function declaration isn't a 
prototype
In file included from ../../../gcc/libiberty/../include/xregex.h:26,
 from ../../../gcc/libiberty/regex.c:191:
../../../gcc/libiberty/../include/xregex2.h:538: warning: ISO C90 does not
support 'static' or type qualifiers in parameter array declarators
In file included from ../../../gcc/libiberty/regex.c:636:
../../../gcc/libiberty/regex.c: In function 'byte_regex_compile':
../../../gcc/libiberty/regex.c:2437: warning: implicit declaration of function
'free'
../../../gcc/libiberty/regex.c: In function 'byte_compile_range':
../../../gcc/libiberty/regex.c:4485: warning: signed and unsigned type in
conditional expression
../../../gcc/libiberty/regex.c:4495: warning: signed and unsigned type in
conditional expression
../../../gcc/libiberty/regex.c:4495: warning: signed and unsigned type in
conditional expression
../../../gcc/libiberty/regex.c: In function 'byte_re_compile_fastmap':
../../../gcc/libiberty/regex.c:4833: warning: implicit declaration of function
'abort'
../../../gcc/libiberty/regex.c:4833: warning: incompatible implicit declaration
of built-in function 'abort'
../../../gcc/libiberty/regex.c: In function 'byte_re_match_2_internal':
../../../gcc/libiberty/regex.c:7419: warning: incompatible implicit declaration
of built-in function 'abort'
../../../gcc/libiberty/regex.c: In function 'xre_comp':
../../../gcc/libiberty/regex.c:7817: warning: return discards qualifiers from
pointer target type
../../../gcc/libiberty/regex.c: In function 'xregerror':
../../../gcc/libiberty/regex.c:8076: warning: incompatible implicit declaration
of built-in function 'abort'
../../../gcc/libiberty/regex.c: In function 'byte_regex_compile':
../../../gcc/libiberty/regex.c:2283: error: invariant not recomputed when
ADDR_EXPR changed
_ctype_D.1871[1];
 
../../../gcc/libiberty/regex.c:2283: error: invariant not recomputed when
ADDR_EXPR changed
_ctype_D.1871[1];
 
../../../gcc/libiberty/regex.c:2283: error: invariant not recomputed when
ADDR_EXPR changed
_ctype_D.1871[1];
 
../../../gcc/libiberty/regex.c:2283: error: invariant not recomputed when
ADDR_EXPR changed
_ctype_D.1871[1];
 
../../../gcc/libiberty/regex.c:2283: error: invariant not recomputed when
ADDR_EXPR changed
_ctype_D.1871[1];
 
../../../gcc/libiberty/regex.c:2283: error: invariant not recomputed when
ADDR_EXPR changed
_ctype_D.1871[1];
 
../../../gcc/libiberty/regex.c:2283: error: invariant not recomputed when
ADDR_EXPR changed
_ctype_D.1871[1];
 
../../../gcc/libiberty/regex.c:2283: error: invariant not recomputed when
ADDR_EXPR changed
_ctype_D.1871[1];
 
../../../gcc/libiberty/regex.c:2283: error: invariant not recomputed when
ADDR_EXPR changed
_ctype_D.1871[1];
 
../../../gcc/libiberty/regex.c:2283: error: invariant not recomputed when
ADDR_EXPR changed
_ctype_D.1871[1];
 
../../../gcc/libiberty/regex.c:2283: error: invariant not recomputed when
ADDR_EXPR changed
_ctype_D.1871[1];
 
../../../gcc/libiberty/regex.c:2283: error: invariant not recomputed when
ADDR_EXPR changed
_ctype_D.1871[1];
 
../../../gcc/libiberty/regex.c:2283: internal compiler error: verify_stmts 
failed.
Please submit a full bug report,
with preprocessed source if appropriate.
See URL:http://gcc.gnu.org/bugs.html for instructions.
make[2]: *** [regex.o] Error 1
make[2]: Leaving directory `/cygdrive/c/sf/gcc-build/i686-pc-cygwin/libiberty'
make[1]: *** [all-target-libiberty] Error 2
make[1]: Leaving directory `/cygdrive/c/sf/gcc-build'
make: *** [bootstrap] Error 2

-- 
   Summary: bootstrap failure
   Product: gcc
   Version: unknown
Status: UNCONFIRMED
  Severity: normal
  Priority: P1
 Component: bootstrap
AssignedTo: unassigned at gcc dot gnu dot org
ReportedBy: dims at yahoo dot com
CC: gcc-bugs at gcc dot gnu dot org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21766


[Bug bootstrap/21766] bootstrap failure

2005-05-26 Thread dims at yahoo dot com

--- Additional Comments From dims at yahoo dot com  2005-05-26 13:54 ---
Environment: latest cygwin on winxp.

-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21766


[Bug middle-end/21766] [4.1 Regression] Bootstrap failure on i686-pc-cygwin

2005-05-26 Thread pinskia at gcc dot gnu dot org


-- 
   What|Removed |Added

  Component|bootstrap   |middle-end
 GCC target triplet||i686-pc-cygwin
   Keywords||build, ice-on-valid-code
Summary|bootstrap failure   |[4.1 Regression] Bootstrap
   ||failure on i686-pc-cygwin
   Target Milestone|--- |4.1.0


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21766


[Bug middle-end/21766] [4.1 Regression] Bootstrap failure on i686-pc-cygwin

2005-05-26 Thread pinskia at gcc dot gnu dot org


-- 
   What|Removed |Added

 CC||pinskia at gcc dot gnu dot
   ||org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21766


[Bug rtl-optimization/20070] If-conversion can't match equivalent code, and cross-jumping only works for literal matches

2005-05-26 Thread amylaar at gcc dot gnu dot org


-- 
   What|Removed |Added

  BugsThisDependsOn||21767


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=20070


[Bug rtl-optimization/21767] New: if-convert leaves invalid REG_EQUAL notes

2005-05-26 Thread amylaar at gcc dot gnu dot org
if-convert sometimes moves instructions from after to before a conditional
jump.  Some REG_EQUAL notes are no longer true after this transformation, yet
they are not removed.

-- 
   Summary: if-convert leaves invalid REG_EQUAL notes
   Product: gcc
   Version: 3.4.3
Status: UNCONFIRMED
  Keywords: wrong-code
  Severity: normal
  Priority: P2
 Component: rtl-optimization
AssignedTo: unassigned at gcc dot gnu dot org
ReportedBy: amylaar at gcc dot gnu dot org
CC: gcc-bugs at gcc dot gnu dot org
OtherBugsDependingO 20070
 nThis:


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21767


[Bug middle-end/20297] #pragma GCC visibility isn't properly handled for builtin functions

2005-05-26 Thread pluto at agmk dot net

--- Additional Comments From pluto at agmk dot net  2005-05-26 14:40 ---
(In reply to comment #4) 
 A patch is posted at 
  
 http://gcc.gnu.org/ml/gcc-patches/2005-03/msg00248.html 
  
 FYI, gcc 3.4 from RH does include this pragma. 
 
this patch ICEs the gcc-4.1-20050522 bootstrap. 
 
../../gcc/libgcc2.c: In function '__absvsi2':  
../../gcc/libgcc2.c:215: internal compiler error: tree check: expected class 
'declaration', have 'expression' (call_expr) in expand_builtin, at  
builtins.c:6260  
 

-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=20297

