Re: S/390 as GCC 4.3 secondary platform?

2006-10-09 Thread Wolfgang Gellerich
Hello Everyone,

> In the criteria for primary platforms I've read that primary platforms
> have to be "popular systems". Reading this as "widely used" I think that
> this will be a requirement which mainframes are unlikely to meet in the
> near future, so I propose to make s390 and s390x secondary platforms for
> now. I think this can be important to show users that gcc works reliably
> on S/390 and that it can be expected to do so in the future as well.

I agree and would like to add that with respect to the s390 platform one
should consider that "popular" and "widely used" cannot have the same
meaning as, for example, in the context of computers for personal use.

The s390 back end not only compiles Linux-related software
on IBM System z but also the system's firmware (the software layer between
operating system and hardware). So, every System z machine runs code
generated by gcc, even if the system does not yet run Linux.

Regards, Wolfgang



Re: automatic --disable-multilib

2006-10-09 Thread Peter O'Gorman


On Oct 9, 2006, at 2:24 PM, Geoffrey Keating wrote:


Jack Howarth <[EMAIL PROTECTED]> writes:


   Shouldn't configure in gcc be made to
automatically test if -m64 is working on
the build machine in question and automatically
invoke --disable-multilib if not? Currently
on Darwin for example we have to explicitly
invoke --disable-multilib when building on
G4's or non-EMT64 Macintel machines. It would
be much better if configure would automatically
handle this.


I believe trying to disable the multilib is fundamentally the wrong
approach.  I have posted a patch which I believe is the correct
approach to the automake list, where it is awaiting review.


You might want to ping it, it's been awaiting review for a while.

Peter


Re: S/390 as GCC 4.3 secondary platform?

2006-10-09 Thread Robert Dewar

In the criteria for primary platforms I've read that primary platforms
have to be "popular systems". Reading this as "widely used" I think that
this will be a requirement which mainframes are unlikely to meet in the
near future, so I propose to make s390 and s390x secondary platforms for
now. I think this can be important to show users that gcc works reliably
on S/390 and that it can be expected to do so in the future as well.


But how do you measure "popular system" or "widely used"? Number
installed?  Revenue from sales? Number of applications? Number of
companies using the products?


I find that people often have VERY peculiar notions
of the mainframe business, and indeed I have met many who seem to
think that mainframes have disappeared! Most extraordinary. The
S/390 is still a very important system by any reasonable criterion,
and likely to be for a while (when you call an airline, you are
not talking to an application running on a Dell PC :-))

I would think it perfectly reasonable for the S/390 to be
considered a primary platform on the popularity basis, but
of course it has to have a level of support that is
consistent with this. What *is* happening is that fewer
and fewer people are aware of this technology, so free
software support is what is likely to be available in
the near future.

For a small window into the real situation, see

http://www.forbes.com/2005/12/06/ibm-earnings-mainframes-1206markets05.html

IBM is expecting strong revenue growth from the introduction
of the new mainframe series in September.


Re: automatic --disable-multilib

2006-10-09 Thread Jack Howarth
Geoff,
Can you point me to the proposed patch in the gcc-patches
mailing list archives? I can't seem to find it.
 Jack

On Sun, Oct 08, 2006 at 10:24:36PM -0700, Geoffrey Keating wrote:
> 
> I believe trying to disable the multilib is fundamentally the wrong
> approach.  I have posted a patch which I believe is the correct
> approach to the automake list, where it is awaiting review.


Re: S/390 as GCC 4.3 secondary platform?

2006-10-09 Thread Andreas Krebbel
Hi Robert,

On Mon, Oct 09, 2006 at 08:21:45AM -0400, Robert Dewar wrote:
> >In the criteria for primary platforms I've read that primary platforms
> >have to be "popular systems". Reading this as "widely used" I think that
> >this will be a requirement which mainframes are unlikely to meet in the
> >near future, so I propose to make s390 and s390x secondary platforms for
> >now. I think this can be important to show users that gcc works reliably
> >on S/390 and that it can be expected to do so in the future as well.
> 
> But how do you measure "popular system" or "widely used"? Number
> installed?  Revenue from sales? Number of applications? Number of
> companies using the products?

The release criteria for gcc 4.2 say: "Primary platforms are popular systems, 
both in the sense that there are many such systems in existence and in the 
sense that GCC is used frequently on those systems."
That's why I didn't propose to make S/390 a primary platform in the first
place.
But don't get me wrong: of course I would love to see this happen, and I'm sure
that our team here is able to provide the necessary support for S/390 as a
primary platform as well.
But first things first: I considered it preferable to start with the smaller
step towards a secondary platform.
The decision whether S/390 meets one or the other requirement for being a
primary or secondary platform is of course up to the steering committee, and I
really hope that they consider S/390 in one way or the other.

> I find that people are often likely to have VERY peculiar notions
> of the mainframe business, and indeed have met many who seem to
> think that mainframes have disappeared! Most extraordinary. The
> S/390 is still a very important system by any reasonable criterion,
> and likely to be for a while (when you call an airline, you are
> not talking to an application running on a Dell PC :-))

Not to mention all our bank accounts handled by S/390 systems; nobody
would like to see his money disappear due to a gcc bug miscompiling the
banking application ;-)

> I would think it perfectly reasonable for the S/390 to be
> considered a primary platform on the popularity basis, but
> of course it has to have a level of support that is
> consistent with this. What *is* happening is that fewer
> and fewer people are aware of this technology, so free
> software support is what is likely to be available in
> the near future.

Thank you very much for your support.  I'm looking forward to seeing how
the steering committee decides upon that.

Bye,

-Andreas-


Re: automatic --disable-multilib

2006-10-09 Thread Peter O'Gorman


On Oct 9, 2006, at 9:27 PM, Jack Howarth wrote:


Geoff,
Can you point me to the proposed patch in the gcc-patches
mailing list archives? I can't seem to find it.


http://lists.gnu.org/archive/html/automake-patches/2006-09/msg00027.html

It's automake-patches, not gcc-patches.

Peter


Re: automatic --disable-multilib

2006-10-09 Thread Jack Howarth
Peter,
Thanks. This problem was holding up the testing of the
libffi i386 Darwin patch because Sandro has a non-EMT64
MacBook Pro. He had to resort to --disable-multilib.
   Jack
P.S. If I understand this issue correctly, even if the automake
maintainers accepted the patch, wouldn't autoreconf have to
be run throughout the gcc source directories?

On Mon, Oct 09, 2006 at 10:20:03PM +0900, Peter O'Gorman wrote:
> 
> On Oct 9, 2006, at 9:27 PM, Jack Howarth wrote:
> 
> >Geoff,
> >Can you point me to the proposed patch in the gcc-patches
> >mailing list archives? I can't seem to find it.
> 
> http://lists.gnu.org/archive/html/automake-patches/2006-09/msg00027.html
> 
> It's automake-patches, not gcc-patches.
> 
> Peter


source location of a tree node

2006-10-09 Thread Basile STARYNKEVITCH

Dear All

(sorry for such a naive question, I am a beginner within GCC)

How does one get the source location (e.g. start and end filename,
line number, ...) of a tree node; for example, the source position of every
loop inside current_loops or of every function body inside cgraph_nodes?
For these nodes, doing EXPR_FILENAME(node->decl), EXPR_LINENO(node->decl)
does not work, probably because node->decl is a declaration, but
EXPR_FILENAME(DECL_SAVED_TREE(node->decl)) and
EXPR_LINENO(DECL_SAVED_TREE(node->decl)) don't work either...


I do understand that some of the tree (GIMPLE/SSA) nodes do not carry any
source positions with them. What is the policy on this? When transforming
trees, do we have to provide the source location of the transformed tree?


Apologies for such a naive question, but I tried to find out for one hour
without success!

Thanks for reading
-- 
Basile STARYNKEVITCH http://starynkevitch.net/Basile/ 
email: basilestarynkevitchnet 
aliases: basiletunesorg = bstarynknerimnet
8, rue de la Faïencerie, 92340 Bourg La Reine, France


Re: [RFC PATCH]: enable building GMP/MPFR in local tree

2006-10-09 Thread Mike Stump

On Oct 8, 2006, at 1:42 PM, Kaveh R. GHAZI wrote:
It turned out to be much easier than I thought to decipher the top level
machinery and get GMP/MPFR building inside the GCC tree. :-)


Some thoughts: if this configures and builds most (all?) of the time,
then we are changing the portability profile of gcc to be
min(gcc, mpfr, gmp), which could be < gcc.  If the user has installed a
newer gmp/mpfr on the system, do we want to use the build tree  
version anyway?  Can we rm -rf gmp/mpfr from the source tree to  
disable the building of these?  I suspect all the GMPLIBS and GMPINC  
stuff in the configure script is dead after this (with this version  
of the patch), though, it is probably better to leave it in there for  
now.  What is the change in build time?  Do we want to always build  
it, or only when some languages (fortran?) are configured?  And  
lastly, do we want to do this in stage 3?


Re: [RFC PATCH]: enable building GMP/MPFR in local tree

2006-10-09 Thread Joseph S. Myers
On Mon, 9 Oct 2006, Mike Stump wrote:

> On Oct 8, 2006, at 1:42 PM, Kaveh R. GHAZI wrote:
> > It turned out to be much easier than I thought to decipher the top level
> > machinery and get GMP/MPFR building inside the GCC tree. :-)
> 
> Some thoughts, if this configures and builds most (all?) of the time, then we
> are changing the portability profile of gcc to be min(gcc,mpfr,gmp) which
> could be < gcc.  If the user has installed a newer gmp/mpfr on the system, do
> we want to use the build tree version anyway?  Can we rm -rf gmp/mpfr from the
> source tree to disable the building of these?  I suspect all the GMPLIBS and
> GMPINC stuff in the configure script is dead after this (with this version of
> the patch), though, it is probably better to leave it in there for now.  What

Clearly --with-system-gmp --with-system-mpfr (like --with-system-zlib) 
would be the natural way to enable using the system copies of these 
libraries rather than building GCC's local copies.  I expect Linux 
distributors would use these options to link with the system shared 
libraries.  We do want to keep something like the existing --with-gmp 
--with-mpfr options as well to say where the system copies are, if they 
aren't in the default compiler / linker search paths.

> is the change in build time?  Do we want to always build it, or only when some
> languages (fortran?) are configured?  And lastly, do we want to do this in
> stage 3?

The patch is clearly something being proposed now for discussion and 
potential commit in stage 1 or 2.  While a proposal as a GCC 4.3 project 
would have been a good idea (in fact, it might still make sense to create 
a project page for it), it's not such a big
project as to require such a proposal.

-- 
Joseph S. Myers
[EMAIL PROTECTED]


RFC: "make pdf" target for documentation?

2006-10-09 Thread Brooks Moses
I would like to propose that a "make pdf" target be added to the GCC 
general makefile.


I did a search to see if there was any previous discussion on this, and 
what I found were a few messages from 1999 and 2001 that seemed to imply 
that it might be a good idea, and even included a partial patch, but the 
conversation apparently died without anything coming of it.


Personally, I find that .pdf files fill a similar niche to .dvi files,
but much more usefully.  Support for them on current computers is far
more widespread, and they're more portable.  They're also my preferred 
means for looking at this sort of thing on-screen, but I acknowledge 
that I'm weird that way and more people like html.  :)


In any case, some observations that I think are relevant to this proposal:

* Generating .pdf files is exactly like generating .dvi files, except 
that one uses "texi2pdf" instead of "texi2dvi".  Thus, the makefile 
additions should be quite straightforward.
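
For concreteness, here is a minimal sketch of what such a target might look
like. The rule names, variable names, and include paths below are illustrative
guesses, not the actual contents of gcc/Makefile.in:

```make
# Hypothetical sketch only -- a pdf rule mirroring the existing dvi rule.
# TEXI2PDF and the -I paths are assumptions for illustration.
TEXI2PDF = texi2pdf

doc/gcc.pdf: doc/gcc.texi
	$(TEXI2PDF) -I doc -I doc/include -o $@ $<

pdf: doc/gcc.pdf
.PHONY: pdf
```

The -I flags address the include-file pain point mentioned below: texi2pdf,
like texi2dvi, needs to be told where @include'd files live.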


* Making the .pdf files by hand with texi2pdf is a pain, because of 
include files in various directories.


* Using "make dvi" and then running dvipdf on the results is not a 
complete substitute.  When the .pdf file is made directly from the .texi 
file, it gets a table-of-contents menu of hyperlinks that shows up in 
Acrobat's "bookmarks" pane and is invaluable for quickly locating things 
within the document; also, all of the in-document hyperlinks (@uref, 
etc.) are proper links.  None of that happens with "make dvi" and dvipdf.


* Having a "make pdf" target makes it considerably easier for people 
like me who have a TeX installation but no .dvi viewer to run a 
TeX-based check of their documentation changes, thereby (I suspect) 
reducing the number of format-specific errors that creep in.


I would be willing to do the work of figuring out the relevant makefile 
changes and submitting patches, but before I do that, I'm curious as to 
whether or not such a change is actually likely to get approved. 
Comments?  Suggestions on how to go about doing this?


Thanks much,
- Brooks



Re: source location of a tree node

2006-10-09 Thread Ian Lance Taylor
Basile STARYNKEVITCH <[EMAIL PROTECTED]> writes:

> How does one get the source location (e.g. start and end filename,
> linenumber, ...) of a tree node; for example, the source position of every
> loop inside current_loops or of every function body inside cgraph_nodes?
> for these nodes, doing EXPR_FILENAME(node->decl), EXPR_LINENO(node->decl)
> does not work, probably because node->decl is a declaration, but also
> EXPR_FILENAME(DECL_SAVED_TREE(node->decl)),
> EXPR_LINENO(DECL_SAVED_TREE(node->decl))) don't work neither...

For a DECL, use DECL_SOURCE_{LOCATION,FILE,LINE}.

Ian


Re: source location of a tree node

2006-10-09 Thread Sebastian Pop
Basile STARYNKEVITCH wrote:
> 
> How does one get the source location (e.g. start and end filename,
> linenumber, ...) of a tree node; 

You can use the same code as in tree-vectorizer.h:

#ifdef USE_MAPPED_LOCATION
  typedef source_location LOC;
  #define UNKNOWN_LOC UNKNOWN_LOCATION
  #define EXPR_LOC(e) EXPR_LOCATION(e)
  #define LOC_FILE(l) LOCATION_FILE (l)
  #define LOC_LINE(l) LOCATION_LINE (l)
#else
  typedef source_locus LOC;
  #define UNKNOWN_LOC NULL
  #define EXPR_LOC(e) EXPR_LOCUS(e)
  #define LOC_FILE(l) (l)->file
  #define LOC_LINE(l) (l)->line
#endif

Sebastian


Re: S/390 as GCC 4.3 secondary platform?

2006-10-09 Thread Toon Moene

Robert Dewar wrote:


I would think it perfectly reasonable for the S/390 to be
considered a primary platform on the popularity basis


Another, technical, reason to consider the s390x to be a primary 
platform is that it is a different 64-bit big-endian target.


I always watch the test-result outcomes for gfortran of s390x closely - 
it's too easy to mess things up using little-endian.


--
Toon Moene - e-mail: [EMAIL PROTECTED] - phone: +31 346 214290
Saturnushof 14, 3738 XG  Maartensdijk, The Netherlands
A maintainer of GNU Fortran: http://gcc.gnu.org/fortran/
Who's working on GNU Fortran: 
http://gcc.gnu.org/ml/gcc/2006-01/msg0.html


Re: RFC: "make pdf" target for documentation?

2006-10-09 Thread Joseph S. Myers
On Mon, 9 Oct 2006, Brooks Moses wrote:

> I would like to propose that a "make pdf" target be added to the GCC general
> makefile.

I agree.  If you look at the current GNU Coding Standards you'll see a 
series of targets {,install-}{html,dvi,pdf,ps} and associated directories 
for installation.

At present, we have html, dvi and install-html support.  Because we're 
using an autoconf version before 2.60, we have a special configure option 
--with-htmldir; 2.60 adds the --htmldir option (likewise --pdfdir etc.).  
Directories using automake automatically support building these formats, but
not installing them before automake 1.10, which isn't out yet.  So a move to
autoconf 2.60/2.61 and automake 1.10 (for gcc and src) will substantially 
help get these targets supported throughout both repositories.

Apart from the new configure options, which will require all toplevel and 
all subdirectories to move to autoconf >= 2.60 before they can be used, 
you can add support bit-by-bit.  For example, you could start by adding 
the new targets to toplevel (in both gcc and src).  Then you could add 
dummy targets that do nothing to the subdirectories without documentation, 
so that the targets can actually be used at toplevel.  Adding proper 
support for the targets to the "gcc" subdirectory, or any other 
subdirectory that doesn't use automake, should be essentially independent 
of changes to other subdirectories.

-- 
Joseph S. Myers
[EMAIL PROTECTED]


Request for acceptance of new port (Cell SPU)

2006-10-09 Thread trevor_smigiel
Dear Steering Committee,

We, Sony Computer Entertainment, would like to contribute a port for a
new target, the Cell SPU, and seek acceptance from the Steering
Committee to do so.

(David Edelsohn indicated that before submitting patches we should
request acceptance for the new port from the Steering Committee.)

Thank you,
Trevor Smigiel



Re: RFC: "make pdf" target for documentation?

2006-10-09 Thread Brooks Moses

Joseph S. Myers wrote:

On Mon, 9 Oct 2006, Brooks Moses wrote:

I would like to propose that a "make pdf" target be added to the GCC general
makefile.


I agree.  If you look at the current GNU Coding Standards you'll see a 
series of targets {,install-}{html,dvi,pdf,ps} and associated directories 
for installation.


At present, we have html, dvi and install-html support.  Because we're 
using an autoconf version before 2.60, we have a special configure option 
--with-htmldir; 2.60 adds the --htmldir option (likewise --pdfdir etc.).  
Directories using automake automatically support building these formats, but
not installing them before automake 1.10, which isn't out yet.  So a move to
autoconf 2.60/2.61 and automake 1.10 (for gcc and src) will substantially 
help get these targets supported throughout both repositories.


Apart from the new configure options, which will require all toplevel and 
all subdirectories to move to autoconf >= 2.60 before they can be used, 
you can add support bit-by-bit.  For example, you could start by adding 
the new targets to toplevel (in both gcc and src).  Then you could add 
dummy targets that do nothing to the subdirectories without documentation, 
so that the targets can actually be used at toplevel.  Adding proper 
support for the targets to the "gcc" subdirectory, or any other 
subdirectory that doesn't use automake, should be essentially independent 
of changes to other subdirectories.


Thanks!  So, to make sure I'm understanding the implications of this:

1.) As a first step, it sounds like I should concentrate on getting 
"make pdf" to work, without worrying about how the .pdf files get 
installed for now.  (This looks similar to the existing case with .dvi 
files, as there is a "dvi" target but no "install-dvi" target.)


2.) Support for building a "pdf" target can functionally be added 
piecemeal, directory by directory.  Does it make sense for me to try to 
get everything to the point of a patch that builds cleanly (with empty 
"pdf" targets in all the subdirectories, and rebuilding all of the 
Makefile.in files in the directories that do use automake, which is 
going to make for a quite large patch file), or to submit patches as 
pieces that allow the "pdf" target to build correctly up to a point at 
which it gets to the end of the modified subdirectories and breaks?


(FWIW, so far I've got things working in the gcc subdirectory, at least 
for the C, C++, and Fortran languages.)


- Brooks


aligned attribute and the new operator (pr/15795)

2006-10-09 Thread trevor_smigiel
Hi,

I would like to reopen the discussion for pr/15795, or at least get
clarification on the current resolution of WONTFIX.
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=15795

Let me state right at the beginning that I am also volunteering to do
the actual work to come up with an agreed solution and implement it.

Briefly, pr/15795 was submitted to address the issue of calling the new
operator for a class which has an alignment greater than that guaranteed
by the underlying malloc().  It specifically talks about alignment for
vector types, but it applies to any alignment.

For example, a 16 byte vector type typically requires 16 byte alignment
and malloc() might only guarantee 8 byte alignment.  When a class
contains that vector the class requires alignment of 16, but a new
operator that calls that malloc() will not always return properly
aligned memory.

The pr proposes some solutions:

  solution:  operator new should always return 16 byte aligned memory
  response:  glibc doesn't do that, neither should libstdc++

I agree with this response.  A solution that handles any alignment
is better, simply changing the default isn't sufficient.

  solution:  create an additional signature for operator new which
 has a parameter that indicates the alignment.  The compiler
 should call this version of the operator when necessary and
 the standard library should provide an appropriate
 implementation.
  response:  "would interact badly with overriding the default operator
 new."

At first glance, this could break some existing code, but I think
this solution could be made to work.  I'll discuss below.

  solution:  the library will provide an implementation of a new operator
 with an additional parameter, but the user is responsible for
 calling it.
  response:  doesn't work well with existing template class libraries,
 like STL or Qt.

I agree with this response.  (There was no agreement or disagreement
with this response in the pr.)

For the second proposed solution, let's define precisely the cases that
"interact badly".  I can think of only one case, defined by these
conditions:
  - a type has an alignment greater than what malloc() guarantees
  - operator new for that type does not call the default implementation
for operator new
  - an object of that type is created using operator new
  - an appropriate implementation of operator new with the additional
alignment parameter is not provided for this type.
 
Clearly, if the compiler does not call the user defined version of
operator new in this case, the code is likely to break.  Are there
other cases which "interact badly"?

I'm hoping a solution to this case is as simple as:
  - when the compiler would call the aligned version of operator new it
first checks which definition of operator new would have been called
if it were not aligned.  If there is an aligned version of operator
new in the same place*, call it, otherwise call the non-aligned
version and issue a warning.  (* for "place" fill in the appropriate
C++ jargon for it to make sense, e.g., namespace, class, scope.)
  - the default versions of operator new and the aligned version of
operator new should be defined in the same section.  That way,
when a user overrides the default operator new, they will get
a link error (duplicate definitions of new) unless they also
define the aligned version of operator new.

Can anyone identify situations where this wouldn't work?  In the
case where it generates a warning the code might not work because
of improper alignment, but at that point I would consider it the
user's problem.

While I'm here let me also point out that an object which is allocated
on the stack and has alignment greater than what the stack guarantees is
also an issue.  I have a patch which fixes this for any alignment,
though it doesn't take advantage of stack ordering.  

Thanks,
Trevor



Re: Including GMP/MPFR in GCC repository?

2006-10-09 Thread Mark Mitchell

Kaveh R. GHAZI wrote:

Has there been any thought to including GMP/MPFR in the GCC repository
like we do for zlib and intl?


I do not think we should be including more such packages in the GCC 
repository.  It's complicated from an FSF perspective and it bloats our 
software.  GCC is a complicated piece of software, and to build it you 
need a lot of stuff installed on your system.  I think we should just 
accept that. :-)


I think that making our build system more complicated, and adding more 
packages, with the goal of making life simpler for people building from 
source is deceptive.  It doesn't seem particularly harder to build and 
install a few libraries and then install GCC.  But, making our build 
system more complex gives us more ways to make mistakes.  It also tempts 
us to add lots of configuration options for in-tree and "system" 
versions of the libraries, for building but not installing libraries, etc.


What I do think we should do is provide a known-good version (whether via a
tag in some version control system, or via a tarball) of these libraries
so that people can easily get versions that work.


(For avoidance of doubt, the above statements are just my opinions as a 
GCC developer; they're in no way "official".)


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Including GMP/MPFR in GCC repository?

2006-10-09 Thread Steve Kargl
On Mon, Oct 09, 2006 at 08:22:25PM -0700, Mark Mitchell wrote:
> Kaveh R. GHAZI wrote:
> >Has there been any thought to including GMP/MPFR in the GCC repository
> >like we do for zlib and intl?
> 
> I do not think we should be including more such packages in the GCC 
> repository.  It's complicated from an FSF perspective and it bloats our 
> software.  GCC is a complicated piece of software, and to build it you 
> need a lot of stuff installed on your system.  I think we should just 
> accept that. :-)

Should we consider removing zlib and intl?  In particular, zlib 1.2.3
was released on 19 Jul 05 and included 2 fixes for security issues.
GCC did not update zlib until 12 Sep 05.  Whether the security issues
in GCC's version of zlib could be exploited, I do not know.  I do know
a 2 month lag time seems inappropriate.

> What I do think we should do is provide a known-good version (whether via a 
> tag in some version control system, or via a tarball) of these libraries 
> so that people can easily get versions that work.

I support this position.  Unfortunately, the first patch I 
submitted (several months ago) that upped the requirement to
mpfr 2.2.0 for gfortran resulted in several people expressing
objections about requiring a newer version of mpfr.  In fact,
I suspect the only reason that my recent changes to toplevel
configure to require 2.2.0 were accepted is because I had 2
gfortran bug fixes that required that version.

-- 
Steve