On Tue, 2005-05-17 at 12:11 -0500, Joel Sherrill  wrote:
> Joe Buck wrote:
> > 
> > I used to be an embedded programmer myself, and while I cared very much
> > about the size and speed of the embedded code I wound up with, I didn't
> > care at all about being able to run the compiler itself on a machine that
> > wasn't reasonably up to date, much less trying to bootstrap the compiler
> > on an embedded target.  Is that really what we should be aiming for?
Well, an embedded programmer won't care much about this issue, but as
the RTEMS maintainer I am the one building and packaging the toolchains,
so GCC build times are a concern to me.

> >  As
> > opposed to just making -Os work really well?
ACK, this is an issue as well. 

At the moment I am seeing the objects generated by GCC grow in size,
which is gradually forcing us to abandon targets with tight memory
constraints. At least one cause of this seems to be GCC abandoning COFF
in favor of ELF, which appears to imply larger memory requirements.
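
To make it concrete what "growing in size" means here, below is a
minimal sketch of the kind of comparison involved; the m68k-rtems-*
tool names are only an illustrative cross prefix, not a statement about
any particular target, and the actual numbers vary per GCC release and
CPU.

/*
 * Build the same file for size and for speed, then compare .text:
 *
 *   m68k-rtems-gcc -Os -c fill.c -o fill-os.o
 *   m68k-rtems-gcc -O2 -c fill.c -o fill-o2.o
 *   m68k-rtems-size fill-os.o fill-o2.o
 */
void fill(char *dst, unsigned n)
{
    /* A trivial byte-clearing loop; even for small functions like
       this the generated code differs between -Os and -O2 and
       between GCC releases, which is what the size(1) comparison
       above makes visible. */
    unsigned i;

    for (i = 0; i < n; i++)
        dst[i] = 0;
}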

>   If I could get better embedded
> > code by having the compiler use a lot of memory, but I could easily afford
> > a machine with that amount of memory, I would have had no complaint.
ACK.

> There are at least three classes of development I have noticed on this
> thread:
> 
>    (1) self-hosting gcc on slow but traditional hosts (e.g. 68040 UNIX
>      or old Suns)
>    (2) self-hosted embedded development (e.g. sounds like Peter Barada)
>    (3) embedded development using regular cross-compilation
> 
> In general all are concerned about lower end CPUs than are used for
> the mainstream GCC user on GNU/Linux and other Unices.
> 
> (1) and (2) are similar and probably have similar requirements.  They 
> would like building GCC and running it to be possible on what would
> be considered low end hardware for main-stream development purposes.
> 
> (3) is the model for RTEMS, other RTOSes, no-OS targets, and could
> be the model used by (2).  I won't include (1) because they want their
> systems to be self-supporting and users will compile their own code.
> 
> We are all concerned about the time and memory required to build GCC.
> But it is a critical concern for (1) and (2) since they are on small 
> hosts.  For (3) and RTEMS, it concerns us because the RTEMS Project
> provides RPMs for 11 targets and tries to include every language
> we can possibly support (C, C++, Ada, FORTRAN, and Java).  I know for 
> the targets that it compiles on, RTEMS works well with C, C++, and Ada.
> I am unsure about the precise status of RTEMS+Java
gcj builds for most RTEMS targets, but RTEMS support for libgcj and its
infrastructure is missing and probably won't be implemented any time
soon, so a usable gcj+libgcj for RTEMS is not in sight.

>  and FORTRAN is currently up for discussion.
gfortran builds fine for all CPUs; building libgfortran fails for some
CPUs that do not meet gfortran/f95's expectations.

Objc builds without problems for all targets.

Ada builds for all CPUs that GNAT supports, but it suffers from general
cross-building deficiencies and lacks multilib support.

>   Our tool build times are thus very long
> and when we follow up by building RTEMS and add-on libraries, it
> gets longer.  We struggle to keep up, which is why RTEMS reports are
> sporadic and tend to cluster nearer a release point.
Consider us lucky that we can't build libgcj and that GNAT doesn't
support multilibs; I would not expect us to be able to handle them as
well.

> > It therefore seems that we have two *separate* problems: one is that
> > increased resource consumption makes gcc harder to use as a hosted
> > compiler on older systems, and the other is that embedded target support
> > isn't getting the attention it needs
> [..]
> > it seems sometimes they get mixed solely because too many
> > free software projects don't support cross-compilation properly, but
> > that is a bug in those projects.
> 
> You are correct.  Many free libraries have rough edges when cross-
> building.
My point is: GCC has them too, and could do better in this respect.

Ralf

