Re: PATCH [0/3]: Simplify the kernel build by removing perl.

2009-01-11 Thread Bernd Petrovitsch
On Sun, 2009-01-04 at 11:23 +0100, Leon Woestenberg wrote:
[...]
> On Sun, Jan 4, 2009 at 4:06 AM, Paul Mundt  wrote:
[...]

I'm ignoring the "cross-compile perl" issue - haven't tried it for
years.

> 5. Tool *version* dependency is hard to get right. When cross-building
> 30 software packages all requiring native perl, we probably need to
> build a few versions of perl (native), one for each set of packages.

perl is IMHO special (and quite different from others - including
especially autotools): perl5 is used widely enough so that "one somewhat
recent version" should cover all 30 software packages.
The hard part is the CPAN modules and their versions, which are really a
PITA.
As long as you don't use modules from CPAN, "perl5" should be specific
enough.

Bernd
-- 
Firmix Software GmbH   http://www.firmix.at/
mobil: +43 664 4416156 fax: +43 1 7890849-55
  Embedded Linux Development and Services



Re: [PATCH 1/3]: Replace kernel/timeconst.pl with kernel/timeconst.sh

2009-01-05 Thread Bernd Petrovitsch
On Mon, 2009-01-05 at 15:01 +, Jamie Lokier wrote:
> Bernd Petrovitsch wrote:
> > I assume that the NFS-mounted root filesystem is a real distribution.
> 
> Not unless you call uClinux (MMU-less) a real distribution, no.

Not really.

> > > (* - No MMU on some ARMs, but I'm working on ARM FDPIC-ELF to add
> > >  proper shared libs.  Feel free to fund this :-)
> > 
> > The above mentioned ARMs have a MMU. Without MMU, it would be truly
> > insane IMHO.
> 
> We have similar cross-build issues without MMUs... I.e. that a lot of

Of course.

> useful packages don't cross-build properly (including many which use
> Autoconf), and it might be easier to make a native build environment

Tell me about it - AC_TRY_RUN() is the culprit.
And `pkg-config` has supported cross-compilation only for about 18 months.
Before that, one had to rewrite the generated .pc files.
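Roughly like this - an untested sketch with made-up paths, just to
illustrate what "rewriting" meant: fix up the prefix recorded in every
staged .pc file so the cross toolchain finds the staged libraries instead
of the (not yet existing) target paths:

    STAGING=/home/bernd/src/staging
    for pc in "$STAGING"/usr/lib/pkgconfig/*.pc; do
        # the .pc files were generated with e.g. prefix=/usr;
        # prepend the staging directory for build-time use
        sed -i "s|^prefix=|prefix=$STAGING|" "$pc"
    done

Newer pkg-config versions have sysroot support for exactly this, which
makes the sed hack unnecessary.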

[...]
> You mentioned ARM Debian.  According to
> http://wiki.debian.org/ArmEabiPort one recommended method of
> bootstrapping it is building natively on an emulated ARM, because
> cross-building is fragile.

That's of course the other solution - if qemu supports your
$EMBEDDED_CPU well enough.

Bernd
-- 
Firmix Software GmbH   http://www.firmix.at/
mobil: +43 664 4416156 fax: +43 1 7890849-55
  Embedded Linux Development and Services



Re: [PATCH 1/3]: Replace kernel/timeconst.pl with kernel/timeconst.sh

2009-01-05 Thread Bernd Petrovitsch
On Sun, 2009-01-04 at 22:50 -0600, Rob Landley wrote:
> On Sunday 04 January 2009 18:15:30 Bernd Petrovitsch wrote:
[...]
> > ACK. A bash can IMHO be expected. Even going for `dash` is IMHO somewhat
> > too extreme.
> 
> I have yet to encounter a system that uses dash _without_ bash.  (All ubuntu 

Hmm, that should be doable quite cheaply and simply with a chroot environment.
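Even without a chroot, a quick (and admittedly incomplete) first check is
to run the script under dash directly - a trivial sketch, the script name
is made up:

    # parse only (catches most bashisms as syntax errors), then actually run it
    dash -n ./gen-timeconst.sh && dash ./gen-timeconst.sh

That of course only proves something for the code paths actually taken;
the chroot is still the more thorough test.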

> variants, even jeos, install bash by default.  They moved the /bin/sh symlink 

Yes, I know (small) embedded systems that have a bash (and not "only"
one of the busybox shells). It also eases writing reasonably fast shell
scripts without needing lots of fork()+exec() calls.
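For example (just an illustration, the file name is made up): things that
would otherwise cost a fork()+exec() per call can stay inside the shell:

    f=/var/log/messages.1.gz
    # external helpers: one fork()+exec() each
    base=$(basename "$f")
    n=$(expr 6 \* 7)
    # shell expansions/arithmetic: no extra process at all
    base=${f##*/}
    n=$((6 * 7))

(The expansions are POSIX too, but the point stands: the more the shell can
do itself, the fewer processes a script needs on a slow box.)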

Bernd
-- 
Firmix Software GmbH   http://www.firmix.at/
mobil: +43 664 4416156 fax: +43 1 7890849-55
  Embedded Linux Development and Services



Re: [PATCH 1/3]: Replace kernel/timeconst.pl with kernel/timeconst.sh

2009-01-05 Thread Bernd Petrovitsch
On Mon, 2009-01-05 at 02:23 +, Jamie Lokier wrote:
> Bernd Petrovitsch wrote:
> > > (I have 850 Linux boxes on my network with a bourne shell which
> > > doesn't do $((...)).  I won't be building kernels on them though :-)
> > 
> > Believe it or not, but there are folks out there who build the firmware
> > on ARM 200 MHz NFS-mounted systems natively  (and not simply
> > cross-compile it on a 2GHz PC .).
> 
> Really?
> 
> My 850 Linux boxes are 166MHz ARMs and occasionally NFS-mounted.
> Their /bin/sh does not do $((...)), and Bash is not there at all.

I assume that the NFS-mounted root filesystem is a real distribution,
and that on the local flash there is the usual busybox-based firmware.

> If I were installing GCC natively on them, I'd install GNU Make and a
> proper shell while I were at it.  But I don't know if Bash works

ACK.

> properly without fork()* - or even if GCC does :-)
> 
> Perl might be hard, as shared libraries aren't supported by the
> toolchain which targets my ARMs* and Perl likes its loadable modules.

The simplest way to go is probably to use CentOS or Debian or another
ready-made binary distribution on ARM (or MIPS or PPC or whatever core the
embedded system has), possibly on a custom-built kernel (if necessary).

[...]
> (* - No MMU on some ARMs, but I'm working on ARM FDPIC-ELF to add
>  proper shared libs.  Feel free to fund this :-)

The above-mentioned ARMs have an MMU. Without an MMU, it would IMHO be
truly insane.

Bernd
-- 
Firmix Software GmbH   http://www.firmix.at/
mobil: +43 664 4416156 fax: +43 1 7890849-55
  Embedded Linux Development and Services



Re: [PATCH 1/3]: Replace kernel/timeconst.pl with kernel/timeconst.sh

2009-01-04 Thread Bernd Petrovitsch
On Sun, 2009-01-04 at 22:13 +, Jamie Lokier wrote:
> Rob Landley wrote:
> > In a private email, Bernd Petrovitsch suggested "set -- $i" and then
> > using NAME=$1; PERIOD=$2.  (I keep getting private email responses
> > to these sort of threads, and then getting dismissed as the only one
> > who cares about the issue.  Less so this time around, but still...)
> > This apparently works all the way back to the bourne shell.
> 
> If you're going "all the way back to the bourne shell", don't use "set

"Going back to a Bourne shell" was neither the intention nor makes it
sense IMHO.
I mentioned it to point out that the `set -- ' (or `set x `) is nothing
new or even a bash-ism.
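For the archives, the idiom in question looks roughly like this (the
variable names are taken from Rob's example, the input record is made up):

    i="HZ_250 4000000"   # made-up record: a name and a period
    set -- $i            # word-split $i into the positional parameters
    NAME=$1; PERIOD=$2

    # the variant that even an old Bourne shell groks (no `--' support needed):
    set x $i; shift
    NAME=$1; PERIOD=$2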

> -- $i"; use "set x $i" instead, and don't expect to do any arithmetic
> in the shell; use "expr" or "awk" for arithmetic.
> 
> (Not relevant to kernel scripts, imho, since you can always assume
> something a bit more modern and not too stripped down).

ACK. A bash can IMHO be expected. Even going for `dash` is IMHO somewhat
too extreme.

> (I have 850 Linux boxes on my network with a bourne shell which
> doesn't do $((...)).  I won't be building kernels on them though :-)

Believe it or not, there are folks out there who build their firmware
natively on 200 MHz ARM NFS-mounted systems (and do not simply
cross-compile it on a 2 GHz PC).

Bernd
-- 
Firmix Software GmbH   http://www.firmix.at/
mobil: +43 664 4416156 fax: +43 1 7890849-55
  Embedded Linux Development and Services



Re: [Bug #11342] Linux 2.6.27-rc3: kernel BUG at mm/vmalloc.c - bisected

2008-08-27 Thread Bernd Petrovitsch
On Wed, 2008-08-27 at 18:51 +0100, Jamie Lokier wrote:
> Bernd Petrovitsch wrote:
[...]
> It is, but the idea that small embedded systems go through a 'all
> components are known, drivers are known, test and if it passes it's
> shippable' does not always apply.

Not always, but often enough. And yes, there is ARM-based embedded
hardware with 1GB of flash and 128MB of RAM.

> > > I'm seriously thinking of forwarding porting the 4 year old firmware
> > > from 2.4.26 to 2.6.current, just to get new drivers and capabilities.
> > 
> > That sounds reasonable (and I never meant maintaining the old system
> > infinitely.
> 
> Sounds reasonable, but it's vetoed for anticipated time and cost,

That is to be expected;-)

[]
> > ACK. We avoid MMU-less hardware too - especially since there is enough
> > hardware with a MMU around.
> 
> I can't emphasise enough how much difference MMU makes to Linux userspace.
> 
> It's practically: MMU = standard Linux (with less RAM), have everything.
> No-MMU = lots of familiar 'Linux' things not available or break.

ACK. And try telling a customer that everything means more effort and more
risk and is not just "simply cross-compile it as it runs on my desktop
too".

Bernd
-- 
Firmix Software GmbH   http://www.firmix.at/
mobil: +43 664 4416156 fax: +43 1 7890849-55
  Embedded Linux Development and Services



Re: [Bug #11342] Linux 2.6.27-rc3: kernel BUG at mm/vmalloc.c - bisected

2008-08-27 Thread Bernd Petrovitsch
On Wed, 2008-08-27 at 16:48 +0100, Jamie Lokier wrote:
> Bernd Petrovitsch wrote:
> > If you "develop" an embedded system (which is partly system integration
> > of existing apps) to be installed in the field, you don't have that many
> > conceivable work loads compared to a desktop/server system. And you have
> > a fixed list of drivers and applications.
> 
> Hah!  Not in my line of embedded device.
> 
> 32MB no-MMU ARM boards which people run new things and attach new
> devices to rather often - without making new hardware.  Volume's too
> low per individual application to get new hardware designed and made.

Yes, you may have several products on the same hardware with somewhat
differing requirements (or not). But that is still much less variation
than a general-purpose system has IMHO.

> I'm seriously thinking of forwarding porting the 4 year old firmware
> from 2.4.26 to 2.6.current, just to get new drivers and capabilities.

That sounds reasonable (and I never meant maintaining the old system
indefinitely. Actually, once the thing is shipped it usually enters deep
maintenance mode, and the next product is more a fork of the old one).

> Backporting is tedious, so's feeling wretchedly far from the mainline
> world.

ACK. But that also depends on the amount of local changes (and sorry, but
not all locally necessary patches would be accepted into mainline anyway).

> > A usual approach is to run stress tests on several (or all)
> > subsystems/services/... in parallel and if the device survives it
> > functioning correctly, it is at least good enough.
> 
> Per application.
> 
> Some little devices run hundreds of different applications and
> customers expect to customise, script themselves, and attach different
> devices (over USB).  The next customer in the chain expects the bits
> you supplied to work in a variety of unexpected situations, even when
> you advise that it probably won't do that.

Basically their problem. Yes, "they" actually think they get a Linux
system where they can do everything and it simply works.

Oh, that's obviously not a usual "WLAN-router style" of product (where
you are not expected to actually log in on a console or via ssh).

> Much like desktop/server Linux, but on a small device where silly
> little things like 'create a process' are a stress for the dear little
> thing.
> 
> (My biggest lesson: insist on an MMU next time!)

ACK. We avoid MMU-less hardware too - especially since there is enough
hardware with an MMU around.

Bernd
-- 
Firmix Software GmbH   http://www.firmix.at/
mobil: +43 664 4416156 fax: +43 1 7890849-55
  Embedded Linux Development and Services




Re: [Bug #11342] Linux 2.6.27-rc3: kernel BUG at mm/vmalloc.c - bisected

2008-08-27 Thread Bernd Petrovitsch

On Wed, 2008-08-27 at 08:56 -0400, Parag Warudkar wrote:
> On Wed, Aug 27, 2008 at 5:00 AM, Bernd Petrovitsch <[EMAIL PROTECTED]> wrote:
> > They probably gave the idea pretty soon because you need to
> > rework/improve large parts of the kernel + drivers (and that has two
> > major problems - it consumes a lot of man power for "no new features and
> > everything must be completely tested again"[0] and it adds new risks).
> > And that is practically impossible if one sells "stable driver APIs" for
> > 3rd party (commercial) drivers because these must be changed too.
> 
> But not many embedded Linux arches support 4K stacks like Adrian

What is an "embedded Linux arch"?
Personally I encountered i386, ARM, MIPS and PPC in the embedded world.

> pointed out earlier.
> So the same (lot of man power requirement) would apply to Linux.

Of course. Look at the amount of work done by lots of people in that
area (including stack frame size reductions) and on-going discussions.

> Sure it will be good - but how reasonable it is to attempt it and how
> reliably it will work under all conceived loads - those are the
> questions.

If you "develop" an embedded system (which is partly system integration
of existing apps) to be installed in the field, you don't have that many
conceivable work loads compared to a desktop/server system. And you have
a fixed list of drivers and applications.
A usual approach is to run stress tests on several (or all)
subsystems/services/... in parallel and if the device survives it
functioning correctly, it is at least good enough.

Bernd
-- 
Firmix Software GmbH   http://www.firmix.at/
mobil: +43 664 4416156 fax: +43 1 7890849-55
  Embedded Linux Development and Services




Re: [Bug #11342] Linux 2.6.27-rc3: kernel BUG at mm/vmalloc.c - bisected

2008-08-27 Thread Bernd Petrovitsch
On Tue, 2008-08-26 at 20:58 -0400, Parag Warudkar wrote:
[...]
> The savings part -financial ones- are not always realizable with the
> way memory is priced/sized/fitted.
> Savings in few Mb of Kernel stack are not necessarily going to allow
> getting rid of a single memory chip of 64M or so.

No, but you can put additional service(s) on it, and the sales people have
one (or two or ...) more lines for their sales brochures.

> Either that or embedded manufacturing/configurations are different
> than the desktop world.

They are different. Think of running the complete system acting as a
bridge, router and/or firewall (an early 2.4 kernel though) from 4MB flash
in 32MB RAM and - listing the outside-visible services - having a
command-line interface, a web GUI (implying an HTTP server) and a
(net-)SNMP agent on it.
Running a glibc without thread support is a win there (implying that there
is no thread support available on that device).

> (If my device has 2 memory slots and my user space requires 100Mb
> including kernel memory - I anyways have to put in 64Mx2 there to take
> advantage of mass manufactured, general purpose memory - so no big
> deal if I saved 1.2Mb in Kernel stack or not. And savings of 64Mb
> Kernel memory are not feasible anyways to allow user space to work
> with 64Mb.)

As soon as product management realizes that there is space left on the
device, they get new ideas and/or customer requirements to run more
services on that device.

> On the other hand reducing  user space memory usage on those devices
> (not counting savings from kernel stack size) is a way more attractive
> option.

There is no question of saving space here or there. You save it - sooner
or later - on all fronts. Period.

> And although you said in your later reply that Linux x86 with 4K
> stacks should be more than usable - my experiences running a untainted
> desktop/file server with 4K stack have been always disastrous XFS or
> not.  It _might_ work for some well defined workloads but you would
> not want to risk 4K stacks otherwise.

The embedded world of really small devices usually doesn't run XFS (or
ext? or reiser* or jfs or NFS or ...) or stack block devices on files
or ...

> I understand the having 4K stack option as a non-default for very
> specific workloads is a good idea but apart from that I think no one
> else seems to bother with reducing stack sizes (by no one I mean other
> OSes.)

They probably gave up on the idea pretty soon because you need to
rework/improve large parts of the kernel + drivers (and that has two
major problems - it consumes a lot of man power for "no new features and
everything must be completely tested again"[0] and it adds new risks).
And that is practically impossible if one sells "stable driver APIs" for
3rd party (commercial) drivers, because these must be changed too.

Bernd

[0]: Let alone if you (or your customers) need certifications from some
 governmental agencies.
-- 
Firmix Software GmbH   http://www.firmix.at/
mobil: +43 664 4416156 fax: +43 1 7890849-55
  Embedded Linux Development and Services




Re: [Bug #11342] Linux 2.6.27-rc3: kernel BUG at mm/vmalloc.c - bisected

2008-08-27 Thread Bernd Petrovitsch
On Tue, 2008-08-26 at 22:16 -0400, Parag Warudkar wrote:
[...]
> Well, sure  - but the industry as a whole seems to have gone the other

"The industry as a whole" doesn't exist on that low level. You can't
compare the laptop and/or desktop computer market (where one may buy
today hardware that runs in 3 years with the next generation/release of
the OS and applications) with the e.g. "WLAN router" market where - from
the commercial point of view - every Euro counts (and where the
requirements for the lifetime of the device are long frozen before the
thing gets in a shop).

> way - do more with more at the similar or lower price points!
> By that definition of less is better we should try and make the kernel
> memory pageable (or has someone already done that?) - Windows does it,

That doesn't help, as on really small devices (like WLAN routers, cable
modems, etc.) you run without any means of paging/swapping. And even
binaries/read-only files are not necessarily executable in place (but
must be loaded into RAM), so you can't flush those pages.

And pageable kernel memory doesn't come for free - even if one only
counts the increased code and its complexity.

> by default ;)

Which is more a sign that it is probably a very bad idea.

Bernd
-- 
Firmix Software GmbH   http://www.firmix.at/
mobil: +43 664 4416156 fax: +43 1 7890849-55
  Embedded Linux Development and Services




Re: [Bug #11342] Linux 2.6.27-rc3: kernel BUG at mm/vmalloc.c - bisected

2008-08-27 Thread Bernd Petrovitsch
On Tue, 2008-08-26 at 18:54 -0400, Parag Warudkar wrote:
> On Tue, Aug 26, 2008 at 5:04 PM, Linus Torvalds
> <[EMAIL PROTECTED]> wrote:
> 
> > And embedded people (the ones that might care about 1% code size) are the
> > ones that would also want smaller stacks even more!
> 
> This is something I never understood - embedded devices are not going
> to run more than a few processes and 4K*(Few Processes)
>  IMHO is not worth a saving now a days even in embedded world given
> falling memory prices. Or do I misunderstand?

Falling prices are no reason to increase the amount of available RAM (or
other hardware).
Especially if you (intend to) build >1E5 devices - where every Euro
counts.

Bernd
-- 
Firmix Software GmbH   http://www.firmix.at/
mobil: +43 664 4416156 fax: +43 1 7890849-55
  Embedded Linux Development and Services




Re: prevalence of C++ in embedded linux?

2008-07-30 Thread Bernd Petrovitsch
On Wed, 2008-07-30 at 14:07 +0100, Jamie Lokier wrote:
> Bernd Petrovitsch wrote:
> > If "GOLD" is as old and flexible (and portable?) as binutils,
> 
> The author says it will only work with ELF, and he does not
> intend to add support for all the other things binutils does.

Well, supporting 80% of the deployed systems probably requires only 20%
of the code ;-)
But then it won't really replace binutils. And if it does, some quirky
hardware/systems will have a problem ...

> > gcc and/or other huge software maintained to death, it is probably
> > similar complex and odd.  If people take a > 10 year old tool and
> > rewrite it from scratch, I would assume that design is better.
> 
> Only true if the cruft is no longer relevant.  If the cruft is
> intrinsic to the problem, e.g. supporting umpteen quirky architectures
> implies umpteen quirks of cruft, then it'll be in the new design.

Yes, but one can make a better design by knowing/planning from the start
to have hooks here and there and everywhere.

> Btw, gcc and binutils are more like 30 years old :-)

That doesn't make it better;-)
I was too lazy to search for more exact numbers.

> > And I can't see any direct dependence on the used programming
> > language(s) if one compares running code and what is left of "design"
> > after years of design extensions, changes, enhancements, etc. to a new
> > design from scratch from the lessons learned (hopefully) from the former
> > one.
> 
> Some programming languages allow you to express a problem concisely
> and clearly, and some don't.  That clarity then affects whether an

And if C is too low-level, one abstracts with functions etc. I call that
"design" - independent of whether the design existed before the source or
evolved over the years with the software.
And yes, at first it is enough to add a parameter and/or function here
and there without breaking implicit or explicit assumptions.
But at some point, from a larger view, the "design problems" will be
obvious and one can either solve them (investing time/money for
effectively no real gain in features and/or functionality, just cleanups
or refactoring of parts or whatever one wants to call it) or live on
with patching/maintaining the software to death.

> evolving design becomes loaded with implementation cruft or not - and
> you can't always tell the difference.

Yes.
And over the years and decades, the implementation evolves with the
problems - new and existing ones. If the design doesn't evolve too - which
IMHO implies refactoring of existing, tested and working code(!),
possibly breaking it - you at some point have such a mess that each
"trivial" enhancement takes ages (and breaks something somewhere else
again).

> Most languages are well-matched to different problem domains.

Maybe. IMHO these differences are almost nothing compared to the below
point:

> Binutils and bfd look very crufty, but I think it's hard to tell how
> much of that is due to the implementation language and implementation,
> or the design and requirements, and how much would exist in any
> implementation on any language.

IMHO it's (mostly) independent of the implementation language:

If changes in design are not completed in the implementation (including
removal of old deprecated stuff, or at least pushing it into peripheral
places where nobody cares ;-) - for whatever reason: no one does it, no
one wants to pay for it, one wants to support every API indefinitely, ... -
it will sooner rather than later lead to unmaintainable, crufty software.

Bernd
-- 
Firmix Software GmbH   http://www.firmix.at/
mobil: +43 664 4416156 fax: +43 1 7890849-55
  Embedded Linux Development and Services




Re: prevalence of C++ in embedded linux?

2008-07-30 Thread Bernd Petrovitsch
On Wed, 2008-07-30 at 13:04 +0200, Bart Van Assche wrote:
[...]
> I don't know whether C++ is intrinsic to GOLD's linking superiority.
> The reason I cited the GOLD project is because of the programming
> style of the GOLD source code. A quote from
> http://lwn.net/Articles/274859/, about the GOLD source code:
> 
> I looked through the gold sources a bit. I wish everything in the GNU
> toolchain were written this way. It is very clean code, nicely
> commented, and easy to follow. It shows pretty clearly, I think, the
> ways in which C++ can be better than C when it is used well.

If "GOLD" is as old and flexible (and portable?) as binutils, gcc and/or
other huge software maintained to death, it is probably similar complex
and odd.
If people take a > 10 year old tool and rewrite it from scratch, I would
assume that design is better.

And I can't see any direct dependence on the programming language(s) used
if one compares running code - and what is left of the "design" after
years of extensions, changes, enhancements, etc. - to a new design from
scratch based on the lessons (hopefully) learned from the former one.

Bernd
-- 
Firmix Software GmbH   http://www.firmix.at/
mobil: +43 664 4416156 fax: +43 1 7890849-55
  Embedded Linux Development and Services




Re: prevalence of C++ in embedded linux?

2008-07-29 Thread Bernd Petrovitsch
On Tue, 2008-07-29 at 10:58 +0200, Alexander Neundorf wrote:
> On Tuesday 29 July 2008 10:20:12 you wrote:
> > On Tue, 2008-07-29 at 09:51 +0200, Alexander Neundorf wrote:
> ...
> > Yes, one *can* use the above features and get small features. But most
> > people simply can't - if only that they use some tool/lib written in C++
> > (and coming from the "normal" world) which simply uses them without
> > thinking about space and wonder why the device won't run with "only"
> > 128MB flash and run in 16MB RAM.
> 
> Well, if somebody carelessly uses general purpose apps/libs in a tiny 
> embedded 
> project he will have problems, no matter if it's C or C++.

Of course.
But it is IMHO much easier and more seductive to use the code-bloating
features with C++ - especially if you don't know what you are doing and
do not realize it (until it's too late).

Every other potential customer asks about C++ on an embedded device. And
if you say "yes" they *expect* to use all that g++ allows. Period.

Getting exceptions and restrictions to the use of C++ (including any 3rd
party software - known and unknown) in an offer?
Please be serious.

Discussing afterwards that these templates are a very bad idea (and need
to be converted to "a pure virtual class and lots of classes" to avoid
code bloat, and that it will cost a few man-weeks and calendar time)?
I can hear it already: "But you said that C++ is OK and this is plain
C++".

> > BTW why should I use C++ if I don't use any "fancy features"?
> 
> If you just skip RTTI and exceptions you have enough fancy features left :-)

Hmm, does g++ have options to completely disable these (and other)
"fancy features"? At least one could then check more easily whether 3rd
party software actually uses them.
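(At least -fno-rtti and -fno-exceptions exist; what else the g++ in the
actual toolchain offers is worth checking.) A rough example with a made-up
cross-compiler prefix:

    # compile without RTTI and exceptions
    arm-linux-g++ -fno-rtti -fno-exceptions -c foo.cpp
    # linking with gcc instead of g++ is a cheap tripwire: if something
    # still needs libstdc++, the link fails with undefined references
    arm-linux-gcc -o foo foo.o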

Multiple inheritance[0] is in my experience not really necessary (if
ever used).
I already have "Safe C" with gcc anyway (if I want to and enable some
warnings ;-).
OO design is a question of design and not of the implementation
language. I can't see much difference between
- declaring a class and using a method or
- declaring a struct and using a pointer to an instance as the first
   parameter of several functions.
That leaves operator/function overloading and default values for
parameters. But it usually adds libstdc++.so ...

> Just know what you're doing if you're using templates and multiple 

ACK. But what about the other 90%?

> inheritance, there is no problem with them. Templates are so much better than 
> macros, and if used carefully they don't bloat the code size.

Don't get me wrong - I'm not religiously against C++ in any way.
It's just that you *really* need to know what you are doing, and for C++
that IMHO implies that you must know how templates work/are implemented.
Similarly for exceptions (and no, using exceptions usually doesn't save
space anywhere - at least not if your calling depth is < 100) if you
use them (or use a library that uses them).
And you may or may not need libstdc++.so - an additional piece of code
using space.
Of course, if you have 1GB of flash and 256MB RAM, who cares. But most of
the devices I see are not that "fat".

In short: It is far from easy to *not* shoot yourself in the foot with
C++. At least compared to plain C.

Bernd

[0]: Yes, I know what the difference is between normal and virtual
 inheritance.
-- 
Firmix Software GmbH   http://www.firmix.at/
mobil: +43 664 4416156 fax: +43 1 7890849-55
  Embedded Linux Development and Services




Re: prevalence of C++ in embedded linux?

2008-07-29 Thread Bernd Petrovitsch
On Tue, 2008-07-29 at 09:51 +0200, Alexander Neundorf wrote:
> On Tuesday 29 July 2008 09:40:20 Marco Stornelli wrote:
> > Robert P. J. Day ha scritto:
> > >   just curious -- how many folks are working in C++ in their embedded
> > > linux work?

Not if it's in any way avoidable.

[]
> > Like Linus Torvals said "...C++ is an horrible language" :)
> 
> If you avoid RTTI and exceptions and if you are handle templates and multiple 
> inheritance carefully I see nothing which speaks against using it for 
> embedded and real-time software.

That's the main reason for *not* using C++ in the embedded world in the
first place.
Tell people that they may use C++ and see them happy.
Then tell them that they had better not use templates, RTTI, exceptions
and multiple inheritance if they want to boot from a small footprint.

Yes, one *can* use the above features and still keep things small. But
most people simply can't - if only because they use some tool/lib written
in C++ (and coming from the "normal" world) which simply uses them without
thinking about space, and then wonder why the device won't run with "only"
128MB flash and 16MB RAM.

BTW why should I use C++ if I don't use any "fancy features"?

Bernd
-- 
Firmix Software GmbH   http://www.firmix.at/
mobil: +43 664 4416156 fax: +43 1 7890849-55
  Embedded Linux Development and Services




Re: cross-compiling alternatives

2008-06-16 Thread Bernd Petrovitsch
On Mon, 2008-06-16 at 12:17 +0100, Jamie Lokier wrote:
> Bernd Petrovitsch wrote:
> > > _check_ for many installed libraries.  Get them wrong, and you have
> > > the same problems of untested combinations.
> > 
> > As long as I can specify that libfoo support must be compiled in (and
> > thus libfoo must be present) and the tools throw an error if it doesn't
> > find it, I have no problem.
> > Otherwise all package builders have a serious problem.
> 
> They do have problems, when you want to repeatably build and deploy,
> if the build environment isn't very similar each time.

Sometimes you have different build environments - if only because you
want to rebuild e.g. your .src.rpm on several versions of CentOS and
Fedora.

> Typically the way you specify that libfoo support must be compiled in
> is --with-libfoo=/path/to/libfoo.
> 
> That way lies bitrot between your build script which calls ./configure

That cannot really be avoided IMHO.

> (since you won't by typing it manually with 20+ options like that each
> time you rebuild), and the changing version of an upstream package you
> configure.

So be it. At least one sees errors/bugs immediately.

> To prevent it trying to compile in libs you don't want, you also need
> --without-libfoo.  Using Kerberos as an example, which I remember when
> building CVS ages ago: If you don't _prevent_ it using libraries you
> don't want, you get different binariesn depending on whether a
> Kerberos library was installed on the build system at build time.  You
> might then send a built program to another system, and find it won't
> run at all, or has unwanted behaviour.
> 
> Do you really see package building scripts with 20 --with-libfoo= and
> --without-libfoo= options in them for every library?  Sometimes.  But

For (in an ideal world 100%) repeatable builds - for a .rpm, a .deb, some
cross-compiled embedded device - one usually ends up with that explicit
list (and IMHO it's a smaller PITA than coping with strange bug
reports because something was broken in a build weeks ago).
Mainly because the dependency information is also present elsewhere
(e.g. in the package). Or because you really want control over the
installed software.
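I.e. the build script (or the %build section of a .spec file) ends up
spelling everything out, roughly like this - the option names are of
course placeholders, every package has its own:

    ./configure \
        --prefix=/usr \
        --with-libfoo=/opt/staging/usr \
        --without-kerberos \
        --disable-nls \
        || exit 1
    make

Ugly, but at least a missing libfoo fails the build immediately instead of
silently producing a different binary.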

> more often, not: instead, they more often have build-time installed
> prerequisites.

Bernd
-- 
Firmix Software GmbH   http://www.firmix.at/
mobil: +43 664 4416156 fax: +43 1 7890849-55
  Embedded Linux Development and Services




Re: cross-compiling alternatives

2008-06-16 Thread Bernd Petrovitsch
On Sat, 2008-06-14 at 01:07 +0100, Jamie Lokier wrote:
[...]
> You said about too many user-selectable options.  Many large packages

These are IME not a problem if they have somewhat sensible defaults.

> _check_ for many installed libraries.  Get them wrong, and you have
> the same problems of untested combinations.

As long as I can specify that libfoo support must be compiled in (and
thus libfoo must be present) and the tools throw an error if they don't
find it, I have no problem.
Otherwise all package builders have a serious problem.

Bernd
-- 
Firmix Software GmbH   http://www.firmix.at/
mobil: +43 664 4416156 fax: +43 1 7890849-55
  Embedded Linux Development and Services




Re: cross-compiling alternatives

2008-06-16 Thread Bernd Petrovitsch
On Mon, 2008-06-16 at 10:02 +0200, Alexander Neundorf wrote:
[...]
> Seriously, why is a wrapper for the compiler/linker required AT ALL if the 
> calls to these tools are made from _generated_ files ?

AFAIU the motivation of libtool is to provide OS-independent (and
toolchain-independent?) means to compile and link (etc.).

> The generated files should just contain the appropriate calls for the 
> respective commands.

But these calls are - e.g. for shared libraries - not identical. The GNU
toolchain is not everything.

> This layer of abstraction is unnecessary and IMO just adds confusion. 
> (modifying libtool so that it calls unitool even seems to add yet another 
> layer which can potentially break or bitrot etc.)

I know of more than one "occasion" where "gcc" et al. actually were
wrappers around the actual binaries to check and guarantee command-line
parameters and options (some must be there, some mustn't be there, ...).
Usually sane Makefiles (and sane autotools usage) are enough for that.

Bernd
-- 
Firmix Software GmbH   http://www.firmix.at/
mobil: +43 664 4416156 fax: +43 1 7890849-55
  Embedded Linux Development and Services




Re: cross-compiling alternatives

2008-06-16 Thread Bernd Petrovitsch
On Fri, 2008-06-13 at 20:51 +0200, Robert Schwebel wrote:
> On Fri, Jun 13, 2008 at 08:30:52AM +0200, Alexander Neundorf wrote:
> > Battle of Wesnoth is currently converted to both Scons and CMake, and
> > in the end they will decide about the winner. (since Eric is good at
> > arguing I guess it will be scons).
> 
> The thing is that 'configure && make && make install' plus the usuall
> --enable-foo / --disable-foo / --with-bla=blub semantics is simply *the*
> standard way of configuring stuff for unix systems. You don't need fancy
> tools, you get cross compiling almost for free and unix people simply
> know how to use it.

As long as people avoid AC_TRY_RUN() and similar, and allow the
"configurator" to tell `configure.sh` facts about the target for the
unavoidable cases (and there were some apps out there - I forgot the
names - where this wasn't easily possible without editing the generated
configure.sh. Yes, that's not the fault of autotools as such, but
autotools make it IMHO far too easy to write that sort of thing without
generating lots of warnings[0]).
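For the unavoidable cases, the usual mechanism is to pre-answer the
offending checks via autoconf's cache variables - the exact ac_cv_* names
are per package/check, these two are only examples:

    # pre-seed results that AC_TRY_RUN()-style checks would otherwise try
    # to obtain by running target binaries on the build host
    ac_cv_func_malloc_0_nonnull=yes \
    ac_cv_file__dev_ptmx=yes \
    ./configure --build=i686-pc-linux-gnu --host=arm-linux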

> All the cool kids out there who think they know everything better
> usually start with "I hate autotools", then invent something which

That has IMHO 2 main reasons:
- For lots of apps, a Makefile and some coding discipline is more than
  enough to support Linux/*BSD/MacOS-X even on different hardware.
  And there are always cases where you need OS-specific code anyway
  (e.g. manipulating routes).
  Yes, that may need much more coding discipline than the average
  programmer is used to.
- Converting $PROJECT to autotools is not easy. One has to learn and
  understand how the tools work[1] and what should be done in which way.
  And AFAIU (which is not much for autotools) one has to adapt the
  source anyway here and there (so it is not really a drop-in
  replacement).
  And if people consider using autotools, the project is probably quite
  large and complex ...

Add that to negative experiences with other autotools packages and I can
understand the above sentence.

> solves 0.1% of the problems (including their very special problem) and
> tell the rest of the world that their shiny new tools is *s cl*.

Bernd

[0]: Yes, this is free software and I could send patches etc. to fix
 that. But that price is IMHO higher than just not using autotools
 for new stuff.
[1]: And I haven't yet found a site with an easily understandable
 description for seasoned programmers with cross-compile
 and multi-hardware experience.
-- 
Firmix Software GmbH   http://www.firmix.at/
mobil: +43 664 4416156 fax: +43 1 7890849-55
  Embedded Linux Development and Services




Re: cross-compiling alternatives

2008-06-13 Thread Bernd Petrovitsch
On Fri, 2008-06-13 at 17:16 +0200, Enrico Weigelt wrote:
> * Bernd Petrovitsch <[EMAIL PROTECTED]> schrieb:
> 
> > > Basically yes. But if you have a big number of packages (or a huge 
> > > package) 
> > > which you didn't write yourself, there will be tests which run 
> > > executables. 
> > > Figuring out what all the tests are supposed to test in a complex unknown 
> > > software project is not trivial.
> > 
> > Yes, you get used to find the relevant lines in config.log and similar
> > with `grep` and similar tools;-)
> 
> Which are different on each package. So you have to configure each package

ACK.

> for each target manually, which leads the whole point of autoconf
> ad absurdum ;-o

Yup.

[...]
> > pkg-config generated (and generates? - I didn't check recently)
> > references to libraries including the full absolute path (which is the
> > one at build time. And at run-time there is usually
> > no /home/bernd/src/... or where some build may just run).
> 
> Recent pkg-config supports sysroot.

FC-6 has "only" 0.21.

> So you simply build your .pc files as usual (w/o sysroot prefix) and
> set the sysroot prefix via env on the pkg-config call.

From a quick glance over the man page of 0.23, yes.
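For reference, roughly like this (the variable names should be documented
in the man page of newer pkg-config versions; paths and the cross prefix
are made up):

    export PKG_CONFIG_SYSROOT_DIR=/opt/staging
    export PKG_CONFIG_LIBDIR=/opt/staging/usr/lib/pkgconfig
    # -I/-L flags now come back prefixed with the sysroot
    arm-linux-gcc $(pkg-config --cflags --libs glib-2.0) -o app app.c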

> > > Can you please explain ? How do the generated pkg_config files look like ?
> > > Ahh, you mean they contain e.g 
> > > -L/my/build/host/target/environment/opt/foo/lib 
> > > instead of just -L/opt/foo/lib ?
> 
> Then you've got a broken .pc file ;-P

The problem is that the build-time (cross-)linker needs to find the
(cross-compiled) lib under /my/build/host/target/environment/opt/foo/lib
at link time, and the run-time linker needs to find it under /opt/foo/lib
at run-time.
Hmm, after digging into that old project, it seems that libtool and
the .la files were the problem.

> > Yes. And even worse the compiled lib "foo" had explicit dependencies (on
> > lib "bar") on
> > "/my/build/host/target/environment/opt/bar/lib/libbar.so.1.2.3.4". 
> 
> And that's even more broken.

Yup. Maybe it was a result of my attempts to make libtool work somehow ...

> > And BTW pkg-config didn't support the concept of a "DESTDIR" variable
> > (and I don't care about the name of that variable).
> 
> No, why should it ?! It does not install anything.

But it may "use" installed files.

> Probably you're looking for sysroot ?

Yes, very probably.

Bernd
-- 
Firmix Software GmbH   http://www.firmix.at/
mobil: +43 664 4416156 fax: +43 1 7890849-55
  Embedded Linux Development and Services




Re: cross-compiling alternatives

2008-06-13 Thread Bernd Petrovitsch
On Fri, 2008-06-13 at 14:17 +0100, Jamie Lokier wrote:
> Bernd Petrovitsch wrote:
> > Actually the size of ints (or any other type) can be easily deduced
> > without running a (for the target) compiled binary:
> > - compile the binary (for the target) with an initialized variable with
> >   that value.
> > - use cross nm (or a similar tool) to read it from there.
> 
> Or the method autoconf uses - binary search, using a compile-time
> numeric comparison which resolves to a successful or failed compile.

Good, I didn't know that.

> That seems more portable to me.

Yes, just using the compiler is better.
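As far as I understand it, the trick boils down to a compile that only
succeeds when the guess is right, so no target binary ever has to run -
a rough, untested sketch with a made-up cross-compiler prefix:

    # negative array size => compile error, so this only compiles if the guess fits
    cat > conftest.c <<'EOF'
    int check_int_size[sizeof(int) == 4 ? 1 : -1];
    EOF
    if arm-linux-gcc -c conftest.c -o conftest.o 2>/dev/null; then
        echo "sizeof(int) is 4"
    fi

autoconf just repeats that with different guesses (hence the binary
search).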

Bernd
-- 
Firmix Software GmbH   http://www.firmix.at/
mobil: +43 664 4416156 fax: +43 1 7890849-55
  Embedded Linux Development and Services




Re: cross-compiling alternatives (was Re: [PATCH 0/1] Embedded Maintainer(s)...)

2008-06-13 Thread Bernd Petrovitsch
On Fri, 2008-06-13 at 08:43 +0200, Alexander Neundorf wrote:
> On Thursday 12 June 2008 17:50:31 you wrote:
> > On Thu, 2008-06-12 at 08:23 -0700, Tim Bird wrote:
> > > Rob Landley wrote:
> > > > However, having one or more full-time engineers devoted to debugging
> > > > cross-compile issues is quite a high price to pay too.  Moore's law
> > > > really doesn't help that one.
> > > >
> > > > I'm not saying either solution is perfect, I'm just saying the "build
> > > > under emulation" approach is a viable alternative that gets more
> > > > attractive as time passes, both because of ongoing development on
> > > > emulators and because of Moore's law on the hardware.
> > >
> > > I agree with much that you have said, Rob, and I understand the argument
> > > for getting the most gain from the least resources, but I have a
> > > philosophical problem with working around the cross-compilation problems
> > > instead of fixing them in the upstream packages (or in the autoconf
> > > system itself).
> > >
> > > Once someone fixes the cross-compilation issues for a package, they
> > > usually stay fixed, if the fixes are mainlined.
> >
> > I don't think that's true, unfortunately. Autoconf makes it _easy_ to do
> > the wrong thing, and people will often introduce new problems.
> >
> > If we just made people write portable code and proper Makefiles, it
> > would be less of an issue :)

ACK. And proper build time tools.

> Well, IMO this makes it sound too easy.
> If you write portable software, you have to do platform checks.
> Basically they can be done by
> -checking for the existence of files

That can be done, as - sooner or later - one must install the compiled
stuff anyway. So one has a root directory somewhere and one can tell the
tools.

> -checking if something builds
> -checking the output of running something you just built

And the above are not really a big problem - embedded people usually
know such details and can tell the autoconf tools.
Even worse are (or at least were) tools like pkg-config and libtool,
which generate paths to the build-time libraries.

The only simple solutions so far (without diving into the implementation
and searching for root causes) were AFAICS:
- do not use libtool for linking (as the link line as such without
  libtool works as expected)
- rewrite the generated pkg-config files after generation.
Yes, that's pretty ugly.
But perhaps I was just too dumb to find the correct solutions.

> The last one is the problem for cross compiling.
> Example: detecting the size of ints

Why on earth does someone need this explicitly during the build?
If you have portable software, all of that should be hidden in the code,
which simply uses "sizeof(int)".

IMHO the code (or whatever piece uses it) should be fixed and the
build-time stuff removed.

Bernd
-- 
Firmix Software GmbH   http://www.firmix.at/
mobil: +43 664 4416156 fax: +43 1 7890849-55
  Embedded Linux Development and Services




Re: cross-compiling alternatives

2008-06-13 Thread Bernd Petrovitsch
On Fri, 2008-06-13 at 11:06 +0200, Alexander Neundorf wrote:
> On Friday 13 June 2008 10:38:36 you wrote:
> > On Fre, 2008-06-13 at 08:43 +0200, Alexander Neundorf wrote:
> ...
> > > Well, IMO this makes it sound too easy.
> > > If you write portable software, you have to do platform checks.
> > > Basically they can be done by
> > > -checking for the existence of files
> >
> > That can be done as - sooner or later - one must install the compiled
> > stuff anyway. So one has root directory somewhere and one can tell the
> > tools.
> 
> Yes.
> 
> > > -checking if something builds
> > > -checking the output of running something you just built
> >
> > And the above are not really a big problem - 
> 
> "checking if something builds" is no problem, this just works. Running 
> something is a problem, as in "it doesn't just work" (...because you cannot 
> run it).

ACK. AC_TRY_RUN() must die completely.

> > embedded people usually know such details and can tell the autoconf tools.
> 
> Basically yes. But if you have a big number of packages (or a huge package) 
> which you didn't write yourself, there will be tests which run executables. 
> Figuring out what all the tests are supposed to test in a complex unknown 
> software project is not trivial.

Yes, you get used to finding the relevant lines in config.log and the
like with `grep` and similar tools ;-)
But most embedded projects don't have that many "large tools" -
mainly because space is limited.

> > Even worse is (or at least were) tools like pkg_config and libtool,
> > which generate directories to the build time library.
> 
> What do you mean with "generate directories" ? RPATH ?

pkg-config generated (and generates? - I didn't check recently)
references to libraries including the full absolute path (which is the
one at build time - and at run-time there is usually
no /home/bernd/src/... or wherever some build may just have run).

[...]
> > - rewrite generated pkg_config files after generation.
> > Yes, that's pretty ugly.
> > But perhaps I was just too dumb to find the correct solutions.
> 
> Can you please explain ? How do the generated pkg_config files look like ?
> Ahh, you mean they contain e.g 
> -L/my/build/host/target/environment/opt/foo/lib 
> instead of just -L/opt/foo/lib ?

Yes. And even worse, the compiled lib "foo" had explicit dependencies (on
lib "bar") on
"/my/build/host/target/environment/opt/bar/lib/libbar.so.1.2.3.4". And
that is not trivially overridable at run-time AFAIK, so that ld-linux
would find "/opt/bar/lib/libbar.so.1.2.3.4" instead.
For real-world names: glib is pretty commonly used by other libs. Voila,
an indirect dependency.

And BTW pkg-config didn't support the concept of a "DESTDIR" variable
(and I don't care about the name of that variable).

> > > The last one is the problem for cross compiling.
> > > Example: detecting the size of ints
> >
> > Why on earth does someone need this explicitly during the build?
> > If you have portable software, all of that should be hidden in the code
> > and use "sizeof(int)".
> 
> From the "developer of a buildsystem" POV: there will be users who will need 
> it. 

If there is at least one valid technical reason: yes.
If the only reasons are "we have had it for 10 years with the old system"
or "we don't want to fix the code because it takes us too much time":
well, tough decision.


> But this was not the point. My point was: testing something by running an 
> executable can be _a lot_ easier than testing the same without running 
> something.

Of course. But *that's* in general not possible when cross-compiling. And
having a 100% binary-compatible qemu installation for every ARM and MIPS
core out there is IMHO also not feasible.

Actually the size of ints (or any other type) can easily be deduced
without running a (target-)compiled binary (a rough sketch follows below):
- compile a binary (for the target) containing a variable initialized
  with that value.
- use a cross nm (or a similar tool) to read the value from the object
  file.
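One rough, untested variant of that (cross tool names made up; GNU nm's
-S/--print-size and -t d are used to show the symbol size in decimal):

    cat > conftest.c <<'EOF'
    /* the *size* of this symbol equals sizeof(int) on the target */
    char int_size_probe[sizeof(int)] = {1};
    EOF
    arm-linux-gcc -c conftest.c -o conftest.o
    arm-linux-nm -S -t d conftest.o | grep int_size_probe

The second column of the matching line is then the target's sizeof(int).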

Bernd
-- 
Firmix Software GmbH   http://www.firmix.at/
mobil: +43 664 4416156 fax: +43 1 7890849-55
  Embedded Linux Development and Services




Re: cross-compiling alternatives

2008-06-13 Thread Bernd Petrovitsch
On Thu, 2008-06-12 at 19:25 -0500, Rob Landley wrote:
> On Thursday 12 June 2008 11:12:13 Robert P. J. Day wrote:
> > On Thu, 12 Jun 2008, Mike Frysinger wrote:
> > > On Thu, Jun 12, 2008 at 11:50 AM, David Woodhouse wrote:
> > > > If we just made people write portable code and proper Makefiles,
> > > > it would be less of an issue :)
> > >
> > > people cant even write proper *native* makefiles.  mtd-utils for
> > > example ;).
> >
> > meooowww!  :-)  but at the risk of dragging this even further
> > off-topic, i am *constantly* asked by people how to set up makefiles
> > for their software project, and what would be nice is a small
> > collection of examples of a makefile (or makefiles) done *right*.  as
> > in,
> 
> Make doesn't scale.
> 
> 99% of the builds in the open source world are "make all", and most of the 
> smaller projects build natively on modern dual processor 2ghz laptops in 
> under 10 seconds anyway.
>
> The larger projects with significant build times usually find that make 
> doesn't suit their needs, so that they write some other build system.  
> Sometimes they do it on top of make, such as the kernel's kbuild.  Sometimes 
> they use another language like apache's ANT.  Sometimes they roll their own 

"ant" is also only "make reimplemented in Java" (or did I miss
something). I see no win here.

> in C (anybody remember X11's imake?)  KDE switched to cmake: 

That generated "only" a Makefile IIRC.

[...]
> Current compilers have a "build at once" mode where they suck the whole 
> project in and run the optimizer on it at once, resulting in noticeably 
> smaller and faster output at the expense of needing buckets of memory to hold 
> all the source code and intermediate structures in memory at once.  The main 
> roadblock to making use of this?  Ripping out the existing makefiles and 
> replacing them with a very small shell script that does something similar 
> to "gcc *.c".
> 
> The first question you should be asking when doing a new build system from 
> scratch is probably "should I really be using make"?
> 
> > properly recursive, 
> 
> Recursive make considered harmful:
>   http://aegis.sourceforge.net/auug97.pdf

ACK.

> How is needing to call make recursively _not_ just another way of sayng "the 
> dependency checking make does, which was the central idea behind its design, 
> is a lost cause and we need to jettison it to do builds"?

The problem is that build systems have (at least) two layers:
- the lower layer is the usual apps and libs and kernel ... bringing their
  working Makefiles with them.
- the upper layer needs to build the kernel, libs and apps.
  This usually needs a defined sequence and a set of consistent
  parameters.
But the lower layer doesn't "export" its locally available rule base, so
the `make` (or shell script) on the upper layer can't use it and one
must therefore `make -C $tooldir` - even if there is absolutely nothing
to do.
Of course the upper layer may remember whether a lib/app has already been
built, to avoid 60% of the "obviously" useless `make -C` calls. But that
doesn't really solve the problem.
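E.g. the upper layer - whether a Makefile or a plain shell script -
typically degenerates into something like this (sketch only, package names
and the cross prefix are made up):

    #!/bin/sh
    # upper-layer build driver: fixed order, remember what was already built
    set -e
    mkdir -p stamps
    for pkg in zlib busybox dropbear; do
        stamp="stamps/$pkg.built"
        [ -f "$stamp" ] && continue          # crude "already built" memory
        make -C "$pkg" CC=arm-linux-gcc      # recurse blindly, maybe for nothing
        touch "$stamp"
    done
    # note: a change *inside* an already-stamped package is silently missed,
    # which is why the stamps don't really solve the problem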

> I just did a "make distclean" on a qemu tree I had lying around.  On my 1.7 
> ghz 64 bit laptop, it took 9.2 seconds to figure out it had nothing to do, 
> just because it had to recurse into so many subdirectories to do it.

"Recursive make considered harmful"
Or you need more RAM and faster disks;-)

Bernd
-- 
Firmix Software GmbH   http://www.firmix.at/
mobil: +43 664 4416156 fax: +43 1 7890849-55
  Embedded Linux Development and Services

