Re: [PATCH 0/1] Embedded Maintainer(s), [EMAIL PROTECTED] list

2008-06-15 Thread Leon Woestenberg
Hello all,

On Thu, Jun 12, 2008 at 2:41 AM, Rob Landley <[EMAIL PROTECTED]> wrote:
>
> Most packages don't cross compile at all.  Debian has somewhere north of
> 30,000 packages.  Every project that does large scale cross compiling
> (buildroot, gentoo embedded, timesys making fedora cross compile, etc) tends
> to have about 200 packages that cross compile more or less easily, another
> 400 or so that can be made to cross compile with _lot_ of effort and a large
> enough rock, and then the project stalls at about that size.
>

Agreed, OpenEmbedded has a few thousand, but your point is valid.
However, fleeing to target-native compilation is not the way to
improve the situation, IMHO.

Moore's law on hardware also goes for the host; I think the progress
is even bigger on big iron.

Also, how many of the 30,000 packages are useful for something like
your own Firmware Linux?

> Distcc can take advantage of smp, but that won't help the ./configure stage
> and I need to do some work on distcc to teach it to understand more gcc
>
If you want to build 1000+ packages, you don't need to run configure
itself multithreaded. There are enough jobs available to keep 16/32
processors busy (beyond that, you probably end up in
inter-package-dependencies stalling the build). This is just a guess
from what I see during a multi-threaded bake and multi-threaded make
on OpenEmbedded.
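The fan-out described above can be sketched with a flat shell driver (package names are purely illustrative, `xargs -P` is a common extension rather than strict POSIX, and real OpenEmbedded scheduling is dependency-aware — which is exactly what eventually stalls the parallelism):

```shell
# Fan four stub "package builds" out over parallel jobs; each sh -c
# invocation stands in for a full configure/compile/install of one package.
printf '%s\n' zlib ncurses busybox dropbear |
  xargs -P 4 -n 1 sh -c 'echo "built $0"'
```

With real builds, raising `-P` past the point where inter-package dependencies serialize the graph buys nothing.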

> However, having one or more full-time engineers devoted to debugging
> cross-compile issues is quite a high price to pay too.  Moore's law really
> doesn't help that one.
>
How about 30+ volunteers?

> I'm not saying either solution is perfect, I'm just saying the "build under
> emulation" approach is a viable alternative that gets more attractive as time
> passes, both because of ongoing development on emulators and because of
> Moore's law on the hardware.
>
I cannot follow your reasoning - Moore's law will help you more on the
big iron side of things.

That said, while I welcome any effort (such as yours) to help improve the
embedded Linux domain, I'd rather try to fix the cross-compile issues of
the few thousand packages I am interested in.

Yes, it hurts my brain.

Regards,
-- 
Leon
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: about size optimizations (Re: Not as much ccache win as I expected)

2008-06-15 Thread Jamie Lokier
David Woodhouse wrote:
> On Sat, 2008-06-14 at 10:56 +0100, Oleg Verych wrote:
> > I saw that. My point is pure text processing. But as it seems doing
> > `make` is a lot more fun than to do `sh` && `sed`.
> 
> The problem is that it _isn't_ pure text processing. There's more to
> building with --combine than that, and we really do want the compiler to
> do it.
> 
> _Sometimes_ you can just append C files together and they happen to
> work. But not always. A simple case where it fails would be when you
> have a static variable with the same name in two different files.

I suspect the simplest way to adapt an existing makefile is:

1. Replace each compile command "gcc args... file.c -o file.o"
   with "gcc -E args... file.c -o file.o.i".

2. Replace each incremental link "ld -r -o foo.o files..." with
   "cat `echo files... | sed 's/[^ ][^ ]*/&.i/g'` > foo.o.i".

3. Similar replacement for each "ar" command making .a files.

4. Replace the main link "ld -o vmlinux files..." with
   "gcc -o vmlinux --combine -fwhole-program `echo files... | sed 's/[^ ][^ ]*/&.i/g'`".

You can do this without changing the Makefile, if you provide suitable
scripts on $PATH for the make.
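As a concrete sketch of the filename rewriting in steps 2 and 4 (file names illustrative; the per-word substitution appends `.i` to every object on the line, not just at end-of-line):

```shell
# Map a link line's object list to the corresponding preprocessed files:
# "foo.o bar.o baz.o" becomes "foo.o.i bar.o.i baz.o.i".
objs="foo.o bar.o baz.o"
echo "$objs" | sed 's/[^ ][^ ]*/&.i/g'
```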

-- Jamie


Re: about size optimizations (Re: Not as much ccache win as I expected)

2008-06-15 Thread Oleg Verych
> You can do this without changing the Makefile, if you provide suitable
> scripts on $PATH for the make.

I want to add here the whole issue of kbuild's dependency
calculation and rebuild technique.

1) This whole infrastructure is needed only for developers. But a
developer, while writing/updating some code, must know what has changed
and how it impacts all dependent/relevant code. Thus, one must create a
list of all affected files *before* doing edit/build/run cycles (even with
git/quilt aid). And this list must be fed to the build system to make sure
everything needed is rebuilt, and nothing else is (to save time).

This is a matter of organizing tools and ways of doing things -- a very
important part of doing anything effectively.

2) OTOH, a user needs no such thing at all. New kernel -- new build from
scratch. Distros are the same. Besides, blind faith in a correct rebuild
from an old object pool is naive.

3) Testers applying and testing patches. OK, it's now a rule to have a
diffstat, and thus a list of changed files. But one can easily filter
them out of the diff/patch with `sed`, even rejecting pure
whitespace/comment changes.
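That sed filtering might look like this (assuming unified diffs with git-style `+++ b/` headers; plain `diff -u` output would need the prefix adjusted):

```shell
# Pull the list of changed files out of a unified diff by matching the
# "+++ b/<file>" header lines and stripping the prefix.
diff_text='--- a/fs/inode.c
+++ b/fs/inode.c
@@ -10,6 +10,7 @@
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1,4 +1,4 @@'
printf '%s\n' "$diff_text" | sed -n 's|^+++ b/||p'
```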

Now you have a list of files; feed it to the build system, as in (1). No
`make` (recursive or not, or whatever) is needed (use a ccache-like
thing in the general case to save build time). Make's key mechanism --
timestamps -- is an obstacle to development that `make`-based kbuild
2.6 somehow had to work around. What an irony.

Problems:

* more flexible source-usage (thus dependency) tracking is needed
(per-variable, per-function, per-file). This must not be random
comments near #include; it must be a natural part of the source files
themselves. Filenames are not subject to frequent changes. Big files
can be split, but the main prefix must stay the same, so there is no
need to change it in all users. A small "ENOENT || prefix*" heuristic is
quite OK here.

* implemented features and their options must be described and
documented in-place in the sources (distributed configuration). License
blocks are not needed; one has a top file with it, or MODULE_LICENSE().
Describe your source in a form that is easily parseable for
creating dependency and configuration items/options.

* once all this is in place, creating specific config sets by end users
must not be as painful for both sides as it now is.

#include's && #ifdef's are a proven PITA; flexible text processing
(analysis, transformations) with basic tools like `sed` (or `perl`) is
the right way IMHO. At this stage no `gcc -E` is needed for `cat $all
>linux.c` to work.

(Another stone of mine aimed at "The art of thinking in `make` and C".
I hope it's constructive. Again, I see all this handled by a very small
set of universal scripts.)
-- 
sed 'sed && sh + olecom = love'  <<  ''
-o--=O`C
 #oo'L O
<___=E M


Re: Not as much ccache win as I expected

2008-06-15 Thread Jörn Engel
On Fri, 13 June 2008 14:10:29 -0700, Tim Bird wrote:
> 
> Maybe I should just be grateful for any ccache hits I get.

ccache's usefulness depends on your workload.  If you make a change to
include/linux/fs.h, close to 100% of the kernel is rebuilt, with or
without ccache.  But when you revert that change, the build time differs
dramatically.  Without ccache, fs.h was simply changed again and
everything is rebuilt.  With ccache, there are hits for the old version
and all is pulled from the cache - provided you have allotted enough
disk for it.

If you never revert to an old version or do some equivalent operation,
ccache can even be a net loss.  On a fast machine, the additional disk
accesses are easily more expensive than the minimal cpu gains.
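The revert scenario can be sketched with a toy content-keyed cache (illustrative only; real ccache hashes the preprocessed source plus compiler flags, not a bare checksum):

```shell
# Toy content-addressed cache: the "compile" result is keyed on a
# checksum of the input, so reverting a file restores its old key --
# and therefore scores a cache hit.
cachedir=$(mktemp -d)
compile() {
  key=$(printf '%s' "$1" | cksum | cut -d' ' -f1)
  if [ -e "$cachedir/$key" ]; then
    echo hit
  else
    echo miss
    : > "$cachedir/$key"    # cache the freshly "compiled" result
  fi
}
compile 'VERSION = 1'   # first build: miss, result cached
compile 'VERSION = 2'   # header edited: miss, everything rebuilds
compile 'VERSION = 1'   # revert: same contents, same key -> hit
```

Without the revert in the last step, the cache only ever misses -- which is the point about workloads that never revisit an old version.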

Jörn

-- 
Public Domain  - Free as in Beer
General Public - Free as in Speech
BSD License- Free as in Enterprise
Shared Source  - Free as in "Work will make you..."


Re: Kernel boot problem on IXP422 Rev. A

2008-06-15 Thread Rob Landley
On Friday 13 June 2008 15:05:54 Tim Bird wrote:
> Rob,
>
> This is an excellent and concise description of the open
> source perspective on the problem.  I'll add just one note below.
>
> Rob Landley wrote:
> > 1) Try to reproduce the bug under a current kernel.  (Set up a _test_
> > system.)
>
> This sounds easy, but can be quite difficult.

It's not a question of difficult or easy: it's the procedure that works.

You don't get support from a commercial vendor unless you pay them money, and 
you don't get support from open source developers unless you help us make the 
next release just a little bit better.  (We never said our help was free, we 
just said it didn't cost _money_.  Ok, the FSF did but they don't speak for 
all of us...)

> Very often, product developers are several versions behind, with
> no easy way to use the current kernel version.

I'm aware of that.  But if you can't set up a test system to reproduce the bug 
on a current system, the rest of us haven't got a _chance_.

> For example, a 
> common scenario is starting with a kernel that comes with a board
> (with source mind you), where the kernel came from the semi-conductor
> vendor, who paid a Linux vendor to do a port, and it was
> released in a time-frame relative to the Linux vendor's
> product schedule.

Then poke your vendor to fix the problem.

If you've decided to use a divergent fork from a vendor rather than the 
mainstream version, then the vendor has to support that fork for you because 
we're not going to be familiar with it.  (You can _hire_ one of us to support 
it for you, but we're not going to do so on a volunteer basis.)

We're happy to debug _our_ code.  But "our code" is the current vanilla 
release tarball.  If you can't reproduce the problem in the current vanilla 
tarball, then it's not our bug.  If you can only reproduce it in an older 
version: congratulations, we must have fixed it since.  If you can only 
reproduce it in some other fork, obviously their changes introduced the bug.  
If it's "your code plus this patch", we need to see the patch.

If _you_ can't reproduce it in our code, how do you expect _us_ to?

> This is how you end up having people STARTING projects today
> using a 2.6.11 kernel.  (I know of many).

Oldest I've seen a new project launch with this year is 2.6.15, but I agree 
with your point.

Whoever decided backporting bug fixes to a 2.6.16 kernel forever was a good 
idea seems to have muddied the waters a bit.  Ironically I don't know anybody 
actually _using_ that version, but I've seen several people point to it to 
show that "the community" supports arbitrarily older versions forever, and 
thus they don't have to upgrade to get support, and 2.6.18 is actually 
_newer_ than that...

> The real difficulty, when a developer finds themselves in
> this position, is how to forward-port the BSP code necessary to
> reproduce the bug in the current kernel.  Often, the code
> is not isolated well enough (this is a vendor problem that
> really needs attention.  If you have the BSP in patches, it
> is usually not too bad to forward port even across several
> kernel versions.  But many vendors don't ship stuff this way.)

Yup.  Sucks, doesn't it?  This is not a problem that improves with the passage 
of time.

Might be a good idea to make it clear up front that even if your changes never 
get mainlined, failure to break up and break out your patches is still likely 
to cause maintenance problems down the road.

> The fact is that, by a series of small steps and delays by
> the Linux vendor, chip vendor, board vendor,
> and product developer, the code is out of step.

Hence the importance of breaking out and breaking up the changes.

> It's easy to say "don't get in this position", but
> this even happens when everyone is playing nice and actively
> trying to mainline stuff.  BSP support in arch trees often
> lags mainline by a version or two.

Getting out of sync is inevitable.  Happens to full-time kernel developers, 
that's why they have their own trees.  That's a separate issue from asking 
for patches and getting a source tarball that compiles instead.  "Here's a 
haystack, find the needle."

Mainlining changes and breaking them up into clean patches on top of some 
vanilla version (_any_ vanilla version) are two separate things.  You have to 
win one battle before you can even start the other.

> The number of parties involved here is why, IMHO, it has
> taken so long to make improvements in this area.

The lack of a clear consistent message from us to the vendors hasn't helped.

Rob
-- 
"One of my most productive days was throwing away 1000 lines of code."
  - Ken Thompson.


Re: [PATCH 0/1] Embedded Maintainer(s), [EMAIL PROTECTED] list

2008-06-15 Thread Rob Landley
On Sunday 15 June 2008 10:39:43 Leon Woestenberg wrote:
> Hello all,
>
> On Thu, Jun 12, 2008 at 2:41 AM, Rob Landley <[EMAIL PROTECTED]> wrote:
> > Most packages don't cross compile at all.  Debian has somewhere north of
> > 30,000 packages.  Every project that does large scale cross compiling
> > (buildroot, gentoo embedded, timesys making fedora cross compile, etc)
> > tends to have about 200 packages that cross compile more or less easily,
> > another 400 or so that can be made to cross compile with _lot_ of effort
> > and a large enough rock, and then the project stalls at about that size.
>
> Agreed, OpenEmbedded has a few thousands, but your point is valid.
> However, fleeing to target-native compilation is not the way to
> improve the situation IMHO.

You say it like fleeing is a bad thing. :)

I believe building natively under emulation is the Right Thing.  Cross 
compiling has always historically been a transitional step until native 
compiling became available on the target.

When Ken Thompson and Dennis Ritchie were originally creating Unix for the 
PDP-7, they cross compiled their code from a honking big GE mainframe because 
that was their only option.  One of the first things they wrote was a PDP-7 
assembler that ran on the PDP-7.  The reason they created the B programming 
language in the first place was to have a tiny compiler that could run 
natively on the PDP-7, and when they moved up to a PDP-11 Dennis had more 
space to work with and expanded B into C.

When they severed the mainframe umbilical cord as soon as they were able to 
get the system self-hosting, it wasn't because the PDP-7 had suddenly become 
faster than the GE mainframe.

Compiling natively where possible has been the normal way to build Unix 
software ever since.  Linux became a real project when Linus stopped needing 
Minix to cross-compile it.  Linus didn't "flee" Minix, he assures us he 
erased his minix partition purely by accident. :)

> Moore's law on hardware also goes for the host, 

Which is why people no longer regularly write application software in assembly 
language, because we don't need to do that anymore.  The result would be 
faster, but not better.

The rise of scripting languages like Python and JavaScript that run the source 
code directly is also related (and if you think people don't write complete 
applications in those, you haven't seen any of the Google apps).  The 
big push for Java in 1998 could happen because the hardware was by then fast 
enough to run _everything_ under an emulator for a processor that didn't 
actually exist (until Rockwell built one, anyway).

Build environments are now literally thousands of times faster than when I 
started programming.  The first machine I compiled code on was a Commodore 64 
(1 MHz, 8 bits; the compiler was called "blitz" and the best accelerator for 
it was a book).  The slowest machine I ever ran Linux on was a 16 MHz 386sx.

According to my blog, I moved from a 166mhz laptop to a 266mhz one on April 
13, 2002.  I started building entire Linux From Scratch systems on the 166mhz 
machine, including a ton of optional packages (apache, postgresql, openssh, 
samba, plus it was based on glibc and coreutils and stuff back then so the 
build was _slow_), hence the necessity of scripting it and leaving the build 
to its own devices for a few hours.

Even without distcc calling out to the cross compiler, the emulated system 
running on my laptop is several times faster than the build environment I had 
7 years ago (2001), somewhat faster than the one I had 5 years ago (2003), 
and somewhat slower than the one I had 3 years ago (2005).  (That's emulating 
an x86 build environment on my x86_64 laptop.  I didn't _have_ a non-x86 
build environment 5 years ago for comparison purposes.)

> I think the progress is even bigger on big iron.

Not that I've noticed, unless by "big iron", you mean "PC clusters".  (You can 
expand laterally if you've got the money for it and your problem distributes 
well...)

> Also, how many of the 30,000 packages are useful for something like
> your own Firmware Linux?

None of them, because Firmware Linux has a strictly limited agenda: provide a 
native build environment on every system emulation supported by qemu.  That's 
the 1.0 release criteria.  (Some day I may add other emulators like hercules 
for s390, but the principle's the same.)

Once you have the native build environment, you can bootstrap Gentoo, or 
Debian, or Linux From Scratch, or whatever you like.  I've got instructions 
for some of 'em.

The buildroot project fell into the trap of becoming a distro and having to 
care about the interaction between hundreds of packages.  I'm not interested 
in repeating that mistake.

Figuring out what packages other people might need is something I stopped 
trying to predict a long time ago.  If it exists, somebody wanted it.  People 
want/need the weirdest stuff: the accelerometer in laptops is used for 
rolling marble games.

Re: [PATCH 0/1] Embedded Maintainer(s), [EMAIL PROTECTED] list

2008-06-15 Thread Rob Landley
On Thursday 12 June 2008 13:18:07 Enrico Weigelt wrote:
> * Rob Landley <[EMAIL PROTECTED]> schrieb:
>
> Hi,
>
> > There's also qemu.  You can native build under emulation.
>
> did you ever consider that crosscompiling is not only good for
> some other arch, but a few more things ?

Sure, such as building a uClibc system on a glibc host, which my _previous_ 
firmware linux project (http://landley.net/code/firmware/old) was aimed at.

That used User Mode Linux instead of qemu, because "fakeroot" wasn't good 
enough and chroot A) requires the build to run as root, B) sometimes gets a 
little segfaulty if you build uClibc with newer kernel headers than the 
kernel in the system you're running on.

You can't get away from cross compiling whenever you want to bootstrap a new 
platform.  But cross compiling can be minimized and encapsulated.  It can be 
a stage you pass through to get it over with and no longer have to deal with 
it on the other side, which is the approach I take.

> > In addition, if you have a cross compiler but don't want to spend all
> > your time lying to ./configure, preventing gcc from linking against the
> > host's zlib or grabbing stuff out of /usr/include that your target hasn't
> > got, or
>
> #1: use a proper (sysroot'ed) toolchain

I break everything.  (I've broken native toolchains.  I just break them 
_less_.)

By my count sysroot is the fifth layer of path logic the gcc guys have added 
in an attempt to paint over the dry rot.

Personally I use a derivative of the old uClibc wrapper script that rewrites 
the command line to start with "-nostdinc -nostdlib" and then builds it 
back up again without having any paths in there it shouldn't.
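A minimal sketch of that rewriting (the compiler name, prefix, and flag set here are all made up, and a real wrapper also has to keep flag ordering straight; printing instead of exec'ing makes the rewrite visible):

```shell
# Rebuild the compiler command line from scratch: no default host paths,
# then add back only our own prefix. "real-gcc" and PREFIX are illustrative.
PREFIX=/opt/cross/target
wrap_gcc() {
  echo real-gcc -nostdinc -nostdlib \
       -isystem "$PREFIX/include" -L "$PREFIX/lib" "$@"
}
wrap_gcc -Os -c hello.c
```

A deployed version would `exec` the real compiler rather than `echo` the command line.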

> #2: fix broken configure.in's (and feed back to upstream or OSS-QM)

Whack-a-mole.  Fun for the whole family.  Only problem is, it never stops.

> #3: replace libtool by unitool

Uninstall libtool and don't replace it with anything, it's a NOP on Linux.

> > libraries are linked inside the emulator, anything that wants to look
> > at /proc or sysinfo does it natively inside the emulator...)
>
> Only crap sw looks at /proc at build time.
> Yes, there's *much* crap sw out there :(

99% of all the developers out there don't really care about portability, and 
never will.  Even if you eliminate the Windows guys and the people who don't 
do C, 90% of the people who are _left_ work on the PC first and get it 
working natively on other Linux platforms afterwards.

Cross compiling is a step beyond "portability".  They'll _never_ care about 
cross compiling.  If they get inspired to make it work on MacOS X, then 
you'll have to extract the source and _build_ it on MacOS X to make that 
work.  And 99% of all developers will nod their heads and go "quite right, as 
it should be".

This isn't going to change any time soon.

Rob


Re: cross-compiling alternatives (was Re: [PATCH 0/1] Embedded Maintainer(s)...)

2008-06-15 Thread Enrico Weigelt
* Robert Schwebel <[EMAIL PROTECTED]> schrieb:

Hi,

> Instead of hacking around and inventing new things, you should have
> spent your time for improving libtool ...

No, not with libtool.
I do not want to support that insane approach of tweaking command
lines in the middle - it's a Pandora's box.  I've already spent
too much time on it and decided to drop it completely.

Instead I prefer *clean* command lines.

Unitool provides commands at a higher functional level than gcc & co.
do, hiding the individual platform's details and also taking care of
things like importing .la libs, sysroot, etc.

Lt-unitool is a wrapper which parses libtool command lines and calls
Unitool to do the actual work.


cu
-- 
-
 Enrico Weigelt  ==  metux IT service - http://www.metux.de/
-
 Please visit the OpenSource QM Taskforce:
http://wiki.metux.de/public/OpenSource_QM_Taskforce
 Patches / fixes for lots of packages in dozens of versions:
http://patches.metux.de/
-


Re: cross-compiling alternatives (was Re: [PATCH 0/1] Embedded Maintainer(s)...)

2008-06-15 Thread Rob Landley
On Thursday 12 June 2008 13:34:21 Enrico Weigelt wrote:
> * Bill Gatliff <[EMAIL PROTECTED]> schrieb:
> > If the build system derives from autoconf, then a hacked-up
> > config.cache (or equivalent command-line args) often solves
> > problems for me.
>
> Only if you're working on *one specific* target for a long time.
> I, for example, have to support lots of different targets, so your
> approach does not work for me. Ah, and it's not *solving* any problem,
> just deferring it to some other day.

It's not deferring it, it's ripping out the failed automation and configuring 
it manually, answering each question by hand.  (That said, it's the approach 
I took to get bash to cross-compile in FWL.  Sometimes it's all you can 
do...)
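Hand-answering those questions looks roughly like this (the `ac_cv_*` names follow autoconf's cache-variable convention; exactly which variables a given package needs preseeded varies, so the three below are only examples):

```shell
# Pre-seed results that ./configure would normally discover by running
# test programs on the build machine -- impossible when cross compiling.
cat > config.cache <<'EOF'
ac_cv_func_mmap_fixed_mapped=yes
ac_cv_func_setvbuf_reversed=no
ac_cv_sizeof_long=4
EOF
# Then (illustrative): ./configure --host=arm-linux --cache-file=config.cache
grep -c '^ac_cv_' config.cache
```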

Rob


Re: cross-compiling alternatives

2008-06-15 Thread Enrico Weigelt
* Bernd Petrovitsch <[EMAIL PROTECTED]> schrieb:

> > Recent pkg-config supports sysroot.
> 
> FC-6 has "only" 0.21.

Not sure when the sysroot stuff got into an upstream release, but
maybe you should just update ;-P

> > So you simply build your .pc files as usual (w/o sysroot prefix) and
> > set the sysroot prefix via env on the pkg-config call.
> 
> From a quick glance over the man page of 0.23, yes.

Yep, it doesn't take $SYSROOT (as my original patch did), but uses
some prefix ...

> The problem is that the build-time (cross-)linker needs to find the
> (cross-compiled) lib under /my/build/host/target/environment/opt/foo/lib
> at link time and the shared linker under /opt/foo/lib at run-time.
> Hmm, after digging into that old project, it seems that libtool and
> the .la files were the problem.

Yes, libtool doesn't understand anything like sysroot.  It generates 
broken pathnames in the .la files.  You could use unitool and its 
libtool replacement.

> > > Yes. And even worse the compiled lib "foo" had explicit dependencies (on
> > > lib "bar") on
> > > "/my/build/host/target/environment/opt/bar/lib/libbar.so.1.2.3.4". 
> > 
> > And that's even more broken.
> 
> Yup. Maybe it was a result of my attempt to make libtool work somehow ..

Heh, I gave up trying to repair libtool a long time ago ;-P

> > > And BTW pkg-config didn't support the concept of a "DESTDIR" variable
> > > (and I don't care about the name of that variable).
> > 
> > No, why should it ?! It does not install anything.
> 
> But it may "use" installed files.

Yes, but they have to come strictly from the sysroot.  As said, recent 
pkg-config can handle this properly.
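For reference, the environment route with a recent pkg-config looks like this (the sysroot path is illustrative; `PKG_CONFIG_SYSROOT_DIR` prefixes the `-I`/`-L` paths pkg-config emits, and `PKG_CONFIG_LIBDIR` points the `.pc` search at the target tree):

```shell
# Make pkg-config answer for the target: search only the sysroot's .pc
# files, and prepend the sysroot to every path it prints.
export PKG_CONFIG_SYSROOT_DIR=/opt/sysroot
export PKG_CONFIG_LIBDIR="$PKG_CONFIG_SYSROOT_DIR/usr/lib/pkgconfig"
# Illustrative query (not run here): pkg-config --cflags zlib
echo "$PKG_CONFIG_LIBDIR"
```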
 
> > Probably you're looking for sysroot ?
> 
> Yes, very probably.

:)


cu


Re: cross-compiling alternatives (was Re: [PATCH 0/1] Embedded Maintainer(s)...)

2008-06-15 Thread Enrico Weigelt
* Jamie Lokier <[EMAIL PROTECTED]> schrieb:

> > > E.g. in python there are tests which call functions and check 
> > > their result to see if we are currently on a platform where 
> > > that function is broken (I think there was such a test for 
> > > poll() and some other functions).
> > 
> > IMHO, that's broken sw engineering from ground up.
> 
> Oh?  The alternative I see is to do the test at run-time.  But that
> adds to executable size and run-time slowdown on most platforms.

There's no generic answer to this; we have to look at the details
carefully ;-P

Most times I've seen those checks, they silently enable some 
features, e.g. if the check looks for certain kernel devices.  Definitely
the wrong way!  It really should be under the user's/packager's control
to explicitly enable features.  Besides, the existence of some
file or device says nothing about whether it will be usable
(or *should* be used) at runtime.  I've seen packages silently
enable some feature and then fail at runtime because the 
previously detected device was missing later.  What a nightmare 
for packagers.

Another point is broken syscalls.  Well, you *have to* check at runtime 
to be sure, or perhaps choose to ignore it and expect a sane system.

*If* you really want to do constraint checks, you should do them 
as a separate, optional step.  Maybe issue a big fat hint that 
one really should run that test (and follow certain instructions)
unless they know exactly what they're doing.

> Doing it at build time is an improvement, for those people who don't
> care about cross-compilation.  (Not me, you understand.)

IMHO, it's just laziness, at least in about 99% of the cases ;-P


cu


Re: cross-compiling alternatives (was Re: [PATCH 0/1] Embedded Maintainer(s)...)

2008-06-15 Thread Enrico Weigelt
* Robert Schwebel <[EMAIL PROTECTED]> schrieb:
> On Fri, Jun 13, 2008 at 08:30:52AM +0200, Alexander Neundorf wrote:
> > Battle of Wesnoth is currently converted to both Scons and CMake, and
> > in the end they will decide about the winner. (since Eric is good at
> > arguing I guess it will be scons).
> 
> The thing is that 'configure && make && make install' plus the usual
> --enable-foo / --disable-foo / --with-bla=blub semantics is simply *the*
> standard way of configuring stuff for unix systems. You don't need fancy
> tools, you get cross compiling almost for free and unix people simply
> know how to use it.

ACK.  The ./configure script's syntax is quite convenient and semi-standard.
So, IMHO, all packages which have such a configure stage should support
that syntax, even those that write their scripts by hand.
Strangely, some projects insist on having their config scripts incompatible
with autoconf's, just to show they weren't produced by it ;-o

> Been there, seen that. I maintain > 500 packages in PTXdist; guess
> which ones make 90% of the problems. Hint: they are not related to
> autotools ...

Well, if you count the improper uses of autoconf (AC_TRY_RUN, etc.),
then: yes.


cu


Re: Firmware Linux (was Re: Cross Compiler and loads of issues)

2008-06-15 Thread Enrico Weigelt
* Rob Landley <[EMAIL PROTECTED]> schrieb:

> Did you try my FWL project? :)
> 
> http://landley.net/code/firmware

hmm, doesn't look like it supports sysroot ...


cu


Re: cross-compiling alternatives (was Re: [PATCH 0/1] Embedded Maintainer(s)...)

2008-06-15 Thread Enrico Weigelt
* Jamie Lokier <[EMAIL PROTECTED]> schrieb:

> A trouble with that is some packages have hundreds of user-selectable
> options - or even thousands.  It is unfeasible to use --enable-foo
> options for all of those when configuring them.

Well, not that many ;-o
But taking care of such feature switches is the job of an automated
distro-builder tool, including things like dependency tracking.
Actually, I'm really too lazy to do that stuff by hand ;-P

But you're right, some packages have too many optional features, 
which would be better off as their own packages, and there's sometimes
much code out there which should be reused ...

> Some other packages _should_ have more options, but don't because it's
> too unwieldy to make them highly configurable with Autoconf.  

Adding new feature switches w/ autoconf is almost trivial
(well, not completely ;-o)

> Imho, Kconfig would be good for more programs than it's currently used for,
> and could be made to work with those --enable/--with options: you'd be
> able to configure them entirely on the command line, or interactively
> with "./configure --menu" (runs menuconfig), or with a config file.

Yes, that would be fine.  But for me the primary constraint is that
all switches/options can be specified on the command line - otherwise
I'd need extra complexity for each package in my distbuilder tool.

> Perhaps it might even be possible to write a very small, portable,
> specialised alternative to Make which is small enough to ship with
> packages that use it?

No, I really wouldn't advise this.  Make tools are, IMHO, part of 
the toolchain (in a wider sense).  One point is avoiding code 
duplication, but the really important one is: a central point of
adaptation/configuration.  That's e.g. why I like pkg-config so much:
if I need some tweaking, I just pass my own command (or a wrapper).
If each package did its library lookup completely by itself, I'd
also need to touch each single package whenever I need some tweaks.
I had exactly that trouble w/ lots of packages, before I ported
them to pkg-config.


cu


Re: cross-compiling alternatives (was Re: [PATCH 0/1] Embedded Maintainer(s)...)

2008-06-15 Thread Enrico Weigelt
* Jamie Lokier <[EMAIL PROTECTED]> schrieb:

> Media players with lots of optional formats and drivers are another.
> (They also have considerable problems with their Autoconf in my
> experience).

You probably mean their hand-written ./configure script, which is
intentionally incompatible w/ autoconf ("this is not autoconf" 
as its primary directive ;-P) ... I guess we've got the same one
in mind ;-)

> Reality is that Kconfig front end to autotools does work - as you've
> proved.  It's a good idea. :-)

Now, we just need an autoconf-alike frontend for Kconfig ;-)

> Most packages need lots of additional libraries installed - and the
> development versions of those libraries, for that matter.  Too often
> the right development version - not too recent, not too old.  
> With the wrong versions, there are surprises.

But that's not a problem of autoconf or any other build system,
just bad engineering (often on both sides).

> You said about too many user-selectable options.  Many large packages
> _check_ for many installed libraries.  Get them wrong, and you have
> the same problems of untested combinations.

It gets even worse when they silently enable certain features based
on the presence/absence of some lib.  That's btw one of the reasons why
sysroot is a primary constraint for me, even when building for the
same platform+arch.

> Have you felt uncomfortable shipping a package that does use Autoconf,
> Automake and Libtool, knowing that the scripts generated by those
> tools are huge compared with the entire source of your package?

Yeah, that's one of those things in autotools I never understood:
why isn't there just one function for each type of check/action, 
which is just called with the right params?


cu