On Wednesday 05 January 2011 06:31:01 Alexander Graf wrote:
> On 05.01.2011, at 13:07, Rob Landley wrote:
> > On Tuesday 04 January 2011 15:00:12 Alexander Graf wrote:
> >>>> I have this very issue with s390. The only host to run (and compile)
> >>>> this on is an s390. And few people have those. So it breaks from time
> >>>> to time.
> >>>
> >>> I have some pages bookmarked hinting how to get S390 Linux to boot
> >>> under hercules, the same way I have instructions for running m68k under
> >>> Aranym. But in general, if QEMU doesn't support it I have a hard time
> >>> making myself care...
> >>
> >> Few people jump through the hoops of running an emulator to compile and
> >> run qemu inside it when they only want to verify whether their patches
> >> break something. The general philosophy I've seen is that the best we can
> >> expect is a "does ./configure && make break on your x86_64 box?".
> >
> > If you're talking about running qemu on a non-x86 host, I don't do that. 
> > But if you're talking about running non-x86 code on qemu, my project's
> > motto is "we cross compile so you don't have to".  The thing is designed
> > so you grab a tarball and go "./run-emulator.sh" to get a shell prompt in
> > your emulated environment, with full development tools.  If you go
> > "./dev-environment.sh" instead you get a 2 gigabyte persistent ext2 image
> > mounted on /home so you can wget and build fairly large packages.
> >
> > I've built the whole of Linux From Scratch 6.7 inside this, on a
> > couple different platforms.  (Still debugging powerpc and mips, probably
> > uClibc issues.  Less spare time than I used to have...)
> >
> > In theory I could do the same with qemu itself, just like any other
> > software package.  If you feed it just one architecture in a
> > --target-list it shouldn't take _too_ long to build.  But in practice
> > running the result would be too slow to do more than boot to a shell
> > prompt and demonstrate that it worked.
>
> Yeah, don't worry. My point was that anything that doesn't build and run on
> x86 hosts has little chance of getting test coverage.

Which is half the reason I did Aboriginal Linux.

Download one of the prebuilt system images, use ./dev-environment.sh to get a 
shell prompt in the emulator with full development tools, wget your source 
code, ./configure, make, run test suite.

That means people who don't own this hardware can still test it on their 
laptop, or from a cron job.  It means you can package up a test environment 
with your bug report so a package maintainer (who hasn't got the hardware 
either) can reproduce your non-x86 bug and test their fixes.

I did a giant evil presentation on this once upon a time, over 200 slides.  
(It takes an 8-hour, day-long course to actually cover all that, but everybody 
kept wanting 1-hour summaries...  Oh well.)

  http://landley.net/aboriginal/downloads/presentation.pdf

> >>> I have been known to test out of tree architecture patches, though.  I
> >>> only ever got sh4 to work by patching qemu, for example.
> >>
> >> I really dislike out-of-tree.
> >
> > I can't stand 'em, but I don't control what gets merged into most
> > projects.
>
> The main issue is that it takes time and effort to get stuff upstream - and
> it's good that way. There are people out there who are great programmers,
> but unfortunately don't have the patience to go through an upstream merging
> cleanup process.

Sometimes it's not a question of cleanup, sometimes it's a question of the 
upstream developers simply not wanting to do something you want to do.  It's 
possible to disagree on _goals_.

If the then-kconfig maintainer thinks miniconfig is simply a bad idea, you 
can't exactly work around him by rewriting the implementation.

I've been submitting perl removal patches to the linux kernel build on and off 
since Peter Anvin went crazy and simultaneously introduced perl as a build 
prerequisite to the 2.6.25 kernel (and every other project he contributed to, 
a la klibc and syslinux and such, all at the same time).  He resists the 
removal on principle because he thinks it's GOOD that the kernel depends on 
perl to build.  I have no idea why.

Is sysfs an ABI the kernel exports to userspace that should be stable and 
documented, or is it a private channel that Greg KH and Kay Sievers made for 
their userspace udev package that nobody else should ever use?  When Linus 
believes one thing and Greg and Kay clearly believe another but won't _admit_ 
it, you can get some (hilarious or frustrating, depending on your point of 
view) squirming out of them:

  http://lkml.org/lkml/2007/7/20/487

Technical issues I'll engage with ad nauseam, but irreconcilable differences of 
_opinion_ you can't always work around.

The squashfs maintainer's efforts to get his patch merged upstream were 
positively _heroic_, but really: by the time EVERY SINGLE DISTRO is shipping 
an out of tree patch, the vanilla kernel not including it is a problem with 
the vanilla kernel.  (And yes a year or so before it was merged I checked, and 
couldn't find a non-toy distro (i.e. one with a package management system and a 
reasonably current kernel) that _didn't_ include squashfs.  Which didn't get 
merged for another 4 releases after that, and when it _did_ the file format was 
incompatible with what was already out there.)

I'm not sure the _twelve-year_ saga of union mounts (at least since I first saw 
an out of tree implementation) is really a "patience and cleanup" issue 
either.  Yes it's a hard problem to get right, but so is virtual memory.  The 
linux kernel has spent its entire history with a series of sucky VMs it's 
ripped out and replaced at least twice (anybody remember 2.4.10?), and that's 
not counting token-based thrashing control and /proc/sys/vm/swappiness and the 
OOM killer and so on.  But it worked all that out in-tree, and union mounts 
are still out of tree.  But NFS is in-tree.  Sometimes, it seems a bit 
arbitrary to me.  Oh well.

> >> As soon as an architecture runs publicly
> >> available code, it should get upstream, so others can benefit from it.
> >
> > Entirely agreed.  I've been waiting for any of the m68k improvements to
> > QEMU (to run more than just coldfire) to get merged for a long time.
> >  And I have a todo item to look at https://github.com/uli/qemu-s390
> > also...
>
> I'm currently working on the s390 parts, so no worries. I have a cleaned up
> tree and partially working system emulation already.

Ooh, I'm very interested in adding support for this to Aboriginal Linux.  
Please keep me posted...

Rob
-- 
GPLv3: as worthy a successor as The Phantom Menace, as timely as Duke Nukem 
Forever, and as welcome as New Coke.
