* Moffett, Kyle D | 2010-03-25 17:49:33 [-0500]:

>We can just use --enable-e500-double when building (recent?) GCC.
Yep, looks good.
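
Something along these lines should do the trick (the target triplet,
prefix and GCC version here are just my guesses, nothing we have
settled on):

  # hypothetical out-of-tree build of an e500v2 hard-float GCC;
  # the usual extra options (--enable-languages, sysroot, ...) omitted
  mkdir build && cd build
  ../gcc-4.4/configure --target=powerpc-linux-gnuspe \
      --enable-e500-double --prefix=/opt/e500v2
  make && make install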

>Ok, so hopefully we can all agree on "e500v2"?  That's the name I'm going to
>go ahead and use in my newest build-cycle.
Yep, I think so. However, we will see what actually slips into dpkg
once it gets there.

>For reference, I've included a summary of the rationale behind the
>suggestion:
>
>  * The only chipset families that support "SPE" instructions are:
>    * PowerPC e200
>    * PowerPC e500v1
>    * PowerPC e500v2
>
>  * The incompatibility between various SPE-capable CPUs means that an arch
>    spec of "spe" or "powerpcspe" is probably insufficiently descriptive.
Yes, "probably". Right now we don't see any such incompatibility in
practice.

>  * The "e200" processor series is an automotive processor and has
>    insufficient storage to run even something like Emdebian Crush, let
>    alone to be able to build anything on its own.  It should therefore
>    be excluded from our discussion.  This means we just care about
>    e500v{1,2} cores.
Right. The spec says that e200z4 and e200z6 are binary compatible with
e500. However, it also mentions that double precision can only be
achieved in software. So it looks like the double-precision opcodes
raise an invalid-opcode exception and we would have to emulate them in
the kernel. That still counts as binary compatible, I guess.
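
To make that concrete, a tiny probe like this should show it (my own
sketch, nothing from the spec; the flags may need adjusting):

  /* efd-test.c: with e500v2 hard float the addition below should
   * compile to an SPE double-precision opcode (efdadd).  On a core
   * without hardware double support (e200) that opcode should trap
   * as an invalid instruction unless the kernel emulates it.
   *
   * Build with something like:
   *   gcc -mcpu=8548 -mspe=yes -mabi=spe -mfloat-gprs=double -O2 efd-test.c
   */
  #include <stdio.h>

  int main(void)
  {
      volatile double a = 1.5, b = 2.25; /* volatile: keep gcc from folding */
      printf("%f\n", a + b);
      return 0;
  }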

>  * Freescale has indicated that they will not be building any more chipset
>    families including the SPE instructions, so we don't have to worry
>    about any newer chipset families.
>
>  * We can't tell exactly how common or uncommon the e500v1 chipsets are
>    because Freescale's chipset comparison tables all just say "e500"
>    without referring to the version.  As a result, we should probably be
>    safe rather than sorry and refer to the version in the arch name
>    (i.e. e500v1/e500v2).
>
>  * We should just call it "e500v2":
>    * Sufficiently descriptive of the hardware architecture
>    * Shorter and easier to type in commands (of which there are a lot)
>    * Similar situation to "lpia" (which is not called "i386lpia")
>
>The "easier-to-type" reason is especially applicable if we do a uclibc port,
>as the name "uclibc-linux-powerpce500" is much more of a pain to type out
>repeatedly than "uclibc-linux-e500".
>
>Is there anything I left out?
No, I think it is fine. You summarized it well.

>The difference between a regular cross-compile and an icecc/distcc
>cross-buildd is that all the ./configure shell-script madness and some of
>the preprocessor crap is run *entirely* on the target system, then the
>preprocessed code is shipped across the network to a big beefy x86 box for
>building.  The environment is indistinguishable from a native build
>(except that things build a lot faster).

I know how it works; I used it myself, hence the bug I pointed you to.
I used it only for the first build iteration; the second (and
following) iterations were native only.
Compile a little program with -fstack-protector, once natively and once
cross via icecc: I saw different results with gcc 4.3, and I haven't
checked later versions.
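
If you want to reproduce it, a test along these lines should do (my own
sketch, not the exact test case from the bug report):

  /* ssp-test.c: the local char buffer is what makes gcc emit the
   * stack-protector canary setup and check. */
  #include <string.h>

  int touch(const char *s)
  {
      char buf[64];                     /* large enough to trigger SSP */
      strncpy(buf, s, sizeof(buf) - 1);
      buf[sizeof(buf) - 1] = '\0';
      return buf[0];
  }

  int main(int argc, char **argv)
  {
      (void)argc;
      return touch(argv[0] ? argv[0] : "test");
  }

Compile it once natively and once through icecc, e.g. with
"gcc -O2 -fstack-protector -S ssp-test.c", and diff the resulting .s
files.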

>So even a relatively wimpy 1GHz dual-core system can keep 8-16 cores worth
>of beefy x86 systems busy, especially if it's ugly template-heavy C++ code
>or something else very CPU intensive to compile.  The downside is that the
>shell scripts, preprocessor, and linker all need to be run on the target
>board, but that's still way better than doing the whole build there.

Right. I'm okay with using icecc/distcc on buildds as long as the
target icecc machine runs the native architecture. I don't want to
cross-compile, even with icecc, unless I have to.
Looking at the build times of xulrunner 1.9.0.14 (hours:minutes):
- s390:          0:30
- i386:          0:33
- kfreebsd-i386: 0:39
- powerpc:       1:00
- alpha:         1:01
- ia64:          1:20
- me[0]:         1:29
- sparc:         1:35
- hppa:          2:00
- mipsel:        3:00
- mips:          3:00
- armel:        14:00

So I think it does not look too bad.

[0] I built it completely, including all debs; no clue how much extra
time that adds.

>Cheers,
>Kyle Moffett

Sebastian


