Jon Dufresne wrote:
> 2008/11/18 Varrun Ashok <[EMAIL PROTECTED]>:
>
>> Hello everybody,
>> I have followed the CLFS book version 1.1.0 up to chapter IV: building
>> the basic tools. I then also decided to boot rather than chroot. The
>> procedures so far explain the installation of many important tools.
>> Why are we repeating the building and installing of all the packages,
>> like binutils, gcc, etc., again in chapter V: building the CLFS system?
>> When I went through the book I did not find the reason.
>> Do I have to get into the new kernel and reinstall all the packages
>> again instead of installing them from the host?
>
> I'm curious about this myself.
>
> It seems to me you don't _have_ to build a native toolchain unless you
> want a native toolchain. Are the authors suggesting that a native
> toolchain is more stable than a cross toolchain?

The type of build the book does is called a canadian cross. This page does
a good job of explaining it; I'll try to sum it up:
http://www.airs.com/ian/configure/configure_6.html
When compiling gcc you have 3 configure options related to the target:

  --build   (triplet of the system the build is taking place on,
             e.g. i486-pc-linux-gnu)
  --host    (triplet of the system the package will run on,
             e.g. x86_64-unknown-linux-gnu)
  --target  (triplet of the system the package will compile for,
             e.g. alpha-unknown-linux-gnu)

It is possible to design a build where the compiler is compiled on an i486,
that compiler then runs on an x86_64, and it generates code for an alpha.

cross-tools is the first stage in the build: a set of tools designed to run
on your host machine and build for the target architecture. Most of the time
the host machine and the target machine are one and the same.

You're probably asking yourself: why don't we just build the final system
directly? The problem is interference from the host. This is what the
sysroot book does, with varying degrees of success (see gettext and libtool
for perfect examples). Most packages are unaware of, or negligent about,
sysroot-style builds; they hard-code paths to files that are found on the
host system.

So we build a minimal system into /tools. /tools contains a minimal set of
utilities to compile the final system: a compiler, a shell, sed, patch, tar,
etc. Once we are in a chroot, /tools is all there is to worry about. The
build is very controllable: you don't have to worry about things getting
linked against the host system or compiled with the detected paths of
binaries from the host system.

The *lfs builds are designed so that as you install packages into your file
system, they take priority over the packages in /tools. The build order was
reworked a year or two ago; it was designed to be as alphabetical as
possible while still satisfying dependencies. So, as you install packages,
they may use the programs in /tools, but as each binary is generated the
system depends on /tools less and less, until it is not needed at all
anymore.

CLFS bootstraps an entire Linux system much the same way GCC bootstraps its
compiler with its 3-stage system: stage 1 is the compiler built with your
host compiler, stage 2 is the compiler built using the stage 1 compiler, and
the final stage 3 is the compiler built using the stage 2 compiler. You
might think the binaries from the stage 2 and stage 3 compilers would always
be identical, but they are not always; host operating systems can have an
unpredictable influence on the compile.

I hope this answers your question, it was a lot of typing!! ;-)

Joe
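
As a rough illustration of the three triplets described above (this is not a
command from the CLFS book; the source path, GCC version, and --prefix are
hypothetical), a canadian-cross configure invocation could look like:

    # Sketch only: the compiler is compiled on an i486, will run on an
    # x86_64, and will generate code for an alpha.  Paths, version and
    # prefix are made up for illustration.
    ../gcc-x.y.z/configure               \
        --build=i486-pc-linux-gnu        \
        --host=x86_64-unknown-linux-gnu  \
        --target=alpha-unknown-linux-gnu \
        --prefix=/cross-tools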
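
To make the /tools priority concrete, here is a minimal sketch of the kind
of chroot command the *lfs books use; the variable name ($CLFS) and the
exact PATH value vary between book versions, so treat it as illustrative
rather than as the book's command:

    # Sketch: /tools/bin is deliberately placed last in PATH, so each
    # final package installed into /bin, /usr/bin, etc. shadows its
    # temporary counterpart in /tools.
    chroot "$CLFS" /tools/bin/env -i                  \
        HOME=/root TERM="$TERM"                       \
        PATH=/bin:/usr/bin:/sbin:/usr/sbin:/tools/bin \
        /tools/bin/bash --login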
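
And for comparison with GCC's own 3-stage bootstrap, the classic make
targets of that era (newer GCC releases bootstrap and compare by default)
were roughly:

    # Sketch: stage 1 is built with the host compiler, stage 2 with
    # stage 1, and stage 3 with stage 2; "make compare" then checks
    # that the stage 2 and stage 3 object files are identical.
    make bootstrap
    make compare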
