Re: LFS 6.2-rc1 Released
On Tue, 24 Jul 2007 13:53:17 -0500 Bruce Dubbs [EMAIL PROTECTED] wrote:

> The Linux From Scratch community is pleased to announce the first release
> candidate of LFS 6.3. Please see
> http://www.linuxfromscratch.org/lfs/view/6.3-rc1/chapter01/whatsnew.html
> for a complete list of new packages since the last release. This being a
> test release, we would appreciate you taking the time to try it out and
> report any bugs you find in it to the LFS development team at
> [EMAIL PROTECTED]
>
> Bruce Dubbs
> LFS 6.3 Release Manager

Shouldn't the subject be: LFS 6.3-rc1 Released?

--
R.

--
http://linuxfromscratch.org/mailman/listinfo/lfs-dev
FAQ: http://www.linuxfromscratch.org/faq/
Unsubscribe: See the above information page
Re: LFS 6.2-rc1 Released
On Wed, 25 Jul 2007 07:45:43 +0100 TheOldFellow [EMAIL PROTECTED] wrote:

> On Tue, 24 Jul 2007 13:53:17 -0500 Bruce Dubbs [EMAIL PROTECTED] wrote:
>
>> The Linux From Scratch community is pleased to announce the first
>> release candidate of LFS 6.3. Please see
>> http://www.linuxfromscratch.org/lfs/view/6.3-rc1/chapter01/whatsnew.html
>> for a complete list of new packages since the last release. This being a
>> test release, we would appreciate you taking the time to try it out and
>> report any bugs you find in it to the LFS development team at
>> [EMAIL PROTECTED]
>>
>> Bruce Dubbs
>> LFS 6.3 Release Manager
>
> Shouldn't the subject be: LFS 6.3-rc1 Released?
>
> ---
> R.

Sorry about the x-post, I wasn't awake.

R.
Re: x86_64 build method
Jeremy Huntwork jhuntwork at linuxfromscratch.org writes:

> If I end up getting it sorted out, I'll let you take a look before I
> commit anything.

Manuel, I'm slowly beginning to understand how the HLFS render 'magic' works. One question: would the 'condition' parameter be usable in an ENTITY declaration? If it is, the differences between the books could be even more minimal, as we can set an entity for the target triplet and dynamic linker based on the arch we are building.

--
JH
Re: x86_64 build method
Jeremy Huntwork jhuntwork at linuxfromscratch.org writes:

> 2) The commands to adjust the gcc spec file would have to change to
> incorporate either dynamic linker. (Also, the current command in
> chapter 5's adjusting the toolchain,
>
>   gcc -dumpspecs | sed 's@^/lib/ld-linux.so.2@/tools&@g' \
>
> assumes that we will find the name of the dynamic linker at the beginning
> of the line. In x86_64, this isn't the case.)

I think that at least the chapter6/readjusting phase could be made a little simpler. On the x86_64 arch this command:

  gcc -dumpspecs | sed \
      -e 's@/tools/lib64/ld-linux-x86-64.so.2@/lib64/ld-linux-x86-64.so.2@g' \

can easily become:

  gcc -dumpspecs | sed -e 's@/tools@@g' \

I can't test this on x86 right atm... would anyone be able to verify that this command would also work for x86?

--
JH
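[Editorial aside: as a quick check of the proposed simplification, here is a sketch using a fabricated two-line stand-in for `gcc -dumpspecs` output (real output is far longer); the simpler expression strips the /tools prefix from the 32-bit and 64-bit dynamic linker paths alike.]

```shell
# Fabricated stand-in for two lines of `gcc -dumpspecs` output;
# the real specs output is much longer.
specs='-dynamic-linker /tools/lib/ld-linux.so.2
-dynamic-linker /tools/lib64/ld-linux-x86-64.so.2'

# The proposed simplification: drop every /tools prefix.
printf '%s\n' "$specs" | sed -e 's@/tools@@g'
# → -dynamic-linker /lib/ld-linux.so.2
# → -dynamic-linker /lib64/ld-linux-x86-64.so.2
```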
Re: lfs build-logs
Jeremy Huntwork wrote:

> Heya,
>
> Not sure how important this is for the rc books, but chapter 6 gcc tells
> the user to compare the gcc test results with those at this missing
> link: http://www.linuxfromscratch.org/lfs/build-logs/6.3-rc1
>
> Just want to make sure we don't forget to generate these for 6.3.

I don't have a copy of my logs, but I put in a stub for the logs. Dan, can you add the logs for -rc1?

> Also, for a while now, the gcc dummy tests have not been totally
> accurate for one section. This command ends up producing a lot more
> output than the book says it will:
>
>   grep -o '/usr/lib.*/crt[1in].* .*' dummy.log
>
> Should we refine the above command or the expected output?

I remember that discussion. I thought we fixed it. I'll check into it some more.

--
Bruce
Re: lfs build-logs
On 7/25/07, Bruce Dubbs [EMAIL PROTECTED] wrote:

> Jeremy Huntwork wrote:
>
>> Heya,
>>
>> Not sure how important this is for the rc books, but chapter 6 gcc
>> tells the user to compare the gcc test results with those at this
>> missing link: http://www.linuxfromscratch.org/lfs/build-logs/6.3-rc1
>>
>> Just want to make sure we don't forget to generate these for 6.3.
>
> I don't have a copy of my logs, but I put in a stub for the logs. Dan,
> can you add the logs for -rc1?

I don't have them at the moment, and I'm not at my main build box. Manuel is probably the best candidate since he can whip together a jhalfs build rather quickly. He may even have logs from the other day when he did a test build.

>> Also, for a while now, the gcc dummy tests have not been totally
>> accurate for one section. This command ends up producing a lot more
>> output than the book says it will:
>>
>>   grep -o '/usr/lib.*/crt[1in].* .*' dummy.log
>>
>> Should we refine the above command or the expected output?
>
> I remember that discussion. I thought we fixed it. I'll check into it
> some more.

I recall that, but nothing ever came of it.

--
Dan
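[Editorial aside: the command under discussion scans the linker's verbose log for the crt startfiles. A sketch against a fabricated dummy.log (the paths and gcc version below are invented) shows how the pattern can legitimately print more lines than the three the book lists, because the gcc-internal search directories also yield matches.]

```shell
# Fabricated excerpt of a dummy.log; a real one would come from
# something like:
#   echo 'main(){}' > dummy.c
#   cc dummy.c -Wl,--verbose &> dummy.log
cat > dummy.log <<'EOF'
attempt to open /tools/lib/crt1.o failed
attempt to open /usr/lib/gcc/i686-pc-linux-gnu/4.1.1/crtbegin.o succeeded
attempt to open /usr/lib/gcc/i686-pc-linux-gnu/4.1.1/../../../crt1.o succeeded
attempt to open /usr/lib/crt1.o succeeded
attempt to open /usr/lib/crti.o succeeded
attempt to open /usr/lib/crtn.o succeeded
EOF

# The check from the book: it matches the gcc-internal ../../../crt1.o
# path as well, so it prints four lines here, not three.
grep -o '/usr/lib.*/crt[1in].* .*' dummy.log
```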
Re: Gnome-2.18: Ejecting CD/DVD ROMs
Randy McMurchy wrote:

> Bruce Dubbs wrote these words on 07/25/07 19:21 CST:
>
>> Where is the `eject' program located? It is not in BLFS; at least it is
>> not in the index.
>
> It is not in BLFS. It is referenced a couple of times in the book as an
> optional component. It is truly CMMI. If I recall, it installs two
> little programs in /usr/bin (eject is one) and respective man pages and
> nothing else.

That's the one; it's an optional runtime dep of gnome virtual file system. You can find it at http://ca.geocities.com/[EMAIL PROTECTED]/eject.html

--
http://linuxfromscratch.org/mailman/listinfo/blfs-dev
FAQ: http://www.linuxfromscratch.org/blfs/faq.html
Unsubscribe: See the above information page
Re: x86_64 build method
Greg Schafer wrote:

> Anyhow, I still suspect there is a buglet involving MULTILIB_OSDIRNAMES
> somewhere in the GCC driver that needs to be accounted for in this
> `--disable-multilib' build method, but my brain hurts when trying to
> figure out all the twisty parts of gcc.c.

Thanks for your help and the bits you were able to confirm. And sorry for being a little bit testy on the other threads. At least we now have a definite working setup in the branch. Enough that I was able to build a 64-bit LiveCD based on it. (More on that shortly...)

Do you know off-hand if anything changes with gcc-4.2? Now that we have a separate branch for 6.3, trunk will be milestoned for 7.0, which means updating glibc and gcc.

--
JH
Re: x86_64 build method
Jeremy Huntwork wrote:

> Do you know off-hand if anything changes with gcc-4.2?

I've only tested x86 with GCC-4.2. I'll get to x86_64 and ppc when time allows.

Regards
Greg

--
http://www.diy-linux.org/
Re: x86_64 build method
Jeremy Huntwork wrote:

> Jeremy Huntwork wrote:
>
>> As an aside, the effects of their not having a /lib64 dir or symlink
>> seems to be that if I want to use a CLFS system as a host, I *must* use
>> their pure64 patch. I tried a build last night without using that patch
>> and just using --disable-multilib and appropriate symlinks, and gcc
>> pass1 failed when it got to stage1 of the bootstrap. I didn't get the
>> opportunity to add a /lib64 symlink and test it further...
>>
>> Hmm, that's interesting. I didn't expect that. I suppose that if the
>> above is correct this also means that your native build expects a
>> /lib64 dir or symlink on the host?
>
> This is confirmed. Adding /lib64 and /usr/lib64 symlinks to a CLFS host
> enables gcc to bootstrap in pass1 without using the pure64 patch.

Initially I thought this part of the pure 64 patch:

  --- gcc-4.2.0.orig/gcc/config/i386/t-linux64	2007-05-16 19:21:19.0 -0400
  +++ gcc-4.2.0/gcc/config/i386/t-linux64	2007-05-18 17:04:36.0 -0400
  @@ -6,7 +6,7 @@
   MULTILIB_OPTIONS = m64/m32
   MULTILIB_DIRNAMES = 64 32
  -MULTILIB_OSDIRNAMES = ../lib64 ../lib
  +MULTILIB_OSDIRNAMES = ../lib ../lib32

was why you weren't seeing the -L/lib/../lib64 stuff in your -v output. I tried my build with just that part changed, but I still ended up with -L/lib/../lib appearing (which wasn't in your log). Therefore I don't really know what is going on :-( Tho' it does appear from what you say that the host's /lib* layout does indeed play a role.

Ideally, the build method should work no matter what the /lib* layout on the host is. But I must say, it does appear that any 64-bit host that is missing /lib64 and /usr/lib64 is not kosher. After all, the facts are that a) the lib64 stuff is the default config throughout the entire upstream toolchain sources, b) the big distros all appear to use lib64, and c) lib64 is apparently mandated by the LSB (tho' it's a bit vague there, judging from my casual glance). But on the flipside, all of the above possibly assumes a multilib setup? Dunno.

It would be very interesting if all the 64-bit distros could be surveyed (ie: ls -ld /l*) to find out their /lib* layout arrangements and whether they are multilib or pure 64.

Anyhow, I still suspect there is a buglet involving MULTILIB_OSDIRNAMES somewhere in the GCC driver that needs to be accounted for in this `--disable-multilib' build method, but my brain hurts when trying to figure out all the twisty parts of gcc.c.

Regards
Greg

--
http://www.diy-linux.org/
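[Editorial aside: the survey Greg suggests can be scripted. The directory tree below is fabricated to mimic one possible pure-64 layout; pointing the same checks at / would inspect a real host instead.]

```shell
# Build a fake root that mimics a pure-64 layout (fabricated for
# illustration; substitute root=/ to survey a real system).
root=$(mktemp -d)
mkdir -p "$root/lib" "$root/usr/lib"
ln -s lib "$root/lib64"
ln -s lib "$root/usr/lib64"

# The survey itself, rooted at $root:
ls -ld "$root"/l* "$root"/usr/l*

# Rough classification of the layout:
if [ -L "$root/lib64" ]; then
    echo "pure 64 (or pure 32): lib64 is a symlink to lib"
elif [ -d "$root/lib64" ]; then
    echo "multilib: separate lib and lib64 directories"
else
    echo "32-bit only: no lib64 at all"
fi
```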
Re: dbus-python error
On 7/25/07, DJ Lucas [EMAIL PROTECTED] wrote:

> /sources/dbus-python-0.82.0/install-sh -c -m 644 'types.py' '/usr/lib/python2.5/site-packages/dbus/types.py'
> 'import site' failed; use -v for traceback
> Traceback (most recent call last):
>   File "<string>", line 2, in <module>
>   File "/usr/lib/python2.5/os.py", line 696, in <module>
>     import copy_reg as _copy_reg
>   File "/usr/lib/python2.5/copy_reg.py", line 7, in <module>
>     from types import ClassType as _ClassType
>   File "types.py", line 6, in <module>
>     from _dbus_bindings import ObjectPath, ByteArray, Signature, Byte,\
> ImportError: No module named _dbus_bindings

It's working fine for me, although I have python-2.4 on this system. Do you see any strange output from

  python -v -c 'import _dbus_bindings'

For me, it shows it loads /usr/lib/python2.4/site-packages/_dbus_bindings.so

FWIW, I have these python modules added (in order):

  [EMAIL PROTECTED] sources]# grep py /var/log/packages
  numpy-1.0.3
  pyorbit-2.14.3
  pygobject-2.12.3
  pycairo-1.4.0
  pygtk-2.10.4
  gnome-python-2.18.2
  gnome-python-desktop-2.18.0

I believe only dbus, python and dbus-glib should make a difference when trying to load dbus-python. Any strange output from

  ldd /usr/lib/python2.5/site-packages/_dbus_bindings.so

Mine shows just linking to libdbus-1 and libc.

--
Dan
Re: Gnome-2.18: Ejecting CD/DVD ROMs
Randy McMurchy wrote:

> Bruce Dubbs wrote these words on 07/25/07 19:21 CST:
>
>> Where is the `eject' program located? It is not in BLFS; at least it is
>> not in the index.
>
> It is not in BLFS. It is referenced a couple of times in the book as an
> optional component. It is truly CMMI. If I recall, it installs two
> little programs in /usr/bin (eject is one) and respective man pages and
> nothing else.

Thanks, I see it now in HAL and gnome-vfs. Both are kinda hidden down below the usual dependencies sections. I know it is a runtime dependency. I just never noticed this before.

--
Bruce
Re: x86_64 build method
On Wed, Jul 25, 2007 at 05:45:48PM -0400, Ivan Kabaivanov wrote:

> The only big issue is 32bit vs 64bit. As someone already mentioned
> previously in this thread, there are almost nil benefits in building a
> 64bit userland. Very few applications can make use of being compiled
> 64bit. So on ultrasparc (64bit sparc) I've always done what the
> ultrasparc gurus have suggested for many years: 32bit userland + 64bit
> cross compiler and 64bit kernel. So if you decide to support x86_64
> you'll end up needing a cross compiler just for the kernel. Oh, and you
> don't actually need multilib glibc either if you go with pure 32bit/pure
> 64bit userland. Even though 64bit CPUs sold outnumber 32bit CPUs sold at
> the moment, the installed base of 32bit CPUs is far larger than 64bit
> CPUs. So I suggest LFS remain for the foreseeable future purely 32bit
> userland.

For traditional 64-bit platforms, that is true. On my own mac G5 (which needs a 64-bit kernel), the only real benefit of multilib is that I get to run the testsuite on the kernel compiler. OTOH, I get to say "this one goes up to 64" ;).

Having said that, I don't actually notice that the 64-bit parts of my desktop (that would be gimp, gnumeric, kde, the audio apps, and icewm on multilib) are slower, nor do I notice that the 32-bit desktop as a whole is any more responsive to e.g. changing the active window - in fact, if anything it *feels* slower with 32-bit userspace. But enough of traditional multilib 64-bit platforms; I'm not about to propose that ppc64 be added to LFS ;-)

For x86_64, the situation is very different. The problem with x86 is that it lacks registers, so gcc produces slow code. With x86_64, the code is faster. A 64-bit kernel also appears to avoid the problems of accessing large amounts of memory (I say "appears" because none of my boards have more than 2GB of memory, and hardware or bios limitations have been reported several times on lkml). For sure, it avoids the whole idea of highmem and bounce buffers in kernelspace.

I was hoping that this discussion would be deferred until after the holiday season. If not, I guess I'll have to come down in the "64 bits good, 32 bits less good" camp for x86|x86_64.

The LFS family of projects are all about learning and *building* the software. Building multilib can be an aggravation - the base system has to build packages with libraries in both sizes (for LFS we could argue about a few of them, but the problem is that *somebody* will find an application that they want to build in the other size), and inevitably that sometimes means wasting time by installing the associated programs from the first size and then overwriting them with the programs from the second size. Sure, the base system is not a big deal, it just takes longer. The real multilib fun is in BLFS - can you say gdk-pixbuf-query-loaders or gnome servers?

I'm tempted to suggest that this would be a good time to put ppc (32-bit only) back into the book, but I'm not sure if there is a big enough user base to make that worthwhile?

ĸen
--
das eine Mal als Tragödie, das andere Mal als Farce [the first time as tragedy, the second time as farce]
Re: x86_64 build method
On Tuesday 24 July 2007 12:10, Matthew Burgess wrote:

> On Tue, 24 Jul 2007 11:59:39 -0400, Jeremy Huntwork [EMAIL PROTECTED] wrote:
>
>> Matthew Burgess wrote:
>>
>>> On Tue, 24 Jul 2007 11:40:24 -0400, Jeremy Huntwork [EMAIL PROTECTED] wrote:
>>>
>>>> The question is, do we want x86_64 to be a separate book, or simply
>>>> roll these small changes into a conglomerate book with x86?
>>>
>>> I'd certainly prefer them to be in the same book,
>>
>> My biggest problem with this approach is that it gets to be a nightmare
>> to edit. But, it is do-able.
>
> Hmm, that nightmare seems a bit extreme. Certainly, for native x86-64,
> which is the only additional target we're contemplating at the moment,
> having 2 paragraphs (or small sections at the most) in the book
> surrounded in the relevant profiling syntax doesn't seem too onerous to
> me. Once in there, I doubt they'd need amending much - probably only if
> newer GCC versions change relevant portions of the specs file.
>
> Of course, if more targets are desired in the future, our approach may
> well need to change, but for now I think x86 and x86-64 native builds
> capture the largest section of the LFS audience, and anyone else can
> continue on to CLFS.
>
> Regards,
>
> Matt.

Speaking from experience building LFS on x86, ppc (32bit) and sparc (both 32 and 64 bit): except for the dynamic linker and the boot loader, there is little to no difference in the instructions when building on different architectures. So with minimal effort the book can be modified to apply universally. The only big issue is 32bit vs 64bit. As someone already mentioned previously in this thread, there are almost nil benefits in building a 64bit userland. Very few applications can make use of being compiled 64bit. So on ultrasparc (64bit sparc) I've always done what the ultrasparc gurus have suggested for many years: 32bit userland + 64bit cross compiler and 64bit kernel. So if you decide to support x86_64 you'll end up needing a cross compiler just for the kernel.

Oh, and you don't actually need multilib glibc either if you go with pure 32bit/pure 64bit userland. Even though 64bit CPUs sold outnumber 32bit CPUs sold at the moment, the installed base of 32bit CPUs is far larger than 64bit CPUs. So I suggest LFS remain for the foreseeable future purely 32bit userland.

Ideally, parts of CLFS would be merged into LFS. I never understood the need for CLFS. Presumably it was for people like me who were building LFS on non-x86 architectures. But CLFS is just complicating what is a rather simple procedure. The only useful things in it are the extra packages needed for different architectures, and the instructions to build a cross compiler. Everything else is just LFS.

I understand the reluctance of the LFS devs to explicitly support non-x86 builds, as someone has to spend the time and effort to test the instructions on those other architectures. But I still maintain that since you're discussing the inclusion of x86_64, you might as well consider modifying the instructions minimally so that, even if the book doesn't mention non-x86, the instructions will still work. I'm talking about the dynamic linker and the sed command fixing the gcc specs.

IvanK.
LFS LiveCD x86_64-6.3-min-pre1 Available
Hello,

The LFS LiveCD team is pleased to announce a new 64bit-only CD. It is a minimal CD, meaning that it contains no X Window System and dependent software, nor any source packages. The LFS book that is included is based on the current development x86_64 branch. Be advised that as of now that book contains no instructions for building a boot loader, and some of the textual information may need adjusting. However, it will produce a working base system.

Since this is a pre-release, please help test the CD and report any bugs you may find to the livecd mailing list.

The link to the primary mirror is:
ftp://ftp.lfs-matrix.net/pub/lfs-livecd/lfslivecd-x86_64-6.3-min-pre1.iso

There is also a secondary mirror:
ftp://ftp.osuosl.org/pub/lfs-livecd/lfslivecd-x86_64-6.3-min-pre1.iso

Enjoy!

--
JH
Re: Gnome-2.18: Ejecting CD/DVD ROMs
On 7/25/07, Randy McMurchy [EMAIL PROTECTED] wrote:

> Dan Nicholson wrote these words on 07/25/07 18:21 CST:
>
>> I think you actually need the `eject' program. It's a runtime dep for
>> HAL on Linux. It's called by /usr/lib/hal/hal-storage-eject, which is
>> used by the Eject method.
>
> I get it now. Though it's said in the gnome-mount docs that:
>
>   "The grand plan with gnome-mount is to get the appropriate GNOME
>   software (such as gnome-volume-manager and gnome-vfs) to use this
>   instead of invoking mount(1)/umount(1)/eject(1) / invoking methods on
>   HAL directly."
>
> I got confused with that and thought that gnome-mount actually did the
> work. I suppose gnome-mount/gnome-volume-manager simply adds some
> wrappers to the HAL routines so the operations don't need root
> privileges. I must admit I'm weak in this area.

Right. `eject' is a pretty simple program. I would have actually thought that hal would just have its own implementation, but I guess not. Although, hal does do the unmounting beforehand, so all eject is doing is truly ejecting (opening/closing tray, etc.). I think part of the reason for calling the eject binary is that it's implemented on FreeBSD (Solaris too?), so there's some platform independence they don't have to recreate.

> Dan, care to write a paragraph that describes the relationship between
> HAL - gnome-mount - gnome-volume-manager

I thought I understood it really well, but a little experiment I did a month or so ago meant something wasn't working quite as I expected. But, for the most part, I think I know how they work.

HAL knows how to mount and umount filesystems. The idea is to have HAL perform these actions since more policy and dynamic things can be handled there than just calling mount(8). At the next level, in order to make the call to HAL to do the mounting, there's another level of abstraction with gnome-mount. This does stuff like handling per-user mount flags and mount points. It also handles the actual dbus call to HAL.

The idea here is that if something in the desktop session wants a filesystem mounted, it should execute gnome-mount to handle all the details. This happens in gnome-vfs and gnome-volume-manager. An important point here is that it may still get denied when it calls the Mount method (or Eject method in this case) on the HAL bus. There can be system-wide HAL policy that says "this volume is not mountable" or "only these people can mount stuff".

In the desktop session, we have gnome-volume-manager, which calls out to gnome-mount when signals show up on the HAL bus that a new volume has been attached. You may notice in GNOME that if automounting fails (for whatever reason), you can browse the Computer node and force a mount. This is actually gnome-vfs calling out to gnome-mount. gnome-volume-manager is sort of a glorified preference manager where you can say what you want to happen in GNOME when certain hardware is added. David Zeuthen (the HAL dev) has said that it should be called gnome-hardware-manager since it does much more than just manage your volume prefs.

So, in the typical case where you've just inserted a USB stick, the process goes something like this:

- HAL receives the hardware event, constructs some data on it, and sends it out on the org.freedesktop.Hal bus
- gnome-volume-manager (g-v-m) sees this event on the bus. Since this is a removable volume, it checks to see that the user wants to mount them. The user does.
- g-v-m executes gnome-mount. gnome-mount checks the user's per-volume prefs in GConf and builds up a method call for HAL. It calls the Mount method on the org.freedesktop.Hal.Volume bus and checks for errors returned from HAL.
- gnome-mount returns status to g-v-m. g-v-m carries out any other actions it needs given success or failure of the mount.

A similar chain of events happens when you remove the USB stick. There, gnome-mount calling the Eject method causes /usr/bin/eject to be called by HAL.

I thought gnome-mount also handled the case of reading fstab (there's code for it). I got a permission denied error, which I know comes from HAL, which assumes that if a volume is listed in fstab, mounting shouldn't be handled by HAL. But I can see the code in gnome-mount where it calls mount(8) instead of HAL in the event that it sees the device or mount point in fstab. It didn't work for me, though.

Right now the text on the gnome-mount page is wrong and we need to fix it. I'm not sure what the best thing to say about its purpose is. I suppose its main two virtues are that it hides mounting details from the desktop (e.g. gnome-vfs) and that it integrates with the system-wide policy defined in HAL.

That was probably way longer than it needed to be. I think I'm one of those people that learns by saying (or typing) things, and that was something of a refresher for me.

--
Dan
Re: x86_64 build method
On Wednesday, 25 July 2007 19:10, Jeremy Huntwork wrote:

> Manuel, I'm slowly beginning to understand how the HLFS render 'magic'
> works. One question: would the 'condition'

For LFS we should use the arch= attribute. It's more semantically correct.

> parameter be usable in an ENTITY declaration? If it is, the differences
> between the books could be even more minimal as we can set an entity for
> the target triplet and dynamic linker based on the arch we are building.

I'm not sure what you mean, but entities are resolved while loading the XML files into memory, before they are processed with XSL, so I don't see how we could tell xmllint/xsltproc to use one set of entities or the other at source-load time.

--
Manuel Canales Esparcia
Usuario de LFS nº2886: http://www.linuxfromscratch.org
LFS en castellano: http://www.escomposlinux.org/lfs-es http://www.lfs-es.info
TLDP-ES: http://es.tldp.org
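[Editorial aside: Manuel's point can be demonstrated with any XML parser — general entities declared in the internal subset are expanded while the document is loaded, before a stylesheet could ever consult a profiling attribute such as arch=. A sketch, where the entity name and file are fabricated and python3's minidom stands in for xmllint/xsltproc.]

```shell
cat > book-frag.xml <<'EOF'
<!DOCTYPE sect1 [
  <!ENTITY linker "ld-linux-x86-64.so.2">
]>
<sect1 arch="x86_64"><para>&linker;</para></sect1>
EOF

# The parser expands &linker; while loading the document, long before
# any XSL stylesheet could test the arch= attribute:
python3 - <<'EOF'
from xml.dom import minidom
doc = minidom.parse("book-frag.xml")
print(doc.getElementsByTagName("para")[0].firstChild.data)
EOF
# → ld-linux-x86-64.so.2
```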
Re: x86_64 build method
On Wed, Jul 25, 2007 at 08:07:24PM +0200, M.Canales.es wrote:

> I'm not sure what you mean, but entities are resolved while loading the
> XML files into memory, before they are processed with XSL, so I don't
> see how we could tell xmllint/xsltproc to use one set of entities or the
> other at source-load time.

Thanks, that answers my question. :)

--
JH
Re: x86_64 build method
On Wed, Jul 25, 2007 at 05:24:04PM +0000, Jeremy Huntwork wrote:

> can easily become:
>
>   gcc -dumpspecs | sed -e 's@/tools@@g' \
>
> I can't test this on x86 right atm... would anyone be able to verify
> that this command would also work for x86?

Never mind. I verified it. Will be adding this to the book shortly.

--
JH