In article <[EMAIL PROTECTED]>,
Mitchell Gil Maltenfort <[EMAIL PROTECTED]> wrote:
>I want to get the kernel source code for 2.2.0 and pgcc (the 'gcc' optimized
>for the Pentium) and compile a kernel to my specs using the pgcc.  I'll
>compile the libraries as well. Then I'll get the source code for all the
>things I like (AfterStep, scilab, tetex,lyx,etc.) and etc.

A good way to save time on that would be to grab the Red Hat SRPMS, install
all of them (better have a big disk for this ;-), and then edit each .spec
file to use pentium-gcc.  Alternatively, you can replace 'cc' and 'gcc'
with 'pgcc' and make sure you don't recompile gcc or egcs from SRPMS.
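The spec edit itself is usually mechanical.  A minimal sketch of the substitution (the spec file and its contents below are made up for illustration; real SRPMS vary in how they invoke the compiler):

```shell
# Hypothetical spec fragment -- real SRPMS differ in how they call the compiler.
printf '%%build\nCC=gcc ./configure\nmake CC=gcc\n' > hello.spec

# Swap gcc for pgcc throughout, writing the edited copy alongside the original.
sed 's/gcc/pgcc/g' hello.spec > hello.spec.pgcc

grep 'CC=' hello.spec.pgcc
```

After editing each spec, the rebuild on RPM versions of that era was `rpm --rebuild foo.src.rpm`; as noted above, skip the gcc and egcs SRPMS themselves.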

>Having done all of this, what I would expect (hope?  wish?) to have is a
>Linux system about 20% or more faster than what I've been using - which I
>want because I ultimately want to use this box as an engineering workstation.

Good luck.  Post benchmarks when you're done.  ;-)

>What I want to avoid having is a week or so of gnashing hairs and pulling
>teeth - strike that, reverse it? - and ultimately doing a reinstall of the
>old system.  

Be ready to do a reinstall of the old system anyway.  It's best if you have
a completely redundant machine for this, i.e. one that you can wipe out and
reinstall from scratch if you need to.  Reinstalling Red Hat from CD takes
20 minutes; fixing a truly hosed system will take longer.  Do the math.

You ultimately want to generate a set of RPMS that you can use to build a 
new installable Red Hat CD or FTP/NFS image; that way you can reinstall if
something nasty happens.

>I've got "Running Linux" so I plan to follow its instructions step-by-step
>in making backups of libraries and updating symlinks.  (Is there a way to
>make a list of all symlinks?)  

The simple solution is:

        find / -type l -print

If you want an archive of those symlinks:

        find / -type l -print0 | cpio -ov0Hcrc > links.cpio

To restore it:

        cpio -ivmd < links.cpio

Omit the 'v' for less output from cpio.

Don't do any of the following:  they're traps for those with a little
knowledge.  Each looks plausible but has fatal flaws (for example, filenames
containing whitespace, or links whose parent directories don't exist at
restore time):

        find / -type l -printf 'ln -s %l %p\n'

        find / -type l -printf '%l\0%p\0\0' > listfile
        perl -e '$/="\0\0"; while (<STDIN>) {
                ($target,$linkname) = split(/\0/,$_);
                symlink($target,$linkname)
                    or warn "link $linkname -> $target: $!";
            }' < listfile

>I assume going step-by-step is my greatest
>protection against mishaps (that, and saving all existing config files on
>disk).  

Step-by-step doesn't work all that well when there are mutual dependencies.
Having the luxury of being able to reinstall quickly is IMHO more useful.
This is another reason for generating RPMS instead of building the entire
system by hand.

>Are there any other caveats I should be aware of? Is it advisable to begin
>as soon as I can get a download from ftp.kernel.org (sometime next month at
>this rate) or should I wait until a certain level of stability had been reached?

By the time you have finished rebuilding all the libraries, development
tools, and utilities, there will be several new versions of the kernel
released anyway (heck, there'll probably be a new release from either
Alan or Linus between when I send this to you and when you read it).

I would leave this for last because of the potential damage that a buggy
kernel can do.  The other alternative is to grab a particular kernel
version, test it thoroughly, and if it works, keep it while you do the
rest of the process.

-- 
Zygo Blaxell, Linux Engineer, Corel Corporation, [EMAIL PROTECTED] (work),
[EMAIL PROTECTED] (play).  It's my opinion, I tell you! Mine! All MINE!
Size of 'diff -Nurw -x CVS -x win_include -x configure winehq corel' as of
Tue Feb 2 02:14:38 EST 1999: 6285 line(s)
