Thank you for such a detailed explanation.
Quite a few new concepts for me; it was not boring, rather a little
overwhelming, but interesting stuff.
I'm definitely going to look into this a little further.
Probably starting with the first method, and maybe scripting an install
via Kickstart, if RH 6.1 supported it back then. Still need to digest ....

I'm concerned about downtime, since I don't really know what is on the
machine.
Besides, the machine is at a colocation facility two hours away from the
office :-(
Seems like unmounting the file systems and doing dd would take too much
time, judging by your comment about having time to write a novel :-) .


warm regards

                Richard


> -----Original Message-----
> From: Chris Watt [mailto:[EMAIL PROTECTED]]
> Sent: Monday, June 17, 2002 3:56 PM
> To: [EMAIL PROTECTED]
> Subject: Re: Total Backup of a system (RH6.1)
> 
> 
> At 09:29 2002/06/17 -0700, Richard Wilson wrote:
> >I'm new to Linux, and have inherited responsibility for a Linux
> >system. This system is important; however, the guy that set up this
> >system left the company.
> >Eventually I will reverse engineer it and document exactly what is on
> >it. Since we don't know exactly what is on the machine, we need to be
> >able to clone this system and bring it up to its current state.
> >
> >What is the best method for cloning a RH 6.1 machine and bringing it
> >back to life in the quickest manner?
> 
> The most efficient way would be to tar, with bzip2 compression, the
> contents of all your filesystems (except /proc) into a single file,
> then write the file to a CD-R using tomsrtbt (see www.toms.net) as a
> boot image for the disc (that way you could boot from the CD and do a
> restore even if you had totally destroyed the original hard disk
> contents).
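> 
> In concrete terms, something along these lines should do it (assuming
> you have somewhere with enough space mounted at /mnt/remote, as in the
> NFS approach below; the pipe through bzip2 is used because older
> versions of tar lack a -j flag, and the mount point is excluded so the
> archive doesn't try to swallow its own output):
> 
> tar -cf - --exclude=/proc --exclude=/mnt/remote / \
>     | bzip2 -c > /mnt/remote/full_backup.tar.bz2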
> 
> That having been said, the easiest way is to get one of your other
> machines to export (as a writable NFS filesystem or a Windows share) a
> directory on a filesystem with a quantity of free space at least equal
> to the total size of the RH6.1 box's hard disks (not the size of the
> contents, the actual size of the disks), and then mount it on the RH6.1
> box and do a low-level backup of the disc partitions.
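> 
> Mounting such a share on the RH6.1 box might look like this (the
> server name and export path are made up for illustration):
> 
> mkdir -p /mnt/remote
> mount -t nfs backuphost:/export/backup /mnt/remote
> 
> Useful points are: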
> 
> 1. You can find out what partitions you are using and where they are
> mounted by running "mount" or reading the /etc/fstab file.
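> 
> For instance, on a stock RH6.1 box "mount" might print something like
> this (illustrative output only; your devices and mount points will
> differ):
> 
> /dev/hda1 on / type ext2 (rw)
> /dev/hda5 on /usr type ext2 (rw)
> /dev/hda6 on /var type ext2 (rw)
> none on /proc type proc (rw)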
> 
> 2. You can't safely make an image file of a partition while you have
> it mounted read/write, so first re-mount all your local filesystems as
> read-only (the command "mount -o ro,remount mountpoint" will make a
> filesystem read-only even if it's in use; "mountpoint" will typically
> be replaced with things like "/" and "/var" and "/usr": this is the
> info you got from /etc/fstab or "mount").
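> 
> For example, to flip /usr to read-only in place, and then back to
> read/write once the backup is done:
> 
> mount -o ro,remount /usr
> mount -o rw,remount /usr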
> 
> 3. The tool for making a low-level copy of a partition is "dd". A
> typical usage might be "dd if=/dev/hda2 of=/mnt/remote/hda2_raw.img",
> which would copy the second partition of your primary master IDE drive
> to the file "hda2_raw.img" in the "/mnt/remote/" directory. You would
> simply do the same thing with the arguments the other way around to do
> a restore. See "man dd" for details.
> 
> 4. Many file servers may have trouble dealing with files over 2 GB in
> size (e.g. older NFS servers on 32-bit machines or "Shared" directories
> from Windows systems). If your RH6.1 box has partitions larger than
> 2 GB then you may need to use something like:
> 
> dd if=/dev/sda1 | split -b 650m - sda1_raw.img.
> 
> which would create a series of safe 650 MB files (which you could then
> burn to CDs if you felt like it). To restore from a backup like this
> you would do:
> 
> cat sda1_raw.img.* | dd of=/dev/sda1
> 
> 5. For this to work in practice (i.e. so that you can restore stuff)
> you need to know which partitions to put stuff on, and you need to have
> the right-sized partitions. Storing a text file containing the output
> of the command "fdisk -l /dev/sd? /dev/hd? /dev/md?" may be a good
> idea.
> 
> 6. If you've totally munged the previous filesystems and partitions,
> you may need something to boot with that will actually let you do a
> restore over a network. Tomsrtbt (see www.toms.net) works well for this
> on most systems.
> 
> 7. If you're concerned about the amount of space being taken up by
> those filesystem images, you can make them much smaller. A good trick
> is to mount all the filesystems and fill the free space on each one
> with simple repeating data (a good way to do this is "dd if=/dev/zero
> of=bigzero.file" followed by "rm bigzero.file" as soon as dd exits
> because it ran out of free space on the device), then back the
> filesystem up, piping dd's output to bzip2 along the lines of:
> 
> dd if=/dev/md0 | bzip2 -c | split -b 650m - /mnt/networkfs/md0_raw.img.
> 
> You will probably have time to go and get a cup of coffee or write a
> novel while this is going on, but you should end up with a compressed
> image which you can still restore from, and which should be smaller
> (maybe by 50%, depending on the content) than the amount of _used_
> space on the filesystem. You would restore such a backup with something
> along the lines of:
> 
> cat /mnt/networkfs/md0_raw.img.* | bzcat | dd of=/dev/md0
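> 
> The zero-fill step would be done on each filesystem while it is still
> mounted read/write, before the read-only remount from point 2. A sketch
> using the earlier example mount points (dd's "No space left on device"
> error here is expected and harmless):
> 
> for fs in / /usr /var; do
>     dd if=/dev/zero of=$fs/bigzero.file
>     rm -f $fs/bigzero.file
> done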
> 
> If you're still reading, I hope I haven't bored you to death. Next
> class we'll go into actual automated backup scripts. . . ;)
> --
> Best Viewed with Practically Anything:
> This document is formatted in 7-bit ASCII text. Anyone who has trouble
> reading the document is encouraged to view it through any rendering
> system which permits distinction of 8-bit bytes (e.g. hexadecimal
> notation) and apply the conversion standard defined in
> http://www.ietf.org/rfc/rfc20.txt



_______________________________________________
Redhat-list mailing list
[EMAIL PROTECTED]
https://listman.redhat.com/mailman/listinfo/redhat-list
