Re: [SLUG] Backup theory

2009-05-17 Thread Daniel Pittman
david  writes:
> Daniel Pittman wrote:
>> david  writes:
>>
>>> I've got the following:
>>>
>>> 2 x servers - single small hard drives in each
>>> 1 x desktop - four hard drives including one removable drive in a caddy
>>> intended solely for back up purposes.

[...]

>>> What's the current best practice for back up in this kind of
>>> situation?
>>
>> It varies.  Personally, I take advantage of the fact that a Linux system
>> has no magic "metadata", so a copy of all the files is enough to perform
>> a bare-metal restore.
>
> So this suggests to me that I could make a <# cp -a> of my root/boot
> drive onto an empty drive which I then remove and take off-site,
> rsync'ing it periodically?

Yes, that would be sufficient to provide a "bare metal" recovery copy of
your system.
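A minimal sketch of that scheme (device paths and mount points are
illustrative, not from the thread, and the backup drive is assumed to be
already partitioned, formatted, and mounted):

```shell
# One-off clone of the running system onto the backup drive.
# -a preserves ownership, permissions, symlinks and timestamps;
# -x stays on one filesystem (repeat per mounted filesystem if
# your system spans several).
cp -ax / /mnt/backup/

# Periodic refresh: copy only what changed.  --delete removes files
# from the copy that have since gone away on the source, -H preserves
# hard links.
rsync -aHx --delete / /mnt/backup/
```

Note that this captures files only; the partition table and boot sector
still have to be recreated at restore time.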

> Or is it necessary to use dd?

Absolutely not.

> Where does the MBR fit into this?

Ah.  Now, /that/ isn't part of the filesystem image, but is part of the
"partition, etc" part of the recovery process.

It is generally[1] sufficient to chroot into the restored system and
rerun the grub installer, or use the "fix the boot setup" option in your
rescue system.[2]
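A sketch of that chroot-and-reinstall step, assuming a Debian-style
system with grub and illustrative device names:

```shell
# From the rescue/live CD, after partitioning and restoring the files:
mount /dev/sda1 /mnt/restore           # the restored root filesystem
mount --bind /dev  /mnt/restore/dev
mount --bind /proc /mnt/restore/proc
mount --bind /sys  /mnt/restore/sys
chroot /mnt/restore /bin/bash

# Inside the chroot:
grub-install /dev/sda                  # rewrite the MBR boot code
update-grub                            # regenerate the menu (Debian/Ubuntu)
```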


When I said that Mondo wrapped up some of this nicely, I meant that it
captures the partition map, LVM and MD configuration, and handles
reinstalling the boot loader after recovery.

None of that is strictly /hard/, but it is a set of things to learn how
to do, and something most people don't get a lot of practice in.

> The problem with any back up system is that normally you only find out
> that it works for sure when you really *need* it to work.

*nod*  You know the *really* good thing about the widespread
availability of virtual machine software?  "Free" bare metal to test
recovering systems to. :)

If you have a full copy of all your data, though, your worst-case path
is "install a new system, copy the data back", so you can't lose /too/
badly.

Regards,
Daniel

Footnotes: 
[1]  As in, on any sane, modern platform, which definitely includes
 current Debian and Ubuntu, but doesn't include RHEL 4 and, IIRC, 5
 series systems.  Unless your hardware is absolutely identical.

[2]  I keep a grub boot CD in my "rescue" kit, since it can also be used
 to reinstall the system, and it can read the existing menu.lst file.

-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Backup theory

2009-05-17 Thread Dean Hamstead


> PS: On a Mac, you can usually take a hard drive out of one machine and
> put it in another and it will "just work". How much tweaking to get the
> same result on linux/ubuntu?


Network cards and significantly different disk devices (i.e. PATA, SATA,
some strange RAID) are usually the only hurdles, but also the gfx card
if you are using X.

Network cards are usually just a matter of changing the MAC address, or
some other minor changes.

gfx is usually just a matter of reconfiguring X; if you stay within the
nvidia range, you can change cards without much fuss.

Hard disk games with /dev/hda /dev/sda /dev/cciss
/dev/someotherraidthing are usually just a matter of editing the fstab
and rebooting. In this instance, setting init=/bin/bash in grub/lilo is
your friend.
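That trick looks roughly like this (device names illustrative):

```shell
# At the boot loader, append init=/bin/bash to the kernel line, e.g.:
#   kernel /vmlinuz-2.6.x root=/dev/sda1 ro init=/bin/bash
# You land in a bare root shell with no init running.

mount -o remount,rw /     # root usually comes up read-only
vi /etc/fstab             # fix the /dev/hda -> /dev/sda (etc.) entries
sync
mount -o remount,ro /
# then power-cycle, or hand over to init:
exec /sbin/init
```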




Dean
--
http://fragfest.com.au


Re: [SLUG] Backup theory

2009-05-17 Thread Daniel Pittman
Dean Hamstead  writes:

>> PS: On a Mac, you can usually take a hard drive out of one machine
>> and put it in another and it will "just work". How much tweaking to
>> get the same result on linux/ubuntu?
>
> network cards and significantly different disk devices (ie pata, sata,
> some strange raid) are usually the only hurdle

Not so much, these days, if you are sensible.

[...]

> network cards are usually just a matter of changing the mac address,
> or some other minor changes

This is fair.  As a side note, under /etc/udev you will find the
configuration file that binds the persistent names (eth0, etc) to your
hardware, which you may need to alter if you want to change those
persistent names.
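On Debian and Ubuntu of that era the file in question is
/etc/udev/rules.d/70-persistent-net.rules; a typical entry looks
something like this (MAC address invented):

```
# /etc/udev/rules.d/70-persistent-net.rules
# After a hardware swap, edit the ATTR{address} value to match the new
# card (or delete the stale line and reboot) so it gets the eth0 name.
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", \
  ATTR{address}=="00:1a:2b:3c:4d:5e", NAME="eth0"
```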

[...]

> hard disk games with /dev/hda /dev/sda /dev/cciss
> /dev/someotherraidthing are usually just a matter of editing the fstab
> and rebooting. in this instance setting init=/bin/bash in grub/lilo is
> your friend.

Actually, these days you would have to be kind of silly to use something
other than mount-by-LABEL or mount-by-UUID[1], given the fairly dynamic
nature of device discovery.

In that case your system will just work(tm) on the new hardware, because
it identifies what to mount based on the filesystem, not the hardware it
happens to be sitting on top of.
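For instance (UUIDs invented; blkid reports the real ones):

```
# Discover the filesystem UUIDs:
#   $ blkid
#   /dev/sda1: UUID="3f9c1a2b-...." TYPE="ext3"

# Then /etc/fstab names the filesystem, not the device node:
UUID=3f9c1a2b-....  /      ext3  defaults,errors=remount-ro  0  1
LABEL=home          /home  ext3  defaults                    0  2
```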

Regards,
Daniel

Footnotes: 
[1]  I prefer the latter, because the chance of a conflict is zero, while
 with the former it is pretty high — especially with some distributions
 naming their root partition '/' uniformly.



Re: [SLUG] Backup theory

2009-05-17 Thread Daniel Pittman
david  writes:

> I've got the following:
>
> 2 x servers - single small hard drives in each
> 1 x desktop - four hard drives including one removable drive in a caddy
> intended solely for back up purposes.
>
> I run Mondo on the two servers periodically with the intention of
> being able to do a disaster [1] recovery quickly. Mondo produces 2 DVD
> images for each server. I run rsync nightly (good enough for my
> purposes) for more volatile data such as email, databases
> etc. Everything is very tidy.
>
> The desktop has about 350G of data and software. The software is
> unbelievably complicated because I use it to test server set-ups and
> odd bits of software etc. In other words, it's a dog's breakfast.
>
> I would like to run Mondo or something similar on this machine too,
> but I fear it would not be practical. At the moment I run rsync for
> the most obvious data, but that doesn't help with all the complicated
> software, and I would like to be able to recover that too in the event
> of disaster [1].
>
> What's the current best practice for back up in this kind of
> situation?

It varies.  Personally, I take advantage of the fact that a Linux system
has no magic "metadata", so a copy of all the files is enough to perform
a bare-metal restore.

I use BackupPC[1] to provide me with a space-efficient copy of all the
files.  In normal use the web interface is sufficient to recover from
most problems.

In a disaster I boot from a LiveCD, partition, format, etc, the disks,
and then use a combination of the command-line tar creation code in
BackupPC, netcat, and tar in the LiveCD to stream the data back over the
network.
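Concretely, that looks something like the following (host names, the
port, and the BackupPC_tarCreate path are illustrative; -n -1 asks for
the most recent backup):

```shell
# On the machine being rebuilt (booted from the LiveCD, new filesystems
# mounted under /mnt/target), wait for the stream and unpack it:
nc -l -p 9000 | tar -xpf - -C /mnt/target

# On the BackupPC server, emit a tar of the latest backup of "myhost"
# and push it across the network:
sudo -u backuppc /usr/share/backuppc/bin/BackupPC_tarCreate \
    -h myhost -n -1 -s / . | nc restore-target 9000
```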

This is reasonably easy to achieve, but requires a little low-level
knowledge of how partitioning, etc, works under Linux.  Mondo does
capture that information much more nicely, I confess.

> PS: On a Mac, you can usually take a hard drive out of one machine and
> put it in another and it will "just work". How much tweaking to get
> the same result on linux/ubuntu?

With a recent Debian or Ubuntu, zero.[2]  Getting X running again after
doing that /might/ take a bit of work, but not much, and the basic
system itself should be good.

If you use something less capable, like older RHEL systems, a fair bit
of work is required to get it booting, but the basic process is more or
less the same.

I don't know where Fedora sits, but I presume they have also moved more
to the Debian new-style "ship all the drivers in initramfs, detect the
hardware" strategy than the older RHEL "ship exactly what is required
for the current machine, hard code everything" model.

Regards,
Daniel

Footnotes: 
[1]  http://backuppc.sf.net/

[2]  Technically, you need to ensure the CPU architecture is compatible,
 so an x86_64 deployment will not run on an i386-only host, but
 otherwise you are good to go.



[SLUG] Backup theory

2009-05-17 Thread David Andresen
David

I run a backup server using systemimager (see
http://wiki.systemimager.org/index.php/Main_Page )

It is a 'killer application' as far as I am concerned.

I sleep well every night knowing I have my servers' and my desktop's
hard disks imaged.

I am planning to test a couple of 1TB USB external hard disks running
systemimager: boot from USB, and swap to the second drive so that one is
always off site.

Two backups may be better for stuff you do not want lost.

Cheers
David Andresen



"The only means of strengthening one's intellect is to make up one's
mind about nothing, to let the mind be a thoroughfare for all thoughts."
- John Keats



Re: [SLUG] Backup theory

2009-05-17 Thread david

Daniel Pittman wrote:
> david  writes:
>
>> I've got the following:
>>
>> 2 x servers - single small hard drives in each
>> 1 x desktop - four hard drives including one removable drive in a caddy
>> intended solely for back up purposes.

[...]

>> What's the current best practice for back up in this kind of
>> situation?
>
> It varies.  Personally, I take advantage of the fact that a Linux system
> has no magic "metadata", so a copy of all the files is enough to perform
> a bare-metal restore.




So this suggests to me that I could make a <# cp -a> of my root/boot drive onto 
an empty drive which I then remove and take off-site, rsync'ing it periodically? 
Or is it necessary to use dd? Where does the MBR fit into this?


The problem with any back up system is that normally you only find out that it 
works for sure when you really *need* it to work.





[SLUG] Backup theory

2009-05-17 Thread david

I've got the following:

2 x servers - single small hard drives in each
1 x desktop - four hard drives including one removable drive in a caddy
intended solely for back up purposes.


I run Mondo on the two servers periodically with the intention of being able to 
do a disaster [1] recovery quickly. Mondo produces 2 DVD images for each server. 
I run rsync nightly (good enough for my purposes) for more volatile data such as 
email, databases etc. Everything is very tidy.


The desktop has about 350G of data and software. The software is unbelievably 
complicated because I use it to test server set-ups and odd bits of software 
etc. In other words, it's a dog's breakfast.


I would like to run Mondo or something similar on this machine too, but I fear 
it would not be practical. At the moment I run rsync for the most obvious data, 
but that doesn't help with all the complicated software, and I would like to be 
able to recover that too in the event of disaster [1].


What's the current best practice for back up in this kind of situation?

thanks,

David

[1] Disaster such as: earthquake, fire, pestilence, inappropriate rm.

PS: On a Mac, you can usually take a hard drive out of one machine and put it
in another and it will "just work". How much tweaking to get the same result on
linux/ubuntu?
