In a message dated: Wed, 11 Jul 2001 13:40:13 +0200
Vicente Vives said:
>My problem is very 'simple'...
>I have to dump 13 GB using 2 GB tapes.
>
>Can i do this?
>How ?
You can; you just need to be creative about how you do it. There are
several approaches you could take:
- split the 13GB into smaller (<2GB) chunks and use a
  different config for each one. With a tape stacker/library
  you can easily write a shell script, run out of cron, that
  manipulates the tapes for you, loading the next tape after
  the previous run has finished (a sketch follows).
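  Assuming you're running Amanda (the config/bumpsize terminology
  sounds like it), a minimal sketch of such a script might look
  like this. The config names and tape device are made up, and it
  assumes a stacker in sequential mode that loads the next tape
  when the current one is ejected:

      #!/bin/sh
      # Run one Amanda config per <2GB chunk, ejecting the
      # tape between runs so the stacker feeds the next one.
      TAPE=/dev/nst0
      for conf in chunk1 chunk2 chunk3 chunk4 chunk5 chunk6 chunk7
      do
          amdump $conf || echo "amdump $conf failed" | mail -s "amdump" root
          mt -f $TAPE offline    # eject; stacker loads the next tape
      done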
- Manually run the level 0 dump once (per week/month/year,
  whatever) and run only incrementals on a daily (or otherwise
  frequent) basis. Set your bumpsize high enough that Amanda
  only bumps to the next dump level when doing so buys you a
  huge savings in space (see the excerpt below).
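  In amanda.conf that's roughly the following (the numbers are
  purely illustrative, not recommendations):

      # full (level 0) dumps only once every 4 weeks,
      # with incrementals on the runs in between
      dumpcycle 4 weeks
      runspercycle 20
      # bump to the next incremental level once it would save
      # at least 20 MB of tape, after at least 1 day at the
      # current level; each further bump must save 4x as much
      bumpsize 20 Mb
      bumpdays 1
      bumpmult 4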
- stagger the introduction of the data to be backed up
  across several days, so that the level 0 dumps for the
  data sets don't all land on the same day. For example,
  given the following file systems to back up:
      Filesystem   1k-blocks      Used  Available Use% Mounted on
      /dev/hda1       202220     69687     122093  36% /
      /dev/hda7       101089     12285      83585  13% /home
      /dev/hda5       497829      2519     469608   1% /tmp
      /dev/hda2      1517952    915420     525420  64% /usr
      /dev/hda3       608756     78756     499076  14% /var
      /dev/sda1     26209780  25024392     919112  96% /home1
      /dev/sda2     26209812  25433196     510340  98% /home2
      /dev/sda5     21172948   3390312   17567528  16% /nfs
      /dev/hda8      6783660   1775812    4663256  28% /usr/local
  you could stagger the introduction of each into the backup
  cycle like this:
      Mon  /, /home, /usr
      Tue  /var, /usr/local
      Wed  /home1
      Thu  /home2
      Fri  /nfs
  (granted, my file systems are big enough that even doing
  this wouldn't help if I were limited to 2GB tapes, but it's
  only an example; you get the point :)
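  In Amanda terms, staggering like that just means adding the
  entries to your disklist a few per day; each file system gets
  its level 0 the first day it appears and then settles into the
  normal dump cycle. Something like this (hostname and dumptype
  are made up, use your own):

      # disklist as of Monday:
      myhost  /           comp-user
      myhost  /home       comp-user
      myhost  /usr        comp-user
      # added Tuesday:
      myhost  /var        comp-user
      myhost  /usr/local  comp-user
      # added Wednesday:
      myhost  /home1      comp-user
      # ...and so on through Friday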
Hope that helps some.
--
Seeya,
Paul
----
...we don't need to be perfect to be the best around,
and we never stop trying to be better.
Tom Clancy, The Bear and The Dragon
If you're not having fun, you're not doing it right!