Ernst Herzberg wrote:

>On Tuesday 24 January 2006 21:40, Jeff wrote:
>
>>DUH ME! Open mouth, insert face...
>>
>>Ok, what I *meant* to say from post #1 is: the filesystem I'm
>>tarballing is quite large - 25GB. The tar command should be able to
>>digest this, yes? Should I be worried?
>
>Last week I backed up a machine with four 80GB disks in RAID5 using the
>method mentioned before. The tar file on the destination machine was about
>120GB compressed (yes, one file :). Both filesystems are reiserfs.
>
>Restored again with the same method, only the other way around, to four
>250GB disks. No problems.
>
>Tip: check your destination tar file with tar -tzf ... or tar -tjf ... before
>you delete the source. Compression is a good check that no data was
>corrupted during the transfer.
>
><earny/>
>
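For what it's worth, earny's verification tip is easy to script; a minimal
sketch (the archive name here is only an example) might be:

<code>
# list the archive contents; a non-zero exit status means the file is damaged
tar -tjf /var/backups/home-20060124.tbz2 > /dev/null && echo OK || echo CORRUPT
</code>
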
Hey, I think I might have something useful to add here... :-D

To keep my tar file sizes more manageable, any directory containing
large subdirectories gets a block in my backup script that creates a
separate tar file for each of those subdirectories. (Whoa! Did I
just say that!?) Here's the block that, for example, handles my
/home directories:

<code>
# %Y%m%d-%H%M%S gives a sortable timestamp, e.g. 20060124-214000
dt=`date +%Y%m%d-%H%M%S`
# list sockets up front so tar can exclude them instead of warning on each
find /home/ -type s > /tmp/home-sockets.tmp
# plain `ls -A` avoids the "total" line that `ls -lA | awk '{print $9}'`
# would feed into the loop as an empty name
for x in `ls -A /home/`
do
        tar cpPj -X /tmp/home-sockets.tmp \
                -f /var/backups/home-$x-$dt.tbz2 /home/$x
done
rm -f /tmp/home-sockets.tmp
</code>

This creates a separate tar file for each directory in /home. The $dt
var isn't required, of course... I just use it to timestamp all of my
backup files, which makes them easier to track.
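
Restoring one of these per-directory archives is just the reverse; a
minimal sketch (the username and timestamp below are only examples) might be:

<code>
# -P honors the absolute /home/... paths stored by the backup loop above
tar xpPjf /var/backups/home-jeff-20060124-214000.tbz2
</code>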

Regardless of whether the kernel or filesystem can handle the huge tar
files others have referred to, I prefer to keep things as manageable and
"modular" as possible. The smaller the files, the easier (quicker,
really) they are to work with.

Just food for thought...
-- 
gentoo-user@gentoo.org mailing list
