I use rsync v3 on a pool of about 280GB holding a few million files. With
rsync v3, the writes start within a few seconds of starting the sync, and it
traverses the entire pool in 10-20 minutes. I only transfer about 3-4GB of
files each night, with rsync reducing that to about 1GB over a T1 at 196KB/s,
in about 1.5-2 hours. Considering the transfer itself would take 90 minutes
on bandwidth restrictions alone, this is not bad. I used to run this on
rsync 2.x, but it took at least 3 hours to complete: building the initial
file list took ages and ages, and the system began pushing memory off to
swap to make room for rsync, which ground performance to a halt.
I think that you can use rsync v3 (on both sides) to sync pools without
issue.
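For reference, my nightly sync boils down to something like the following
(the paths and host name are placeholders, not my actual setup):

```shell
#!/bin/sh
# Hypothetical nightly pool sync; source path and destination host are
# placeholders.
#   -a         archive mode (permissions, times, ownership)
#   -H         preserve hard links -- essential for a BackupPC pool
#   --delete   remove files that vanished from the source
#   --partial  keep partially transferred files so a restart resumes cheaply
rsync -aH --delete --partial /var/lib/backuppc/ backup@offsite:/var/lib/backuppc/
```

The -H flag is the important one for a BackupPC pool, since the pool's
deduplication is all hard links; without it the destination balloons.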
I'm assuming that you are using Linux here as well. With a Solaris variant
you have the ZFS option too.
I did some research a while back into using a cluster filesystem for the
storage pool, but every cluster filesystem I looked at has much lower I/O
performance than a local on-disk filesystem.
I also tried software RAID mirroring over iSCSI, but did not get much
further than basic testing. The problem here is that the RAID mirroring is
synchronous, so the slow iSCSI connection will affect backup performance
quite a bit. I couldn't find any info on making Linux software RAID work
in async mode with the local drive as the priority drive.
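(Since writing that I've seen that mdadm ships --write-mostly and
--write-behind options for RAID1 that come close to this. I haven't tested
them over iSCSI, and the device names below are placeholders, so treat this
as a sketch rather than a tested recipe:)

```shell
# Sketch only -- untested over iSCSI; /dev/sda1 (local disk) and /dev/sdb1
# (iSCSI-backed disk) are placeholder device names.
# --write-mostly marks the iSCSI leg so normal reads are served from the
# local disk; --write-behind=256 (which requires the write-intent bitmap)
# allows up to 256 outstanding writes to the write-mostly device to
# complete asynchronously instead of stalling the array.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --bitmap=internal --write-behind=256 \
      /dev/sda1 --write-mostly /dev/sdb1
```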
Unfortunately, ZFS's raidz tops out at two redundant devices (raidz2), and
you would want more redundancy to make this work. If you could do raidz*
with any number of redundant drives, you could also put local cache and log
drives in place and let ZFS handle the slow link over iSCSI.
local                  | remote
disk1 disk2 disk3      | disk4 disk5 disk6
disk7=log, disk8=cache |
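In zpool terms that layout would be created with something like the command
below (device names are placeholders; note that with today's raidz2 the
pool only survives two failed devices, so losing the whole iSCSI side would
fault it -- which is exactly the limitation described above):

```shell
# Hypothetical layout -- device names are placeholders.  A raidz2 vdev
# across the three local and three iSCSI-backed disks, with a local log
# (ZIL) device and a local cache (L2ARC) device to absorb some of the
# latency of the slow iSCSI link.
zpool create backup raidz2 disk1 disk2 disk3 disk4 disk5 disk6 \
      log disk7 cache disk8
```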
On Mon, Nov 17, 2008 at 4:26 PM, Ermanno Novali <[EMAIL PROTECTED]> wrote:
> > Yes use dd (or even better dd-rescue that is restartable and gives
> progress
> > indication) for big pools. For smaller pools you might use "cp -a" or
> "rsync
> > -aH" (restartable). You have to find out the practical upper limit for
> the
> > latter methods depending on your requirements.
>
> Thanks for the dd-rescue suggestion, I'll take a look at it
>
>
> > Another alternative is to use at least three disks in a rotating scheme
> and
> > RAID1. (Those of you who have been reading the list for more than a few
> days are
> > getting tired of hearing this by now, I imagine...!) Say you have three
> disks
> > labeled 1, 2 and 3. Then you would rotate them according to the schedule
> below,
> > which guarantees that:
> > - there is always at least one disk in the BackupPC server.
> > - there is always at least one disk in the off-site storage.
> > - all disks are never at the same location.
> >
> > 1 2 3 (a = attached, o = off-site)
> > a o o
> > a a o -> RAID sync
> > o a o
> > o a a -> RAID sync
> > o o a
> > a o a -> RAID sync
> > . . .
> >
>
> I'll try this too, where I have a RAID or need one.
> Thanks!
>
> Ermanno
>
> -------------------------------------------------------------------------
> This SF.Net email is sponsored by the Moblin Your Move Developer's
> challenge
> Build the coolest Linux based applications with Moblin SDK & win great
> prizes
> Grand prize is a trip for two to an Open Source event anywhere in the world
> http://moblin-contest.org/redirect.php?banner_id=100&url=/
> _______________________________________________
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki: http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>