> Suppose you have a 1TB hard disk. How on earth do you back that up? If you
> use a 1TB USB hard disk (USB 2.0), at best you'll get about 30MB/s. (You
> won't actually get that fast, but it's a good round number for discussion.)

Yes, what you need is something that does incremental snapshots.  Presently
ZFS, soon btrfs, and I heard you say it can be done with LVM, although I've
never seen that done.  Also, I think there's probably a solution based on
VSS.  And of course WAFL and other enterprise products.
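To make that concrete, here's roughly what incremental snapshot replication
looks like with ZFS (the dataset name tank/home and the host "backup" are
made up for illustration):

    # take a snapshot now, and another one later
    zfs snapshot tank/home@monday
    zfs snapshot tank/home@tuesday

    # send only the blocks that changed between the two snapshots
    zfs send -i tank/home@monday tank/home@tuesday | \
        ssh backup zfs receive tank/backup/home

The first full send is still a full copy; after that, each incremental is
only as big as the changes since the previous snapshot.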

Additionally, there are lots of "potential" solutions.  For example, mount a
filesystem through FUSE, and because every write passes through the FUSE
layer, you can keep track of which blocks change and then perform optimal
incrementals, etc.  But I'm not aware of anything that is actually
implemented this way.


> 1TB is 1,099,511,627,776 bytes. At 30MB/s that's a little over 9 1/2 hours
> to back up.
> 
> If you use a dedicated high speed SATA or SAS drive, you may get a
> whopping 160MB/s. That's 1 hour 49 minutes.

Take it for granted that you won't be stuck at USB2 speeds.  USB2 has been
universally (to the extent I care about) replaced by USB3 for the past 2-3
years.  The bottleneck is the speed of the external disk itself, which I
typically measure at about 1Gbit/sec per disk.  (You can sometimes
parallelize across multiple disks, depending on your circumstances.)  It's
random I/O that really hurts you.
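As a rough back-of-the-envelope check: 1Gbit/sec is about 125MB/s, so a full
sequential pass over 1,099,511,627,776 bytes takes roughly 8,800 seconds,
call it 2.5 hours, and that's the best case with no seeking at all.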


> With snapshots, you can back up a consistent state, as if your 2 hour
> backup happened instantaneously, but you are still writing a lot of data.

Right.  Presently at work we have a bunch of ZFS and NetApp systems that
replicate globally every hour, using various forms of WAN acceleration,
compression, and dedup.  Even though that traffic is about as small as it
can be made, it still puts a very significant, noticeable strain on the WAN.
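On the ZFS side, the hourly job is basically an incremental send piped over
a compressed channel, something like this rough sketch (dataset and host
names are invented; the WAN acceleration and dedup happen in other layers):

    # snapshot on the hour, then ship only the delta since the last hour
    zfs snapshot tank/data@1300
    zfs send -i tank/data@1200 tank/data@1300 | \
        ssh -C remote-site zfs receive tank/data-replica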
