On Thu, 25 Apr 2002, James G Smith wrote:

> What's a `very large amount of data' ?  Our NIS maps are on the order
> of 3 GB per file (>64k users).  Over a gigabit ethernet link, this
> still takes half a minute or so to copy to a remote system, at least
> (for NIS master->slave copies) -- this is just an example of a very
> large amount of data being sync'd over a network.  I don't see how
> transferring at least 3 GB of data can be avoided (even with diffs,
> the bits being diff'd have to be present in the same CPU at the same

rsync solves this problem by sending only diffs between machines, using a
rolling checksum algorithm.  It runs over rsh or ssh transport, and
compresses the data in transit.
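
For something like an NIS master->slave push, the invocation would look
roughly like the following (host and paths are just placeholders for
illustration; -a preserves permissions and times, -z compresses on the
wire, -e selects the ssh transport):

    rsync -az -e ssh /var/yp/ slave.example.com:/var/yp/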

I'd be very interested to hear how well it works with a file of that size.

rsync has almost entirely replaced my use of scp.  It's even replaced a
fair portion of the cases where I would have used cp, because of its
ability to define exclusion lists when doing a recursive copy of a
directory.
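
For example (the directory names here are made up), a recursive copy
that skips object files and a scratch directory looks roughly like:

    rsync -av --exclude '*.o' --exclude 'tmp/' src/ /backup/src/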

Andrew McNaughton
