Andrew McNaughton <[EMAIL PROTECTED]> wrote:
>
>
>On Thu, 25 Apr 2002, James G Smith wrote:
>
>> What's a `very large amount of data' ?  Our NIS maps are on the order
>> of 3 GB per file (>64k users).  Over a gigabit ethernet link, this
>> still takes half a minute or so to copy to a remote system, at least
>> (for NIS master->slave copies) -- this is just an example of a very
>> large amount of data being sync'd over a network.  I don't see how
>> transferring at least 3 GB of data can be avoided (even with diffs,
>> the bits being diff'd have to be present in the same CPU at the same
>> time).
>
>rsync solves this problem by sending diffs between machines, using a
>rolling checksum algorithm.  It runs over rsh or ssh transport, and
>compresses the data in transfer.

Yes - I forgot about that - it's been a year or so since I read the
rsync docs :/  but I do remember it mentioning that now.
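For context, the key trick is that rsync's weak checksum can be "rolled"
forward one byte at a time in O(1), so the receiver can cheaply scan every
byte offset for blocks it already has. Below is a minimal Python sketch of
that idea (an Adler-32-style sum, not rsync's exact algorithm; the names
and the 16-bit modulus are illustrative assumptions):

```python
# Sketch of a rolling weak checksum in the spirit of rsync's
# (similar to Adler-32). For illustration only, not rsync's exact code.

MOD = 1 << 16  # keep both sums in 16 bits

def checksum(block: bytes):
    """Compute the (a, b) pair for a block from scratch."""
    n = len(block)
    a = sum(block) % MOD
    # b weights earlier bytes more heavily: n, n-1, ..., 1
    b = sum((n - i) * byte for i, byte in enumerate(block)) % MOD
    return a, b

def roll(a, b, out_byte, in_byte, n):
    """Slide the window one byte right in O(1):
    drop out_byte from the front, append in_byte at the back."""
    a = (a - out_byte + in_byte) % MOD
    b = (b - n * out_byte + a) % MOD  # uses the already-updated a
    return a, b

data = b"the quick brown fox jumps over the lazy dog"
n = 16
a, b = checksum(data[:n])
for i in range(1, len(data) - n + 1):
    a, b = roll(a, b, data[i - 1], data[i + n - 1], n)
    # rolled value matches a from-scratch recomputation at every offset
    assert (a, b) == checksum(data[i:i + n])
```

In real rsync, a match on this cheap weak checksum is then confirmed with a
strong hash before the sender decides a block can be skipped.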
-- 
James Smith <[EMAIL PROTECTED]>, 979-862-3725
Texas A&M CIS Operating Systems Group, Unix
