On Mon, Jan 08, 2007 at 10:16:01AM -0800, Wayne Davison wrote:
> And one final thought that occurred to me: it would also be possible
> for the sender to segment a really large file into several chunks,
> handling each one without overlap, all without the generator or the
> receiver knowing that it [...]
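Purely as an illustration of that segmentation idea, here is a minimal C
sketch: the sender walks the file in fixed, non-overlapping windows and
runs its matching pass one window at a time, so no per-file state ever has
to cover more than one window. The window size, the process_window()
stand-in (it just checksums each window so the program runs end to end),
and all names here are assumptions for illustration, not rsync's actual
code.

#include <stdio.h>

#define WINDOW_SIZE (1LL << 28)  /* 256 MiB per window; assumed value */
#define BUF_SIZE    (1 << 16)

/* Stand-in for the per-window delta pass: reads at most `limit` bytes
 * and folds them into a toy checksum so the sketch runs end to end.
 * Returns the number of bytes actually consumed. */
static long long process_window(FILE *fp, long long limit,
                                unsigned long *sum_out)
{
    static unsigned char buf[BUF_SIZE];
    unsigned long sum = 0;
    long long done = 0;

    while (done < limit) {
        long long left = limit - done;
        size_t want = left < BUF_SIZE ? (size_t)left : BUF_SIZE;
        size_t got = fread(buf, 1, want, fp);
        if (got == 0)
            break;              /* EOF or read error ends the window */
        for (size_t i = 0; i < got; i++)
            sum = sum * 31 + buf[i];
        done += (long long)got;
    }
    *sum_out = sum;
    return done;
}

int main(int argc, char **argv)
{
    FILE *fp;
    long long n, window = 0;
    unsigned long sum;

    if (argc != 2) {
        fprintf(stderr, "usage: %s FILE\n", argv[0]);
        return 1;
    }
    if (!(fp = fopen(argv[1], "rb"))) {
        perror("fopen");
        return 1;
    }
    /* Each iteration handles one non-overlapping window; nothing about
     * one window's contents survives into the next. */
    while ((n = process_window(fp, WINDOW_SIZE, &sum)) > 0)
        printf("window %lld: %lld bytes, checksum %08lx\n",
               window++, n, sum);
    fclose(fp);
    return 0;
}

The point of the loop structure is that a match is never searched for
across a window boundary, which is what would let a per-window checksum
table stay small no matter how large the file grows.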
Evan Harris wrote:
> Would it make more sense just to make rsync pick a more sane blocksize
> for very large files? I say that without knowing how rsync selects
> the blocksize, but I'm assuming that if a 65k-entry hash table is
> getting overloaded, it must be using something way too small.

rsync [...]
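To make the overload concrete, here is a back-of-the-envelope
calculation. The sqrt-of-file-size block-size heuristic and the 700-byte
floor below are assumptions for illustration, not a statement of rsync's
exact tuning; only the fixed 65536-bucket table is taken from the thread.

#include <math.h>
#include <stdio.h>

#define TABLE_BUCKETS 65536.0   /* the fixed "65k entry hash table" */
#define MIN_BLOCK     700.0     /* assumed lower clamp on block size */

int main(void)
{
    /* file sizes in bytes: 1 GB, 10 GB, 100 GB, 200 GB */
    const double sizes[] = { 1e9, 10e9, 100e9, 200e9 };
    size_t i;

    for (i = 0; i < sizeof sizes / sizeof sizes[0]; i++) {
        double len = sizes[i];
        double blen = sqrt(len);            /* assumed heuristic */
        if (blen < MIN_BLOCK)
            blen = MIN_BLOCK;
        double blocks = len / blen;         /* checksums the sender must track */
        printf("%7.0f MB file: %7.0f-byte blocks, %8.0f blocks, "
               "avg chain length %.1f\n",
               len / 1e6, blen, blocks, blocks / TABLE_BUCKETS);
    }
    return 0;
}

Even under a sqrt heuristic the average chain length grows as
sqrt(len)/65536 (about 4.8 for a 100 GB file), and if the block size is
additionally capped at some maximum, growth past that cap becomes linear
in the file size, which would match the degradation described below.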
On Mon, Jan 08, 2007 at 01:37:45AM -0600, Evan Harris wrote:
> I've been playing with rsync and very large files approaching and
> surpassing 100GB, and have found that rsync has exceedingly poor
> performance on these very large files, and the performance appears to
> degrade the larger the file gets.
>
> The problem only appears to happen when the file is [...]