rsync as a deliberately slow copy?

2010-09-26 Thread Albert Lunde
I'm looking for a way to deliberately copy a large directory tree of files somewhat slowly, rather than as fast as the hardware will allow. The intent is to avoid killing the hardware, especially as I copy multi-gigabyte disk image files. If I copy over the network, say via ssh, I can use --bwlimit…
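[A minimal sketch of the throttled copy being asked about. The paths are placeholders, and it assumes --bwlimit (rate in KiB/s) also throttles local transfers, since rsync moves data through an internal pipe between its sender and receiver processes even when both ends are local:

    # Cap throughput at roughly 10 MB/s instead of saturating the disks
    rsync -a --bwlimit=10240 /srv/images/ /mnt/backup/images/

The same flag applies over ssh, e.g. rsync -a --bwlimit=10240 -e ssh /srv/images/ user@host:/backup/ with a placeholder user@host.]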

Re: recent discussion regarding 'checksums'

2010-09-26 Thread Matt McCutchen
On Fri, 2010-09-24 at 00:31 -0400, grarpamp wrote: > [rsync -c fails to copy a file with an MD5 collision] Yes, right now "rsync -c" is not good if an attacker has had the opportunity to plant files on the destination and you want to make sure the files get updated properly, but that's an uncommon…
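[For readers following along, a hedged illustration of the point: with -c, rsync trusts the full-file checksum, so a destination file crafted to collide is treated as already up to date; forcing the transfer with -I/--ignore-times avoids trusting that comparison. Directory names are placeholders:

    # -c: a destination file whose MD5 matches the source (even via a
    # deliberate collision) is considered identical and left untouched
    rsync -ac src/ dest/

    # -I forces every file to be transferred regardless of size, mtime,
    # or checksum, so a planted collision cannot cause a silent skip
    rsync -aI src/ dest/

Much newer rsync releases (3.2 and later, well after this thread) also allow selecting a different hash via --checksum-choice.]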

Re: "writefd_unbuffered failed to write 4092 bytes to socket"

2010-09-26 Thread Mac User FR
Hi, If you are able to rsync over ssh, that connection may be more robust than the plain rsync protocol. See http://www.mail-archive.com/rsync@lists.samba.org/msg26280.html Best regards, Vitorio
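[To make the suggestion concrete, the two transports differ only in how the remote side is addressed; the host and paths below are placeholders:

    # Plain rsync protocol: connects to an rsync daemon on TCP port 873
    rsync -av rsync://example.com/module/ /local/mirror/

    # The same transfer tunneled over ssh: sshd starts the remote rsync,
    # and ssh keepalives help detect a dead connection promptly
    rsync -av -e "ssh -o ServerAliveInterval=30" user@example.com:/path/to/data/ /local/mirror/]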

Re: "writefd_unbuffered failed to write 4092 bytes to socket"

2010-09-26 Thread Matt McCutchen
On Sat, 2010-09-25 at 07:33 -0700, Joseph Maxwell wrote: > I'm attempting to maintain a mirror of a remote database, ~66 GB, on a > FreeBSD platform. I do not have direct access to the database except by > rsync, anonymous FTP, etc. > > I'm running rsync nightly from crontab, with the > cmd > /usr/loca…
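[Given the "failed to write to socket" symptom, one hedged way to harden such a nightly job is to let rsync abort on a stalled connection and resume on the next run; every path, host, and module name below is a placeholder, not the poster's actual command:

    #!/bin/sh
    # nightly-mirror.sh -- hypothetical wrapper invoked from crontab
    # --timeout=600: give up after 10 minutes with no I/O instead of
    #                hanging forever on a dead socket
    # --partial:     keep partially transferred files so the next run
    #                resumes rather than restarting the ~66 GB copy
    /usr/local/bin/rsync -az --timeout=600 --partial \
        rsync://mirror.example.com/db/ /data/mirror/db/

    # crontab entry (a single line):
    # 0 2 * * * /usr/local/bin/nightly-mirror.sh >> /var/log/db-mirror.log 2>&1]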