Re: handling duplicate names deterministically and adding alternative checksum algorithms

2005-06-16 Thread Wayne Davison
On Thu, Jun 16, 2005 at 02:01:54PM -0600, Andrew Shewmaker wrote:
> Is rsync going to stick with qsort as the default sorting algorithm?
Yes. I don't wish to increase the memory use for really large file sets that have no need for deterministic unduplication.
> Would a patch to add a --mergesort option be accepted?
[...]
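For context, the memory concern is that a classic merge sort is not in place: a stable merge over the file-list entries needs a scratch array roughly the size of the list, while qsort(3) sorts in place. A minimal sketch of what such a stable sort could look like (not rsync's code; the struct and function names here are hypothetical):

    #include <stdlib.h>
    #include <string.h>

    struct file_entry { const char *name; /* ... */ };

    /* Stable merge sort of an array of entry pointers, ordered by name.
     * Needs a caller-supplied scratch array as large as the input --
     * this is the extra memory that an in-place qsort() avoids. */
    static void merge_sort_entries(struct file_entry **a, size_t n,
                                   struct file_entry **tmp)
    {
        size_t mid, i, j, k;

        if (n < 2)
            return;
        mid = n / 2;
        merge_sort_entries(a, mid, tmp);
        merge_sort_entries(a + mid, n - mid, tmp);
        for (i = 0, j = mid, k = 0; k < n; k++) {
            if (j >= n || (i < mid && strcmp(a[i]->name, a[j]->name) <= 0))
                tmp[k] = a[i++];   /* "<=" keeps equal names in original order */
            else
                tmp[k] = a[j++];
        }
        memcpy(a, tmp, n * sizeof(*a));
    }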

Re: handling duplicate names deterministically and adding alternative checksum algorithms

2005-06-16 Thread Andrew Shewmaker
Thanks for the pointer to that mergesort thread and for the md5 patch. Is rsync going to stick with qsort as the default sorting algorithm? Would a patch to add a --mergesort option be accepted? Even though most of my boxes are Linux and it appears that qsort usually runs as a mergesort, I would [...]
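The md5 patch itself is not quoted in this thread. As a rough illustration of the kind of alternative checksum being discussed, a one-shot MD5 digest with OpenSSL's libcrypto (an example dependency for this sketch, not necessarily what the patch uses) can be computed like this:

    #include <stdio.h>
    #include <string.h>
    #include <openssl/md5.h>        /* link with -lcrypto */

    int main(void)
    {
        const char *data = "example file block";
        unsigned char digest[MD5_DIGEST_LENGTH];
        int i;

        /* One-shot digest; streaming code would typically use the
         * MD5_Init/MD5_Update/MD5_Final interface instead. */
        MD5((const unsigned char *)data, strlen(data), digest);

        for (i = 0; i < MD5_DIGEST_LENGTH; i++)
            printf("%02x", digest[i]);
        printf("\n");
        return 0;
    }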

Re: handling duplicate names deterministically and adding alternative checksum algorithms

2005-06-13 Thread Wayne Davison
On Thu, Jun 09, 2005 at 05:24:45PM -0600, Andrew Shewmaker wrote:
> Our current practices would benefit if rsync were enhanced to handle
> duplicate names deterministically as described in the todo list.
One proposed solution to this was to use a function called mergesort(). A patch was provided [...]
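The determinism problem comes down to sort stability: C's qsort(3) does not specify the relative order of elements that compare equal, so two file-list entries with the same name can come out in either order depending on the implementation. A small self-contained illustration (hypothetical names, not rsync code):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct entry { const char *name; const char *origin; };

    static int cmp_name(const void *a, const void *b)
    {
        const struct entry *x = a, *y = b;
        return strcmp(x->name, y->name);   /* duplicate names compare equal */
    }

    int main(void)
    {
        struct entry list[] = {
            { "etc/motd",  "overlay-a" },
            { "etc/motd",  "overlay-b" },  /* same name, different source */
            { "etc/fstab", "overlay-a" },
        };
        size_t i, n = sizeof(list) / sizeof(list[0]);

        qsort(list, n, sizeof(list[0]), cmp_name);

        /* Which "etc/motd" sorts first is unspecified; a stable sort such
         * as mergesort() preserves the original relative order, making the
         * choice of "winning" duplicate deterministic. */
        for (i = 0; i < n; i++)
            printf("%s (from %s)\n", list[i].name, list[i].origin);
        return 0;
    }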

handling duplicate names deterministically and adding alternative checksum algorithms

2005-06-09 Thread Andrew Shewmaker
Thanks, rsync developers, for creating such a great tool. I'm a member of a team of system administrators and integrators who use rsync with SystemImager and Cfengine to rapidly deploy and prescriptively maintain systems. Our current practices would benefit if rsync were enhanced to handle duplicate names deterministically as described in the todo list. [...]