Hello all-

We are developing a system that will need to copy one million small (10 KB)
files daily from a single directory on one server to ten others. Many of the
files will not change between runs.

Until now we have been using the "rsync -e ssh" approach, but as we have
added files (we are at 75,000) the time to generate the file list and copy
the files has become far too slow. Is there a way to efficiently distribute
the data to all 10 servers without building the file list ten times? Any
tips that wise gurus could share would be much appreciated. Also, is there a
performance benefit to running rsync as a daemon? Finally, is there any
other tool we could use, with or without rsync, to make this process go
faster? Our initial tests suggest the copy runs faster when the files are
split across multiple directories.
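For reference, our current setup is roughly the following (hostnames and
paths are placeholders, not our real ones). Each destination gets its own
full rsync pass, so the source file list is scanned and compared ten
separate times:

```shell
# One rsync pass per destination server; the source tree is
# re-scanned for every host. SRC and the hostnames are placeholders.
SRC=/data/files/
n=0
for host in server01 server02 server03 server04 server05 \
            server06 server07 server08 server09 server10; do
    # echo the command instead of executing it, so this sketch
    # is safe to run without the real servers in place
    echo rsync -a -e ssh "$SRC" "$host:$SRC"
    n=$((n + 1))
done
echo "issued $n transfers"
```

With 75,000 files today (and a million planned), that repeated list-building
is where we believe the time is going.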

Thanks
Matt

