On Wed, Jun 27, 2001 at 12:30:24AM -0400, HP Wei wrote:
> I am looking for multi-rsync to transfer files
> from a master machine to several other machines at the
> same time (i.e., using a broadcast mechanism).
The rsync+ patch may help here. I updated it to apply against rsync version
2.4.6 and
Hi,
I am looking for multi-rsync to transfer files
from a master machine to several other machines at the
same time (i.e., using a broadcast mechanism).
rsync does not have this capability at the moment.
(1) Is the next release going to include this capability?
If yes, when is it
On Tue, Jun 26, 2001 at 06:50:28PM -0500, Alcorn, Ned <[EMAIL PROTECTED]> wrote:
| How can I set up rsync to do a comparison between two folders or servers and
| then place the files that have differences in a separate folder?
| (..a folder other than the two I am comparing, need this for versioning
On Tue, Jun 26, 2001 at 11:26:11PM +1000, Martin Pool wrote:
> On 25 Jun 2001, Adam McKenna <[EMAIL PROTECTED]> wrote:
>
> > There are 5839 bytes waiting in the SendQ on the sending side for each
> > connection.
> >
> > 64.71.162.66.56108 206.26.162.146.22 6432 5839 24820 0
> CLOSE_WAIT
How can I set up rsync to do a comparison between two folders or servers and
then place the files that have differences in a separate folder?
(..a folder other than the two I am comparing; I need this for versioning
purposes...)
All suggestions welcome!!
Ned Alcorn
Sr. Systems Analyst
Continental
On Mon, 25 Jun 2001, Andrew Tridgell wrote:
> see if we can find a solution without a buffer.
Here's a solution with a non-growing buffer. This code keeps the
receiver->generator pipe clear by reading the ints and setting redo
flags in a character array (of flist->count elements). I'm avoiding
Yes, Michael, indeed it is. Thank you. Just one unfortunate drawback:
"Unison does not currently understand hard links." (from its shortcomings
section). The ftp site to be clustered has about 250MB * 100 hard links.
That means a waste of about 25GB if we implement it with unison.
SUMMARY.
Rsync is
Yes, that sounds like a pretty good plan for (say) rsync 3.0. We all
seem to be more or less on the same track as to how the protocol
should look.
Here are my feelings about the way to get there. I would be happy to
have holes picked in them:
* rsync 2.x works well, but is too crufty to be a
No, you misunderstood him: he meant that --delete would delete things; without
it, nothing is deleted. Of course, if that's the behavior you want, then you
need to include it.
Yes, your command line should work. Note, however, that when copying
between two filesystems mounted on the local machine, people have seen
ha
On 25 Jun 2001, Wayne Davison <[EMAIL PROTECTED]> wrote:
> I was wondering if the protocol should be updated to avoid ever
> assuming that an EOF on the socket was OK. The only case I know of
> where this is allowed is when we're listing modules from an rsync server.
> If we modified the protocol to
On 25 Jun 2001, Adam McKenna <[EMAIL PROTECTED]> wrote:
> There are 5839 bytes waiting in the SendQ on the sending side for each
> connection.
>
> 64.71.162.66.56108 206.26.162.146.22 6432 5839 24820 0
> CLOSE_WAIT
> 64.71.162.66.56111 206.26.162.146.22 6432 5839 24820
On 30 Jul 2001, "J.Saravanan" <[EMAIL PROTECTED]> wrote:
> Hi,
>
> bash$ rsync -a [EMAIL PROTECTED]::test test
> Password:
> @ERROR: auth failed on module test
>
> I did try this command, but the result is the same, as you can see above.
>
> Any other suggestion?..
What is in the server error log
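For what it's worth, "auth failed on module" usually points at the daemon-side
auth setup: the module needs an "auth users" line plus a "secrets file" whose
entries are user:password lines, and the secrets file must not be readable by
other users (the daemon refuses it otherwise, unless "strict modes = false").
A sketch of the server side, with made-up module name, user, and paths written
to the current directory for illustration:

```shell
# Hypothetical daemon config for a module named "test".
cat > ./rsyncd.conf <<'EOF'
[test]
    path = /srv/test
    auth users = saravanan
    secrets file = /etc/rsyncd.secrets
EOF

# One "user:password" entry per line; keep it private to the daemon's user.
cat > ./rsyncd.secrets <<'EOF'
saravanan:s3cret
EOF
chmod 600 ./rsyncd.secrets
```

The client is then invoked as rsync -a saravanan@server::test test and must
supply the matching password.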
A recent email from Phil Howard prompted me to think about getting rsync
to use less memory for its file list. Here's an early idea on how to
modify the protocol to not generate the file list entirely in advance.
Please feel free to poke holes in this if I'm going astray.
I envision abbreviating
On Monday, June 25, 2001 03:17:18 PM -0400 "Kovalev, Ivan"
<[EMAIL PROTECTED]> wrote:
+--
| I am doing a "poor man's cluster" using rsync to synchronize the content of 2
| servers, each of which has its own directly attached storage. Since it is a
| cluster (load balancer on top of these 2 servers),
On Mon, 25 Jun 2001, Andrew Tridgell wrote:
> I've applied your simple nohang patch.
Cool. That's the one that affects the most people.
> Instead we need a way of reproducing the bug and see if we can find a
> solution without a buffer.
You can minimize the buffer usage by applying my move-fil