Re: rsync for windows doesn't like --rsh?
On 26/05/14 15:10, Kevin Korb wrote:
> It is expecting something command line compatible with [rs]sh which I am pretty sure openssl is not.

I should have been more verbose. I'm not just randomly trying things. I have successfully used this under Unix (via --rsh), but the same thing doesn't work under Windows.

(BTW it's "openssl s_client" - which acts as an I/O pipe. I have also tried socat - same problem)

--
Cheers
Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1

--
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html
rsync for windows doesn't like --rsh?
Hi there

I'm trying to get rsync for Windows (tried two versions) to call --rsh so I can pipe rsync through openssl.exe. When I run it I get:

  The process tried to write to a nonexistent pipe.

So it looks like the CMD file I have that calls openssl.exe isn't joining up with rsync.exe? As rsync.exe defaults to calling ssh.exe, and I can even successfully call it via "rsync.exe --rsh ssh.exe", that makes me think the problem is with either my tiny CMD file or openssl.exe not piping correctly. Buffering issues perhaps?

Can anyone help this poor old Unix guy out on Windows? :-)

Thanks
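For reference, the Unix-side wrapper that reportedly works would look something like the sketch below. The server name, port 874 and the s_client flags are my assumptions, not from the post. rsync invokes the --rsh program as "wrapper HOST command...", so the script drops the host argument and becomes a plain stdin/stdout pipe to the TLS-wrapped daemon:

```shell
# Hypothetical --rsh wrapper: rsync runs it as "wrapper HOST ...", so it
# discards the host argument and pipes the rsync protocol through a TLS
# connection (server name and port are made-up placeholders).
cat > rsync-tls-rsh.sh <<'EOF'
#!/bin/sh
shift   # drop the hostname rsync prepends
exec openssl s_client -quiet -connect rsync-server.example.com:874 2>/dev/null
EOF
chmod +x rsync-tls-rsh.sh
sh -n rsync-tls-rsh.sh && echo "wrapper parses OK"
```

On the Unix side this would be invoked as "rsync -av --rsh=./rsync-tls-rsh.sh server::module/ dest/"; a Windows CMD equivalent additionally has to keep stdin/stdout binary and unbuffered, which is where the "nonexistent pipe" error suggests things go wrong.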
Re: [Bug 5124] Parallelize the rsync run using multiple threads and/or connections
On 26/01/14 18:03, L.A. Walsh wrote:
> But multiple TCP connections are not used to load a single picture. They are used for separate items on the page. A single TCP stream CAN be very fast and rsync isn't close to hitting that limit. The proof? Using 1Gb connections, smb/cifs could get 125MB/s writes and 119MB/s reads -- the writes were at theoretical speeds and were faster, because the sender doesn't have to wait for the ACKs with a large window size.

A bit late, but I'll add my 2c worth.

bbcp is a multi-TCP, threaded application. It completely nails rsync when transferring over high-bandwidth/high-latency links:

http://moo.nac.uci.edu/~hjm/HOWTO_move_data.html

Like BitTorrent, it establishes multiple TCP channels between a bbcp client and server, and I guess has a parent process that tells each child what part of the directory structure/data stream it is responsible for, and joins it all up at the other end.

I have tested rsync over a 100Mbs continental link and am lucky to get 10Mbs. Using bbcp with 4-6 channels, I can get 40-50Mbs (that's on a link with other real traffic on it - so it may have actually got 80-90Mbs by itself for all I know).
Re: Native Parallelization in rsync
On 10/09/13 03:38, ame...@gmail.com wrote:
> It's great you brought up bbcp. I didn't factor this into my initial email, but if we could further split up large files into N chunks and transport them concurrently, that would provide massive benefits as well. It's clear there are multiple areas for improvement that would let us do away with having to use other tools for various scenarios. If we could put some planning/development time into even the easiest-to-implement cases, it would be highly beneficial to a large number of users.

Indeed. I actually hacked together a shell script wrapper around rsync which would take the directory you were wanting to copy, tar it up and then split the resultant large file into 'N' chunks - then transfer the chunks in parallel. Using tricks with post-xfer at the other end, it would automagically unpack back to the original directory structure. Major improvement in throughput - but only good for new file transfers, not for replicating deltas/etc.

And before you ask, no you can't have it :-) It's part of a rather complex file replication environment we have and isn't a module that could easily be detached for redistribution.

To finish with: even though we can make rsync run much faster over WANs, we don't use the feature I just mentioned! Just because you can saturate your WAN pipe doesn't mean you should - typically there's other traffic that also needs to co-exist, and such a hammering would have consequences - things have to be thought through. It would be nice to have the ability, but that doesn't mean everyone would use it all the time.
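A minimal reconstruction of that tar-split-parallel idea (directory names and the chunk size are made up, and the "transfer" is a local cp so the sketch is self-contained):

```shell
# Tar the tree, split into chunks, ship the chunks in parallel, reassemble.
SRC=./project; SPOOL=./remote-spool
mkdir -p "$SRC" "$SPOOL"
printf 'hello\n' > "$SRC/file.txt"      # sample payload so this runs stand-alone
tar -cf - -C "$(dirname "$SRC")" "$(basename "$SRC")" | split -b 512k - chunk.
for c in chunk.*; do
    cp "$c" "$SPOOL/" &     # in real use: one rsync/scp per chunk, in parallel
done
wait
# the "post-xfer" step on the far side: stitch the chunks and unpack
cat "$SPOOL"/chunk.* | tar -xf - -C "$SPOOL"
rm -f chunk.* "$SPOOL"/chunk.*
```

As the post says, this only helps for brand-new transfers: once the data is inside a tar stream, rsync's delta algorithm has nothing to compare against on the far side.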
Re: Native Parallelization in rsync
On 08/09/13 18:28, francis.montag...@inria.fr wrote:
> There is another rather common case I think where this would help: when the destination directory is located on a NAS over NFS.

I think the biggest use-case is high-bandwidth, high-latency links. e.g. you have a 50Mbs WAN link to a site in another country, and can never get more than (say) 5Mbs. Parallelizing the data transfer could easily push that up to 20-30Mbs.

...and there is a competitor to rsync that does this - bbcp. It mirrors a directory from hostA to hostB using 'N' TCP streams. Runs like the clappers :-)
Re: Announce: Tool for replicating filesystem changes
Hi there

That looks very interesting, but can I make a suggestion? Don't call it "should". That simply means no-one will ever be able to find it using a search engine. :-)
Re: rsyncssl
Another good reason for an SSL version of rsync: non-Unix clients...

It's all well and good to talk about using VPNs and ssh tunnels - but the fact is that a large percentage of rsync clients are non-Unix - like Windows - and getting them set up for ssh/etc means layering extra software on top of rsync. I'm not saying it can't work - but it's not simple.

I'd love to see rsync-ssl (with the server having CRL support, client cert support, and the client/server doing cert validation of course), as for one thing I think it would make a damn fine laptop backup solution. I've run more than my share of Internet-facing services in my time, and the lowest-maintenance ones are the SSL/TLS services that require client certs. The bad guys cannot even knock on the door!

An Internet-based rsync-ssl server that requires client certs would be brilliant for backing up laptops over the Internet: an enterprise competitor to all those cloudy services such as Dropbox/etc. :-)

[well, probably need that VSS patch for rsync-win32 too ;-)]
Re: Is the --sparse option suitable for .dbf files
On 06/09/12 04:31, Kevin Korb wrote:
> Also, this is during a restore that I am talking about. I don't see a problem with using --sparse on everything during the backup.

Conversely, is there any problem with NOT using --sparse on sparse files? i.e. I know it will use more space - but will a sparse file copied somewhere by rsync NOT using the --sparse option be 100% equivalent to the original?
Re: Cache file list in daemon mode?
So what you're really saying is that gluster is quite slow at doing recursive directory listings. So how about just using find on the real backend bricks to find the files that have changed since the last run, merge those listings together (to get rid of dupes), and then get rsync to just update those against the gluster service?
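A sketch of that idea (brick paths and the stamp file are placeholders): scan each brick for files newer than the last run, merge and de-duplicate the listings, and hand the result to rsync via --files-from so it never has to walk the slow gluster tree.

```shell
# Collect only changed files from the backend bricks, then feed the merged
# list to rsync instead of letting it do the recursive scan itself.
STAMP=.last-run
mkdir -p brick1 brick2
touch -t 202001010000 "$STAMP"                      # pretend last run was long ago
echo new > brick1/a.txt; echo new > brick2/b.txt    # sample changed files
for brick in brick1 brick2; do
    ( cd "$brick" && find . -type f -newer "../$STAMP" )
done | sort -u > changed.list                       # merge, drop dupes
cat changed.list
# the real transfer would then run against the gluster mount, e.g.:
#   rsync -a --files-from=changed.list /mnt/gluster/ rsync://server/module/
touch "$STAMP"                                      # remember this run
```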
Re: Low performance
Did you try scp (although that could be CPU-bound due to crypto), ftp or wget - i.e. see how other TCP apps do the same job? If they all show the same speed, it's not an rsync problem.

--
Cheers
Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +64 3 9635 377 Fax: +64 3 9635 417
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
Re: sync prob with big files
So you are using rsync on boxA that replicates a file mounted via CIFS from boxB to a file mounted via CIFS on boxC? How do you know this isn't a CIFS problem instead of an rsync one?

Try rsyncing the file from boxB to /tmp on boxA and, if that's successful, rsync it to boxC. Maybe there's a problem with one of the CIFS mounts?

Jason
Re: rsync and encryption
I think Dirk was asking about securing the *DATA* on the remote server - not the *TRANSPORT*.

I'd recommend encfs. It has a --reverse option which allows you to mount a data tree such that the new mount shows up with encrypted filenames and content. rsync that to the remote server, and even the local sysadmins won't have access to the data. i.e.:

  encfs --reverse /home/ /tmp/crypt-view
  rsync -azv /tmp/crypt-view remote-server:...

After a data loss, you could rsync that encrypted content back, then encfs-mount it and access the unencrypted data.

We use it as a mechanism of role separation: it allows our security group to use the server group's infrastructure for backups/storage, without giving them access to the data...

Jason
Re: recent discussion regarding 'checksums'
Besides all this, what is the performance impact of -c? If it's moved from MD5 to X, will that impact performance?

I've never used -c as I trust size+dates - probably like 99.9% of rsync users out there. I always ignored -c as I *assumed* it would come at an appreciable performance hit... However, I am talking about modern CPUs and running rsync over WANs - so I guess the bottleneck ain't ever going to be in the checksums ;-)

I read the Fedora bug - is moving to a negotiable range of hashes really necessary? Why not move to SHA-256 and ignore the problem for another 10 years ;-) [sometimes more options isn't a good idea...]
Re: rsync as a deliberately slow copy?
On 09/27/2010 06:52 PM, Albert Lunde wrote:
> I'm looking for a way to deliberately copy a large directory tree of files somewhat slowly, rather than as fast as the hardware will allow.

Just do it to localhost - that way it's still a network connection, and you can use --bwlimit. Also, you could try nice to lower the priority rsync runs at.
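Both suggestions spelled out (paths are placeholders, and --bwlimit takes KBytes/sec):

```shell
# Throttled copy: --bwlimit (KBytes/sec) caps the transfer rate, and nice
# lowers the CPU priority as well.  Sample data keeps the sketch runnable;
# skip gracefully if rsync isn't installed.
command -v rsync >/dev/null || { echo "rsync not installed - skipping"; exit 0; }
mkdir -p big; echo payload > big/file.txt
nice -n 19 rsync -a --bwlimit=500 big/ slow-copy/
# the via-localhost form suggested above (needs sshd on localhost):
#   rsync -a --bwlimit=500 big/ localhost:/path/to/slow-copy/
```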
is support for non-ASCII filenames in rsync complete?
Hi there

Over the years there have been many people reporting that files with non-ASCII charsets got their filenames corrupted moving between different systems. So I'm wondering what the current state is with rsync-3.0.6+?

i.e. I want to use rsync-3.0.6+ to back up WinXP+ (and MacOS?) filesystems by using native versions of rsync clients talking to a remote Linux-based rsyncd process. If I do that, should I expect that our Chinese, French, etc. users will have their backups with the correct filenames? As far as I'm aware, everything is UTF8 these days, so I shouldn't expect any issues? e.g. I shouldn't have to use the --iconv option (all this language-specific stuff gives me a headache ;-)

Thanks
Re: is support for non-ASCII filenames in rsync complete?
On 09/20/2010 09:51 AM, Matt McCutchen wrote:
> "Is support for non-ASCII filenames in rsync complete?" is a misleading question. Traditionally, rsync has always preserved filenames as byte strings and has never dealt with encoding issues, like most Unix file manipulation tools. Corruption is uniformly the result of different applications interpreting those byte strings in different encodings, or of conversions performed by OSes, OS emulation layers, or filesystems. The --iconv option is now offered as one way to work around some of those issues.

OK - so for example you mean that if someone rsyncs data from one OS to another, and then exports it back via Samba, that might cause issues. But if they export it back via rsync, it *will* arrive back the same way?

> You can always do a test and see if the results are what you want.

I didn't mention that I have already done that for some random filenames I created, and it's fine. But I certainly didn't do it for every language combination. There just seems to be enough historical noise around this issue that it was worth asking.
Re: PST Rsync Issues
Jon Watson wrote:
> I'm curious if PSTs are just un-rsyncable due to their makeup or if there is something I can do to pare down this transfer and subsequent storage. Any tips or ideas are welcome.

Outlook PST files are effectively database files. They can be optionally compressed and encrypted. If either of these options is enabled, I'd say it would completely screw up rsync's potential to do a differential copy - i.e. you'd transfer the lot every time. Even if those options aren't enabled, it could be that the data is altered so dramatically each time a change occurs in Outlook that it still screws up rsync. Basically, PST (and OST) files don't play nicely with differential copy tools.

Similarly, the new Office 2007 format (basically compressed XML files) and the OpenOffice format won't play nicely either - as compressed == random data - and rsync won't see linear changes in them like it can in text files/etc.

(hope there's nothing too incorrect in the above. I'm sure someone will shout at me if I'm wrong ;-)
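The "compressed == random data" point is easy to demonstrate locally: change a few bytes in a text file and the rest of the file still matches byte-for-byte, but push the same change through gzip and everything from that point on differs, leaving a delta algorithm nothing to match against (gzip -n omits the header timestamp so the archives differ only where the content does):

```shell
# Same 4-byte change, measured byte-for-byte in plain and gzipped forms.
seq 1 10000 > a.txt
sed '5000s/^5000$/XXXX/' a.txt > b.txt       # replace line 5000, same length
gzip -nc a.txt > a.gz                        # -n: reproducible header
gzip -nc b.txt > b.gz
echo "text bytes differing: $(cmp -l a.txt b.txt | wc -l)"
echo "gzip bytes differing: $(cmp -l a.gz  b.gz  | wc -l)"
```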
Re: Which rsync version?
Paul Slootman wrote:
> No, which suggests that when not preserving hardlinks, the CVS version is much faster (it starts transferring during the building of the file list, instead of waiting until both ends have generated the file list).

That sounds pretty cool - will v3 also allow compression of the initial directory meta-data? (for others: currently the -z option only compresses files - not the initial data that allows the rsync client and server to compare file details so they can decide what to send)

(I saw some comment via Google from years back that the initial file listing is already nearly compressed and so doesn't need it. However, in my totally unscientific test I just sniffed a transfer, extracted the file listing data, and compressed it with gzip. Knocked 66% off the size...)
Re: rsync replacement
Jamie Lokier wrote:
> Check out the TCP: advanced congestion control option in a 2.6 Linux kernel, and there is plenty of research on the topic. See SCTP and DSCP (among others) for the more transaction-oriented side.

Hi there Jamie

Like yourself, our WAN (VPN over Internet) suffers majorly from large-pipe, high-latency issues. e.g. we have a 10Mbs Internet link, but can only get ~2Mbs for a single TCP stream (e.g. HTTP, rsync).

Are protocols like SCTP and DSCP really capable of helping in such situations? i.e. would they improve the throughput of single-streamed transactions? If so, how would you manhandle rsync (a TCP app) to be able to use them? Any such thing as a TCP-to-SCTP proxy?
Re: Reproducable failure with rsync, iptables and RHEL4
Timothy J. Massey wrote:
> Hello! I have a consistent, reproducible failure performing an rsync of an RHEL4 system running rsync in daemon mode with iptables enabled. With iptables disabled, or with a rule that explicitly allows all traffic, the rsync completes. However, with iptables enabled, the rsync starts, but will not finish. It fails after copying a seemingly random amount of data.

Could it be you're hitting an iptables session timeout setting? e.g. if (during an rsync transfer) rsync hangs while reading in a large directory listing, iptables may decide that TCP session is dead. Then when TCP packets start flowing again, iptables sees them as part of a new TCP session - and as they're not part of an existing session, they're rejected. ethereal/wireshark should be able to prove that.

(however, I think the only hanging rsync does is right back at the beginning - which doesn't match your symptoms)
cannot get fuse-ssh to operate from a batch script - but does from cmd line
Hi there

I am wanting to call sshfs (auth via DSA keys) via an rsync pre-xfer bash script, and cannot get something right.

If I run it from the command line:

  env - sshfs [EMAIL PROTECTED]:/share /dir/path -o -o IdentityFile=/tmp/id_dsa

it mounts just fine. (note the "env -": I specifically tested with no environment to try to make the two situations identical). If I put that sole line into a /tmp/test shell script and run it from the command line, or from a cronjob, it also works fine.

However, calling /tmp/test from an rsync-triggered pre-xfer script (both as root) doesn't work. sshfs reports:

  read: Connection reset by peer

The remote ssh server shows the successful connection - it shows "subsystem request for sftp" followed immediately by "session closed for user usern". So it appears that it isn't an SSH authentication problem - but something else.

This is on a FC5 box - but I have disabled SELinux, and have tried with iptables disabled on both ends too. Running sshfs with "-o debug -o sshfs_debug" doesn't really show anything odd. Obviously it works fine from the command line, but from the rsync script it ends with "Connection reset by peer" just as before - with no good reason why. I even straced it without much to show. But I do wonder about file descriptors.

Any ideas where I should look next?

FC5, rsync-2.6.9, fuse-2.5.3-5.fc5, fuse-sshfs-1.7-1.fc5

PS: I have worked around it by getting the rsync pre-xfer script to throw the sshfs mount command out as an "at now" job. That works fine - it's only when called directly that it fails...
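One thing worth ruling out, as a guess rather than a confirmed diagnosis: a pre-xfer script inherits the rsync daemon's open file descriptors, and a long-lived child like sshfs (really its ssh subprocess) holding onto them could interfere with the connection - which would also explain why detouring through "at now" helps, since at(1) detaches the job from those descriptors. A setsid-based wrapper gives the same detachment without at(1); paths and options below are placeholders:

```shell
# Hypothetical pre-xfer wrapper: run the mount in its own session, with all
# three standard descriptors pointed away from rsync's.
cat > pre-xfer-mount.sh <<'EOF'
#!/bin/sh
setsid sshfs user@host:/share /dir/path -o IdentityFile=/tmp/id_dsa \
    < /dev/null > /tmp/sshfs.log 2>&1 &
EOF
chmod +x pre-xfer-mount.sh
sh -n pre-xfer-mount.sh && echo "wrapper parses OK"
```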
Re: rsync not updating files even with checksum flag
Mark Osborne wrote:
> Hello everyone, I am having a problem with rsync that I hope someone can help me figure out. We are using rsync to sync up files between our staging and production ftp servers. Basically internal users are allowed to upload files via a samba share to a staging server. Those files are then synced out every 15 minutes via cron to our production ftp servers. The problem occurs when a large file is being uploaded from a windows machine via the samba share. If an rsync is instantiated during the time that the file is being uploaded, the destination machines get the file with the correct filesize and timestamp. Unfortunately, even though the file shows the correct size it is not a good copy of the file. An md5sum of the files on both the source and destination machine returns different results. I believe this occurs because windows automatically reserves the full size of the file and fills it out with 0s and then overwrites this as it goes along copying. This wouldn't be a big deal except that subsequent runs of rsync (even with -c) fail to overwrite the file.

You could do what we do. We use smbstatus first to check that there are no SMB locks present (on files only - you can ignore the ones on dirs), then rename the new files out to a staging dir - then rsync that instead. It guarantees the files have been finished, and even lets the Windows users know the files have been picked up (as they disappear).
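A sketch of that pick-up loop (directory names are placeholders, and the smbstatus parsing is approximate since its output format varies between Samba versions):

```shell
# Move only finished uploads into a staging dir, then rsync the staging dir.
UPLOAD=./upload; STAGING=./staging
mkdir -p "$UPLOAD" "$STAGING"
echo done-uploading > "$UPLOAD/report.doc"      # sample finished upload
# names of files still locked by an SMB client (approximate parse; empty if
# smbstatus isn't available or nothing is locked):
locked=$(smbstatus -L 2>/dev/null | awk 'NR>2 {print $NF}')
for f in "$UPLOAD"/*; do
    [ -f "$f" ] || continue
    case "$locked" in *"$(basename "$f")"*) continue ;; esac  # still being written
    mv "$f" "$STAGING"/     # same filesystem, so the move is atomic
done
# then, from cron: rsync -a ./staging/ production:/ftp/incoming/
ls "$STAGING"
```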
text file busy on cifs-mounted dir *doesn't* cause rsync error!
Hi there

I am running rsync-2.6.9rc2 and am having difficulty getting rsync to report an error when I think it should.

I mounted (via mount -t cifs) a remote Win2K3 server, and on an XP client opened a Word document. Then from Linux (FC5) I attempted:

  bash$ cp /tmp/other.txt file.doc
  cp: cannot create regular file `file.doc': Text file busy

That makes sense - CIFS has the file locked. Then I tried copying the same file via rsync:

  bash$ rsync /tmp/other.txt file.doc
  bash$ echo $?
  0

i.e. rsync says it worked! However it didn't. I ended up with a full copy of other.txt as .file.doc.3s1d3w - but the final rename on top of the original file failed. Rsync didn't return an error. It should? Help?

Is this a bug with rsync, or with Samba (perhaps it returned OK on the rename when it shouldn't have)?
Re: text file busy on cifs-mounted dir *doesn't* cause rsync error!
Wayne Davison wrote:
> On Thu, Oct 19, 2006 at 03:28:56PM +1300, Jason Haar wrote:
>>   bash$ rsync /tmp/other.txt file.doc
>>   bash$ echo $?
>>   0
> Re-run the same command under strace: strace rsync -av /tmp/other.txt file.doc

Too late - I beat you to it :-)

I just replied to the rsync and Samba lists that it appears to be a bug with Samba. I have duplicated it with smbclient as well as mount.cifs: it looks like the rename operation isn't atomic, and there are some error conditions being missed. I got Ethereal up and can see Windows returning all sorts of errors about files being locked/etc - Samba just ignores them.

Followups on the samba-technical list :-)
rsync -z not working as expected under 2.6.8 and 2.6.9?
I got curious as to how rsync operates, and ran a few tests under ethereal. The results confused me more.

I created /tmp/test-out/ containing two different text files - one named file.txt and the other data.gz. i.e. data.gz wasn't actually compressed - it was actually text. I then created an empty directory on an rsync server to replicate that data to.

I did a:

  rsync -av /tmp/test-out/ rsync://server/share/test-in

with ethereal running. As expected, ethereal shows the uncompressed contents of both files being sent. Then I reset and did a:

  rsync -azv /tmp/test-out/ rsync://server/share/test-in

with ethereal running. Unexpectedly, there was no sign of uncompressed data in the packet trace! Looks like rsync decided to compress data.gz even though /etc/rsyncd.conf had *.gz in its "dont compress" section...

Looking at the packets, I see no evidence of the rsyncd server telling the client anything regarding the filenames. So is there some smoke-n-mirrors going on in there? Why did the client compress data.gz - even though it was listed on the server as "dont compress"? Enquiring minds would like to know :-)

Thanks!
Re: rsync -z not working as expected under 2.6.8 and 2.6.9?
Wayne Davison wrote:
> On Fri, Oct 13, 2006 at 01:08:54PM +1300, Jason Haar wrote:
>> Looks like rsync decided to compress data.gz even though /etc/rsyncd.conf had *.gz in its "dont compress" section...
> That setting only affects files being pulled from an rsync daemon, not pushed to one. It seems that the manpage section for the option was written back in the day when an rsync daemon was always read-only, because it doesn't mention this restriction at all. I'll update the manpage.

Ahhh! That makes sense. However, how do you then do the same thing from the client end? I guess you can't? There's no --dont-compress option by the looks of it.

Actually, I see now someone earlier commented about ZLIB stuff. Same thing I suppose. So the rsync client will attempt to compress already-compressed data...
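For what it's worth, later rsync releases (3.0.0 onward, if memory serves - check your version's manpage) grew a client-side equivalent: --skip-compress, which takes a slash-separated suffix list:

```shell
# Client-side "don't bother compressing these suffixes" (needs an rsync new
# enough to know --skip-compress; the daemon URL below is a placeholder).
command -v rsync >/dev/null || { echo "rsync not installed - skipping"; exit 0; }
mkdir -p test-out; echo "plain text in disguise" > test-out/data.gz
rsync -az --skip-compress=gz/zip/jpg/mp4 test-out/ test-in/
ls test-in/
# against a daemon it would be:
#   rsync -az --skip-compress=gz/zip/jpg/mp4 test-out/ rsync://server/share/test-in
```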
Re: sctp support for rsync?
Thanks Lawrence, you were quicker at typing than I was. Oh yeah - and the little thing about knowing more doesn't hurt either! ;-)

So to come clean: we do run Cisco IPSec VPN tunnels (in GRE tunnels, so that we can run routing protocols) over our Internet link, and that really whacks the hell out of the throughput. Even though we can get 9Mbs raw Internet throughput on a single (close-by) TCP session, we rarely get above 30Kbs inside a tunnel :-( But it aggregates up to 3-4Mbs.

I am really looking for magic ways of making more of that max aggregated bandwidth available to single sessions - because that means users and apps (such as rsync) would experience better throughput.
Re: request: add TCP buffer options to rsync CLI?
Can someone tell me if such options would help the likes of ourselves, with more old-fashioned link speeds? We have 1-4Mbps VPN-over-Internet links (between continents - i.e. high latency), and routinely find rsync only capable of 300Kbps - and rsync is the best performer we can find. If we parallelize several rsync jobs (i.e. start a bunch at the same time), we can certainly chew up to 80% of the max bandwidth - so the raw throughput potential is there.

Would such options help single rsync jobs? Actually, are there good default options in general that we could use that might help in our high-latency environment?

Thanks!

--
Cheers
Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +64 3 9635 377 Fax: +64 3 9635 417
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
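[Not rsync-specific, but on the Linux boxes at each end, the single-stream ceiling on a high-latency path is usually the TCP window, and the kernel's limits on it can be raised via sysctl. A hedged fragment with illustrative values only - the right maximum is roughly bandwidth x round-trip time for the link:

```conf
# /etc/sysctl.conf -- illustrative values; the third (max) field of the
# tcp_* tunables caps how far autotuning can grow a connection's window
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
```
]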
Re: request: add TCP buffer options to rsync CLI?
Lawrence D. Dunn wrote:
> Renater was using rsync to pull large amounts of data from FermiLab across a fast, long link, and was getting poor throughput (~20mbits/sec).

Man - I wish I had your problem ;-)

--
Cheers
Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +64 3 9635 377 Fax: +64 3 9635 417
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
Re: mirror combined with 7 day incremental backup
There's also rsnapshot. It defaults to hourly and 7-day rolling backups, using hard links for files that haven't changed from one run to the next - saves a tonne of diskspace :-)

--
Cheers
Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +64 3 9635 377 Fax: +64 3 9635 417
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
increasing throughput over slow-but-fat WAN links?
Hi there

We have a world-wide WAN network that we are running rsync over. It all works - but the throughput of any one rsync connection is limited by the latency on our inter-continental links. e.g. a site may have a 4Mb/s link, but a single rsync job can only get 1.5Mb/s of the bandwidth. Running 3-5 rsync jobs in parallel gets around that limit and obviously allows us to saturate a particular pipe (and yes - we want to :-), but that requires hand-crafting schedules of bunches of rsync jobs in order to achieve that. And we've got heaps - and want to open this up to our users to define themselves - so such hand-crafting will be going the way of the Dodo ;-)

What would be better is if one rsync job could be chopped up into a bunch of mini-rsync jobs - and then a separate rsync run for each. e.g. an rsync job mirroring 10Gb of data in 1,000,000 files would be best split into 5 separate jobs - each mirroring 200,000 files. That way we get to saturate the pipes and get the jobs done quicker.

So I'm about to see if I can figure out some way to do this in perl - but was wondering if anyone else has already done this? Perhaps doing an rsync -nv first, sorting the output, then splitting into X separate jobs? Even a rough guess at it could make a difference.

--
Cheers
Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +64 3 9635 377 Fax: +64 3 9635 417
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
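[One rough way to do the split without perl: dry-run rsync to get the candidate file list, chunk it round-robin, and feed each chunk to its own rsync via --files-from. A hedged sketch - the host/module names are placeholders, and the file list is simulated here so the splitting step can be seen end-to-end:

```shell
# Simulated file list; in real use it would come from something like:
#   rsync -an --out-format='%n' src_dir/ remote::xxx/dst_dir > filelist
seq 1 100 | sed 's/^/dir\/file/' > filelist

# Round-robin split into 5 chunks: chunk.0 .. chunk.4
awk '{ print > ("chunk." NR % 5) }' filelist

wc -l chunk.*   # ~20 paths per chunk; each chunk then drives its own job:
#   rsync -az --files-from=chunk.N src_dir/ remote::xxx/dst_dir &
```

Round-robin (rather than contiguous) chunks spread big and small files across the jobs, so the parallel transfers tend to finish at roughly the same time.]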
Re: Copying Oracle data files
Tauya Mhangami wrote:
> Can I copy an Oracle database by just copying the Oracle database datafiles and moving them to another server with Oracle? If possible, can one back up a database using this same method?

Why don't you try it and see? I don't think it'll work reliably. As you are running Oracle, I'll assume it's a busy system, which means it does the whole thang with transaction logs as well as the database files. That means that at any one point in time the database files don't contain an up-to-date, consistent image of the actual database (some records are sitting in transaction logs waiting to be synced into the database files), so rsync cannot just copy the files about and expect the result to be usable.

However, if the database is quiet and flushed so that the transaction logs are clean (making snapshots of databases has the same effect), then yes, you can copy them about. With MySQL servers, I just do a plain old mysqldump and rsync the output file. Guaranteed consistent :-)

Jason
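[The dump-then-rsync approach amounts to something like the sketch below - a hedged illustration, not the poster's actual commands: the dump path, host and module names are made up, and --single-transaction (for a consistent dump of transactional tables without locking) is my addition:

```shell
# Dump to a flat file first, then let rsync ship only the deltas of
# that file to the backup host
mysqldump --single-transaction --all-databases > /var/backup/all.sql
rsync -az /var/backup/all.sql backuphost::backup/
```

Since successive dumps of a mostly-unchanged database are mostly identical, rsync's delta transfer keeps the nightly network cost low even though the dump file itself is large.]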
Re: Rsync Performance
On Thu, Jul 22, 2004 at 07:33:21PM -0700, Wayne Davison wrote:
> On Fri, Jul 23, 2004 at 02:15:04PM +1200, Jason Haar wrote:
> > is there any intention of a new improved --partial option whereby any failed uploads are kept as temp files
>
> I had been contemplating whether we need a new option for this or not. One idea would be to change the behavior when --partial was combined with --temp-dir in that the partial file would be renamed to the normal name in the temp dir on abnormal termination and the directory would be preferentially checked for the basis file (kinda like --compare-dest).

Strange you should say that - I for one assumed that was what would happen if you invoked --partial --temp-dir! I don't know why - but I did.

> However, it might be better to create a new option named --partial-dir that would indicate that this new behavior was desired. (For instance, it might surprise someone that the --temp-dir could receive a non-temp format name in it.)

Yes - that sounds good. It should be pretty clean: if you call it as rsync --partial-dir=/path/.rsync-partial/ then it could:

1. create a new subdir,
2. rsync to that,
3. keep files on failure,
4. carry on when called again,
5. when finished, move to the real location.

Sanity checks could include:

a. deleting the temp file if its date differs from the src file (means something weird has happened, so assume it's not there and start from scratch)
b. --partial-dir should have to be part of the dst tree - otherwise you potentially have a security risk. e.g. --partial-dir /tmp would mean someone could create /tmp/etc/passwd and you could end up overwriting your /etc/passwd file (if that's what that particular rsync job was replicating, of course!). So maybe --partial-dir cannot start with ^/ or ^../?
c. on any error, delete and start from scratch again?

Keen idea :-)

--
Cheers
Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +64 3 9635 377 Fax: +64 3 9635 417
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
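[For the record, rsync did end up growing an option along exactly these lines: --partial-dir. A minimal invocation sketch with illustrative paths:

```shell
# Interrupted transfers are parked under .rsync-partial inside the
# destination dir and used as basis files on the next run
rsync -az --partial-dir=.rsync-partial /src/ host:/dst/
```

Using a relative name keeps the partial dir inside the destination tree, which sidesteps the /tmp-style security concern raised above.]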
Re: rsync copies all files
On Thu, Jun 17, 2004 at 04:17:06PM -0600, Tim Conway wrote:
> I don't know the nature of your filesystems, but I have a guess... on at least one end is a network filesystem - NFS, SMB, NCP, AFS, something like that. rsync has the -W, or --whole-file option, which tells it that there's no point in trying to read the file for changes - if it needs transferring, just send the whole thing, because the LAN overhead will waste the savings in WAN. Last I heard, -W was going to be forced if either end was a network filesystem.

Owch - is that really a good idea? We use rsync to replicate data between Windows servers over our WAN - but via Linux rsync servers. They smbfs-mount the Windows servers, and rsync Linux-to-Linux. We found the Linux IP stack outperformed Win2K on our WAN, and rsync should be better on Unix as a pure, written-for-Unix product.

Our Linux and Windows servers are on 100Mb Ethernet, and our WAN runs between 0.5-1.0Mb/s max. So *I'd have thought* the extra latency slowdown from rsync having to do SMB I/O would still be negligible compared with the gain in doing partial data transfers over the (much slower) WAN. Does the reading of files in rsync -a mode really have that massive an impact for rsync-over-WAN?

BTW: what would be the best way of running rsync for such an environment? We currently just do rsync -az src_dir/ remote::xxx/dst_dir

--
Cheers
Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +64 3 9635 377 Fax: +64 3 9635 417
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
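[On the question of -W being forced: rsync lets you override the behaviour explicitly rather than rely on any heuristic. A hedged one-liner - host and module names are placeholders:

```shell
# Force the delta-transfer algorithm even where rsync would
# otherwise default to --whole-file
rsync -az --no-whole-file src_dir/ remote::xxx/dst_dir
```
]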
Re: trigger command on successful upload?
On Tue, Jun 15, 2004 at 12:56:39PM -0700, Wayne Davison wrote:
> On Tue, Jun 15, 2004 at 11:56:12AM -0700, Robert Helmer wrote:
> > It is a potential security problem
>
> Yup, I was thinking the same thing. One way to make your feature safer would be to turn it into a config-file setting (and leave the script

This whole idea smells of the Samba postexec-style feature. If done, it definitely should only be allowed to be defined within /etc/rsyncd.conf (assuming rsync transport of course). e.g.

    [backup]
    path = /var/spool/backup
    preexec = /usr/local/bin/initialize /var/spool/backup/
    postexec = /usr/local/bin/cleanup /var/spool/backup/
    uid = root

That way it can run under whatever security context you define for that given rsync share. Allowing the rsync client to define what remote command to run is WAY too insecure.

Obviously, if they are running rsync over rsh/ssh/other then a --trigger-script=... client option starts making sense - but I can't see the point - you could just call that script after doing the rsync job. e.g.

    rsync -x -e ssh src_dir remote:dst_share
    ssh remote /usr/local/bin/cleanup

What's the difference?

--
Cheers
Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +64 3 9635 377 Fax: +64 3 9635 417
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
How do you properly use --partial?
[background: we rsync GBs of data over our WAN, so want to run rsync as efficiently as possible. We have Linux rsync servers that mount local Windows file servers - i.e. we use Linux rsync to replicate data between Windows file servers. (Why? We found the Linux IP stack to be superior over our WAN.)]

I know that --partial on its own merely makes the rsync server process rename the partially uploaded tempfile back onto the actual filename - but who actually wants that? I mean - it's effectively a corrupt file. Is there a way to make rsync keep partially (down|up)loaded files, but keep them separate from the actual end dir until they are finished? I was hoping --link-dest etc. were for that, but they appear to assume you are willing to keep duplicates of the entire tree you are wishing to synchronize - which isn't an option for us...

I would have thought --partial could have been written so that any partially transmitted file is kept in a dir separate from the real data, and when the transfer successfully finishes, renamed/copied into the live area... Am I missing something obvious here?

--
Cheers
Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +64 3 9635 377 Fax: +64 3 9635 417
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
Re: Win32 and Backing up open files
On Wed, 2004-03-03 at 07:07, Jason M. Felice wrote:
> I now fully and completely remember why I hate Windows.

Good. Everyone needs reminding now and then :-)

> ... a program under Windows can _NOT_ open a file which has been opened if the original opener has not specified read sharing. No matter what.

Yup. That's part of the reason why you have to use special versions of NTBackup etc. to back up Exchange and SQL servers. As those files are always in use, there is no way you can read 'em. On a Unix system, you can still access/back up your SQL databases as files - even when they are in use. Whether that's a good idea I leave for others to discover the hard way... ;-)

Cheers
Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +64 3 9635 377 Fax: +64 3 9635 417
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
Re: Feature request: case insensitivity
On Tue, Feb 17, 2004 at 12:29:17AM -0800, jw schultz wrote:
> On Tue, Feb 17, 2004 at 12:58:25AM -0500, Ethan Tira-Thompson wrote:
> > I sync files to a memory stick fairly frequently. The memory stick uses a basic FAT format, which kills case. What's more, on some platforms (Windows), the drivers make all filenames uppercase, whereas on others (Linux, Mac) all the filenames are lowercase.

What's wrong with using check=relaxed when mounting the FAT partition? Doesn't that help? (See man mount.)

--
Cheers
Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +64 3 9635 377 Fax: +64 3 9635 417
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
Re: open files
Just a note: I think the original poster was *mounting* the Win2K box from Linux. As such, talk of Cygwin's attempts to use backup operators won't help at all - that refers to running the rsync *server* under Windows - it won't have any effect on how SMB passes locks to the SMB client...

--
Cheers
Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +64 3 9635 377 Fax: +64 3 9635 417
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
Re: Smoother bandwidth limiting
On Sat, 2003-11-08 at 03:29, Jason Philbrook wrote:
> I too like --bwlimit. It's not perfect, but it's so easy to adjust backup/restore speed in the backup program. We use rsync primarily for offsite backups, so it's great for planning bandwidth use over limited capacity links. For normal every day use, a slow speed is fine. If there's

Me too. It's useful even over the LAN. We have a spotty old 10/100M hub that got burnt whenever I rsynced to it from a 100M host - turning on --bwlimit was a MUCH easier fix than worrying about rate-limiting. I'd say leave it be. If anyone has issues with it, then THEY can do network rate-limiting :-)

Cheers
Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +64 3 9635 377 Fax: +64 3 9635 417
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
Re: Largest file system being synced
On Thu, Jun 27, 2002 at 01:31:12PM -0700, jw schultz wrote:
> It doesn't work at the filesystem level. It works at the block device level. Every time a block is modified it is queued for transmission to the mirror(s). If the same block

Are there any Linux users out there using the likes of RAID'ed NBD, Coda or InterMezzo for a similar effect? NBD (the network block device) looks interesting: it allows you to mount a remote raw partition - so you can effectively RAID over the network. It supports transaction logs too (which would be necessary in an rsync-style role).

--
Cheers
Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +64 3 9635 377 Fax: +64 3 9635 417
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
Re: Future RSYNC enhancement/improvement suggestions
On Mon, Apr 22, 2002 at 01:01:13PM -0400, David Bolen wrote:
> Unless you have quite large files, in which case there can be a lengthy period (particularly if the file is being accessed across a local network) while checksums are computed where there is no traffic at all. For a while (when we had slow drives and a 10BaseT network) we could take 20-30 minutes for checksum computation on a 500-600MB database file with 4K blocks. And our long distance dialup call was completely idle during that period.

...But then you should have a dialup timeout of 1 hour set? Even firewalls default to around 1 hour (i.e. the default Cisco CBAC TCP timeout is 1 hour).

I think the problem is that you're morally upset that rsync spends so much time sending no network traffic. Quite understandable ;-)

What about separating the tree into subtrees and rsyncing them? That means you go from:

1. dialup connection started [quick]
2. rsync generates checksums (no network traffic) [slow]
3. rsync transmits files

to:

1. dialup connection started [quick]
2. rsync generates subtree checksums (no network traffic) [quick]
3. rsync transmits files
4. rsync generates subtree checksums (no network traffic) [quick]
5. rsync transmits files
...etc

That would send a little bit more network traffic, but will it take up less total dialup time? I don't know... [Guess it's time for a DJB saying: don't speculate - evaluate!]

--
Cheers
Jason Haar
Information Security Manager
Trimble Navigation Ltd.
Phone: +64 3 9635 377 Fax: +64 3 9635 417
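[The subtree idea amounts to one rsync invocation per top-level directory, so each (quick) checksum pass is followed immediately by its transfer and the line never idles for long. A hedged sketch - the remote destination is a placeholder, simulated here with a local copy so the loop can be seen working:

```shell
# Set up a stand-in source tree and a local "remote" destination
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/projects" "$src/home"
touch "$src/projects/a" "$src/home/b"

# One job per subtree; in real use the cp line would be e.g.:
#   rsync -az "$d" remote:"/backup/$(basename "$d")/"
for d in "$src"/*/; do
    cp -R "$d" "$dst/$(basename "$d")"
done
ls "$dst"
```
]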
Here's a Redhat .spec file for 2.5.4
The rsync.spec file within the rsync tar package is still broken, so here's a working rsync.spec file. Simply (as root) copy rsync-2.5.4.tar.gz into /usr/src/redhat/SOURCES then run rpm -ba rsync.spec to produce rsync-2.5.4-1.i386.rpm

--
Cheers
Jason Haar
Information Security Manager
Trimble Navigation Ltd.
Phone: +64 3 9635 377 Fax: +64 3 9635 417

Summary: Program for efficient remote updates of files.
Name: rsync
Version: 2.5.4
Release: 1
Copyright: GPL
Group: Applications/Networking
Source: ftp://samba.anu.edu.au/pub/rsync/rsync-%{version}.tar.gz
URL: http://samba.anu.edu.au/rsync/
Packager: Andrew Tridgell [EMAIL PROTECTED]
BuildRoot: /tmp/rsync

%description
rsync is a replacement for rcp that has many more features.

rsync uses the rsync algorithm which provides a very fast method for
bringing remote files into sync. It does this by sending just the
differences in the files across the link, without requiring that both
sets of files are present at one of the ends of the link beforehand.

A technical report describing the rsync algorithm is included with
this package.

%changelog
* Mon Sep 11 2000 John H Terpstra [EMAIL PROTECTED]
- Changed target paths to be Linux Standards Base compliant

* Mon Jan 25 1999 Stefan Hornburg [EMAIL PROTECTED]
- quoted RPM_OPT_FLAGS for the sake of robustness

* Mon May 18 1998 Andrew Tridgell [EMAIL PROTECTED]
- reworked for auto-building when I release rsync ([EMAIL PROTECTED])

* Sat May 16 1998 John H Terpstra [EMAIL PROTECTED]
- Upgraded to Rsync 2.0.6 - new feature anonymous rsync

* Mon Apr 6 1998 Douglas N. Arnold [EMAIL PROTECTED]
- Upgrade to rsync version 1.7.2.

* Sun Mar 1 1998 Douglas N. Arnold [EMAIL PROTECTED]
- Built 1.6.9-1 based on the 1.6.3-2 spec file of John A. Martin.
  Changes from 1.6.3-2 packaging: added latex and dvips commands to
  create tech_report.ps.

* Mon Aug 25 1997 John A. Martin [EMAIL PROTECTED]
- Built 1.6.3-2 after finding no rsync-1.6.3-1.src.rpm although there
  was an ftp://ftp.redhat.com/pub/contrib/alpha/rsync-1.6.3-1.alpha.rpm
  showing no packager nor signature but giving "Source RPM:
  rsync-1.6.3-1.src.rpm". Changes from 1.6.2-1 packaging: added
  '$RPM_OPT_FLAGS' to make, strip to '%build', removed '%prefix'.

* Thu Apr 10 1997 Michael De La Rue [EMAIL PROTECTED]
- rsync-1.6.2-1 packaged. (This entry by jam to credit Michael for the
  previous package(s).)

%prep
%setup

%build
./configure --prefix=/usr --mandir=/usr/share/man
make CFLAGS="$RPM_OPT_FLAGS"
strip rsync

%install
mkdir -p $RPM_BUILD_ROOT/usr/{bin,share/man/{man1,man5}}
install -m755 rsync $RPM_BUILD_ROOT/usr/bin
install -m644 rsync.1 $RPM_BUILD_ROOT/usr/share/man/man1
install -m644 rsyncd.conf.5 $RPM_BUILD_ROOT/usr/share/man/man5

%clean
rm -rf $RPM_BUILD_ROOT

%files
%attr(-,root,root) /usr/bin/rsync
%attr(-,root,root) /usr/share/man/man1/rsync.1*
%attr(-,root,root) /usr/share/man/man5/rsyncd.conf.5*
%attr(-,root,root) %doc tech_report.tex
%attr(-,root,root) %doc README
%attr(-,root,root) %doc COPYING
Re: Keep one BIG file in sync
On Thu, Feb 21, 2002 at 10:37:13PM +0100, Oliver Krause wrote:
> My problem: I have server A which has a big (500G) database like file. On server B i

Does "database like" mean it'll be in use when the rsync job runs? What about data in memory that's not flushed to disk? [If you're talking M$ Windows - this just won't be possible BTW - ever hear of locking? ;-)]

--
Cheers
Jason Haar
Information Security Manager
Trimble Navigation Ltd.
Phone: +64 3 9635 377 Fax: +64 3 9635 417
Re: Keep one BIG file in sync
On Thu, Feb 21, 2002 at 07:27:20PM -0500, Joseph Annino wrote:
> The big problem is when diffs are usually done, you need to compare every byte in both files to find the deltas. So in a network situation you wouldn't save any effort because everything would have to go across the network anyhow, so why not just copy the file?

?? That's exactly the problem rsync is designed to solve. The rsync client does checksums on its copy of the file, and the rsync server does checksums on its copy - and then they use the network to send only the checksums to each other. Until that happens, all I/O is local...

BTW: if this is an Exchange server being talked about, NEVER, EVER, TOUCH AN EXCHANGE DATABASE WHEN IT'S RUNNING. You *will* crash it. I know: I did :-) [Apparently it's a feature...]

--
Cheers
Jason Haar
Information Security Manager
Trimble Navigation Ltd.
Phone: +64 3 9635 377 Fax: +64 3 9635 417
Re: rsync default handling of permissions
This is a "me too" - we've just been stung by this as well.

    rsync -r file remote::module/

appears to mean that the remote copy of "file" indeed ends up with the perms associated with the umask of the rsync daemon (as it should). However

    rsync -r dir remote::module/

appears to still transmit the permissions of the dir.

In fact, this brings up another issue. What if the rsync server administrator wants to control permissions? What about a new config option, e.g.

    umask = 0770

for instance? That way, irrespective of what options are used by the client, the server controls the permissions - explicitly ensuring the permissions are always set to that value (setting the umask before starting rsync doesn't do the same thing, as this should be settable at the module level). At the moment there seems to be a bit too much reliance on the client to do the right thing.

--
Cheers
Jason Haar
Information Security Manager
Trimble Navigation Ltd.
Phone: +64 3 9635 377 Fax: +64 3 9635 417
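[Later rsync versions address this with the module-level "incoming chmod" setting in rsyncd.conf, which gives the server administrator exactly this kind of control. A hedged sketch - the module name, path and mode string are invented:

```conf
[dropbox]
    path = /var/spool/dropbox
    # applied server-side to every incoming dir (D) and file (F),
    # overriding whatever permissions the client sends
    incoming chmod = Dug=rwx,o-rwx,Fug=rw,o-rwx
```
]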
Prefs for rsync of ssh or stunnel?
We're looking at using rsync over an encrypted link, and are debating the virtues of ssh vs stunnel. I know rsync can hammer a network-layer implementation, so: have others done this, and if so, which is better - rsync over ssh or rsync over stunnel? I'm leaning towards stunnel as it's just a pure SSL transport layer with nothing else to worry about, whereas sshd implies accounts with password/access management, etc.

--
Cheers
Jason Haar
Information Security Manager
Trimble Navigation Ltd.
Phone: +64 3 9635 377 Fax: +64 3 9635 417
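[For the stunnel route, the server side amounts to terminating SSL on a side port and handing the plaintext to the local rsync daemon. A hedged stunnel.conf sketch in the modern section-based format - the port number and cert path are illustrative:

```conf
; server-side stunnel.conf: wrap the local rsync daemon (port 873) in SSL
cert = /etc/stunnel/stunnel.pem

[rsync]
accept = 874
connect = 127.0.0.1:873
```

A matching client-side stunnel (client = yes, accept = 127.0.0.1:873, connect = server:874) then lets an unmodified rsync client simply point at localhost::module.]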
Re: rsync problems with .iso files
On Sun, Apr 08, 2001 at 08:51:40PM +0100, M. Drew Streib wrote:
> I have certainly seen large files that couldn't be "repaired" by rsync and needed to be redownloaded.

Indeed. In this case I wonder if the original was downloaded in ASCII mode instead of binary? That would definitely be a download-from-scratch repair...

--
Cheers
Jason Haar
Unix/Special Projects, Trimble NZ
Phone: +64 3 9635 377 Fax: +64 3 9635 417
Can rsync do this?
I'm currently using rsync to sync our anti-virus pattern files over our Email AntiVirus servers. It works well, but from time to time a mail message is being scanned just as the sync occurs. The AV system (ahem, Qmail-Scanner) notices a "corruption" of the pattern files has occurred, and errors out - which requeues the mail. Nothing wrong with that, but I'd like to get rid of the error. I'm beginning to wonder if these AV pattern files (there are several) are "related" in such a way that if rsync has moved only two into place, the AV system realises its group of AV files is out of whack and calls it corruption...

Can rsync copy the files over to a temp dir, and then move them live as one move? I know I could do this with ssh directly, but the --compare-dest and --partial options make me wonder if rsync can do this itself... Is it possible?

--
Cheers
Jason Haar
Unix/Special Projects, Trimble NZ
Phone: +64 3 9635 377 Fax: +64 3 9635 417
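[One way to get the all-at-once cutover is to rsync into a staging directory and then flip a symlink, which is an atomic operation on POSIX filesystems. A hedged sketch - the paths are placeholders and the remote pull is simulated locally:

```shell
# Staging dir stands in for: rsync -az remote::patterns/ "$base/patterns.new/"
base=$(mktemp -d)
mkdir "$base/patterns.new"
touch "$base/patterns.new/virus.sig"

# Atomic cutover: the scanner always opens $base/patterns, which flips
# from the old tree to the new one in a single rename
ln -sfn patterns.new "$base/patterns"
readlink "$base/patterns"    # -> patterns.new
```

In real use you'd keep the previous tree around (patterns.old) so a scan that started mid-swap can still finish against a complete set of files.]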
Re: issues with NT port?
New development: it does affect the NT client as well as the Linux rsync client. It just looks like pure luck that the NT client didn't show the same symptoms sooner... So, to rewrite my original message...

We're trying to use rsync-2.4.5 (client and server) to replicate data over a busy 128Kb intercontinental Frame-Relay link, from both NT and Linux clients to an NT rsync server. It appears to work for a few connections - e.g.

    rsync ntserver::
    rsync -a dir ntserver::share/path

...but then on the 3rd-5th go, the NT rsync server will crash. This same NT rsync server is being 100% happily used rsync'ing to other NT clients over US-to-US T1 links, so this leads me to believe there is some NT IP-stack issue that tickles a bug in rsync only when there are large latencies (that's the only real difference I can come up with). Sound possible? [I say it's an rsync bug as obviously other NT network apps work fine over this link - maybe I should say "NT-specific rsync bug" :-)]

--
Cheers
Jason Haar
Unix/Network Specialist, Trimble NZ
Phone: +64 3 9635 377 Fax: +64 3 9635 417