Question about rsync -uav dir1/. dir2/.: possible to link?
I noticed in looking at download dirs for a project that another mirror had "crept in" for usage (different mirrors are stored under mirror-URL names). To copy over the diffs, normally I'd do:

rsync -uav dir1/. dir2/.

(where dir1 = the new mirror that I'd switched to by accident, and dir2 = the original dir). The files were smallish so I just copied them, BUT I was wondering if there was an option similar to using 'cp' for a dir copy -- instead of "cp -a dir1 dir2", using "cp -al dir1 dir2" to just hard-link the files from dir1 over into dir2 (both are on the same file system).

I looked at (and tried) --link-dest=DIR (hardlink to files in DIR when unchanged), but either I had the syntax wrong or I didn't understand it, as it didn't seem to do what I wanted (copying the new files in dir1 into the orig dir).

Does rsync have an option to just "copy" over the new files via a hardlink? Thanks!

--
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html
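For what it's worth, here is a sketch of the two approaches being compared (dir1/dir2 are the placeholders from the post; same-filesystem assumed, and the --link-dest trick is my reading of the man page, not a documented "hardlink copy" mode):

```shell
# The cp way: hard-link everything from dir1 into dir2; -n keeps
# existing files in dir2 from being replaced.
cp -aln dir1/. dir2/.

# A possible rsync equivalent: point --link-dest at the *source*, so
# files judged "unchanged" relative to it (i.e. all of them) get
# hard-linked into the destination instead of copied. Note that a
# relative --link-dest path is taken relative to the destination,
# so an absolute path is safest:
rsync -auv --link-dest="$PWD/dir1" dir1/ dir2/
```

With -u, anything already newer in dir2 is still left alone.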
Re: How to manage root<-->root rsync keeping permissions?
On 2021/08/07 08:45, Chris Green via rsync wrote:
> Because cron/anacron isn't perfect and the machine being backed up may not be turned on all the time, so the time that it tries to backup is most definitely not fixed accurately!
>
> My *backups* of important data are incremental backups done once a day for every machine. I also do hourly incremental backups on my desktop machine but that is more for protecting myself against myself than for protecting against intruders or hardware failure.

Yeah, that's why I had the 'previous versions' thing working. I hope to get that working again at some point, a bit more efficiently. I know I need the protection against myself too!

> The original point of this thread is about something closer to synchronising my (small, Raspberry Pi) DNS server so that if it fails I can get a DNS server back up and running as quickly as possible.

Get a few small computers like your pi, and duplicate them. Swap a new one in if there's a problem. Or boot from a DVD -- installs everything on boot, and then download variable info from your backup server using knock-knock...*

> so not only does someone with access to my desktop/laptop need to know the rsyncd username and password but they also cannot delete my existing backups. It runs incremental backups so nothing is ever overwritten either.

BTW, incremental backups aren't really the same as 'update' backups; they keep track of the state of the file system (including files no longer there), so you can restore your desktop to a specific day before some unwanted update was introduced and kept by an update-only backup system.

> Yes, exactly, or more to the point (in my case anyway) I can restore a specific file to a few hours ago after I've scrambled it in some disastrous way! :-)

You too eh, what power we have! ;-)

* A pretty cool way to get your laptop "let in" to the backup server:
Have a random sequence of port open attempts. Choose a capital port, a small... oh wait, that's letters... anyway, have a prog that detects the probes. If it gets the right sequence of 10, 20, 60 probes (whatever), then it opens up the ssh->backup port for 5 minutes or until your laptop connects (whichever is shorter). If you didn't get in within 5 minutes, prolly need a faster computer.

Be sure to make your OPIE check a range of unused passwords in case you get out of sync. Have the probe-pattern be a 1-time use pattern and generate a few hundred of them for each computer in advance. Now you have one-time use passwords just to turn on your secure backup. If someone breaks that, close up shop, move to Baja Calif. and retire!
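The one-time-pattern bookkeeping described above can be sketched in a few lines of shell. This is purely hypothetical glue (the actual probe capture -- iptables log scraping, knockd, etc. -- is out of scope); `check_knock` and the pattern-file format are names invented here:

```shell
# PATTERN_FILE holds one pregenerated port sequence per line,
# e.g. "7301 7922 7144". A sequence is valid exactly once.
check_knock() {  # usage: check_knock PATTERN_FILE "port port port..."
  local file=$1 probes=$2 line n=0 hit=0
  while IFS= read -r line; do
    n=$((n + 1))
    if [ "$line" = "$probes" ]; then hit=$n; break; fi
  done < "$file"
  [ "$hit" -gt 0 ] || return 1   # no pattern matched: stay closed
  sed -i "${hit}d" "$file"       # burn the pattern: one-time use
  return 0                       # caller opens the ssh port for 5 minutes
}
```

A real deployment would also rate-limit callers and sync the pattern files to each laptop out of band.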
Re: How to manage root<-->root rsync keeping permissions?
On 2021/08/07 03:44, Chris Green via rsync wrote:
> L A Walsh via rsync wrote:
>> It seems to me, a safer bet would be to generate an ssh-cert that allows a passwdless login from your sys to the remote.
> The trouble with that is that it leaves a big security hole.

If you only do backups at 1am (or whenever), why would your backup machine enable ssh outside of the range 12:59 - 01:01?

> If (for example) I leave my laptop turned on somewhere, or someone wanders into my study where my desktop machine is, they have instant, passwordless access to the remote backup machine.

If your desktop machine is that open to casual wanderers, perhaps you should enable a passwd-locked screen saver activating after a few minutes? I keep my home computer unlocked all the time as well, but I don't have walk-through visitors that might mess with it.

My desktop computer essentially has root access FROM the Windows desktop (my normal user is a domain admin, and can alter permissions or make changes to any file on my server). In my case I regard my desktop + server as a "split system", with the Winbox being my desktop, and the Linbox being the "backend" of my computer. The Winbox doesn't normally have direct access to the network, and all of my "content" files/docs/progs reside on my Linbox. The Linbox handles backups, network access, a proxy for the Winbox, incoming + outgoing email (dovecot + sendmail), etc. The Linbox does daily security scans and computer maintenance tasks that I don't trust to letting Windows do, as the Linbox provides better feedback. Additionally my Linbox has direct access to any file on my desktop as well, though indirectly, in that my Linbox acts as a samba domain server for the desktop (thus providing single-signon for my home machines based on the Linbox).
It's slightly moot, in my case, to worry about someone on my desktop being able to access content on my Linbox, since all of the "content" files (docs dir, music, video -- all personal files on the desktop) actually reside on my server, where they are backed up daily via xfs_backup. They are connected via a dedicated, direct 10Gb ethernet that gives 200-400MB/s (M=2**20 bytes) nominal speed, up to 600MB/s.

> I try very hard to make my backups secure from attack so that if my desktop or laptop is compromised somehow the (remote) backups are still secure.
---
Excellent! In my case, my laptop/desktop (used to be a laptop) is thoroughly entwined with the server such that one has trouble functioning without the other. In your case, though, I was thinking of a backup process that would only be used when my laptop was on a secure network (like @ home). If there is risk to your laptop while @ home, hopefully it has a short timeout that bounces it to the screen saver that requires a password to unlock?

> The backup system that runs the rsync daemon has its rsync configured with 'refuse options = delete'
---
Ahh... I thought you were actually trying to keep them in sync. Maybe you might think about using an actual backup prog like tar. In my case, the users/groups are the same. Tar handles ext attrs and acls and can keep track of backing up files that have actually changed rather than relying on time/date stamps.

> so not only does someone with access to my desktop/laptop need to know the rsyncd username and password but they also cannot delete my existing backups. It runs incremental backups so nothing is ever overwritten either.

BTW, incremental backups aren't really the same as 'update' backups; they keep track of the state of the file system (including files no longer there), so you can restore your desktop to a specific day before some unwanted update was introduced and kept by an update-only backup system. For example:
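The tar suggestion above looks roughly like this (GNU tar assumed; the paths and the .snar snapshot-file name are illustrative, not from the original posts):

```shell
# --listed-incremental keeps a snapshot file recording what was seen,
# so the next run archives only files that actually changed, and
# --xattrs/--acls carry the extended attributes and ACLs that a plain
# time/date-stamp copy can lose.
tar --create --xattrs --acls \
    --listed-incremental=/var/backup/home.snar \
    --file=/var/backup/home-level0.tar /home

# Later runs against the same .snar file produce incremental archives:
tar --create --xattrs --acls \
    --listed-incremental=/var/backup/home.snar \
    --file=/var/backup/home-level1.tar /home
```

Restoring walks the archives in order (level 0 first), which matches the dump-style restore scheme shown below.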
My home partition:

home-210501-0-0438.dump
home-210512-1-0431.dump
home-210523-1-0430.dump
home-210601-0-0437.dump
home-210603-2-0431.dump
home-210612-1-0433.dump
...
home-210729-6-0430.dump
home-210730-9-0430.dump
home-210731-8-0430.dump
home-210801-0-0438.dump
home-210803-2-0430.dump
home-210804-5-0430.dump
home-210805-4-0430.dump
home-210806-7-0430.dump
home-210807-6-0430.dump

can be restored to any of the dates with a script:

Display_Only=1 full_restore home restore 210716
restore home-210701-0-0442.dump to /home/cache/restore
restore home-210712-1-0430.dump to /home/cache/restore
restore home-210714-2-0430.dump to /home/cache/restore
restore home-210716-4-0430.dump to /home/cache/restore

For several months I provided a few back-weeks of 'Restore previous versions' that did checkpoints 4x/day. Constructed it using rsync, but it really was too much work for too little feature.

Anyway, I'm aware of various security considerations, and it seems like the best single thing would be a fast-timeout screen saver that would require a password to stop (in addition to the root-ssh login)...

Hope
Re: How to manage root<-->root rsync keeping permissions?
On 2021/08/03 07:09, Chris Green via rsync wrote:
> I already have an rsync daemon server running elsewhere, I can add this requirement to that I think. Thank you.

It seems to me a safer bet would be to generate an ssh-cert that allows a passwdless login from your sys to the remote. Then "export RSYNC_RSH=ssh" on your source before running rsync (as root). I don't use an rsyncd on the remote.

Try it in some sub-dir first. Don't cross fs boundaries, so for example I use flags (for xfs->xfs) like:

rsync -auvxHAXOW --del /usr/local/fonts/ remotesys:/usr/local/fonts/

Pathnames are finicky. While this pair works:

aa/dir/ (->) bb/dir/

and I think this one does:

aa/dir bb/

there are more that aren't reliable but may work occasionally (like work the 1st time, but not the 2nd...). Some examples:

aa/dir/ bb/dir
aa/dir/. bb/dir/.
aa/dir bb
aa/dir/ bb/

Then run rsync as root to the remote as normal. Passwordless ssh logins are used where remote root and remote-passworded logins are forbidden, since with a strong key, there is no password to crack. Since you may not want remote login directly to root, you might prohibit use of passwords for root (forcing use of a secure key).

There can be many caveats, so try on smaller, backed-up fs's first... If you have room, transfer to a tmpdir then move into place. Good luck...
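One way to set up the passwordless key suggested above, sketched with illustrative names (the key filename, the source address, and the restriction options are my additions, not from the post):

```shell
# Generate a key used only for backups, with no passphrase:
ssh-keygen -t ed25519 -N '' -f "$HOME/.ssh/backup_ed25519" -C 'rsync backup key'

# On the remote, restrict what the key may do in ~root/.ssh/authorized_keys:
# pin the source host and forbid the usual extras, so a stolen key is
# less useful, e.g.:
#   from="192.168.1.10",no-agent-forwarding,no-port-forwarding,no-pty ssh-ed25519 AAAA... rsync backup key

# Then point rsync at it on the source:
export RSYNC_RSH="ssh -i $HOME/.ssh/backup_ed25519"
rsync -auvxHAXOW --del /usr/local/fonts/ remotesys:/usr/local/fonts/
```

This addresses part of the "big security hole" concern: the key can't be used for an interactive login or tunneling even if the laptop walks away.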
hardlinking missing files from src to a dest: didn't work the way I thought it would.
Have a directory with a bunch of rpms in it, mostly x86_64. Have another directory with a bunch, mostly 'noarch'. Some of the noarch files are already in the x86_64 dir and I don't want to overwrite them. They are on the same physical disk, so really, I just want the new 'noarch' files hardlinked into the destination. Sitting in the noarch dir, I tried:

rsync -auv --ignore-existing --link-dest=/tumbleweed/. . /tumbleweed/.

I'm not "too" surprised since technically I asked for it to synchronize them, then link them into the same dir, but I thought it would at least say something or create the link -- neither happened. I really didn't want to copy them -- I'd really prefer the link, so how do I have it only create a hard link into the target DIR for the source files that don't already exist in the target?

I know I can do it with a shell script, but I thought rsync might be faster... then again, if I count figuring out how to do it... not so sure. How can I get rsync to do this? Thanks...
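Two ways this can be done, as a sketch (run from the noarch directory; /tumbleweed as the destination, per the post):

```shell
# 1) rsync: point --link-dest at the *source* dir. Unchanged-vs-link-dest
#    files (i.e. all of them) get hard-linked into the destination rather
#    than copied. Use an absolute path: a relative --link-dest is taken
#    relative to the destination, which may be why the attempt above
#    quietly did nothing useful.
rsync -auv --ignore-existing --link-dest="$PWD" ./ /tumbleweed/

# 2) plain shell: ln without -f refuses to overwrite, so files already
#    present in the target are left alone (the "File exists" errors
#    are harmless here).
for f in *.rpm; do ln "$f" /tumbleweed/ 2>/dev/null || true; done
```

The --link-dest reading is my interpretation of the man page, not a documented "link-only copy" mode, so verify on a scratch dir first.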
Re: [Bug 13645] New: Improve efficiency when resuming transfer of large files
If you are doing a local<->local transfer, you are wasting time with checksums. You'll get faster performance with "--whole-file". Why do you stop it at night when you could 'unlimit' the transfer speed? Seems like when you aren't there would be the best time to copy everything. Doing checksums will cause a noticeable impact on local-file transfers.

On 10/5/2018 10:34 AM, just subscribed for rsync-qa from bugzilla via rsync wrote:
> https://bugzilla.samba.org/show_bug.cgi?id=13645
> When transferring large files over a slow network, ... The command used is:
> rsync -av --inplace --bwlimit=400 hostname::module /dest
> When restarting the transfer, a lot of time is "wasted" while first the local system is reading the partially transferred file and sends the checksums to the remote, ...
> Of course these optimizations (at least #2) may actually decrease performance when the transfer is local (not over slow network) and the disk read rate is negatively affected by reading at two different places in parallel. So #2 should only be attempted when the transfer is over a network.
---
Or might decrease performance on a fast network. Not sure what you mean by 'slow' -- 10Mb? 100Mb? Not sure w/o measuring whether it is faster or slower to do checksums at those speeds, but I know at 1000Mb and 10Gb, checksums are prohibitively expensive.

NOTE: you also might look at the protocol you use to do network transfers. I.e. use rsync over a locally mounted disk to a locally mounted network share, and make the network share a samba one. That way you will get parallelism automatically -- the file-transfer cpu-time will happen inside of samba, while the local file gathering will happen in rsync. I regularly got ~119MB R/W over 1000Mb ethernet.

BTW, any place I use a power-of-2 unit like 'B' (Byte), I use the power-of-two base (1024) prefix, but if I use a singular unit like 'b' (bit), then I use decimal prefixes. Doing otherwise makes things hard to calculate and can introduce calculation inaccuracies.
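The --whole-file advice above, as commands (paths illustrative):

```shell
# For local copies the delta algorithm only adds CPU, since both sides
# read the whole file anyway; --whole-file skips it. (Recent rsync
# already implies -W when both paths are local, but being explicit
# costs nothing.)
rsync -av --whole-file /src/dir/ /dest/dir/

# The opposite extreme: --checksum forces a full read-and-hash of every
# file on BOTH sides before deciding what to transfer -- this is the
# expensive mode being warned about.
rsync -av --checksum /src/dir/ /dest/dir/
```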
[Bug 5124] Parallelize the rsync run using multiple threads and/or connections
On 10/11/2018 10:51 AM, just subscribed for rsync-qa from bugzilla via rsync wrote:
> https://bugzilla.samba.org/show_bug.cgi?id=5124
> --- Comment #7 from Luiz Angelo Daros de Luca ---
> I also vote for this feature. Using multiple connections, rsync can use multiple internet connections at the same time.

FWIW, one of the big changes that went into SMB 3 for Win10 was adding the ability to do file transfers using more than one connection. CIFS (and Windows) have traditionally been limited to 1 connection that everything was multiplexed over. However, CIFS writes/reads from a client to a linux server can easily get over 600MB/s on writes, and ~275MB/s on reads. The reason it doesn't get more is that the cpus start maxing out with processing interrupts and packets.

I don't see rsync maxing out in cpu even doing a local->local copy, but I haven't done benchmarks on the newer versions of rsync, either. That said, I don't think the slowdown is such that it would greatly benefit from multiple connections. My local disk can do reads/writes at around 1GB/s (for constant read/write). I'd be more convinced that parallel connections would benefit if there was any benchmarking done to find out where slowdowns are happening, but that's just my 2 cents. :-)
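Pending anything built in, the usual user-level approximation of the requested feature is to run one rsync per top-level directory, a few at a time (a sketch; paths and the -P4 parallelism are illustrative):

```shell
# One rsync per subdirectory of /src, up to four running at once.
# This parallelizes both the filesystem scan and the transfers, at the
# cost of losing a single global view (deletions, hardlinks across
# subtrees, etc. are no longer coordinated).
cd /src
ls -d */ | xargs -P4 -I{} rsync -a {} /backup/{}
```

Whether this wins depends entirely on where the bottleneck actually is, which is the benchmarking point made above.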
Re: Why are system-namespaces not copied?
On 9/18/2018 7:44 AM, Frank Steiner via rsync wrote:
> Hi, the man page states:
> For systems that support extended-attribute namespaces, a copy being done by a super-user copies all namespaces except system.*.
> That's the reason why NFSv4 ACLs are not copied, as they are in the system.nfs4_acl (or system.nfs4acl) namespace. Why are those namespaces excluded? Not being able to rsync ACLs from NFSv4 is a major drawback now that NFSv4 is becoming standard over v3 and ACLs are getting more widely used.

Because they are storing them in the security (sometimes also called system) section and not the 'root' section (at least on XFS). The linux kernel disallows reading ext-attrs with the security label. I don't particularly like it, for the same reasons you don't. It takes patching a linux kernel to enable them being copied. I've done it, but more as proof of theory.

The problem comes in when you restore an attribute to a secure namespace. Are those attrs really secure when you take them "off the system"? If not, you could modify them; then, if they are copied to a target, you could use the modified attrs to give yourself root capabilities. So... that has to be solved before it can be safely allowed. Nevertheless, it's still a potential hole to allow copying of such security attrs. Unless you want to change the way security attrs are stored to use 4k-long signing strings to ensure non-tampering, I don't see how you can do it... and doing that would be adding 4k to each attribute... Ugh!
Re: [Bug 13582] New: rsync filters containing multiple adjacent slashes aren't reduced to just one slash before matching
On 8/19/2018 10:11 PM, just subscribed for rsync-qa from bugzilla via rsync wrote:
> The following test script shows that attempting to exclude the file /sourcedir/a/file2 by using //sourcedir//a//file2 in the excluded-files list will silently not exclude it, because all those adjacent slashes are not being reduced to just one /.

This is a bad example, because the leading '//' cannot be removed without potentially changing the file's location. It's in POSIX that exactly 2 slashes should not be reduced to 1 if they are at the beginning of the path. The ones in the middle -- yes, but even if they were fixed, the two in front might not match a single one -- because some OS's use // to introduce a network-located system (in cygwin on windows, //remotesystem/ will automatically try remotesystem).

Can your exclude use a regular expression? Can you say '/?sourcedir/*a/*file2' in the exclude patterns? (assuming a POSIX RE, not a file wildcard).
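The normalization a matcher would have to apply, honoring the POSIX leading-slash rule described above, can be sketched as a small shell function (`squeeze_slashes` is a name invented here, not anything rsync provides):

```shell
# Collapse repeated slashes, EXCEPT that a path beginning with exactly
# two slashes keeps both: POSIX says "//" at the start is
# implementation-defined (e.g. cygwin network paths) and must not be
# collapsed, while three or more leading slashes collapse to one.
squeeze_slashes() {
  local p=$1 prefix=''
  case $p in
    //[!/]*|//) prefix=/ ;;   # exactly two leading slashes: remember one extra
  esac
  p=$(printf '%s' "$p" | tr -s /)   # squeeze every run of slashes to one
  printf '%s\n' "$prefix$p"
}
```

So `//sourcedir//a//file2` becomes `//sourcedir/a/file2`: the interior doubles go away, the significant leading pair survives.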
Re: rsync xattr support doesn't play nice with selinux
On 8/22/2018 2:09 PM, Shaya Potter via rsync wrote:
> If one is rsyncing a machine without selinux (therefore no security.selinux xattr on each file) to a system that has selinux (even in permissive mode), rsync doesn't play nice. Basically selinux seems to make it appear that every file has a security.selinux xattr (I think this is virtual if there's no physical attribute, as the attribute disappears if one disables selinux).
---
Normally you can't see root or security attributes as a normal user on a non-security-aware OS.

> rsync sees that on the temp file it created there is an xattr which is not on the source file and therefore tries to remove it, ...

Ick. I thought there was going to be a list of attrs for utils that copy attrs to ignore? I guess you don't have an rsync that does that (if it has been done yet). SELinux has to label things when they get written to disk -- it's a mandatory action that a program can only "ignore", but not stop. FWIW many tests in perl that check unix mode bits fail on modern disks with ACLs. Of course they don't want to fix perl, as it might break some older program.

> It would be nice if there was a way to tell rsync to ignore some xattrs that might be automatically created on the destination while still allowing xattr syncing.
---
I may be mistaken, but I thought it had been discussed and planned at one point (?). Sigh.
Re: [Bug 12732] New: hard links can cause rsync to block or to silently skip files
just subscribed for rsync-qa from bugzilla via rsync wrote:
> Hard link handling seems to be broken when using "rsync -aH --compare-dest". I found two possible scenarios:
> 1) rsync completes without error message and exit code 0, although some files are missing from the backup
> 2) rsync blocks and must be interrupted/killed
> Further information: this problem exists at least for rsync versions 3.1.0, 3.1.1, and 3.1.2 for different Linux varieties using various file systems:
> https://lists.samba.org/archive/rsync/2015-April/030092.html
---
I ran rsync 3.1.1 for over a year to help generate snapshots. I can't say if it copied all the files or not, as it was backing up a large "/home" partition, BUT it never hung. It did take 45min to a few hours to do the compare, but it was comparing a large amount of data (>750G) w/a snapshot (another 750G) to dump diffs to a third, and my /home partition has a *very* large number of hard links. So I know that hardlinks are handled 'fine' when comparing 'xfs' to 'xfs'.

> Latest test on openSUSE 42.2 (x86_64) on ext4 + on nfs with

Ah... I'd suspect nfs... Why are you using nfs? rsync was designed to compare against local file systems. You should try running rsync directly from the nfs-host machine to the client, bypassing NFS. I.e. -- you need to bypass NFS, since local->local with hardlinks works.

Just checked my /home partition: find shows 9295431 names (of any type), but du (using du --inodes) shows 4407458 inodes. That means over half of the filenames are hard linked. While my home partition takes up 60% more space now, even cutting those counts in half would still be a large number of hard links -- and rsync didn't crash doing an rsync of the partition to an empty one, while first comparing to a previous snapshot (the empty partition ended up with the differences between the main partition & the snapshot). I'd remove NFS...
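The name-vs-inode comparison mentioned above can be reproduced like this (GNU find assumed; run it against your own tree):

```shell
# Count directory entries (names) versus distinct inodes under /home.
# If names > inodes, hard links are present; the gap is roughly the
# number of extra names the linked files carry.
names=$(find /home -printf x | wc -c)
inodes=$(find /home -printf '%i\n' | sort -u | wc -l)
echo "$names names, $inodes inodes"
```

(`du --inodes`, used in the post, counts per-directory inode totals; the find pipeline above gives the same kind of answer without double-counting links.)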
Re: [Bug 7120] Variable bandwidth limit .. bwlimit
samba-b...@samba.org wrote:
--snipp--
> It seems that pv is waiting for data from rsync, and rsync is waiting for data too (stuck in select()) and not closing the input to pv. So it's a deadlock. Same happens when you substitute pv with something else (like dd). It seems that those commands just don't behave like rsync expects them to.
---
Would a use of "stdbuf" (coreutils) help? It allows one to change the input and/or output buffering of tools from fully buffered to line-buffered to unbuffered, for tools normally connected via a pipe.

> Haven't found a workaround short of killing everything:
> export RSYNC_RSH="sh -c 'pv -qL10k | ssh \"\$@\" | (pv -qL11k; kill \$\$)' ssh"
> kill is not a solution I'd be happy with. But I haven't found another.
---
Maybe a suspend/continue would be more gentle than killing things?
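What the stdbuf suggestion looks like in practice (coreutils stdbuf; the sed stage stands in for pv or any filter in the pipeline):

```shell
# stdbuf overrides a tool's default stdio buffering when its stdout is
# a pipe: -o0 = unbuffered, -oL = line-buffered. Without it, a filter
# in the middle of a pipeline sits on a full 4-8K block before the
# downstream reader sees anything -- which is the kind of stall that
# can turn into the deadlock described above.
printf 'a\nb\n' | stdbuf -oL sed 's/^/got: /'
```

stdbuf works via an LD_PRELOAD shim, so it can't help programs that do their own buffering or are statically linked -- worth checking before relying on it here.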
Re: [PATCH v2 1/2] xattrs: Skip security.evm extended attribute
Stefan Berger wrote:
> The security.evm extended attribute is fully owned by the Linux kernel and cannot be directly written from userspace. Therefore, we can always skip it.
---
(see below "...")... Please put this on a switch or option. The security.evm field seems only special on Mandatory Access systems (from https://lwn.net/Articles/449719/), and seems like it should be copyable by root on non-Mandatory-Access systems. At the very least, a "dd" from one file system to another would copy it, so the security doesn't rely on it being copied WITH the rest of its attrs, but on the field being a check on those fields not being modified.

Reading further, a better solution might be to provide a list of extended attributes to ***exclude*** from copying, making your patch the "general case", as well as an option to ONLY copy the xattrs that match an expression or list. I'm against hardcoding specific cases into rsync, as they won't apply to all systems rsync runs on, as well as against hard-coding current trends in integrity measurement (which may be subject to change).
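The proposed exclude-list interface amounts to a small matching step before each xattr copy. A sketch of that logic (the `xattr_wanted` helper and its calling convention are invented here; nothing like this exists in rsync):

```shell
# Decide whether an xattr name should be copied, given shell-glob
# exclude patterns. Returns 0 (copy) or 1 (skip).
xattr_wanted() {  # usage: xattr_wanted NAME PATTERN...
  local name=$1 pat
  shift
  for pat in "$@"; do
    case $name in
      $pat) return 1 ;;   # matches an exclude pattern: skip it
    esac
  done
  return 0
}
```

With this shape, the hardcoded security.evm skip becomes just a default entry in the exclude list (e.g. `'security.evm'` or `'security.*'`) that a switch could override.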
Switch to AIO instead of IO/parallize fs scan...
I was thinking about [Bug 3099]... in that while it's easy to get a 2-3x speed increase for the average app using parallel scans, the upper and lower bounds on that speed increase could be <1x in a worst case (very unlikely, but with primitive or constrained (in a container or VM) HW, the chances are raised).

Better, with less std. deviation, I believe, might be to move the I/O calls to all being AIO -- it seems that would allow them to be completed at the OS's discretion, which, in the ideal case, would mean minimal wasted disk-head movement. The advantage of AIO being that the OS can coalesce calls more at its leisure, vs. an upper-level app algorithm that might divide up the work fairly, but not know how much each underlying request costs in terms of wasted head movement.

Is that already in there, in the works, or do you think it would avoid worst-case division of file scanning based on FS-hierarchical structure vs. underlying disk layout?
Re: unnecessary /proc requirement in 3.1.1
Fyodorov Bga Alexander wrote:
> Hi. Thanks for good program. Whole /proc is serious security risk for me.

Why? You could run rsync in a separate namespace (container) and only mount /proc in the new namespace -- other users wouldn't see it. There's a bunch of 'lxc-*' tools:

URL: http://linuxcontainers.org/
Summary: Userspace tools for the Linux kernel containers
Description: It provides commands to create and manage containers. It contains a full featured container with the isolation/virtualization of the pids, the ipc, the utsname, the mount points, /proc, /sys, the network and it takes into account the control groups. It is very light, flexible, and provides a set of tools around the container like the monitoring with asynchronous events notification, or the freeze of the container. This package is useful to create Virtual Private Server, or to run isolated applications like bash or sshd.
Re: meta bug: info on why xfer seems no longer available? (3.1.0)
Kevin Korb wrote:
> You want less -v and more --itemize-changes. --verbose is utterly useless without --itemize-changes.

Just remembered to check this group... ;-) Thanks, I tried it... most were ".f...a.". Now I'm wondering about this... hmmm, I dunno if this would be a bug or not (in xfs{d/r})...

The old dir had: [u::rwx,g::rwx,o::r-x] w/user=media and group=media
The new dir shows: [u::rwx,u:media:rwx,g::rwx,g:media:rwx,m::rwx,o::r-x] (also w/user=media and group=media)...

Most likely the dirs had default acls placed on them at some point, but I'm not sure that makes sense, as the new ACL would give the same access to the file, so why bother with it? Weird.
meta bug: info on why xfer seems no longer available? (3.1.0)
I just copied a file system using xfsdump|xfsrestore. At least 1 new directory had been created on the source during the xfer (took 9+ hours -- 7TB), so I wanted to verify I hadn't missed anything. Using rsync:

rsync --version
rsync version 3.1.0 protocol version 31
Capabilities: 64-bit files, 64-bit inums, 64-bit timestamps, 64-bit long ints, socketpairs, hardlinks, symlinks, IPv6, batchfiles, inplace, append, ACLs, xattrs, iconv, symtimes, prealloc, SLP

I did:

rsync -auvnHAX /oDATA/. /DATA/.

and got back a rather large list of 4 directories and 13708 files!... So I wanted to see WHY it wanted to update them, as I thought the full xfsdump/restore should have resulted in an exact copy. Tried:

rsync -auvvnHAX /oDATA/. /DATA/.

which the manpage said would list why... it didn't. I got an 84276-line summary that showed all of the files, with "filename is uptodate" or just "filename" (and nothing else) on lines where it wasn't uptodate, then:

total: matches=0 hash_hits=0 false_alarms=0 data=0
sent 4,482,129 bytes received 8,653,370 bytes 1,142,217.30 bytes/sec
total size is 8,329,967,491,093 speedup is 634,156.91 (DRY RUN)

I tried -vvv, but didn't see anything in the 596694-line output file that told reasons... Lots of:

[sender] makefile(xxcxx,*,2)
[sender] pushing local filters..(by dir?)
recv_filename
received 5 names
recv_file_list done
[receiver] receiving flist for dir 14

but still no reasons (I could be missing them in all the output, but I don't see other types of lines...). Is there some other option now to determine the reason why a file was xfered?
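For the record, the per-file "why" lives in --itemize-changes rather than in extra -v levels. A sketch of the same dry-run with itemizing (same paths as above):

```shell
# -i / --itemize-changes prints an 11-character change string per file.
# Reading it: position 1 is the update type ('>' transfer, '.' no
# transfer), position 2 the file type, and the rest flag WHAT differs:
# c(hecksum), s(ize), t(ime), p(erms), o(wner), g(roup), u, a(cl), x(attr).
rsync -aunHAX --itemize-changes /oDATA/. /DATA/.
# e.g. ">f..t......" = file transferred because only the mtime differs;
# ".d...p....." = directory kept, but its permissions differ.
```

With -n this is a pure report: each changed attribute shows up as a letter instead of a bare filename.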
Re: was code added to detect or die on sighup recently?
Kevin Korb wrote:
> Any non-daemon software is supposed to exit (and cleanup) when hit with a hangup signal. Pretty sure it isn't new to rsync.

But I didn't send it a hangup signal. By calling snaprun in a backgrounded subshell ( (snaprun "$@" &>/dev/null &) ), it spins it off into a separate process group that doesn't get signaled unless I specifically address it. I did change the invoking script to force 'output_wanted' to true so it would do a tail -f of the log file to the console so I could monitor it more easily, but I wouldn't have thought that would have added any hangup signal. Strange.

FWIW, it is still creating all the empty dirs:

Oldest Snapshot = Home-2014.08.03-02.05.53.
Rsync with 9 excludes from config file... rsync took 61m, 9s
Empty-directory removal took 0m, 59s
Create vol. Home-2014.08.03-02.05.53, size 4.5G
Copying diffs to dated static snap...Time: 0m, 6s.

rsync took 61m, 9s to gen 4.5G of diffs to the diff volume, then ran through and removed all empty dirs -- took almost another full minute! Then the copy of the contents to the target dir took 6s (am sure it took longer than 6s, but copy exited after that long).
was code added to detect or die on sighup recently?
I have a script that normally runs my snapshot that I haven't used for the past several days because something seemed to be going wrong and I wanted to run things manually. But running the script twice today, I got:

snaphome
Found 15 mounted dated, snaps or snap archives
»[snapper#2120]base_mp=/home
1 snap dated today. (Use: '--force=force_create_snap' to force another snap.)
Checking other snaps for needed attention...
Oldest Snapshot = Home-2014.08.03-02.05.53.
Rsync with 9 excludes from config file...CODE(0x7f6098) at /home/perl/perl-5.16.3/lib/site/Carp.pm line 169.
rsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(632) [sender=3.1.1]
rsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at io.c(513) [Receiver=3.1.1]

But when I run the snapper script manually, I don't get such an error... Odd... The snaphome script is designed to run the snapper script (which calls rsync), send its log to a file, and allow it to be automatically monitored:

#!/bin/bash
: ${HOME:=/home/law}
declare -i output_wanted=1
export ld=$HOME/var/log
PATH=$HOME/bin:$PATH
export PERL5OPT='-Mutf8 -CSA -I/home/law/bin/lib'

function snaprun () {
  cd $ld
  if [[ -e snap.log ]]; then
    mv snap.log snap.log-$(ShortDateTime)
    #7z a snap.log.7z snap.log-[0-9]*.[
  fi
  # snapper.pl, ShortDateTime, and mksnap_links are my own tools
  declare cmd="nice -19 ionice -c3 snapper.pl -X x /home"
  echo > $ld/snap.log
  $cmd "$@" >> $ld/snap.log 2>/dev/null
  out=$(mksnap_links 2>&1)
}

# run detached, in its own process group
(snaprun "$@" &>/dev/null &)

if ((output_wanted)); then
  tail -f $ld/snap.log
fi
cygwin version of rsync has lost its ext-attr copy ability
I don't know the reasons why, but for some reason the cygwin version of rsync has lost its extattr copy ability, as shown by this:

  rsync --version
  rsync version 3.1.0 protocol version 31
  Copyright (C) 1996-2013 by Andrew Tridgell, Wayne Davison, and others.
  Web site: http://rsync.samba.org/
  Capabilities:
      64-bit files, 64-bit inums, 64-bit timestamps, 64-bit long ints,
      no socketpairs, hardlinks, symlinks, IPv6, batchfiles, inplace,
      append, ACLs, no xattrs, iconv, symtimes, prealloc
                    ^^^^^^^^^

Does anyone know why they'd be turned off now when they've worked for many years? I asked on the cygwin list, but no one replied when I asked why and whether it could be respun with them working again. Weird -- that's why I was wondering if anyone here knew anything.
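Since the capabilities line literally says "no xattrs" when support is compiled out, a script can at least refuse to run against a crippled build. A small sketch -- the helper name is mine, and in real use you'd pipe rsync --version into it:

```shell
#!/bin/sh
# Sketch: scan capabilities text for "no <feature>", the wording
# 'rsync --version' uses when a capability was compiled out.
has_capability() {    # reads the version text on stdin
    ! grep -qw "no $1"
}

# Sample text modeled on the output above:
caps='Capabilities: 64-bit files, hardlinks, ACLs, no xattrs, iconv'
if printf '%s\n' "$caps" | has_capability xattrs; then
    echo "xattrs: present"
else
    echo "xattrs: compiled out"
fi
# -> xattrs: compiled out
```

In a real script: rsync --version | has_capability xattrs || { echo "rsync lacks xattr support" >&2; exit 1; }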
Re: increasing the write block size for high latency
Dan Stromberg wrote:
> On Tue, Jul 8, 2014 at 8:07 AM, Adam Edgar aed...@research.att.com wrote:
>> It seems the issue is indeed in the ssh layer. scp has the same issue and some work has been done in "fixing" that:

That's a separate issue. rsync's performance WITHOUT ssh -- running locally -- is 100 times slower than a large-buffer program. Even over ssh, one can get over 100MB/s with the stock source. Locally, it's the reads/writes to files and pipes that are the problem.
Re: Concern: rsync failing to find some attributes in a file transfer?
Kevin Korb wrote:
> I wasn't objecting to the use of multiple file systems. I have a bunch of them too. I was objecting to the use of partitions to achieve multiple file systems. Logical volume management has been available for a long time and now we also have access to file systems that include such features.

I use the terms synonymously. I'm doing the snapshots via lvm and rsync: create the dynamic snapshot vol once a day, then use rsync once a day to copy all the new files off to another fixed snap that contains all of the files that changed that day. I then set that up to provide Previous Versions in the properties window on Win 7.

Fixed partitioning I still use on my system drive for the boot OS. Makes for more reliability, though systemd wants everything in /usr and /usr/share mounted at boot time along w/root, and that's causing a bit of an annoyance. A more lovely gotcha (my root and /usr are separate partitions): they moved mount from /bin to /usr/bin and left a 'mount' symlink to the new location on /usr. Of course little thought was given to how one would mount /usr in order to be able to access mount, but this seems typical of the thought going into the systemd changes...
lvs
  LV                        VG       Attr       LSize    Pool  Origin  Data%
  Backups                   Backups  -wc-ao---   10.91t
  Home                      Data     owc-aos--    1.50t
  Home-2014.06.25-03.07.08  Data     -wc-ao---    3.84g
  Home-2014.07.03-03.07.03  Data     -wc-ao---    2.33g
  Home-2014.07.07-03.07.03  Data     -wc-ao---    1.37g
  Home-2014.07.09-03.07.03  Data     -wc-ao---    2.45g
  Home-2014.07.11-03.07.03  Data     -wc-ao---    5.36g
  Home-2014.07.13-03.07.04  Data     -wc-ao---    4.32g
  Home-2014.07.15-03.07.03  Data     -wc-ao---   21.59g
  Home-2014.07.17-03.07.03  Data     -wc-ao---    2.30g
  Home-2014.07.18-03.07.03  Data     -wc-ao---    2.26g
  Home-2014.07.19-03.07.03  Data     -wc-ao---    2.25g
  Home-2014.07.20-03.07.04  Data     -wc-ao---    1.71g
  Home-2014.07.21-03.07.03  Data     -wc-ao---  485.62g
  Home-2014.07.25-11.10.31  Data     -wi-ao---  656.00m
  Home-2014.07.25-11.14.30  Data     swi-aos--    1.50t        Home    0.11
  Home.diff                 Data     -wi-ao---  512.00g
  Home3                     Data     -wc-ao---    1.50t
  Media                     Data     -wc-a        7.28t
  Share                     Data     -wc-a        1.50t
  Squid_Cache               Data     -wc-ao---  128.00g
  UsrShare                  Data     -wc-ao---   50.00g
  Media_Back                HnS      -wi-a        8.00t
  Share                     HnS      -wi-ao---    1.50t
  Squid_Cache               HnS      -wi-a      128.00g
  Sys                       HnS      -wc-a       96.00g
  Sysboot                   HnS      -wc-a        4.00g
  Sysvar                    HnS      -wc-a       28.00g
  UsrShare                  HnS      -wi-a       50.00g
  Win                       HnS      -wi-a        1.00t
  oHome                     HnS      -wi-ao---    1.00t
  Media                     Media    -wi-ao---    7.28t
---
So in the above, all the dated Home partitions are frozen snaps that only hold files changed on that day. They are not my backup solution, but a convenience so I can use the Previous Versions feature in windows. The last snap will get used with the current base and the output sent to Home.diff; from there, the script computes the needed size, creates it, throws xfs on it, and copies the data to it. The script also prunes old snapshots, keeping the last week, but going to every other day, then every 3rd and then 4th... and that's about as far as this goes back. Daily backups using a tower-of-hanoi ordering are used for actual backup purposes. It was the base vol's active snap writing diffs to a side partition where I got the original errors -- since it is working on the whole partition, it was running as root.
Does that give enough technical detail about my use case? ;-)

Oh, forgot the files at the end of the push:

  my $rcmd = [$Rsync];
  push( @$rcmd, qw(
      --8-bit-output --acls --archive --hard-links --human-readable
      --no-inc-recursive --one-file-system --prune-empty-dirs
      --whole-file --xattrs ),
      "--compare-dest=" . $base_lvh->fs_mp . "/." );

Should add:

  push @$rcmd, $OAsnap_lvh->fs_mp . "/.", $bdiff_lvh->fs_mp . "/";

for src and dest (OA = Oldest Active snap -- the dated active home, above -- and the diff dir for the base (home.diff)).

> Transferring with --compare-dest? I thought that the data was being moved from one filesystem to another; that seldom calls for usage of --compare-dest.

Data from the source gets moved to the diff volume after comparing it against the base -- I only want to copy over the diffs for a given day.

> It seems to me that the perl script being used is meant for another purpose, and it's being used inappropriately here. Why not just use rsync directly? That way maybe we here on the mailing list can make sense of what's actually happening. Otherwise take it up with the author of that script.

?!?!
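To make the --compare-dest semantics concrete: the destination receives only files that don't match the corresponding file under the compare-dest directory. A rough pure-shell equivalent of that selection (directory names invented; rsync itself also checks size/mtime and metadata, not just content):

```shell
#!/bin/sh
# Sketch: approximate --compare-dest=base/ selection with cmp.
# Files identical to their copy under base/ are skipped; new or
# changed files land in diff/.  (rsync also compares metadata.)
mkdir -p base src diff
echo same > base/a
echo same > src/a          # matches base -> skipped
echo changed > src/b       # no match in base -> copied
for f in src/*; do
    n=${f#src/}
    cmp -s "$f" "base/$n" 2>/dev/null || cp "$f" "diff/$n"
done
ls diff    # -> b
```

So after the run, diff/ holds only the day's changes, which is exactly the role of the Home.diff volume above.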
Concern: rsync failing to find some attributes in a file transfer?
I have a regular script I run to make static snapshots of my home file system, with each being all the files that changed in the past 24 hours. I just moved my home partition to a new harddisk w/more space. I ran the util and have gotten odd results each time I ran it. This one bothers me... as I'm not sure why the attrs would be missing. How can the names be transferred but no content? Is that possible? Ideas? Thanks!

Version info:

  rsync --version
  rsync version 3.1.0 protocol version 31
  Copyright (C) 1996-2013 by Andrew Tridgell, Wayne Davison, and others.
  Web site: http://rsync.samba.org/
  Capabilities:
      64-bit files, 64-bit inums, 64-bit timestamps, 64-bit long ints,
      socketpairs, hardlinks, symlinks, IPv6, batchfiles, inplace,
      append, ACLs, xattrs, iconv, symtimes, prealloc, no SLP

  uname -a
  Linux Ishtar 3.15.6-Isht-Van #1 SMP PREEMPT Sat Jul 19 12:31:28 PDT 2014 x86_64 x86_64 x86_64 GNU/Linux

File system info:

  xfs_info /home
  meta-data=/dev/mapper/Data-Home  isize=512    agcount=32, agsize=12582896 blks
           =                       sectsz=4096  attr=2
  data     =                       bsize=4096   blocks=402652672, imaxpct=5
           =                       sunit=16     swidth=16 blks
  naming   =version 2              bsize=4096   ascii-ci=0
  log      =internal               bsize=4096   blocks=32768, version=2
           =                       sectsz=4096  sunit=1 blks, lazy-count=1
  realtime =none                   extsz=4096   blocks=0, rtextents=0

---
Command (called from a script file in perl):

  my $rcmd = [$Rsync];
  push( @$rcmd, qw(
      --8-bit-output --acls --archive --hard-links --human-readable
      --no-inc-recursive --one-file-system --prune-empty-dirs
      --whole-file --xattrs ),
      "--compare-dest=" . $base_lvh->fs_mp . "/." );

Output of the program:

Rsync with 9 excludes from config file...
Missing abbreviated xattr value, trusted.SGI_ACL_DEFAULT, for /home.diff/Bliss/Documents/law/Pictures/Artists
Missing abbreviated xattr value, trusted.SGI_ACL_DEFAULT, for /home.diff/Bliss/Documents/law/Pictures/Avatars/Production
Missing abbreviated xattr value, trusted.SGI_ACL_DEFAULT, for /home.diff/Bliss/Documents/law/Pictures/Scans/Dragonaut-The Resonance
Missing abbreviated xattr value, trusted.SGI_ACL_DEFAULT, for /home.diff/Bliss/Documents/law/Pictures/Scans/HighSchoolDxD
Missing abbreviated xattr value, trusted.SGI_ACL_DEFAULT, for /home.diff/Bliss/Documents/law/Pictures/Scans/I can't do H
Missing abbreviated xattr value, trusted.SGI_ACL_DEFAULT, for /home.diff/Bliss/Documents/law/Pictures/Scans/Konachan
Missing abbreviated xattr value, trusted.SGI_ACL_DEFAULT, for /home.diff/Bliss/Documents/law/Pictures/Scans/Maria-sama-ga-miteru
Missing abbreviated xattr value, trusted.SGI_ACL_DEFAULT, for /home.diff/Bliss/Documents/law/Pictures/Scans/Miscellaneous
Missing abbreviated xattr value, trusted.SGI_ACL_DEFAULT, for /home.diff/Bliss/Documents/law/Pictures/Scans/SwordArtOnline
Missing abbreviated xattr value, trusted.SGI_ACL_DEFAULT, for /home.diff/Bliss/Documents/law/Pictures/Scans/To Love Ru
Missing abbreviated xattr value, trusted.SGI_ACL_DEFAULT, for /home.diff/Bliss/Documents/law/Pictures/Scans/kiddy grade
Missing abbreviated xattr value, trusted.SGI_ACL_DEFAULT, for /home.diff/Bliss/Documents/law/lastfm
Missing abbreviated xattr value, trusted.SGI_ACL_DEFAULT, for /home.diff/Bliss/Documents/law/reality
Missing abbreviated xattr value, trusted.SGI_ACL_DEFAULT, for /home.diff/Bliss/law.V2/bin/lib/P/blib
Missing abbreviated xattr value, trusted.SGI_ACL_DEFAULT, for /home.diff/Bliss/law.V2/bin/lib/mem
Missing abbreviated xattr value, trusted.SGI_ACL_DEFAULT, for /home.diff/Bliss/law.V2/bin/lib/orig
Missing abbreviated xattr value, trusted.SGI_ACL_DEFAULT, for /home.diff/Bliss/law.V2/bin/lib/test
Missing abbreviated xattr value, trusted.SGI_ACL_DEFAULT, for /home.diff/Bliss/law.V2/bin/oldmapdrives
Missing abbreviated xattr value, trusted.SGI_ACL_DEFAULT, for /home.diff/Bliss/law.V2/bin/reg
Missing abbreviated xattr value, trusted.SGI_ACL_DEFAULT, for /home.diff/Bliss/law.V2/bin/tmp
Missing abbreviated xattr value, trusted.SGI_ACL_DEFAULT, for /home.diff/Bliss/law.V2/bin/vbs
Missing abbreviated xattr value, trusted.SGI_ACL_DEFAULT, for /home.diff/root/1223/etc/fonts
Missing abbreviated xattr value, trusted.SGI_ACL_DEFAULT, for /home.diff/root/1223/etc/local
Missing abbreviated xattr value, trusted.SGI_ACL_DEFAULT, for /home.diff/root/1223/etc/samba/save0820/internals
Missing abbreviated xattr value, trusted.SGI_ACL_DEFAULT, for /home.diff/root/1223/selinux
Missing abbreviated xattr value, trusted.SGI_ACL_DEFAULT, for /home.diff/splunk/bin
rsync took 135m, 26s

Why would or how would the files and attr-names get transferred but be missing?
Re: Concern: rsync failing to find some attributes in a file transfer?
Kevin Korb wrote:
> On 07/26/2014 01:52 AM, L. A. Walsh wrote:
>> I have a regular script I run to make static snapshots of my home file system, with each being all the files that changed in the past 24 hours.
> I am not clear about the nature of this script. Please provide more details.

It's a script that uses the rsync command listed below. It's the rsync command below that issued the error messages.

>> I just moved my home partition to a new harddisk w/more space.
> Home Partition? Are we in 1995? Why would you have a partition mounted anywhere other than /boot ?

My mom and dad put things on 1 partition, as do many non-computer types. It's not flexible or safe enough for my needs. How would you separate out programs and data? How do you upgrade your OS without destroying your data? How do you implement different backup policies for different types of data? If you want to move your home partition to a different part of the disk, or with different make params, or even a different file system, how do you do that? When you move your home partition to a new disk, how do you switch out the home, or media, or whatever partition without rebooting? This isn't MS-DOS or Windows... If you have everything formatted into one partition, how do you make snapshots? If you only have 1 partition, where do you do daily backups to? You DO run daily backups, don't you?

>> I ran the util and have gotten odd results each time I ran it.
> What util? What results?

The results I posted below -- the util... um... gee, let me think... I'm posting to an rsync list... maybe it was visicalc?... nah... rsync! What would I be posting to this list for if this wasn't about rsync?

>> This one bothers me... as I'm not sure why the attrs would be missing.
> Is it really that just extended attributes are missing? You seemed to be in a panic.

Panic would be to my state like famine to my missing my afternoon snack. Concern != panic.

>> How can the names be transferred but no content?
>> Is that possible?
> I am uncertain what this question means. Maybe I have interpreted the rest of your email in the wrong context. Maybe not. I am not sure. Please provide technical details.

I thought I did provide the tech details... file system, the rsync command that produced it, kernel version, file system params... what more did you have in mind?

>> Missing abbreviated xattr value, trusted.SGI_ACL_DEFAULT, for /home.diff/Bliss/Documents/law/Pictures/Scans/SwordArtOnline

The name trusted.SGI_ACL_DEFAULT is the name of an extended attribute. For some reason, the name is present in the index of extattrs, but the content associated with that ACL is missing.

Another reason for splitting up file systems: did you notice the execution time at the end, "rsync took 135m, 26s"? Do you know how long it would take if I added about 20x to that space? What's this about 1995? Do you still have the same data needs now that you did in '95? But all that's apart from the output of the util (that this list is about), with its version number listed below even! Cripes.

>> Ideas? Thanks!
>> Version info:
>> rsync --version
>> rsync version 3.1.0 protocol version 31
>> Copyright (C) 1996-2013 by Andrew Tridgell, Wayne Davison, and others.
>> Web site: http://rsync.samba.org/
>> Capabilities:
>>     64-bit files, 64-bit inums, 64-bit timestamps, 64-bit long ints,
>>     socketpairs, hardlinks, symlinks, IPv6, batchfiles, inplace,
>>     append, ACLs, xattrs, iconv, symtimes, prealloc, no SLP
>> uname -a
>> Linux Ishtar 3.15.6-Isht-Van #1 SMP PREEMPT Sat Jul 19 12:31:28 PDT 2014 x86_64 x86_64 x86_64 GNU/Linux
>> File system info:
>> xfs_info /home
>> meta-data=/dev/mapper/Data-Home  isize=512    agcount=32, agsize=12582896 blks
>>          =                       sectsz=4096  attr=2
>> data     =                       bsize=4096   blocks=402652672, imaxpct=5
>>          =                       sunit=16     swidth=16 blks
>> naming   =version 2              bsize=4096   ascii-ci=0
>> log      =internal               bsize=4096   blocks=32768, version=2
>>          =                       sectsz=4096  sunit=1 blks, lazy-count=1
>> realtime =none                   extsz=4096   blocks=0, rtextents=0
>> ---
>> Command (called from a script file in perl):
>> my $rcmd = [$Rsync];
>> push( @$rcmd, qw( --8-bit-output --acls --archive --hard-links --human-readable --no-inc-recursive --one-file-system --prune-empty-dirs --whole-file --xattrs ), "--compare-dest=" . $base_lvh->fs_mp . "/." );
>> Output of the program:
>> Rsync with 9 excludes from config file...
>> Missing abbreviated xattr value, trusted.SGI_ACL_DEFAULT, for /home.diff/Bliss/Documents/law/Pictures/Artists
>> Missing abbreviated xattr value, trusted.SGI_ACL_DEFAULT, for /home.diff/Bliss/Documents/law/Pictures/Avatars/Production
>> Missing abbreviated xattr value, trusted.SGI_ACL_DEFAULT, for /home.diff/Bliss/Documents/law/Pictures/Scans/Dragonaut-The Resonance
Re: increasing the write block size for high latency
Adam Edgar wrote:
> It seems the issue is indeed in the ssh layer. scp has the same issue and some work has been done in "fixing" that: http://www.psc.edu/index.php/hpn-ssh
> From the paper's abstract: SCP and the underlying SSH2 protocol implementation in OpenSSH is network performance limited by statically defined internal flow control buffers. These buffers often end up acting as a bottleneck for network throughput of SCP, especially on long and high bandwidth network links.

It is *a* bottleneck over networks. Look for extensions to ssh to ship unencrypted data streams; there's a patch for this at http://www.psc.edu/index.php/hpn-ssh. However, rsync is dog slow locally as well, for exactly the reasons you mention.

An extract from another note on this topic (came up on the suse list this week). Someone suggested compression for a speed up... I responded to that:

On a local copy or local network, that usually slows down transfers. [1000:1 speed ratio with large vs. small io sizes:]

One might ask why rsync is so slow -- copying 800G from 1 partition to another via xfsdump/restore takes a bit under 2 hours, or about 170MB/s, but with rsync, on the same partition, transferring less than 1/1000th as much (700MB), it took ~70-80 minutes... or about 163kB/s. That's on the same system (local drive to another local drive).

Transfer speeds depend on many factors. One of the largest is transfer size (how much is transferred with 1 write/read). Transferring 1GB, 1 meg at a time, took 2.08s to read and 1.56s to write (using direct io). Transferring it at 4K: 37.28s to read, and 43.02s to write. So 20-40x can be accounted for just by R/W size (1k buffers were 4x slower). Many desktop apps still think 4k is a good read size. Over a network, that causes a drop from 500MB/s down to less than 200KB/s (as seen in FF and TB) -- 2500x. Optimal i/o size on my sys is between 16M-256M.

So -- to answer your question, MANY things can affect speed, but I'd look at the R/W size first.
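The write-size effect is easy to reproduce with dd -- same bytes, different block size. A quick sketch (file names and sizes are arbitrary; absolute rates will vary wildly by machine, and the page cache will flatter both runs):

```shell
#!/bin/sh
# Sketch: write the same 16 MiB with 1M vs. 4k blocks.  GNU dd's
# final stderr line reports elapsed time and throughput per run.
dd if=/dev/zero of=dd_test bs=1M count=16   2> dd_big.log
big=$(wc -c < dd_test)
dd if=/dev/zero of=dd_test bs=4k count=4096 2> dd_small.log
small=$(wc -c < dd_test)
tail -n 1 dd_big.log      # e.g. "16777216 bytes ... copied, <t> s, <rate>"
tail -n 1 dd_small.log
rm -f dd_test
```

For numbers closer to the direct-io figures quoted above, GNU dd's oflag=direct bypasses the page cache (bs must then be a multiple of the device sector size).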
Re: Concern: rsync failing to find some attributes in a file transfer?
Kevin Korb wrote:
>> I ran the util and have gotten odd results each time I ran it.
> What util? What results?

Besides the ones I included in the previous email, I ALSO experienced this (from bug https://bugzilla.samba.org/show_bug.cgi?id=10724):

> The above was just a toy example designed to illustrate the issue. In practice, rsync 3.1.1 left dozens of such ghost directories inside my --backup-dir.

I ran out of space because of it... creating well over 100,000 empty directories that took up 400M of space (on a 600M partition). I thought it might have been a fluke, which was why I didn't bother to detail it, but seeing this report pretty much cinches it.

Copying the command from below as run from my script:

  my $rcmd = [$Rsync];
  push( @$rcmd, qw(
      --8-bit-output --acls --archive --hard-links --human-readable
      --no-inc-recursive --one-file-system --prune-empty-dirs
      --whole-file --xattrs ),
      "--compare-dest=" . $base_lvh->fs_mp . "/." );

So I am comparing a today snapshot with yesterday's and dumping the difference to a third partition. So that's the other weirdness I was seeing. Do you have a better picture now?
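Until that bug is fixed, the ghost directories can at least be swept up after the fact with find. A sketch using a throwaway tree (names invented; point the real command at the backup dir):

```shell
#!/bin/sh
# Sketch: delete all empty directories under a tree.  -depth visits
# children before parents, so a chain of dirs that is empty except
# for empty subdirs is removed bottom-up in one pass.
mkdir -p ghosts/a/b/c ghosts/kept
echo data > ghosts/kept/file
find ghosts -depth -type d -empty -delete
ls ghosts    # -> kept
```

Note that GNU find's -delete implies -depth anyway; it is spelled out here for clarity.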
Re: increasing the write block size for high latency
Jonathan Aquilina wrote:
> One thing that I'm not seeing factored in is the rpm speed of the drives.

Since my tests are run on the same machines and drives, such things factor out (as do cpu Hz, memory speeds, controller firmware, ...etc). Make sense?
Re: Concern: rsync failing to find some attributes in a file transfer?
Wayne Davison wrote:
> On Fri, Jul 25, 2014 at 10:52 PM, L. A. Walsh rs...@tlinx.org wrote:
>> Why would or how would the files and attr-names get transferred but be missing?
> Give 3.1.1 a try -- it has a fix in it for mis-sorted attr names when running as non-root. Alternately, try running (at least the receiving side) as root. Here's the NEWS entry for this fix:
> - Fixed a bug in the xattr-finding code that could make a non-root-run receiver not able to find some xattr numbers.

Since it was generating a volume snapshot, it was already running as root -- and it was a local-to-local copy.
Re: [Bug 10637] rsync --link-dest should break hard links when encountering Too many links
samba-b...@samba.org wrote:
> https://bugzilla.samba.org/show_bug.cgi?id=10637
> --- Comment #1 from Karl O. Pinc k...@meme.com 2014-05-28 19:05:04 UTC ---
> Yum is also rsync happy. That's where our --link-dest backups always break due to too many hard links.

What would be "too many"? -- a few million? I have files in a test setup that have over 7000 hard links apiece (only an 8-byte file, but since the minimum block size is 4K, that would be 28MB w/o the hard links vs. 4K with). It would NOT be good for rsync to start breaking links w/standard options. Maybe a new option to allow link breaking?
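For reference, the limit being hit is the per-inode link count (st_nlink): every hard link is just another name for the same inode, and when a filesystem's per-inode maximum is reached, link(2) fails with EMLINK ("Too many links"). A tiny sketch (GNU stat's -c %h prints the link count; the file names are made up):

```shell
#!/bin/sh
# Sketch: hard links share one inode; st_nlink counts the names.
echo data > orig
ln orig link1
ln orig link2
stat -c %h orig    # -> 3 (three names, one inode, one copy of the data)
```

So a tool that "breaks" a hard link replaces one of those names with a separate copy, dropping st_nlink by one and duplicating the data blocks.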
Re: Alternative rsync client for Windows
Donald Pearson wrote:
> ...backing up a complete Windows system and doing a bare metal restore... That would really be something.

Depends on what you mean by bare-metal restore... if you have 'bare metal', then that would seem to mean no OS. If you have no OS, what are you running the restore on? Or are you talking about taking the image from 1 disk, copying it to another, and booting from that disk? I did that when I upgraded my RAID-SSD. For my Win workstation, I use RAID-0 w/4 SSDs, which uses up 4/5 of my drive slots. So no room to dup to a similar config. I put a 2TB drive in the 5th slot and used cygwin's dd to copy to the 2TB drive. Then I booted a linux rescue disk and used that to 'dd' the image on the 2TB drive to the new RAID-0. Had to have some third-party licenses reissued, but other than that, it went fairly smoothly. Windows itself auto-activated via an OEM check (Dell system). It's not exactly convenient, but for what was needed, it worked.

Or are you talking about doing the transfer w/no OS... um... yeah, that would be something... (Cygwin can be pretty useful sometimes.)
Re: Alternative rsync client for Windows
Kevin Korb wrote:
> I come from the Linux world. If one of my computers were to simply evaporate into nothingness or have complete storage failure, then once the hardware problem is dealt with I would network boot SystemRescueCD then restore my backups that I made with rsync. I understand that things are more complicated in Windows, but if say my laptop (it is the only computer I have that both boots and stores Windows) were similarly destroyed or blanked, I would still network boot SystemRescueCD and restore my backups that I made with ntfsclone. My hesitation with backing up a Windows system with rsync is that I have absolutely no idea how to go from "I have a blank computer and a copy of all my files" to "I have a working computer with all my stuff". I might be asking for something as simple as "Install Windows, install Acrosync, restore everything including the Windows configuration from backups", or maybe some kind of rescue disc, or maybe some kind of custom WinPE disc. I don't know. I know just enough about Windows to figure out how to use what I know from Linux to make things sorta work.

I wouldn't suggest trying to restore windows w/rsync. It might be possible, but the first issue is that whatever media you rsync things to needs to support full NT security and be able to create arbitrary users/acl's to fully replicate the source. The second issue is that MS deliberately uses things like the location of something on the disk as a security option. I don't know what software uses it, but I remember discussion about media licenses ("content") using the feature to prohibit any copy of them from working: only the original in its original 'licensed' location would work. The whole way NTFS designs its locking of files is very unlike how it's done on linux/unix. When it locks a file, it isn't, like in linux, at the inode level+offset; it's a physical location on the disk that gets locked...
It's really primitive (and is why one often needs to reboot a system to replace binaries -- because the bytes on the disk ARE the file and they are locked -- vs. on linux, where you usually have an inode that points to sectors where the file is, and by changing where the inode points, you can change the content).

That said, my primary concern would be the first issue (for me, not using licensed content on windows, I've not run into the problem, so that's mostly from memory about how it was implemented). VERY often, when doing copies with rsync or cp -a from one sys to another, I'll find permissions or such won't get transferred quite the same way. I have used rsync from/to the same disk to restore/repair a broken windows install -- the part that has problems is storing the extended stuff and ACLs on a foreign media. (Also have to make sure on restore that rsync has all needed rights/privileges. Cygwin takes care of a lot of that -- removing a file or such that, in the windows command line, you'd have a pretty hard time doing... or setting permissions on all the files in the windows/system32 dir despite not owning them -- under the posix model, ownership doesn't matter for 'root', so cygwin tries to emulate that as much as possible -- probably why I've seen cygwin listed as a security hacking tool... ;-) (really! letting a user control their own system, how absurd!)...