Re: Exclude from not excluding "My Music" or "Printhood"

2012-05-01 Thread Joachim Otahal (privat)
Go to the Win 7 machine and enter that directory using Explorer. In the 
upper bar where you can see the path, click the empty white space to the 
right of the path.
On my machine Explorer shows ">Username>Eigene Musik" (the German friendly 
name for "My Music"); clicking the empty space shows the real path: 
C:\Users\Username\Music
Second way: shift-right-click the directory and choose "Copy as path". 
Third way to find the real name instead of the friendly name: open a cmd 
window and drag and drop the directory onto it.


If you want to know what is really going on you'll have to set the 
corresponding Explorer options, but seeing all the links Windows uses to 
show the path names in your language is confusing and does not really help.
You'll get a more informative view by opening a cmd window, then cd 
%userprofile%, and then dir /a.
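For illustration, roughly what that looks like (a sketch, not a verbatim 
capture; the exact junctions and their names vary per system and language):

C:\> cd %userprofile%
C:\Users\Username> dir /a
...  <JUNCTION>  Application Data [C:\Users\Username\AppData\Roaming]
...  <JUNCTION>  My Documents [C:\Users\Username\Documents]
C:\Users\Username> dir /a Documents
...  <JUNCTION>  My Music [C:\Users\Username\Music]

The <JUNCTION> entries are the hidden compatibility links; the real target 
is shown in brackets.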


Daniel Feenberg wrote:

We have been using rsync for some time with Linux and FreeBSD, but are
just now trying to make it work with Windows. Not as easy as we hoped.

I am running the cwrsync client 3.0.6 on a new Windows 7 machine to a 
FreeBSD 8.1 server. I have an exclude-from filelist, which does seem 
to successfully exclude the directories given by many of its entries, 
including these two lines:


  Documents/My[ ]Videos/
  Nethood/

but the apparently similar (to me) directories

  Documents/My[ ]Music/
  Printhood/

just generate the error

rsync: opendir "/cygwin/c/users/feenberg/Documents/My Videos" failed: 
Permision denied (13)
rsync: opendir "/cygwin/c/users/feenberg/Documents/Printhood" failed: 
Permision denied (13)


As it happens, those directories don't really exist - at least the dir 
command won't list them, and I certainly didn't do anything to create 
them. But then it doesn't list "My Videos" or "Nethood" either, 
although they show up in MS Explorer. They are some kind of damaged MS 
symbolic link, I suppose. I need to get rid of the spurious messages - 
does anyone know how to do that?


Also, I do want the contents of "My Documents", which also gets an 
opendir error. But I think it duplicates the plain "Documents" folder, 
so this may not be a problem beyond the spurious message. I haven't 
excluded it yet.


Thanks for any help.

Daniel Feenberg
NBER


--
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Rsync to a Remote NAS

2012-04-12 Thread Joachim Otahal (privat)

Chris Arnold wrote:

Are you saying the current way we are doing it does NOT support "incremental" 
backups after the first full backup?


Oh, rsync itself will do incremental transfers, however if the filesystem 
below it is mounted over a VPN it won't work fast. rsync's design expects 
local and fast filesystems, direct HDD or direct LAN, and not mounts over 
a VPN.



One of the NAS devices is a readynas duo rnd2100. In the backup section of the 
gui, it does say backup:remote::rsync but when i select that and fill in the 
info and click test connection, it does not connect. Does this one support the 
rsync daemon?


"readynas" says nothing to me.
Maybe it can play a rsync daemon, but it reads like it could backup 
itself via rsync to a remote location. You need an rsync daemon on the 
other side to receive the data or get the data from. If the other side 
is a "real" server and not just a NAS it should be easy, rsync is avail 
on nearly all existing OS-es (well, limited to those with network 
capabilities).
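For reference, a minimal rsyncd.conf sketch for such a receiving daemon, 
matching the "Backup" module name used in the command quoted below (the 
path and settings are assumptions):

[Backup]
    path = /volume1/Backup
    read only = no
    use chroot = yes

Started with "rsync --daemon", this listens on port 873 and accepts the 
192.168.123.6::Backup/... destination syntax.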


Joachim



----- Original Message -----
From: "Joachim Otahal (privat)"
To: "Chris Arnold"
Cc: rsync@lists.samba.org
Sent: Thursday, April 12, 2012 3:28:42 PM
Subject: Re: Rsync to a Remote NAS

This is like mounting the remote drive via samba and then doing a sync; 
it is a normal copy job without the delta-transfer benefits of rsync.
If at all possible you should run an rsync daemon on the NAS box and 
then run the rsync command on the other side of the VPN. rsync uses port 
873 by default. Or use an extra box connected via LAN (not VPN) to mount 
the NAS and run the rsync daemon.
If the NAS is 192.168.123.6, your command on the other side would be:
rsync --verbose --progress --stats --compress --recursive --times 
--perms --links --delete /Share/ 192.168.123.6::Backup/EdensLandCorp

You can also turn it around and let the NAS pull the backup; you then need 
to run an rsync daemon on the main site, but only a few NAS devices 
officially support that.

regards,

Joachim Otahal

Chris Arnold wrote:

Forgive me if this has been addressed here before. We have a remote office 
that we need to back up to our NAS. We have a site-to-site certificate VPN. 
The remote site has over 51 GB that needs to be backed up to our NAS over 
that VPN. I have tried this command:
rsync --verbose --progress --stats --compress --recursive --times --perms 
--links --delete /Share/* / smb://192.168.123.6/Backup/EdensLandCorp

and it just sits there and appears to do nothing. Does rsync make a tarball 
first and then put it where it is told to put it, or does it just copy the 
files/folders over? Maybe it is the smb://xx.xx.xx.xx/whatever that is 
breaking it... The bottom line is I need to copy/rsync a directory to a 
remote server through a VPN. How is this accomplished?


--
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Run rsync even not connected

2012-04-12 Thread Joachim Otahal (privat)

Brian K. White wrote:


On 4/12/2012 3:36 PM, Joachim Otahal (privat) wrote:

it has not been mentioned: nohup !


Yes it was.



You are right, found your post now.
--
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Run rsync even not connected

2012-04-12 Thread Joachim Otahal (privat)

it has not been mentioned: nohup !
screen is a bit complicated if you have never used it. It only makes sense 
if you want to check back later and watch live what is going on.

at or cron would be my last resort.

nohup would be my choice:
nohup (your command and options) &

Example:
x@x:~> nohup ls -la &
[1] 12865
x@x:~> nohup: ignoring input and appending output to `nohup.out'
[1]+  Done                    nohup ls -la

You can do a tail -f on nohup.out after you have started the job to see 
what it would otherwise output to the screen.


Other example:
nohup (your command and options) >> log.rsync 2>&1 &
will append both stdout and stderr to log.rsync.
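Applied to an rsync job like the ones discussed on this list (the command 
itself is only illustrative):

nohup rsync --verbose --recursive --times --perms --links --delete /Share/ 192.168.123.6::Backup/EdensLandCorp >> log.rsync 2>&1 &
tail -f log.rsync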

regards,

Joachim Otahal

Chris Arnold wrote:

I hope this makes sense. How do you make rsync keep running even when you 
are not physically connected to the server? In other words, I run rsync 
from the terminal via vnc, and when I log out of the connection rsync stops 
running. Is there a script or something I can use?

Sent from my iPhone


--
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Rsync to a Remote NAS

2012-04-12 Thread Joachim Otahal (privat)
This is like mounting the remote drive via samba and then doing a sync; 
it is a normal copy job without the delta-transfer benefits of rsync.
If at all possible you should run an rsync daemon on the NAS box and 
then run the rsync command on the other side of the VPN. rsync uses port 
873 by default. Or use an extra box connected via LAN (not VPN) to mount 
the NAS and run the rsync daemon.

If the NAS is 192.168.123.6, your command on the other side would be:
rsync --verbose --progress --stats --compress --recursive --times 
--perms --links --delete /Share/ 192.168.123.6::Backup/EdensLandCorp


You can also turn it around and let the NAS pull the backup; you then need 
to run an rsync daemon on the main site, but only a few NAS devices 
officially support that.


regards,

Joachim Otahal

Chris Arnold wrote:

Forgive me if this has been addressed here before. We have a remote office 
that we need to back up to our NAS. We have a site-to-site certificate VPN. 
The remote site has over 51 GB that needs to be backed up to our NAS over 
that VPN. I have tried this command:
rsync --verbose --progress --stats --compress --recursive --times --perms 
--links --delete /Share/* / smb://192.168.123.6/Backup/EdensLandCorp

and it just sits there and appears to do nothing. Does rsync make a tarball 
first and then put it where it is told to put it, or does it just copy the 
files/folders over? Maybe it is the smb://xx.xx.xx.xx/whatever that is 
breaking it... The bottom line is I need to copy/rsync a directory to a 
remote server through a VPN. How is this accomplished?


--
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Problem syncing to Netapp (rsync: failed to set times on...)

2012-04-09 Thread Joachim Otahal (privat)

Stier, Matthew wrote:

But the timestamp would not.


Be careful with that. I had cases where picture editors kept the 
timestamps even though they did change the content: only atime was changed, 
mtime stayed. The affected users had that option selected in the program 
(they thought of it as a speed-up option); once it was changed, things 
behaved the way they should. So I run -c weekly and check how many files 
are transferred. Sometimes surprises come up.
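A sketch of such a weekly check as a cron entry (paths are made up; the -n 
makes it a dry run, so it only counts what -c would transfer):

0 3 * * 0 rsync -rtnc --out-format='%n' /data/master/ /data/copy/ | wc -l >> /var/log/rsync-c-check.log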


Joachim


-Original Message-
From: rsync-boun...@lists.samba.org [mailto:rsync-boun...@lists.samba.org] On 
Behalf Of Kyle Lanclos
Sent: Monday, April 09, 2012 5:06 PM
To: billdorr...@pgatourhq.com
Cc: rsync@lists.samba.org
Subject: Re: Problem syncing to Netapp (rsync: failed to set times on...)

Bill Dorrian wrote:

These are photos - I wonder what the odds are of a modified file
having the same size as the original?

If someone modifies the EXIF metadata (say, to correct a 'picture taken on'
timestamp for a camera that wasn't properly synchronized), the file size
would likely remain the same.

--Kyle


--
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Problem syncing to Netapp (rsync: failed to set times on...)

2012-04-09 Thread Joachim Otahal (privat)
PS: Another (ugly) workaround: use two linux boxes, placing one on each 
side of the slow line, one having the NAS mounted and running an rsync 
daemon, the other having the Netapp mounted.
Then sync between those two linux boxes. Even if you have to use -c or 
--ignore-times, the "full read just for the checksum" would happen only in 
the LAN on each side, while the slow line carries only the checksums and 
the actual transfers.
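A minimal sketch of that layout (the hostname "boxa" and module name "nas" 
are made up; the mount commands are the ones from the quoted mail below):

On box A, next to the NAS:
mount -t cifs -o credentials=/root/.synccreds //nasdevice/folder /nas
rsync --daemon    # rsyncd.conf with a module [nas] whose path is /nas

On box B, next to the Netapp:
mount -t cifs -o credentials=/root/.synccreds //netapp/folder /netapp
rsync -rvtc --delete boxa::nas/ /netapp/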


Joachim

billdorr...@pgatourhq.com wrote:

Hey folks.

I have a machine that I use as an intermediary to rsync between a NAS 
in another building (the users there have poor bandwidth and need 
local storage) and our Netapp located in our datacenter. I get 
lots of "rsync: failed to set times on..." errors, and files which 
have already been transferred just try to sync again anyway.


These are the mount options that I use for both sides:

mount -t cifs -o credentials=/root/.synccreds //nasdevice/folder /nas
mount -t cifs -o credentials=/root/.synccreds //netapp/folder /netapp


The ".synccreds" file has the credentials of an Active Directory 
Domain Admin account, which has "Full Control" on both the NAS and the 
Netapp.



Here is the command that I run to do the rsync:

rsync -rvt  --delete --progress /nas/ /netapp/

Running rsync with "-i" shows that the files are transferring because 
of timestamp differences.


I tried the "-c" option in place of the "-t", but the server doing the 
sync just hung there for literally two days without anything 
transferring and no output. I realize that the -c option is slower, 
but yikes!


Any thoughts? Suggestions?

Thanks,
Bill D.


Bill Dorrian
Network Administrator
Desk: 904-273-7625
Cell: 904-859-9471


--
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Problem syncing to Netapp (rsync: failed to set times on...)

2012-04-09 Thread Joachim Otahal (privat)

I know from a lot of NAS boxes that they tend to use their internal
time to stamp files instead of the time given by a copy job.
The easiest way to test is to deliberately set the time off by a few
hours on the box you mounted the stuff on, the NAS, and the Netapp (or
the server accessing the Netapp), then create a file from your mount
point and check whether its time is right.
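A concrete version of that test on a linux box with the NAS mounted 
(commands are illustrative; GNU date syntax assumed):

date --set="now + 3 hours"                  # deliberately skew this box's clock
touch /nas/timetest
ls -l --time-style=full-iso /nas/timetest

If the new file shows the NAS's unskewed time, the NAS stamps files with 
its own clock.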

cifs isn't the fastest way in unix environments, and samba uses
_quite_ an amount of CPU power once you are above 50 MB/s.
If in any way possible you should do it more directly: having an
in-between box in the network means (if -c is used) that all files
are read from both boxes over the network just for the checksum,
hence the bad performance.

These Netapps, are they "pure" storage, and are the servers using them
either windows or linux? If yes: put rsync directly on those servers.
This also applies to most NAS boxes I know; they offer rsync directly,
most of the time as a server.

Joachim

billdorr...@pgatourhq.com wrote:

Hey folks.

I have a machine that I use as an intermediary to rsync between a NAS in 
another building (the users there have poor bandwidth and need local 
storage) and our Netapp located in our Datacenter. I get lots of the 
"rsync: failed to set times on..." errors, and files which have already 
been transferred just try to sync again anyway.

These are the mount options that I use for both sides:

mount -t cifs -o credentials=/root/.synccreds //nasdevice/folder /nas
mount -t cifs -o credentials=/root/.synccreds //netapp/folder /netapp

The ".synccreds" file has the credentials of an Active Directory Domain 
Admin account, which has "Full Control" on both the NAS and the Netapp.

Here is the command that I run to do the rsync:

rsync -rvt --delete --progress /nas/ /netapp/

Running rsync with "-i" shows that the files are transferring because of 
timestamp differences.

I tried the "-c" option in place of the "-t", but the server doing the 
sync just hung there for literally two days without anything transferring 
and no output. I realize that the -c option is slower, but yikes!

Any thoughts? Suggestions?

Thanks,
Bill D.

Bill Dorrian
Network Administrator
Desk: 904-273-7625
Cell: 904-859-9471
"
  THE PLAYERS begins May 10 on Golf's Greatest Stadium. For more
  information visit PGATOUR.COM/theplayers."

"The information contained in this transmission and any
attachments may contain privileged and confidential information.
It is intended only for the use of the person(s) named above. If
you are not the intended recipient, you are hereby notified that
any review, dissemination, distribution, or duplication of this
communication is strictly prohibited. If you are not the
intended recipient, please contact the sender by reply email and
destroy all copies of the original message."
  
  
  
  


  

-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html

Re: [Bug 8566] Spotlight comments (extended attributes) are not synced

2012-03-24 Thread Joachim Otahal (privat)

samba-b...@samba.org wrote:

https://bugzilla.samba.org/show_bug.cgi?id=8566

--- Comment #9 from Stefan Nowak  2012-03-24 14:45:18 UTC ---
mdutil -E /Volumes/Destination

Re-indexing the destination volume did not help either.
xattr -rl showed that all comment data is still there.



Sorry to make fun of it, but reading here that the Mac Finder has exactly 
the same problem as Windows Search (as long as indexing is not deactivated) 
is just funny. You see it right before your eyes in a shell or in the 
explorer, but search denies its existence until it has been indexed, which 
can take hours or even days. They reduced that type of problem in W7, but 
it still exists there, just like your problem on Mac OS X.

The workaround seems to be the same for both OSes: use a shell and let the 
search run there.
(In the case of W7, deactivating the indexing trash solves it too, at the 
cost of slower searches; that doesn't help in Vista or XP.)


Joachim Otahal
--
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Batch mode creates huge diffs, bug(s)?

2012-03-20 Thread Joachim Otahal (privat)

Matt Van Mater wrote:

Let me restate my last email regarding rdiff: All of my image files are 
from the same Windows XP VM, created using FOG/partimage. Image1 is the 
"baseline", Image2 is Image1 + the WinSCP binary downloaded (not even 
installed).

Use your virtualisation to create the difference / snapshot, transfer what 
your virtualisation spits out as the diff disk image, and then let it merge 
on the target.
How well that works depends on the bitchiness of your virtualisation *g*.

Good luck!

Joachim Otahal


I am not imaging an Ubuntu machine. I am using the Ubuntu machine as a 
means of creating the batch file for rsync and/or rdiff. I chose that 
platform since it is a common distribution used by many and would be easy 
for others to reproduce my problem.

I agree the 400 MB still looks big, but no, the ONLY intentional difference 
between image1 and image2 is the 2.9 MB WinSCP binary I downloaded. My 
guess is the difference is 1) due partially to the default block size rdiff 
uses (512b?) AND 2) the fact that the Windows XP VM image source only had 
256 MB RAM and that by default Windows XP creates a pagefile of 1.5 x RAM 
size = 384 MB. That is close enough to 400 MB for me.

I am currently running rdiff with a smaller blocksize to test #1 above; 
hopefully that will force the delta to get smaller (at the expense of 
longer computation time).

Matt

On Tue, Mar 20, 2012 at 3:41 PM, Joachim Otahal (privat) <j...@gmx.net> wrote:

Matt Van Mater wrote:

Alternate assessment - I ran a similar comparison against the two image 
files using rdiff that comes with Ubuntu 10.04.4 LTS (shows up as librsync 
0.9.7) and have a significantly smaller delta file (closer to what I 
expect).

Just plain luck. If ubuntu wrote most of the new files close to the last 
used blocks and only changed a few bytes (this time literally) in the 
middle, then the desync happens later. The 400 MB delta still looks big, 
or did you install something big like libreoffice?

regards,

Joachim Otahal
-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html

Re: Batch mode creates huge diffs, bug(s)?

2012-03-20 Thread Joachim Otahal (privat)

Maybe don't sync one big file: hack the image up into small chunks; then, 
whatever the gap size is, rsync has a bigger chance to resync, especially 
if you include --fuzzy. Though it might not help at all, since the number 
of files would be large.
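A minimal sketch of that chunking idea (chunk size, directory and module 
names are made up, untested):

mkdir -p chunks
split --bytes=64M --numeric-suffixes image1 chunks/image1.
rsync -rt --fuzzy --delete chunks/ backuphost::images/chunks/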

IF it is only a "once every few month" thing you could: Defrag the
machine (virtual or real, no difference) including the MFT and let
the defrag tool put everything at the start of the drive.
Ultradefrag and Auslogics (free) Defrag are both usable programs,
though the latter does not defrag the MFT, but is _way_ more
efficient for the rest..
THEN create the "before" image, install the program, and create the
"after" image.

For everything beyond that you need a different approach, but that
requires a _lot_ more knowledge about your setup, how those XP
machines run etc.

regards,

Joachim Otahal

Matt Van Mater wrote:

I agree with your assessment somewhat, Joachim, and think you're following 
the same line of reasoning as I am. Some details I did not include in my 
first post:

FOG/partimage does indeed only capture the used blocks in its images when 
you select "ntfs - resizable". So running a clean utility (e.g. writing 
zeros to free space) will not make an impact because partimage does not 
copy those blocks anyway. However, the technique you describe would be 
useful if I was using dd to capture the image. I am unsure how large a 
block size partimage uses when copying only the used blocks, so it takes 
some trial and error to determine the appropriate block size within 
rsync/rdiff.

Regarding the size of the delta, I had the same exact thought... I have a 
hunch that the new file I downloaded was included in the middle of the 
partimage image file and that rsync somehow was not able to associate the 
last 6.9 GB after the "gap" as existing content.

Regarding the out of memory error, this occurs immediately after executing 
the command, it does not run for a while and then fail. It is one reason I 
gave my VM a very large amount of RAM to compute the deltas; to ensure that 
it did not run out due to a memory leak or something like that. The command 
dies so quickly I am confident that it couldn't even have a chance to 
consume the entire 16 GB of RAM... it isn't running out of memory, but 
seems to be some other memory allocation error.

I don't think the fuzzy option will help me, but it is on my list of 
options to try. Unfortunately any test I perform takes a long time to 
complete due to the size of the image, so it will be a little while before 
I can report the results of the test.

And in case someone asks "why don't you use rdiff if that seems to work for 
you?", I would have to install that software on over 325 remote servers 
over satellite. I would MUCH prefer to not touch the remote servers and be 
able to use the existing rsync software.

Matt

On Tue, Mar 20, 2012 at 3:10 PM, Joachim Otahal (privat) <j...@gmx.net> wrote:

Matt Van Mater wrote:

image1 size in bytes: 17,062,442,700
image2 size in bytes: 16,993,256,652

About 70 MB of change between a boot with a small program install. That is 
realistic. This also means: FOG/Partimage only captures the used sectors.
IF you would capture ALL sectors (used and unused), the rsync difference 
would be those roughly 70 MB. You should run a "clean slack" utility before 
imaging though, like the microsoft precompact.exe (supplied with Virtual PC 
2007).

But here it looks like this: about the first half of the image contains 
sectors which were not changed between the reboots.
Then, in the middle of the image, a few bytes (~70 MB) got added, and rsync 
cannot get a match across that 70 MB gap and therefore treats everything 
after that as "new".

Command:

rsync --block-size=512 --only-write-batch=img1toimg2_diff image2 image1

Error message:

ERROR: Out of memory in receive_sums [sender]
rsync error: error allocating core memory buffers (code 22) at util.c(117) 
[sender=3.0.7]

Re: Batch mode creates huge diffs, bug(s)?

2012-03-20 Thread Joachim Otahal (privat)

Matt Van Mater wrote:


Alternate assessment - I ran a similar comparison against the two 
image files using rdiff that comes with Ubuntu 10.04.4 LTS (shows up as 
librsync 0.9.7) and have a significantly smaller delta file (closer 
to what I expect).


Just plain luck. If ubuntu wrote most of the new files close to the last 
used blocks and only changed a few bytes (this time literally) in the 
middle, then the desync happens later. The 400 MB delta still looks big, 
or did you install something big like libreoffice?


regards,

Joachim Otahal

--
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Batch mode creates huge diffs, bug(s)?

2012-03-20 Thread Joachim Otahal (privat)

Matt Van Mater wrote:

image1 size in bytes: 17,062,442,700
image2 size in bytes: 16,993,256,652

About 70 MB of change between a boot with a small program install. That is 
realistic. This also means: FOG/Partimage only captures the used sectors.
IF you would capture ALL sectors (used and unused), the rsync difference 
would be those roughly 70 MB. You should run a "clean slack" utility before 
imaging though, like the microsoft precompact.exe (supplied with Virtual PC 
2007).

But here it looks like this: about the first half of the image contains 
sectors which were not changed between the reboots.
Then, in the middle of the image, a few bytes (~70 MB) got added, and rsync 
cannot get a match across that 70 MB gap and therefore treats everything 
after that as "new".



Command:

rsync --block-size=512 --only-write-batch=img1toimg2_diff image2 image1

Error message:

ERROR: Out of memory in receive_sums [sender]
rsync error: error allocating core memory buffers (code 22) at util.c(117) 
[sender=3.0.7]


A block size below the cluster size doesn't make much sense; it only wastes 
your memory, hence the out-of-memory problem (let your taskmanager run 
while doing that and you'll see). AFAIK rsync adjusts the block size 
dynamically: it uses large blocks (several MB) if there is no change and 
switches down to small blocks when there is a change, to keep the amount 
of data to transfer low.
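A rough back-of-the-envelope (my own numbers; the exact per-block overhead 
varies by rsync version) shows the scale:

17,062,442,700 bytes / 512 bytes per block = ~33.3 million blocks
33.3 million blocks x ~30 bytes of checksum state = ~1 GB for the sum table

With the block size rsync would pick on its own for a file this big 
(roughly the square root of the file size, so on the order of 100 KB), the 
same table stays in the low MB range.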

What I cannot tell you: an option to make rsync try harder to search for a 
match within one big file, across a larger desynced region. I only know and 
use "--fuzzy", which only helps on large numbers of files, and only makes 
sense on slow connections.

Joachim
  

-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html

Re: cwRsync got killed...

2012-03-08 Thread Joachim Otahal (privat)

br...@aljex.com wrote:

Not that I have any say but I agree on both counts.

That is, I think it's ok for the 4.2.0 source not to be provided by them 
now, if they are not supplying the 4.2.0 binaries now. But at the time they 
were providing 4.2.0 binaries under the GPL, they were obligated to make 
the matching source available. I don't know how long those obligations last 
after the fact. I can't imagine that you are obligated, for example, to 
provide a web host and all that that implies for the rest of your life just 
because you once wrote a hello world under the GPL.



Well, I don't agree with them not providing their last version.
But since we have version 4.2.0 of both in binary, client and server (at 
least I have them, thanks to the donator), and their nsis script, I'd say 
it is not worth going the full way. It was fine as long as they were 
provided, it was a nice service, and it shouldn't be that difficult to 
create a package on my own, as soon as I get my ass up 
*g*^d^d^d^d^d^d^d^d^d^d^d^d^d^d^d^d^d^d^d have time.


Jou
--
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Detection of permission changes

2012-03-02 Thread Joachim Otahal (privat)

Nope.

Available line speed: sending 5 MBit, receiving 6 MBit. "Real" line 
speed - well, it is a VPN over the Internet, very "controlled" speed.
All files are already in sync.

Fileset: about 3.31 GB, 3146 files, several runs.
Times below are given as min/max/mean seconds.

rsync -rtvvzPc --compress-level=9 --fuzzy --delete-delay 
192.168.250.68::d/bootcd/ /cygdrive/e/bootcd/

sent 10497 bytes  received 145562 bytes  2039.99 bytes/sec
total size is 3559041255  speedup is 22805.74
75.4/96.3/80.8 seconds

rsync -rtvvzP --ignore-times --compress-level=9 --fuzzy --delete-delay 
192.168.250.68::d/bootcd/ /cygdrive/e/bootcd/

sent 8885025 bytes  received 207541 bytes  33613.92 bytes/sec
total size is 3559041255  speedup is 391.42
182/250/195 seconds

-c clearly wins over --ignore-times, about 5:2 (more or less).

--ignore-times would win if some large files changed on every sync, but 
only because -c saves time only on files that are already in sync. In my 
sync case > 95% of the files are already in sync (and in this test 
everything was in sync).


regards,

Joachim Otahal

Kevin Korb wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Try --ignore-times instead of --checksum.  It will appear to do more
since it will actually re-delta xfer everything but in my experience
that is faster than --checksum almost all of the time.

On 03/02/12 02:07, Joachim Otahal (privat) wrote:

Kevin Korb wrote:

-BEGIN PGP SIGNED MESSAGE- Hash: SHA1

I am not much of a programmer so I know I could never take over
rsync development but if I could boss such people around here are
the new directions I would take:

1. --itemize-changes is eliminated and becomes part of --verbose

100% agree there.


5. I am almost tempted to say I would remove --checksum because
95% of the times I have seen someone using it they did so to
their own detriment.  But I have seen at least 2 actual valid use
cases for it to exist so I would only add an extreme disclaimer
to the man page

Naaa, please not. I rsync some sets across a slower VPN line, and
due to different OSes and filesystems on both ends I cannot rely
on things like timestamps. Checking filesize changes is not enough,
since quite a few files (a few hundred of several thousand) change
without changing their size, and fewer than ten files (but too many to
ignore) get modified without changing the (a|c|m)time. This leaves
me the last resort, -c, to make 100% sure every change is detected
while only changed files are synced.

If -c did not exist I would be forced to use something
completely different: first sync "the usual way" based on filesize
and timestamp - I would not need rsync for that, simpler tools which
don't require a daemon can do the same - and in a second run do a
crc32 (or md5, whatever) recursively, check for CRC differences and
transfer those whose CRCs still differ. Would work, but ugly. -c
is better and my absolute winner.


Unfortunately I know that such fundamental changes would create
a backlash.  So maybe I wouldn't actually do them if I had the
authority.  But I am pretty sure they are all a good idea.

and of course now we are way beyond the scope of your question
and into the realm of the opinion of someone who has been using
rsync as the low level tool of a backup system for more than a
decade and who regularly helps out on #rsync.

Oh yes indeed, your answers show a lot of experience fighting
with/against the rsync dragon.

Joachim Otahal




--
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Detection of permission changes

2012-03-01 Thread Joachim Otahal (privat)

Kevin Korb wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

I am not much of a programmer so I know I could never take over rsync
development but if I could boss such people around here are the new
directions I would take:

1. --itemize-changes is eliminated and becomes part of --verbose


100% agree there.


5. I am almost tempted to say I would remove --checksum because 95% of
the times I have seen someone using it they did so to their own
detriment.  But I have seen at least 2 actual valid use cases for it
to exist so I would only add an extreme disclaimer to the man page


Naaa, please not. I rsync some sets across a slower VPN line, and due to 
different OSes and filesystems on both ends I cannot rely on things like 
timestamps. Checking filesize changes is not enough, since quite a few 
files (a few hundred of several thousand) change without changing their 
size, and fewer than ten files (but too many to ignore) get modified 
without changing the (a|c|m)time.
This leaves me the last resort, -c, to make 100% sure every change is 
detected while only changed files are synced.


If -c did not exist I would be forced to use something completely 
different: first sync "the usual way" based on filesize and timestamp. I 
would not need rsync for that, simpler tools which don't require a daemon 
can do the same. And in a second run do a crc32 (or md5, whatever) 
recursively, check for CRC differences, and transfer those whose CRCs 
still differ.

Would work, but ugly.
-c is better and my absolute winner.
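For illustration only, that ugly fallback might look like this (hostnames 
and paths are made up; it assumes plain ssh access, and paths with spaces 
would need more care):

rsync -rt /data/ backuphost:/backup/data/    # pass 1: size + mtime
cd /data && find . -type f -exec md5sum {} + | sort -k 2 > /tmp/local.md5
ssh backuphost 'cd /backup/data && find . -type f -exec md5sum {} + | sort -k 2' > /tmp/remote.md5
diff /tmp/local.md5 /tmp/remote.md5 | awk '/^</ {print $3}' > /tmp/resend.list
rsync -rt --files-from=/tmp/resend.list /data/ backuphost:/backup/data/    # pass 2: resend mismatches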


Unfortunately I know that such fundamental changes would create a
backlash.  So maybe I wouldn't actually do them if I had the
authority.  But I am pretty sure they are all a good idea.

and of course now we are way beyond the scope of your question and
into the realm of the opinion of someone who has been using rsync as
the low level tool of a backup system for more than a decade and who
regularly helps out on #rsync.


Oh yes indeed, your answers show a lot of experience fighting 
with/against the rsync dragon.


Joachim Otahal
--
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Fwd: Re: cwRsync got killed...

2012-02-16 Thread Joachim Otahal (privat)

AGH!
I posted the wrong checksums: I posted the ones for 
cwRsync_4.2.0_Installer.zip instead of cwRsyncServer_4.2.0_Installer.zip.


cwRsyncServer_4.2.0_Installer.zip:
MD5: c787dfa854775793d1a1b5c3502b57b5
SHA-256: 21e608caed9e5e7e1f1f9881729eab0a8fce6e1ff31d85dcb7759d502478160c

The checksums are still right.
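For checking a downloaded copy against these sums (standard GNU coreutils):

md5sum cwRsyncServer_4.2.0_Installer.zip
sha256sum cwRsyncServer_4.2.0_Installer.zip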

Jou

Joachim Otahal (privat) wrote:

A bit late, but someone (anonymous) provided a download link.
The md5 is correct: it matches the last sourceforge state, and the md5 
and sha256 mentioned at https://www.itefix.no/i2/node/12862 before he 
gave up the project.

MD5 c787dfa854775793d1a1b5c3502b57b5
sha256 5abeec588e937bd749456ddb347e4116b0f8407e15f412281fc64c763d1de62d

For obvious reasons you should check the md5 / sha256 after 
downloading; the file might change without notice : ) .


Content looks good and original, so it is original (with a very high 
probability).


Side question: does anyone know the probability of generating a file 
with the same md5 and sha256 which still looks valid, with the 
expected content (including the manually unpacked nullsoft installer 
inside the zip)?


Jou

-------- Original Message --------
Subject: Re: cwRsync got killed...
Date: Thu, 16 Feb 2012 16:43:29 +0200
From: John Doe <**@gmail.com>
To: Joachim Otahal (privat)



Real name? No, it's not, obviously.
For some reason I couldn't post this on the rsync forum, where I found 
your request. Would you, kindly, consider adding the link there?


On Wed, Feb 15, 2012 at 10:41 PM, Joachim Otahal (privat) <j...@gmx.net> wrote:


   John Doe wrote:

   I still have the cwRsyncServer_4.2.0_Installer.zip file.
   I've uploaded it here:
   
http://www.2shared.com/file/212w-aDp/cwRsyncServer_420_Installer.html



   Thanks.
   Is this your real name?

   kind regards,

   Joachim Otahal




--
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Fwd: Re: cwRsync got killed...

2012-02-16 Thread Joachim Otahal (privat)

A bit late, but someone (anonymous) provided a download link.
The md5 is correct: it matches the last sourceforge state, and the md5 
and sha256 mentioned at https://www.itefix.no/i2/node/12862 before he 
gave up the project.

MD5 c787dfa854775793d1a1b5c3502b57b5
sha256 5abeec588e937bd749456ddb347e4116b0f8407e15f412281fc64c763d1de62d

For obvious reasons you should check the md5 / sha256 after downloading; 
the file might change without notice : ) .


Content looks good and original, so it is original (with a very high 
probability).


Side question: does anyone know the probability of generating a file 
with the same md5 and sha256 which still looks valid, with the 
expected content (including the manually unpacked nullsoft installer 
inside the zip)?


Jou

-------- Original Message --------
Subject: Re: cwRsync got killed...
Date: Thu, 16 Feb 2012 16:43:29 +0200
From: John Doe <**@gmail.com>
To: Joachim Otahal (privat)



Real name? No, it's not, obviously.
For some reason I couldn't post this on the rsync forum, where I found 
your request. Would you, kindly, consider adding the link there?


On Wed, Feb 15, 2012 at 10:41 PM, Joachim Otahal (privat) <j...@gmx.net> wrote:


   John Doe wrote:

   I still have the cwRsyncServer_4.2.0_Installer.zip file.
   I've uploaded it here:
   http://www.2shared.com/file/212w-aDp/cwRsyncServer_420_Installer.html


   Thanks.
   Is this your real name?

   kind regards,

   Joachim Otahal


--
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Static server side listing

2011-12-22 Thread Joachim Otahal (privat)

Joachim Otahal (privat) wrote:
When it is OK to let the users have a 24h-old filelist, is it at the 
same time OK if the user gets only up-to-24h-old files?

Whoops, I _hope_ you know I meant "gets the files up to 24h late".
--
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Static server side listing

2011-12-22 Thread Joachim Otahal (privat)

Mark Constable wrote:

I've been looking for a solution for this and no amount of googling has
come up with anything.

Is it possible to provide a static listing on a server, say every
24 hours, that a standard end-user rsync can pull and use?

I have a lot of files to provide and the idea of every request
dynamically providing a file list in real time is killing my
server and is simply not needed. I am quite prepared to swap in
(atomically) an alternate file tree every 24 hours as long as I
can also provide a static file list. I know the files will not
change for 24 hours and could easily handle a 10Kb to 100Kb static
list being downloaded, plus the actual delta downloads, but adding 
100/sec listings of 20,000 files is a killer.


How many users are requesting, how often, and does the server do more than 
provide files?
Taking 100 seconds to do a 20,000-file list looks slow to me; it sounds 
more like time to upgrade RAM and/or HDD.
If it is OK to let the users have a 24h-old filelist, is it at the same 
time OK if the users get the files up to 24h late? If no, then it can't be 
helped; if yes, another solution would be to set up a separate server for 
that high number of rsync requests which holds a once-per-day or 
once-per-hour rsynced copy of the main files. For that separate dedicated 
server a pre-generated list would be unneeded, since the OS would cache 
the never-changing files.
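A sketch of that separate-mirror idea as a cron entry on the dedicated box 
(hostname, module name and paths are made up):

0 * * * * rsync -a --delete mainserver::pub/ /srv/mirror/pub/

End users would then point their rsync at the mirror instead of the main 
server.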

--
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: sync prob with big files

2011-12-09 Thread Joachim Otahal (privat)

fuzzy_4711 wrote:

Hi list.

rsync --omit-dir-times --size-only -avzAX \
--bwlimit=$KILOBYTES_PER_SECOND --devices --specials --numeric-ids \
--delete $BACULA_BACKUP_DIR $MOUNT_DIR/$SYNC_DIR


I miss -c (or --checksum) there.
With --size-only you never know whether the content changed without the 
filesize changing, or whether the time is correct.
I am much too paranoid to do it without -c. I'd recommend using it at 
least on weekends when no one is working, to make sure the data is right, 
since calculating the checksum causes a lot of disk access on such a huge 
file.


Jou
--
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


cwRsync got killed...

2011-11-25 Thread Joachim Otahal (privat)

Last cwrsync was 4.1.0, current is 4.2.0.

It was available on sourceforge.

Itefix.no decided "we want money for coding and support" - that in itself 
is not wrong. Though _I_ never needed any rsync help in either linux or 
windows (including mixed) scenarios.
But they killed their sourceforge downloads - all of them, including past 
versions of cwrsync, including the source.

The 4.2.0 client is still available on www.chip.de, but the server has 
no alternative download. We still have the filesize and the MD5 sum from 
sourceforge.


I'd vote for updating the download page 
http://rsync.samba.org/download.html.


Apart from that: has anybody got lucky finding 
cwRsyncServer_4.2.0_Installer.zip, 3961867 bytes, MD5 
c787dfa854775793d1a1b5c3502b57b5?


Jou
--
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html