Re: Rsync Runs Out of Space Because of Temp File

2009-04-09 Thread Fabian Cenedese
At 10:19 08.04.2009 -0700, philvh wrote:

>There are still problems to work out, for example where data in the
>destination has moved; that data needs to be moved first, before the
>transfer takes place. This ensures that no data is lost and that only
>the same amount of space as the source is needed. This is probably
>similar to defragmenting a drive with limited free space.

You didn't say what kind of virtual machine it is, but with VMware you can
convert the disk to be split into 2GB chunks instead of one big file. Then
rsync won't have any problems, as it syncs each file separately and 10GB
is enough to create a new 2GB file. Of course, you first need another
50GB for the conversion...

bye  Fabi

-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: order transfers by file size

2009-04-09 Thread Felipe Alvarez
From: "Yan Seiner" 
To: "Victoria Muntean" 
Date: Wed, 8 Apr 2009 08:32:38 -0700 (PDT)
Subject: Re: order transfers by file size

On Wed, April 8, 2009 8:19 am, Victoria Muntean wrote:
> Is it possible to have rsync order transfers by file size (smallest
> files first) ?

> Ooooh, I like that.  I have a client that has a bad habit of creating  a
> 5GB zipfile, that, of course, fails to rsync across 3,000 miles.  Since
> it's a zip file, rsync can't diff the old and new versions; it ends up
> trying to send the whole thing and the connection just isn't reliable
> enough.  It would be nice to be able to transfer everything else first.

> As long as we're on that topic, a size limit on file size to be
> transferred would be nice.
If you are on Unix or GNU, you can try the "split" command to break up
your 5GB file into 500MB chunks, then reassemble them with "cat" once the
transfers are complete.
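A minimal sketch of that split/transfer/reassemble approach (filenames are examples; in practice you would use -b 500m, 256k here just keeps the demo small):

```shell
# Create a sample "big" file to stand in for the 5GB zipfile.
dd if=/dev/urandom of=bigfile.zip bs=1024 count=1024 2>/dev/null

# Break the file into fixed-size chunks: bigfile.zip.part_aa, _ab, ...
split -b 256k bigfile.zip bigfile.zip.part_

# Each chunk can now be rsynced on its own, e.g.:
#   rsync -av --partial bigfile.zip.part_* remotehost:/dest/

# On the receiving side, reassemble (the shell glob sorts the suffixes)
# and verify the result matches the original.
cat bigfile.zip.part_* > reassembled.zip
cmp bigfile.zip reassembled.zip
```

A failed transfer then costs at most one chunk rather than the whole file.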

Felipe


Help creating incremental backups using --backup-dir.

2009-04-09 Thread David Miller
Normally I would use the --link-dest option to do this, but I can't
since I'm rsyncing from a Mac to a Samba share on a Linux box and hard
links don't work. What I want to do is create a 10-day rotating
incremental backup. I used the first script example on the rsync
examples page as a template. The only things I changed were the
destination (to be a local directory) and the paths for the other
variables. When I run the script, nothing gets copied into the
directories named by the day of the week. Each day when the script runs,
the directory with the name of the current weekday is created, but
everything just goes into current and stays there. Can someone post an
example that does work for what I'm trying to do? Below is the script
I'm using.



#---
# directory to backup
BDIR=$HOME/Documents

BACKUPDIR=`date +%A`
OPTS=" -aX --force --progress --ignore-errors --delete --backup --backup-dir=/$BACKUPDIR"


# the following line clears last week's incremental directory
[ -d $HOME/emptydir ] || mkdir $HOME/emptydir
/usr/local/bin/rsync3.0.5 --delete -a $HOME/emptydir/ /Volumes/SAMBA/$BACKUPDIR/

rmdir $HOME/emptydir

# now the actual transfer
/usr/local/bin/rsync3.0.5 $OPTS $BDIR /Users/Shared/current
#---

Thanks.
David.


Re: order transfers by file size

2009-04-09 Thread Wayne Davison
On Wed, Apr 08, 2009 at 06:19:52PM +0300, Victoria Muntean wrote:
> Is it possible to have rsync order transfers by file size (smallest
> files first) ?

The only thing that is currently possible is to combine --min-size and
--max-size to do multiple transfers with different ranges of file sizes.

> Would it be a big patch ?

As long as you're willing to disable the incremental file-list feature,
someone could write a modified generator process that sorts the files
into a custom processing order (with all dirs sorted early in the list)
without affecting the numbering of the files (which can be done in a
similar way to the iconv support). An easier approach would be to have
the code internally perform multiple passes over the file-list and
process/ignore files by size range. The latter might be pretty simple,
using the existing min/max size options and just adding an extra loop to
the generator code (as well as disabling incremental recursion). This is
not something I plan to write, though.

..wayne..


DO NOT REPLY [Bug 5695] improve keep-alive code to handle long-running directory scans

2009-04-09 Thread samba-bugs
https://bugzilla.samba.org/show_bug.cgi?id=5695





--- Comment #4 from wwen...@sbcglobal.net  2009-04-09 17:49 CST ---
I echo the "thanks for all your hard work making rsync available to the masses".

I'd just like to add to this defect that I have a system where disk writes can
be very slow when my client is heavily loaded. In my system, rsync gets the
lowest priority to use disks on the client. Some very large files in my
directories can take a long, long time to create (longer than the 10-minute
timeout I need to use on the client). I am not using --copy-files, but I am
getting a local copy because my very large files are often mostly unchanged
(just a byte in a gigantic file) or only a timestamp change. So the receiver
is creating the temp file from an existing file which is usually 100% or
99.99% the same. I am stuck in the file-creation loop in receiver.c/fileio.c
for more than 10 minutes because write() is slow, but it does make slow
progress.

Just for fun, I simulated my problem on a fast Linux client by hacking in a
msleep(20*1000) in the flush_write_file() loop on the client rsync so that each
local file "copy update" takes much longer than the client timeout.  The server
times out on the client in this test scenario too.

I would love it if someday you could make a keep alive, or equivalent, for file
creation and not just for directory listing.  I'll do a workaround by making a
huge timeout on the server /etc/rsyncd.conf so that it does not timeout on the
client.

By the way, here is my rsync command line; nothing fancy here except the
timeout. Yes, I'm using a slow FAT drive (I have no choice here):
rsync --modify-window=3602 -ptO -L --delete-during -v -ii --progress --port=873 -z -r --bwlimit=0 --timeout=600 "myserver::rtdata/mydir" "destdir"
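The server-side workaround described above is a per-module `timeout` setting in the daemon's /etc/rsyncd.conf. A sketch (module name and path are examples; per the rsyncd.conf documentation, a value of 0 means the daemon imposes no timeout):

```
[rtdata]
    path = /srv/rtdata
    # 0 = no daemon-imposed timeout; alternatively use a very large value
    timeout = 0
```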




Re: Help creating incremental backups using --backup-dir.

2009-04-09 Thread David Miller


Ok, I figured out the problem. I had to put in the full path for the
--backup-dir option. However, I have run into another problem that makes
doing this just about useless. If I rsync to an HFS+ volume it works
correctly. If I rsync to a Samba share it gives me errors and puts files
it thinks have been modified at the time of sync into the --backup-dir
directory. It is also going through and deleting all the ._ files. The
errors I'm seeing look like this:

rsync: get_xattr_names: llistxattr("Documents/web server diagram/web.graffle/._image2.jpg",1024) failed: Operation not permitted (1)

deleting Documents/web server diagram/web.graffle/._image2.jpg

I have checked the Samba server, and the files are being set with the
correct owner, group, and permissions.

Are there any filesystems under Linux that allow proper storage of the
Mac metadata? I have tried XFS, ext3 and ext4 with no luck. I even tried
creating a sparse disk image and mounting it from a Samba share, but
that is too unreliable. If there is a connection loss while data is
being written to the image, it corrupts the image more often than not.


David.




Re: Problem with extended ACLs in 3.0.4?

2009-04-09 Thread Wayne Davison
On Sun, Nov 02, 2008 at 07:18:39PM +, Andrew Gideon wrote:
> The previous copy of the file has the correct/complete ACL, and the
> link-dest logic sees this as different from the "new" copy result so
> a new copy of the file - with the wrong ACL - is written.

Rsync was of the belief that a mask was only needed if an ACL had named
values; otherwise, it tried to simplify the ACL to mask off the group
mode and dropped the mask. I've checked in a change that makes it keep
whatever mask value is specified, so the ACLs should be identical now.
This will get released in 3.0.6.
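For context, this is the kind of ACL the fix affects: one with a mask entry but no named users or groups. A generic POSIX-ACL illustration (not output from the reporter's system):

```
# getfacl somefile -- base entries plus a mask, no named entries.
# Rsync previously dropped this mask when replicating the ACL.
user::rw-
group::r--
mask::rw-
other::r--
```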

..wayne..