[BackupPC-users] RsyncP problem

2009-10-22 Thread Harald Amtmann
My problem is still that RsyncP with rsyncd as the client retransmits
unchanged files. I reduced the test case:

1) Full backup. All files are transmitted. This is the log output from the
client:

2009/10/22 21:35:44 [3820] connect from UNKNOWN (192.168.5.9)
2009/10/22 21:35:55 [3820] rsync on . from bag...@unknown (192.168.5.9)
2009/10/22 21:35:56 [3820] send unknown [192.168.5.9] docsnsettings (baggub) .musikproject/musikCube_u.ini 1913
2009/10/22 21:35:57 [3820] send unknown [192.168.5.9] docsnsettings (baggub) .musikproject/musik_collected_u.db 157696
2009/10/22 21:39:32 [3820] send unknown [192.168.5.9] docsnsettings (baggub) .musikproject/musik_u.db 28868608
2009/10/22 21:39:32 [3820] sent 28836048 bytes  received 61235 bytes  total size 29028217



Re: [BackupPC-users] RsyncP problem

2009-12-07 Thread Harald Amtmann
BackupPC server. This will keep on happening until after you have made a 
full back-up of the files in the new location. 
"


 Original Message 
> Date: Thu, 22 Oct 2009 22:31:32 +0200
> From: "Harald Amtmann" 
> To: backuppc-users@lists.sourceforge.net
> Subject: [BackupPC-users] RsyncP problem

> My problem is still that RsyncP with rsyncd as the client retransmits
> unchanged files. I reduced the test case:
> 
> 1) Full backup. All files are transmitted. This is the log output from the
> client:
> 
> 2009/10/22 21:35:44 [3820] connect from UNKNOWN (192.168.5.9)
> 2009/10/22 21:35:55 [3820] rsync on . from bag...@unknown (192.168.5.9)
> 2009/10/22 21:35:56 [3820] send unknown [192.168.5.9] docsnsettings (baggub) .musikproject/musikCube_u.ini 1913
> 2009/10/22 21:35:57 [3820] send unknown [192.168.5.9] docsnsettings (baggub) .musikproject/musik_collected_u.db 157696
> 2009/10/22 21:39:32 [3820] send unknown [192.168.5.9] docsnsettings (baggub) .musikproject/musik_u.db 28868608
> 2009/10/22 21:39:32 [3820] sent 28836048 bytes  received 61235 bytes  total size 29028217
> 
> As you can see, roughly 30 MB are transmitted.
> 
> 2) Incremental backup:
> 
> 2009/10/22 21:40:46 [3940] 192.168.5.9 is not a known address for
> "localhost": spoofed address?
> 2009/10/22 21:40:46 [3940] connect from UNKNOWN (192.168.5.9)
> 2009/10/22 21:40:57 [3940] rsync on . from bag...@unknown (192.168.5.9)
> 2009/10/22 21:40:57 [3940] sent 212 bytes  received 674 bytes  total size
> 29028217
> 
> Almost nothing is transmitted, as the client only checks the timestamps.
> 
> 3) Another full backup: This looks exactly like the output of 1). All data
> is sent over the wire again. The rsync summary states that about 30 MB are
> transmitted.
> 
> 4) Experiment:
> 
> For testing, I added "--checksum" to $Conf{RsyncArgs}. I then reran a full
> backup:
> 
> 2009/10/22 21:55:09 [2172] rsync on . from bag...@unknown (192.168.5.9)
> 2009/10/22 21:55:10 [2172] send unknown [192.168.5.9] docsnsettings (baggub) .musikproject/musikCube_u.ini 1913
> 2009/10/22 21:55:11 [2172] send unknown [192.168.5.9] docsnsettings (baggub) .musikproject/musik_collected_u.db 157696
> 2009/10/22 21:55:11 [2172] sent 158068 bytes  received 762 bytes  total size 29028217
> 
> Interestingly, this time only the two small files get retransmitted; the
> big one is left out.
> 
> I then restored my configuration to include the complete client PC,
> keeping the --checksum parameter. Sadly, now all I get is fileListReceived
> errors on the server, so this didn't help either.
> 
> And for the record, I tried both rsync 2.6.8 and 3.0.4 on the client.
> 
> Craig, is this expected behaviour? Why does the full backup retransmit
> everything every time?
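
For reference, the change tried in step 4 would look roughly like this in a per-host config file (a sketch only; the surrounding arguments are illustrative BackupPC defaults rather than a verified copy of this host's settings):

$Conf{RsyncArgs} = [
    # Standard arguments BackupPC passes to the client rsync/rsyncd
    # (illustrative list).
    '--numeric-ids',
    '--perms',
    '--owner',
    '--group',
    '-D',
    '--links',
    '--times',
    '--block-size=2048',
    '--recursive',
    # The experimental addition from step 4: compare files by checksum
    # instead of relying on size and mtime alone.
    '--checksum',
];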
 



Re: [BackupPC-users] RsyncP problem

2009-12-07 Thread Les Mikesell
Harald Amtmann wrote:
> So, for anyone who cares (doesn't seem to be anyone on this list who 
> noticed), I found this post from 2006 stating and analyzing my exact problem:
> 
> http://www.topology.org/linux/backuppc.html
> On this site, search for "Design flaw: Avoidable re-transmission of massive 
> amounts of data."

It's documented behavior, so not a surprise.

>5. Now I make a second incremental back-up of home and home1. Since I have 
> already backed up these two modules, I expect them both to be very quick. But 
> this does not happen. In fact, all of home1 is sent in full over the LAN, 
> which in my case takes about 10 hours. This is a real nuisance. This problem 
> occurs even if I have this in the config.pl file on server1:
>   $Conf{IncrFill} = 1;

You have the wrong expectations. Do you have a reasonably current 
version, and did you read the section on $Conf{IncrLevels} in 
http://backuppc.sourceforge.net/faq/BackupPC.html?  You can also just do 
full runs instead of incrementals - they take a long time as the target 
has to read the files to verify the block checksums, but not a lot of 
bandwidth.
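
For anyone reading this in the archive, the setting Les refers to is set in config.pl roughly like this (an illustration only; the particular levels are an example, not a recommendation):

# Example only: enable multi-level incrementals. An incremental of level N
# is taken against the most recent backup of a lower level (a full is
# level 0), so later incrementals stay small between fulls.
$Conf{IncrLevels} = [1, 2, 3];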

> The cure for this design flaw is very easy indeed, and it would save me 
> several days of saturated LAN bandwidth when I make back-ups. It's very sad 
> that the authors did not design the software correctly. Here is how the 
> software design flaw can be fixed.
> 
>1. When an rsync file-system module module1 is to be transmitted from 
> client1 to server1, first transmit the hash (e.g. MD5) of each file from 
> client1 to server1. This can be done (a) on a file by file basis, (b) for all 
> the files in module1 at the same time, or (c) in bundles of say, a few 
> hundred or thousand hashes at a time.

The rsync binary on the target isn't going to do that.

>2. The BackupPC server server1 matches the received file hashes with the 
> global hash table of all files on server1, both full back-up files and 
> incremental back-up files.

Aside from not matching rsync, the file hashes have expected collisions 
that can only be resolved by a full data comparison.  And there's no 
reason to expect all of the files in the pool to have been collected 
with an rsync transfer method.

-- 
   Les Mikesell
lesmikes...@gmail.com




Re: [BackupPC-users] RsyncP problem

2009-12-07 Thread Harald Amtmann

 Original Message 
> Date: Mon, 07 Dec 2009 13:08:52 -0600
> From: Les Mikesell 
> To: "General list for user discussion, questions and support" 
> 
> Subject: Re: [BackupPC-users] RsyncP problem

> Harald Amtmann wrote:
> > So, for anyone who cares (doesn't seem to be anyone on this list who
> noticed), I found this post from 2006 stating and analyzing my exact problem:
> > 
> > http://www.topology.org/linux/backuppc.html
> > On this site, search for "Design flaw: Avoidable re-transmission of
> massive amounts of data."
> 
> It's documented behavior, so not a surprise.

"With the rsync transfer method the partial backup is used to resume the next 
full backup, avoiding the need to retransfer the file data already in the 
partial backup."

This is also from the docs and doesn't work. I have 40 GB of data and do a
first full backup. It gets interrupted. I start it again and all the data is
retransmitted. Does the "rsync transfer method" not include the rsyncd method,
which I am using?




Re: [BackupPC-users] RsyncP problem

2009-12-07 Thread Les Mikesell
Harald Amtmann wrote:
>  Original Message 
>> Date: Mon, 07 Dec 2009 13:08:52 -0600
>> From: Les Mikesell 
>> To: "General list for user discussion, questions and support" 
>> 
>> Subject: Re: [BackupPC-users] RsyncP problem
> 
>> Harald Amtmann wrote:
>>> So, for anyone who cares (doesn't seem to be anyone on this list who
>> noticed), I found this post from 2006 stating and analyzing my exact problem:
>>> http://www.topology.org/linux/backuppc.html
>>> On this site, search for "Design flaw: Avoidable re-transmission of
>> massive amounts of data."
>>
>> It's documented behavior, so not a surprise.
> 
> "With the rsync transfer method the partial backup is used to resume the next 
> full backup, avoiding the need to retransfer the file data already in the 
> partial backup."
> 
> This is also from the docs and doesn't work. I have 40 GB of data and do a 
> first full backup. It gets interrupted. I start it again and all data is 
> retransmitted. Does the "rsync transfer method" not include rsyncd method 
> which I am using?

It applies to full rsync or rsyncd backups.  An interrupted full should 
be marked as a 'partial' in your backup summary - and the subsequent 
full retry should not transfer the completed files again although it 
will take the time to do a block checksum compare over them.  I don't 
think it applies to incomplete files, so if you have one huge file that 
didn't finish I think it would retry from the start.   This and 
Conf{IncrLevels} are fairly recent additions - be sure you have a 
current backuppc version and the code and documentation match.   Even 
the current version won't find new or moved content if it exists in the 
pool, though.
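
A related knob, going from memory of the 3.x config.pl and therefore only a sketch:

# Assumption: BackupPC 3.x. An interrupted full is saved as a 'partial'
# and reused to resume the next full only if it is no older than this
# many days; a negative value disables saving partials altogether.
$Conf{PartialAgeMax} = 3;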

-- 
   Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] RsyncP problem

2009-12-07 Thread Harald Amtmann

> Conf{IncrLevels} are fairly recent additions - be sure you have a 
> current backuppc version and the code and documentation match.   Even 
> the current version won't find new or moved content if it exists in the 
> pool, though.

Are you referring to 3.2.0 beta 1 or to 3.1.0 as the recent version? I am using
3.1.0 from Debian.





Re: [BackupPC-users] RsyncP problem

2009-12-07 Thread Les Mikesell
Harald Amtmann wrote:
>> Conf{IncrLevels} are fairly recent additions - be sure you have a 
>> current backuppc version and the code and documentation match.   Even 
>> the current version won't find new or moved content if it exists in the 
>> pool, though.
> 
> Are you referring to 3.2.0 beta 1 or 3.1.0 as recent version? I am using 
> 3.1.0 from Debian.
> 
> 
> 
From the changelog here 
http://sourceforge.net/project/shownotes.php?release_id=673692 I'd say 
the features should be in 3.1.0 but there could have been bugs with 
subsequent fixes.

-- 
   Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] RsyncP problem

2009-12-09 Thread Jeffrey J. Kosowsky
Les Mikesell wrote at about 14:11:12 -0600 on Monday, December 7, 2009:
 > It applies to full rsync or rsyncd backups.  An interrupted full should 
 > be marked as a 'partial' in your backup summary - and the subsequent 
 > full retry should not transfer the completed files again although it 
 > will take the time to do a block checksum compare over them.  I don't 
 > think it applies to incomplete files, so if you have one huge file that 
 > didn't finish I think it would retry from the start.   This and 
 > Conf{IncrLevels} are fairly recent additions - be sure you have a 
 > current backuppc version and the code and documentation match.   Even 
 > the current version won't find new or moved content if it exists in the 
 > pool, though.

Is there any reason the rsync option --partial couldn't be implemented
in perl-File-RsyncP (if not already there)? This would presumably
allow partial backups of single files to be resumed. I'm not sure how hard
it would be, but intuitively I wouldn't think it would be too hard.

This could be important when backing up large files (e.g. video,
databases, ISOs), in particular over a slow link.









Re: [BackupPC-users] RsyncP problem

2009-12-14 Thread Jeffrey J. Kosowsky
Harald Amtmann wrote at about 19:29:07 +0100 on Monday, December 7, 2009:
 > So, for anyone who cares (doesn't seem to be anyone on this list who 
 > noticed), I found this post from 2006 stating and analyzing my exact problem:

You are assuming something that is not true...

 > 
 > http://www.topology.org/linux/backuppc.html
 > On this site, search for "Design flaw: Avoidable re-transmission of massive 
 > amounts of data."
 > 
 > 
 > For future reference and archiving, I quote here in full:
 > 
 > "2006-6-7:
 > During the last week while using BackupPC in earnest, I have
 > noticed a very serious design flaw which is totally avoidable by
 > making a small change to the software. First I will describe the
 > flaw with an example.
 
 details snipped

> 
 > The design flaw here is crystal clear. Consider a single file
 > home1/xyz.txt. The authors have designed the BackupPC system so that
 > the file home1/xyz.txt is sent in full from client1 to server1
 > unless 
 > 
 details snipped
 > 
 > The cure for this design flaw is very easy indeed, and it would
 > save me several days of saturated LAN bandwidth when I make
 > back-ups. It's very sad that the authors did not design the
 > software correctly. Here is how the software design flaw can be
 > fixed. 

This is an open source project -- rather than repeatedly talking
about "serious design flaws" in a very workable piece of software (to
which I believe you have contributed nothing), and instead of talking
about how "sad" it is that the authors didn't correct it, why don't
you stop complaining and code a better version?

I'm sure that if you produce a demonstrably better version and test it
under a range of use cases to validate its robustness, people
would be more than happy to use your fix for this "serious" design flaw.

And you win a bigger bonus if you do this all using tar or rsync,
without the requirement for any client software or any other remotely
executed commands...

 > The above design concept would make BackupPC much more efficient
 > even under normal circumstances where the variable
 > $Conf{RsyncShareName} is unchanging. At present, rsyncd will only
 > refrain from sending a file if it is present in the same path in
 > the same module in a previous full back-up. If server1 already has
 > the same identical file in any other location, the file is sent by
 > rsyncd and then discarded after it arrives.

It sounds like you know what you want to do so start coding and stop
complaining...

 > If the above serious design flaw is not fixed, it will not do much
 > harm to people whose files are rarely changing and rarely
 > moving. But if, for example, you move a directory tree from one
 > place to another, BackupPC will re-send the whole lot across the
 > LAN, and then it will discard the files when they arrive on the
 > BackupPC server. This will keep on happening until after you have
 > made a full back-up of the files in the new location.  "

No one is stopping you from fixing this "serious design flaw" which
obviously is not keeping the bulk of us users up at night worrying.

And for the record, I don't necessarily disagree with you that there
are things that can be improved, but your attitude is going to get you
less than nowhere. Also, the coders are hardly stupid, and there are
good reasons for the various tradeoffs they have made that you would
be wise to try to understand before disparaging them and their
software.



Re: [BackupPC-users] RsyncP problem

2009-12-14 Thread Harald Amtmann
 
> And for the record, I don't necessarily disagree with you that there
> are things that can be improved but your attitude is going to get you
> less than nowhere. Also, the coders are hardly stupid and there are
> good reasons for the various tradeoffs they have made that you would
> be wise in trying to understand before disparaging them and their
> software.

Hi, I didn't want to sound rude. This was my 6th mail regarding this problem (5
to this list, 1 personally to Craig), I think. In the first 5 mails I was
reporting my observations and asking whether what I am seeing is expected
behaviour or an error on my part, each mail providing more detail as I was
trying to find the source of the problem. In my personal mail to Craig I raised
the same question and asked for pointers as to where in RsyncP the problem
might be, so that I could start working on a fix (if possible). Not a single
one of the mails got a reply, so I kept looking for an answer myself, both in
Google and in the source code. This last mail was just me being glad to have
found out that this is indeed expected behaviour, that I can stop looking for
problems in my setup, and to leave a record for any future users who observe
this behaviour.

Regards
Harald







[BackupPC-users] rsyncp problem --prune-empty-dirs

2008-02-24 Thread jm rouet

Hi all,

This message is an answer to a quite old topic: see below.
The main concern is that the current version of File::RsyncP does not
support the "--prune-empty-dirs" option passed to an rsyncd server.
However, this option is quite useful when one wants to limit a backup to a
specific set of file types, because it makes the rsyncd daemon forget
about all the directories that don't contain any of the required files
and would therefore be empty.


As far as BackupPC is concerned, I found a way to avoid backing up
those empty directories. It is based on a DumpPostShareCmd script.
Of course, a patch to File::RsyncP to support the --prune-empty-dirs
option would be better, because it would reduce the communication
overhead between the client and the daemon, but I didn't dare work on
that!


I'm not sure that my script is the best way of avoiding those empty
dirs, and moreover I cannot guarantee that there are no side effects, but I
post it here so that you can give me some feedback. Basically, this
script walks the temporary "TopDir/pc/$host/new/f$share" directory and
simply deletes each directory that is empty or contains only the single
"attrib" file.


This script should be called in the DumpPostShareCmd as 
"/path/to/RemoveEmptyDirs.pl $host $share"


#!/usr/bin/perl -w
#
# RemoveEmptyDirs.pl <host> <share>
#
# Called from DumpPostShareCmd: walks the in-progress dump tree and removes
# every directory that is empty or contains nothing but the "attrib" file,
# so that directories pruned by the include/exclude rules don't end up in
# the backup.
use strict;
use File::Find;

my $base = "/home/backuppc/data/pc";   # <-- adapt to your local settings

my $host  = shift @ARGV;
my $share = shift @ARGV;

my $dir = "$base/$host/new/f$share";

if (-d $dir) {
    finddepth({
        no_chdir => 1,
        wanted   => sub {
            my $d = $_;
            return unless -d $d;

            # readdir includes '.' and '..', so 2 entries means the
            # directory is empty and 3 means it holds a single file.
            opendir my $dh, $d or return;
            my @list = readdir $dh;
            closedir $dh;

            return if @list > 3;
            unlink "$d/attrib" if @list == 3 && -f "$d/attrib";

            # rmdir only succeeds if the directory is now empty.
            rmdir $d;
        },
    }, $dir);
}

Hope this little trick will help other users like me.
Regards,
Jean-Michel.




[BackupPC-users] rsyncp problem --prune-empty-dirs

2007-02-15 Thread Bernhard Ott
Hi,
there seems to be a problem using the rsync --prune-empty-dirs
(-m) option with BackupPC (see log file/config below).
The rsync command/options work with all clients invoked via shell (and,
of course, without the -m option), but not via RsyncP. It seems like rsync
"reads" all the directories and filters them afterwards, so it might be
a timeout issue?
Or am I missing something?

Regards,
Bernhard


### log
Connected to 192.168.x.x:873, remote version 29
Negotiated protocol version 26
Connected to module Ddrive
Sending args: --server --sender --numeric-ids --perms --owner --group -D 
--links --times --block-size=2048 --recursive --prune-empty-dirs -D 
--ignore-times . .
Read EOF:
Tried again: got 0 bytes
Done: 0 files, 0 bytes
Got fatal error during xfer (Unable to read 4 bytes)
Backup aborted (Unable to read 4 bytes)



### Rsync Args of host.pl
$Conf{RsyncArgs} = [
  '--numeric-ids',
  '--perms',
  '--owner',
  '--group',
  '--devices',
  '--links',
  '--times',
  '--block-size=2048',
  '--recursive',
  '--prune-empty-dirs',
  '--checksum-seed=32761',
  # Add additional arguments here
  #
  '-D',

  '--include', '**/',
  '--include', '**/[mM][iI][tT]_[aA][lL][lL][eE][sS]/*',
  '--exclude', '*',
];



Re: [BackupPC-users] rsyncp problem --prune-empty-dirs

2007-02-19 Thread Holger Parplies
Hi,

Bernhard Ott wrote on 16.02.2007 at 01:21:56 [[BackupPC-users] rsyncp problem 
--prune-empty-dirs]:
> there seems to be a problem using the rsync --prune-empty-dirs
> (-m) option with backuppc (see log-file/config).

presuming I had a new enough version of rsync for the man page to include an
explanation of what '--prune-empty-dirs' does, I'd probably be asking, why
you would want to use that.

Generally speaking, you can't just add any option your client side rsync
might support. Some options might work, some might be silently ignored,
others will break things. Is '--prune-empty-dirs' a request to the server
side rsync process (modifying the file list) or to the client side
(File::RsyncP in this case), or does it even affect the protocol exchange
between both? File::RsyncP is known not to support all rsync options, much
less recent extensions.

> ### log
> Connected to 192.168.x.x:873, remote version 29
> Negotiated protocol version 26

This seems to indicate you are not running the latest version of
File::RsyncP. Which version are you running?

> Sending args: --server --sender --numeric-ids --perms --owner --group -D 
> --links --times --block-size=2048 --recursive --prune-empty-dirs -D 
> --ignore-times . .

This does not seem to agree with your config file.

> ### Rsync Args of host.pl
> $Conf{RsyncArgs} = [
>  '--numeric-ids',
>  '--perms',
>  '--owner',
>  '--group',
>  '--devices',
^
>  '--links',
>  '--times',
>  '--block-size=2048',
>  '--recursive',
>   '--prune-empty-dirs',
>  '--checksum-seed=32761',
^
>  # Add additional arguments here
>  #
>  '-D',
> 
> '--include', '**/',
> '--include', '**/[mM][iI][tT]_[aA][lL][lL][eE][sS]/*',
> '--exclude', '*',
> 
> ];

Are you sure your --include and --exclude options are compatible with what
BackupPC generates? Are '--include=**/' and '--prune-empty-dirs' compatible?

Regards,
Holger



Re: [BackupPC-users] rsyncp problem --prune-empty-dirs

2007-02-25 Thread Bernhard Ott
Holger Parplies wrote:
> presuming I had a new enough version of rsync for the man page to include an
> explanation of what '--prune-empty-dirs' does, I'd probably be asking, why
> you would want to use that.
It's the only way (as far as I understood the rsync man page) to include
a directory recursively. The downside is that the whole tree is
included (containing only directories but no files), which makes it
difficult to find the backup files for recovery.
I tried $Conf{BackupFilesOnly} first, but that didn't work.
> 
> Generally speaking, you can't just add any option your client side rsync
> might support. Some options might work, some might be silently ignored,
> others will break things. Is '--prune-empty-dirs' a request to the server
> side rsync process (modifying the file list) or to the client side
> (File::RsyncP in this case), or does it even affect the protocol exchange
> between both? File::RsyncP is known not to support all rsync options, much
> less recent extensions.
I was afraid to hear that;-)

> This seems to indicate you are not running the latest version of
> File::RsyncP. Which version are you running?
Debian says 0.64-1, backuppc is 2.1.2pl1

> 
>> Sending args: --server --sender --numeric-ids --perms --owner --group -D 
>> --links --times --block-size=2048 --recursive --prune-empty-dirs -D 
>> --ignore-times . .

> 
> This does not seem to agree with your config file.
you're right - I have to check that ...
> 
> 
> Are you sure your --include and --exclude options are compatible with what
> BackupPC generates? Are '--include=**/' and '--prune-empty-dirs' compatible?
The syntax is from the man page and, as mentioned above, it is the only way to
solve my "problem" of including a specific pattern (directory) wherever it
shows up in the tree.
rsync -avm [EMAIL PROTECTED]::share --include=*/ --include=MIT_ALLES/* 
--exclude=* works as expected.
The main problem for me was to find out how BackupPC and the different
transfer methods deal with $Conf{BackupFilesOnly} values: I still have
to work on that next week ;-) Unfortunately I deleted the complete
pc directory (including the log files), so I have to set up a new host.pl.
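
For reference, the per-share form of that setting looks roughly like this (a sketch only; 'Ddrive' and the directory are examples taken from this thread, and whether such a fixed path can express "this directory wherever it appears" is exactly the open question here):

# Illustration only: restrict the 'Ddrive' share to a single directory.
# $Conf{BackupFilesOnly} may be a hash keyed by share/module name; each
# value is a list of paths relative to that share.
$Conf{BackupFilesOnly} = {
    'Ddrive' => ['/MIT_ALLES'],
};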

Kind regards,
Bernhard

