Re: [BackupPC-users] BackupPC 4.1.1 and NFS shares

2017-04-28 Thread Bedynek, Matthew J.
On Apr 21, 2017, at 7:39 PM, Les Mikesell 
> wrote:

If you think about what rsync is supposed to do, it doesn't make much
sense to run both ends locally accessing data over NFS.

I would think it depends.  I have found many cases in environments where 
running rsync between local mounts is preferable to using cp or a tar pipe.

I would have to look at the rsync implementation more closely, but I suspect 
rsync provides greater flexibility in tuning what it copies.  To be clear, this 
is a very minor concern for my purposes and nothing worth getting in a twist 
over.  The directories I back up tend to only grow between incrementals, so it 
was less a concern than a question.

I did, however, run into another problem: my backups using tar have been 
freezing.  I don't know whether I hit some size limitation, but the same jobs 
configured on version 3 with rsync work fine.

For any file that is not skipped by the timestamp/length check in
incrementals, you are going to read the entire file over NFS so rsync
can compute the differences (where the usual point is to only send the
differences over the network).

I seem to recall there was a change in how files are compared, but I also know 
that incrementals with rsync on V3 ran in a fraction of the time of a full 
backup.  So it must have had some way to skip doing a full file comparison.


Is there any way you can run rsync remotely against the NFS host instead?

I back up from two types of hosts: Lustre and NFS.  I figured that it made far 
more sense to provide a direct mount rather than use rsync over SSH or stand up 
an rsync daemon.  I would also run the risk of having to have hosts

matt
--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC 4.1.1 and NFS shares

2017-04-24 Thread Holger Parplies
Hi,

Les Mikesell wrote on 2017-04-21 18:39:59 -0500 [Re: [BackupPC-users] BackupPC 
4.1.1 and NFS shares]:
> On Fri, Apr 21, 2017 at 5:09 PM, Bedynek, Matthew J. <bedyne...@ornl.gov> 
> wrote:
> > With version 3 I am using Rsync instead of tar to backup a NFS share which
> > the backupPC host has direct access to. [...] with version 4, there have
> > been changes to rsync such that I am forced to use tar for a local copy.

if that is really the case, I would consider it a bug. However, I would
suspect that you could set

$Conf {RsyncSshArgs} = [ '-e', '/usr/bin/sudo -u username -p' ];

(this is an untested hack ... I'm guessing rsync will append a hostname which
the sudo '-p' option will silently swallow) to get the equivalent of your
V3 settings:

> > [...]
> > $Conf{RsyncClientCmd} = 'sudo -u username $rsyncPath $argList+';
> > $Conf{RsyncClientRestoreCmd} = 'sudo -u username $rsyncPath $argList+';

If that doesn't work, you could use a script instead and modify the
arguments in any way you need to.
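Such a script might look like the sketch below. This is an untested hack in the same spirit as the sudo '-p' trick: the install path and the 'username' account are placeholders, and it assumes rsync invokes the transport command as '<command> <hostname> <remote-rsync-args...>', so the wrapper just discards the hostname and runs rsync locally under sudo:

```shell
#!/bin/sh
# Hypothetical wrapper, e.g. installed as /usr/local/bin/rsync-local and
# referenced via: $Conf{RsyncSshArgs} = ['-e', '/usr/local/bin/rsync-local'];
# rsync calls us as:  rsync-local <hostname> rsync --server <args...>
shift                       # drop the hostname rsync appends
exec sudo -u username "$@"  # run the rsync server command locally as that user
```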

> > I believe the RsyncClientCmd and RsyncClientRestoreCmd are gone in V4.

Correct.

> > I did get Rsync to work with V4 but it seems to ssh to localhost which
> > consumes additional host resources.

Yes, and there might be other valid reasons not to want that (e.g. not running
sshd on the host).

> > Rsync isn't a big deal [...]

Well, I would think it is ... as you say yourself ...

> > [...] but am I correct in reading that Rsync might be better for
> > incremental backups in terms of handling deletions?

Yes. It handles them. tar doesn't. Period. And, more important, it handles
files not present in your reference backup (or modified since then) that can't
be caught by comparing timestamps (renamed, moved into the backup set,
extracted from an archive, included by changing in-/excludes, 'touch'ed to a
past date, ...). I wouldn't want to go back to tar any more than use SMB ...
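The 'touch'ed-to-a-past-date case is easy to demonstrate. A quick sketch (the /tmp paths are arbitrary): two files with the same size and mtime but different contents pass a timestamp/length quick-check, yet a content comparison catches the difference.

```shell
#!/bin/sh
# Same length, same mtime, different bytes: invisible to a
# timestamp/length check, visible to a content comparison.
mkdir -p /tmp/bpc_demo
printf 'old contents' > /tmp/bpc_demo/ref
printf 'new contents' > /tmp/bpc_demo/src     # same length, new bytes
touch -r /tmp/bpc_demo/ref /tmp/bpc_demo/src  # copy the mtime across

if [ /tmp/bpc_demo/src -nt /tmp/bpc_demo/ref ]; then
  echo "quick-check: changed"
else
  echo "quick-check: unchanged"   # mtimes are equal, so this prints
fi

if cmp -s /tmp/bpc_demo/src /tmp/bpc_demo/ref; then
  echo "content: identical"
else
  echo "content: differs"         # rsync would still transfer this file
fi
```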

> If you think about what rsync is supposed to do, it doesn't make much
> sense to run both ends locally accessing data over NFS.

I tend to disagree. As far as I have understood Matthew's situation, rsync
is *supposed to* give more exact backups than tar, which it will do just
fine running both ends locally accessing data over NFS. And, I believe,
we're talking about his application of rsync here, not yours.

> For any file that is not skipped by the timestamp/length check in
> incrementals, you are going to read the entire file over NFS so rsync
> can compute the differences (where the usual point is to only send the
> differences over the network).

This is worth pointing out, but, again, there may be reasons to do this.
tar certainly won't do any better - it will also read the complete content
of any file not skipped over NFS. And the 'usual point' of running rsync
locally instead of tar is getting more exact incremental backups, not saving
bandwidth.

> Is there any way you can run rsync remotely against the NFS host instead?

This would save bandwidth, and it would spread some of the load over two
machines, which is either good or bad, depending on whether you want the
extra CPU load on your NFS server or not. During your backup window, this
is likely not an issue, but your mileage may vary. In any case, if you could
spare the bandwidth with BackupPC V3, there is no reason to get overly worried
now. You can try to tune your backup system to better performance or leave it
as it is.

Don't get me wrong - I'm not advising *against* changing to rsync over ssh to
the NFS server. I've been there myself. I've gone from (tar over NFS) ->
(rsync over NFS) -> (rsync over ssh). I'm just saying it doesn't seem to be
*essential*, as you didn't state any problems you had with BackupPC V3.

Regards,
Holger



Re: [BackupPC-users] BackupPC 4.1.1 and NFS shares

2017-04-21 Thread Les Mikesell
On Fri, Apr 21, 2017 at 5:09 PM, Bedynek, Matthew J.  wrote:
> All,
>
> With version 3 I am using Rsync instead of tar to backup a NFS share which 
> the backupPC host has direct access to.  This has worked great with the 
> exception that linking takes forever if there are a large number of files.   
> I simply didn’t have much luck getting version 3 to work with tar and got 
> rsync to work rather easily so stopped there.
>
> In the version 4 change log I noted that there were some improvements to 
> linking so decided to give that a try.  However, with version 4, there have 
> been changes to rsync such that I am forced to use tar for a local copy.
>
>
> My V3 rsync config looked like:
>
> $Conf{ClientNameAlias} = 'localhost';
> $Conf{BackupFilesExclude} = {
>   '*' => [
> '/x/y/SEQ/IPTS-*'
>   ]
> };
> $Conf{RsyncClientCmd} = 'sudo -u username $rsyncPath $argList+';
> $Conf{RsyncClientRestoreCmd} = 'sudo -u username $rsyncPath $argList+';
>
>
> I believe the RsyncClientCmd and RsyncClientRestoreCmd are gone in V4.  I did 
> get Rsync to work with V4 but it seems to ssh to localhost which consumes 
> additional host resources.
>
> Rsync isn’t a big deal since I have tar working now but am I correct in 
> reading that Rsync might be better for incremental backups in terms of 
> handling deletions?
>

If you think about what rsync is supposed to do, it doesn't make much
sense to run both ends locally accessing data over NFS.  For any
file that is not skipped by the timestamp/length check in
incrementals, you are going to read the entire file over NFS so rsync
can compute the differences (where the usual point is to only send the
differences over the network).  Is there any way you can run rsync
remotely against the NFS host instead?

-- 
   Les Mikesell
 lesmikes...@gmail.com



[BackupPC-users] BackupPC 4.1.1 and NFS shares

2017-04-21 Thread Bedynek, Matthew J.
All,

With version 3 I am using Rsync instead of tar to backup a NFS share which the 
backupPC host has direct access to.  This has worked great with the exception 
that linking takes forever if there are a large number of files.   I simply 
didn’t have much luck getting version 3 to work with tar and got rsync to work 
rather easily so stopped there.

In the version 4 change log I noted that there were some improvements to 
linking so decided to give that a try.  However, with version 4, there have 
been changes to rsync such that I am forced to use tar for a local copy.


My V3 rsync config looked like:

$Conf{ClientNameAlias} = 'localhost';
$Conf{BackupFilesExclude} = {
  '*' => [
'/x/y/SEQ/IPTS-*'
  ]
};
$Conf{RsyncClientCmd} = 'sudo -u username $rsyncPath $argList+';
$Conf{RsyncClientRestoreCmd} = 'sudo -u username $rsyncPath $argList+';


I believe the RsyncClientCmd and RsyncClientRestoreCmd are gone in V4.  I did 
get Rsync to work with V4 but it seems to ssh to localhost which consumes 
additional host resources.  

Rsync isn’t a big deal since I have tar working now but am I correct in reading 
that Rsync might be better for incremental backups in terms of handling 
deletions?

-matt