Re: [BackupPC-users] Issues with a restore

2013-10-07 Thread David Williams

Thanks, that did work and backups are now running :)



 Dave Williams

On 10/2/2013 11:15 AM, David Williams wrote:
Actually, it isn't resolved; I just haven't had the time to look into 
it.  I will try out the sudo option you mention and maybe that will 
help :)



David Williams

On 10/2/2013 11:05 AM, Holger Parplies wrote:

Hi,

this matter is probably resolved by now, but for the archives (and
consideration by David):

John Rouillard wrote on 2013-09-09 21:23:35 + [Re: [BackupPC-users] Issues 
with a restore]:

On Mon, Sep 09, 2013 at 02:21:27PM -0400, David Williams wrote:

On 9/9/2013 1:53 PM, John Rouillard wrote:

On Mon, Sep 09, 2013 at 12:07:03PM -0400, David Williams wrote:

I will want to restore a whole
bunch of files.  Does the backuppc documentation explain how to set
up the ssh key for backuppc to execute as root on the target (which
is in fact one and the same machine)?

  

Then why bother with ssh at all?

[...]

Ok, so what would go into the restore command then? [...]

basically, you replace the ssh part with a corresponding sudo part. You also
need to remove one level of quoting. And set up sudo.

For example, the default RsyncClientRestoreCmd is (or was at some point)

$sshPath -q -x -l root $host $rsyncPath $argList+

which would become

/usr/bin/sudo $rsyncPath $argList

(note the missing + behind $argList). Replace RsyncClientCmd accordingly if
you also want to continue doing backups (without ssh).
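
Put into config.pl terms, the change might look like this (a sketch; the
sudo and rsync paths are assumptions about your system):

# local client, no ssh: escalate with sudo instead (note $argList, not
# $argList+, since sudo runs the command directly and one level of shell
# quoting goes away)
$Conf{RsyncClientRestoreCmd} = '/usr/bin/sudo $rsyncPath $argList';
$Conf{RsyncClientCmd}        = '/usr/bin/sudo $rsyncPath $argList';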

You need an entry something like

backuppc    ALL=NOPASSWD: /usr/bin/rsync --server *

in /etc/sudoers (use 'visudo' to edit that).

The same principle also applies to tar backups.


[...]

Ah sorry for confusing the issue. I forgot that you can do direct
restores via the web interface. I run with that feature disabled for
security [...]

So do I. For that case, you'd want to additionally enforce the --sender
command line option in the sudoers entry:

backuppc    ALL=NOPASSWD: /usr/bin/rsync --server --sender *

or even

backuppc    ALL=NOPASSWD: /usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D --links --times --block-size=2048 --recursive *

(adapt to the command actually run by BackupPC - possibly consult your
auth.log to see the command actually passed to sudo).


As Les followed up with, my method bypasses the web interface totally.

Yes, but you *can* use sudo instead of ssh for escalating privileges on local
backups, and it does make sense, because you avoid the overhead of encryption.
In fact, for security reasons I use an 'ssh -l backuppc remote_host sudo rsync'
type of setup even for remote backups rather than allowing passwordless root
access with an unencrypted ssh key. Thus sudo is also a *more secure* option
for local backups. Someone should really put that into the wiki if it's not
already there ;-) (yes, I promise to really *try* to remember to do so ...).
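
For the remote case, a sketch of what such a setup could look like (the
unprivileged 'backuppc' login name and the paths are assumptions):

# config.pl on the BackupPC server: log in as an unprivileged user over ssh
# and escalate on the client with sudo ($argList+ keeps the extra level of
# quoting that the remote shell still needs)
$Conf{RsyncClientCmd} = '$sshPath -q -x -l backuppc $host /usr/bin/sudo $rsyncPath $argList+';

# /etc/sudoers on the client (edit with visudo):
backuppc    ALL=NOPASSWD: /usr/bin/rsync --server *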

Hope that helps someone :-).

Regards,
Holger






Re: [BackupPC-users] Issues with backing up Windows laptop with SMB

2013-10-07 Thread David Williams
Got this working too, but there are some other issues now that I need to 
look into.  For some reason certain directories under my C:/users/dwilliams 
folder are not getting backed up.  Not sure why that would be, but I will 
try to take a look and see what I can find out.  However, backups are 
working now, so that's great :)


Thanks for the help.



 Dave Williams

On 10/1/2013 8:42 AM, David Williams wrote:
Thank you for the quick response :) My config files are from several 
years ago, so I guess that's why; my smb backup command is probably out 
of date.  I will remove the -N as instructed and give it another try.



David Williams

On 10/1/2013 8:36 AM, Holger Parplies wrote:

Hi,

David Williams wrote on 2013-10-01 07:52:03 -0400 [Re: [BackupPC-users] Issues 
with backing up Windows laptop with SMB]:

Can anyone provide any insight to this? What else can I do to troubleshoot?

sure. Sorry for being mean, but this used to be an FAQ ;-).


[...]

Running: /usr/bin/smbclient dwlaptop\\Documents -U dwilliams -E -N
-d 1 -c tarmode\ full -Tc -

You're supplying a '-N' switch to smbclient, thus requesting it not to pass a
password, and that's exactly what it's doing.

The behaviour changed some versions ago. Remove the '-N'.
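
For the command logged above, that simply means dropping the '-N', which is
what removing it from $Conf{SmbClientFullCmd} (and the matching Incr/Restore
commands) should produce; smbclient will then use the password BackupPC
supplies via the PASSWD environment variable. A sketch of the resulting
command line:

/usr/bin/smbclient dwlaptop\\Documents -U dwilliams -E -d 1 -c tarmode\ full -Tc -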

Regards,
Holger



Re: [BackupPC-users] Tar method - deleted files workaround

2013-10-07 Thread Holger Parplies
Hi,

Craig Barratt wrote on 2013-10-06 17:08:36 -0700 [Re: [BackupPC-users] Tar 
method - deleted files workaround]:
 Chris,
 
 I've never looked into the --listed-incremental option for GNU tar.  This
 might do something similar to what you want.
 
 http://www.gnu.org/software/tar/manual/html_node/Incremental-Dumps.html

from what I read there, surprisingly, tar files seem to be able to contain
file deletions (i.e. extracting the archive will *delete* a file in the file
system). This would mean it could actually work, at least in theory.

On second reading, the documentation somewhat contradicts itself, so it's not
really clear whether this is true. *Without* deletions being represented in
the *tar file*, the whole exercise is somewhat pointless.

(The contradiction I see is that the documentation clearly states that the
 snapshot file is not needed for restoration, yet GNU tar attempts to restore
 the exact state the file system had when the archive was created. In
 particular, it will delete those files in the file system that did not exist
 in their directories when the archive was created - which it can't do
 without the snapshot file; it could only delete those files that the
 incremental run detected as removed since the baseline backup, provided this
 information is present in the incremental tar file.)

Let's assume (and verify) that they are, or else I can delete what I've
already written ;-)
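
A quick way to verify with GNU tar directly (a sketch; paths are just
examples):

mkdir -p /tmp/src && cd /tmp/src && touch a b
tar -cf /tmp/full.tar --listed-incremental=/tmp/snap .   # level 0, writes the snapshot
rm b
tar -cf /tmp/incr.tar --listed-incremental=/tmp/snap .   # level 1, 'b' no longer exists
mkdir -p /tmp/dst && cd /tmp/dst
tar -xf /tmp/full.tar --listed-incremental=/dev/null     # restores a and b
tar -xf /tmp/incr.tar --listed-incremental=/dev/null     # should remove b again, if deletions are represented
ls                                                       # 'b' gone => yes, they are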

 I also don't know what is required to support it in BackupPC.

The one problem I see is that you have a file with metadata (snapshot file)
in addition to the tar stream. While you *could* just keep that file at the
remote end (on the backup client), there would need to be some preprocessing,
i.e. copying the file for independent incrementals. This would also mean that
BackupPC would be keeping part of its state on the client machine, which would
be new (and probably undesired). Alternatively, the file could be copied
between BackupPC server and client, perhaps in DumpPre/PostUserCmd. All of
this means that the administrator of BackupPC needs to know much more about
the backup process and the client machines (where may we put the snapshot
file?).
Currently, we have default configuration values for tar backups over ssh that
should mostly work. I doubt that would remain possible if this were to become
the default mode of operation.

That doesn't mean it can't be done. It just means part of the process would
need to be implemented by the (expert) BackupPC administrator. And for *local*
backups (where BackupPC server == client), native support would be possible.
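
To make the DumpPre/PostUserCmd idea concrete, a rough sketch (the helper
script and all paths are hypothetical, not anything BackupPC provides):

# config.pl sketch; $type is 'full' or 'incr', $xferOK is 1 on success.
# 'prepare' would remove the working snapshot file for a full (forcing a
# level-0 dump) or copy the saved baseline snapshot into place for an
# incremental; 'save' would store the fresh snapshot as the new baseline
# after a successful full.
$Conf{DumpPreUserCmd}  = '$sshPath -q -x -l root $host /usr/local/sbin/backuppc-snar prepare $type';
$Conf{DumpPostUserCmd} = '$sshPath -q -x -l root $host /usr/local/sbin/backuppc-snar save $type $xferOK';
# and TarFullArgs/TarIncrArgs would gain something like:
#   --listed-incremental=/var/lib/backuppc-snar/work.snar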


Aside from that, we'd probably need support for file deletions in the BackupPC
code. The rsync XferMethod already has that capability, so it shouldn't be too
hard, I suppose. Providing this capability should be transparent for anyone
not wanting to use --listed-incremental.

Some new variables might also be needed both in the *Pre/PostUserCmds and,
perhaps, TarClientCmd and/or TarFullArgs/TarIncrArgs, for instance the number
of the baseline backup and the incremental level.

Hmm. How do we *store* the snapshot file(s) in our pool FS? If the UserCmds
need to access them, we'd either need some kind of hook, or they could just
access $TopDir/pc/... directly (which is sort of ugly).


Is anyone actually interested in experimenting with this option?

Regards,
Holger



Re: [BackupPC-users] Tar method - deleted files workaround

2013-10-07 Thread Les Mikesell
On Sun, Oct 6, 2013 at 7:08 PM, Craig Barratt
cbarr...@users.sourceforge.net wrote:

 I've never looked into the --listed-incremental option for GNU tar.  This
 might do something similar to what you want.

 http://www.gnu.org/software/tar/manual/html_node/Incremental-Dumps.html


 I also don't know what is required to support it in BackupPC.

Amanda has used that for many years - it just needs a file maintained
on the target hosts to track things.  If the file you specify with
--listed-incremental does not exist, you get a full backup and the
file is created, listing the directories traversed with their
timestamps.  If the file does exist, you get an incremental in
'gnudump' format based on the timestamps in that file, taking everything
(even old files) under new directories.  The file is then modified in
place for subsequent higher incremental levels.  If you want to make
additional incrementals based on the previous full, you have to copy
the file before doing the backup so you will have an unmodified
version later.  The gnudump format includes the contents of
directories, so restores can optionally delete files that weren't
present at the time of the backup.
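
In GNU tar terms, that flow looks roughly like this (a sketch; paths are
just examples):

# level 0 (full): the snapshot file does not exist yet, so tar creates it
tar -cf full.tar --listed-incremental=/var/lib/snap/home.snar /home
# keep an unmodified copy of the level-0 snapshot for later incrementals
cp /var/lib/snap/home.snar /var/lib/snap/home.snar.0
# level 1 against the full: work on a copy, since tar rewrites the file in place
cp /var/lib/snap/home.snar.0 /var/lib/snap/home.snar.work
tar -cf incr1.tar --listed-incremental=/var/lib/snap/home.snar.work /home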

Or at least that's the way it worked the last time I used amanda,
which was several years ago.  Now that I think about it, it would be
kind of neat to have the gnudump format available from
BackupPC_tarCreate and the archive host wrapper so you could make a
reasonable series of incrementals for offsite or long term storage.

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Tar method - deleted files workaround

2013-10-07 Thread Les Mikesell
On Mon, Oct 7, 2013 at 2:30 PM, Holger Parplies wb...@parplies.de wrote:

 http://www.gnu.org/software/tar/manual/html_node/Incremental-Dumps.html

 from what I read there, surprisingly, tar files seem to be able to contain
 file deletions (i.e. extracting the archive will *delete* a file in the file
 system). This would mean it could actually work, at least in theory.

 On second reading, the documentation somewhat contradicts itself, so it's not
 really clear whether this is true. *Without* deletions being represented in
 the *tar file*, the whole exercise is somewhat pointless.

It doesn't 'represent deletions'; it stores the current full directory
listing for each directory, even in an incremental run, with each entry
marked as either included in this archive or not.  During the restore, you
have the option to delete anything that was not present when the
backup was taken.
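
On the restore side that choice looks roughly like this (GNU tar; a sketch):

# honour the stored directory listings: files not present at backup time are deleted
tar -xf incr1.tar --listed-incremental=/dev/null
# plain extraction: nothing is deleted, the incremental is just overlaid
tar -xf incr1.tar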

 The one problem I see is that you have a file with metadata (snapshot file)
 in addition to the tar stream. While you *could* just keep that file at the
 remote end (on the backup client), there would need to be some preprocessing,
 i.e. copying the file for independent incrementals. This would also mean that
 BackupPC would be keeping part of its state on the client machine, which would
 be new (and probably undesired). Alternatively, the file could be copied
 between BackupPC server and client, perhaps in DumpPre/PostUserCmd. All of
 this means that the administrator of BackupPC needs to know much more about
 the backup process and the client machines (where may we put the snapshot
 file?).

Amanda has done this more or less forever and the admin doesn't need
to know anything about it.  It does use a client agent to do some of
the grunge work, but root-ssh can do anything a local agent can do.
The main job of that independent agent is to let all of the targets
send size estimates (done with dump, or with the gnutar trick where, if
its output device is /dev/null, it doesn't bother doing the work of
writing the archive, so getting --totals is pretty cheap) so amanda can
schedule the right mix of fulls and incrementals to fill your tape -
something we don't need to worry much about.
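
The estimate trick boils down to something like this (a sketch):

# with the archive on /dev/null GNU tar skips writing the data, so --totals
# gives a cheap estimate of how big the real dump would be
tar -cf /dev/null --totals /home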

 That doesn't mean it can't be done. It just means part of the process would
 need to be implemented by the (expert) BackupPC administrator. And for *local*
 backups (where BackupPC server == client), native support would be possible.

No, you'd just need a writable space to hold the files.

 Aside from that, we'd probably need support for file deletions in the BackupPC
 code. The rsync XferMethod already has that capability, so it shouldn't be too
 hard, I suppose. Providing this capability should be transparent for anyone
 not wanting to use --listed-incremental.

That part becomes a little more complicated - unless there is already
a perl module that understands gnudump format - and even then it would
have to be modified to process deletions in the archive instead of a
filesystem.

 Some new variables might also be needed both in the *Pre/PostUserCmds and,
 perhaps, TarClientCmd and/or TarFullArgs/TarIncrArgs, for instance the number
 of the baseline backup and the incremental level.

I don't think those concepts would change.

 Hmm. How do we *store* the snapshot file(s) in our pool FS? If the UserCmds
 need to access them, we'd either need some kind of hook, or they could just
 access $TopDir/pc/... directly (which is sort of ugly).

I've forgotten the exact details of how amanda handles these files
(there will be overlapping sets so you can choose the level of your
next run).  It's not technically necessary to have the listing files
on the central server at all, but it might make management easier.

 Is anyone actually interested in experimenting with this option?

I don't currently have anything where gnutar would be a better option
than rsync - but the handling did seem sensible back when people
actually put stuff on media other than hard drives.

-- 
Les Mikesell
  lesmikes...@gmail.com
