Re: [BackupPC-users] restricting cgi users restore to their own files, or how to handle many users.

2007-09-28 Thread Rob Owens

Craig Barratt wrote:
 Ronny writes:
 
 I am taking backup of a directory /home, containing ~1000 users.
 And i want to allow each of the users access to restore his own files.
 But NOT to read/restore files that he normaly would not.

 Example: user1 have a file in /home/user1/private.txt that have 600
 permissions. I dont want user2 to be able to read this thru the backuppc
 cgi.

 i have tested this with a line in hosts that say
 server  0   rootuser1,user2

 and it seams to me that user2 can read all files of the backup, even
 files he normaly would have no access to.

 So how others solve this problem ?
 must you have 1000 lines in hosts, one line for each homedir ?  Or are
 there a different way where i can have backuppc check the orginal
 permissions and deny restore if the user in question dont have the right
 access.
 
 BackupPC doesn't provide a mechanism to have fine-grained
 per-user permissions when browsing backups.  The host file
 users have permissions for the entire host: browsing, editing
 the configuration, starting and canceling backups, etc.
 
 Enforcing permissions is a bit difficult since apache doesn't
 provide the uid and gid - just the username - and the backups
 just contain the client uid/gid.  There is no guarantee that
 user names and uid/gids are common between the server and
 client.
 
 Perhaps we could have a new config variable which forces the
 browse path for non-admin users, eg:
 
 $Conf{CgiUserBrowseChroot} = {
 'user1' = '/home:/user1',
 'user2' = '/home:/user2',
 };
 
 (/home is the share, and /user1 is the path relative to
 that share)
 
 There could also be a wildcard form that allows any user to
 browse their folder:
 
 $Conf{CgiUserBrowseChroot} = {
 '*' = '/home:/*',
 };
 
 One drawback is this host won't appear in the pulldown in
 the navigation bar, since that is based on the hosts file.
 So the user has to navigate to their host by knowing the
 correct URL.
 
 Craig

I would absolutely love to have the cgi interface give appropriate
permissions to users like the original poster is asking for.  Even if it
required LDAP or something.  This is one of the things I really miss
about rsnapshot, which I used prior to BackupPC.

Rsnapshot preserves all permissions and ownership of files.  You can
then export the backup tree via NFS or similar, with read-only
permissions.  Every user can browse that NFS share, subject to their
user and group permissions, and restore their own backups.

Here is a link describing how it's done in rsnapshot.
http://www.rsnapshot.org/howto/1.2/rsnapshot-HOWTO.en.html#restoring_backups
I realize that it's not this easy to do with BackupPC, but I think it's
something worth striving for.

By the way, rsnapshot has some drawbacks compared to BackupPC.  For
instance, there is no file pooling, no web interface, and the backup
tree can be a little confusing for beginners to navigate.

-Rob

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] NT_STATUS_LOGON_FAILURE

2007-09-28 Thread Tony Molloy
On Friday 28 September 2007 12:23, Regis Gras wrote:
 Craig Barratt wrote:
 Regis writes:
 I'am using BackupPC-2.1.0
 
 With the same config.pl,
 Start Full Backup works fine
 Start Inc Backup fails every time with the message
  NT_STATUS_LOGON_FAILURE
 
 I believe there was a version (or set of samba versions) that had
 this bug.  I think it was 3.0.23 or so.
 
 There was a discussion of this around bug a year ago on the mail list.
 Google backuppc incremental NT_STATUS_LOGON_FAILURE.
 
 Try an older or newer version fo samba.
 
 Craig

 Thank you for your help.
 Indeed, I am using samba-client-3.0.23c-2.el5.2.0.2
 samba-3.0.23c-2.el5.2.0.2
 samba-common-3.0.23c-2.el5.2.0.2

I can confirm this. Ever since upgrading to Centos-5 from Centos-4.5 I've been 
experiencing the same problem. Full backups appear to work to the C$ share. 
But incremental backups or backups of other shares don't appear to work.

At the moment  for testing, I've got scheduled backups tomorrow morning of C$ 
share from one machine and D$ from another. I'll let you know what happens.


Tony

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] NT_STATUS_LOGON_FAILURE

2007-09-28 Thread Craig Barratt
Regis writes:

 I'am using BackupPC-2.1.0
 
 With the same config.pl,
 Start Full Backup works fine
 Start Inc Backup fails every time with the message NT_STATUS_LOGON_FAILURE

I believe there was a version (or set of samba versions) that had
this bug.  I think it was 3.0.23 or so.

There was a discussion of this around bug a year ago on the mail list.
Google backuppc incremental NT_STATUS_LOGON_FAILURE.

Try an older or newer version fo samba.

Craig

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


[BackupPC-users] NT_STATUS_LOGON_FAILURE

2007-09-28 Thread Regis Gras
I'am using BackupPC-2.1.0

With the same config.pl,
Start Full Backup works fine
Start Inc Backup fails every time with the message NT_STATUS_LOGON_FAILURE

Could some one help me ?

-- 
==
| Régis Gras | http://dcm.ujf-grenoble.fr|
|   D.C.M.   | mailto:[EMAIL PROTECTED] |
| 301, rue de la chimie  | --|
| DU BP 53   | Tel 04 76 51 41 76|
| 38041 Grenoble Cedex 9 | Fax 04 76 51 40 89|
==


-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] multiple IP for the same host

2007-09-28 Thread Sébastien Barthélemy

Le jeudi 27 septembre 2007 à 09:54 -0500, Jack Coats a écrit :
On Thu, 2007-09-27 at 16:41 +0200, Sébastien Barthélemy wrote:
  Hello everybody.
  
  I use backuppc at home to backup 4 computers, including a laptop. All of
  them are on a private network behind a wireless router (provided by my
  ISP). The laptop can be connected by wire (with IP 192.168.15.5) or by
  wifi (with IP 192.168.15.6).
  
  I would like backuppc to backup the laptop when it is on the network,
  whatever it is by wire or wifi.
  
  Is it possible ?
  
  Let's also say that I have no DNS on the private network, I resolve
  names through /etc/hosts

 The short answer is, yes.

ok. Thanks Jack !

The next (short) question is how ?

cheers

Sebastien



-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backuppc not backing expected directories onLinux Client

2007-09-28 Thread Snider, Tim
 
Lost my head and pasted the wrong stuff in the file. Here's the correct
conf.pl. This is the config file for this host.  It appears like the
files were transferred - at least according to the log file. Maybe the
cgi file is messed up?

Contents of file /var/lib/backuppc/pc/172.22.14.166/XferLOG.3.z,
modified 2007-09-26 15:59:27

Running: /usr/bin/ssh -q -x -n -l root 172.22.14.166
/usr/bin/env LC_ALL=C /bin/tar -c -v -f - -C / --totals .
Xfer PIDs are now 2013,2012
./
./boot/
... (lot of files not shown)
/bin/tar: ./sys/module/dm_mirror/sections/.strtab: File shrank
by 4085 bytes; padding with zeros
./sys/module/dm_mirror/sections/.symtab
/bin/tar: ./sys/module/dm_mirror/sections/.symtab: File shrank
by 4085 bytes; padding with zeros

... (lot of files not shown) - /usr doesn't show up in web GUI.

./sys/module/dm_mirror/sections/.bss
./usr/X11R6/include/X11/Xaw/MultiSinkP.h
./usr/X11R6/include/X11/Xaw/SmeBSB.h
./usr/X11R6/include/X11/Xaw/Template.c
Tail of the xfer file:
tarExtract: Got file './sys/module/obdfilter/', mode 0755, size
0, type 5
  create   755   0/0   0 sys/module/obdfilter
tarExtract: Got file './sys/module/obdfilter/sections/', mode
0755, size 0, type 5
  create   755   0/0   0
sys/module/obdfilter/sections
tarExtract: Got file './sys/module/obdfilter/sections/.strtab',
mode 0444, size 4096, type 0
  pool 444   0/04096
sys/module/obdfilter/sections/.strtab
tarExtract: Done: 0 errors, 38 filesExist, 8780858 sizeExist,
6906083 sizeExistComp, 38 filesTotal, 8780858 sizeTotal

Error file snip - head:
Running: /usr/bin/ssh -q -x -n -l root 172.22.14.166
/usr/bin/env LC_ALL=C /bin/tar -c -v -f - -C / --totals .
Xfer PIDs are now 2013,2012
[ skipped 38 lines ]
tarExtract: Got file './', mode 0755, size 0, type 5
[ skipped 1 lines ]
tarExtract: Got file './boot/', mode 0755, size 0, type 5
[ skipped 1 lines ]
...
tail of err file:
tarExtract: Got file './sys/module/obdfilter/', mode 0755, size
0, type 5
[ skipped 1 lines ]
tarExtract: Got file './sys/module/obdfilter/sections/', mode
0755, size 0, type 5
[ skipped 1 lines ]
tarExtract: Got file './sys/module/obdfilter/sections/.strtab',
mode 0444, size 4096, type 0
[ skipped 1 lines ]
tarExtract: Done: 0 errors, 38 filesExist, 8780858 sizeExist,
6906083 sizeExistComp, 38 filesTotal, 8780858 sizeTotal


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Frans
Pop
Sent: Friday, September 28, 2007 9:52 AM
To: backuppc-users@lists.sourceforge.net
Subject: Re: [BackupPC-users] Backuppc not backing expected directories
onLinux Client

On Friday 28 September 2007, Snider, Tim wrote:
 My conf.pl files don't have any exclude entries in them.
 I've attached conf.pl, environment, and shell settings.

The config file you include is incomplete: multi line entries are not
fully included because of the grep you use. So can't tell much from
that.
Do you also have a separate config file for the client? If so, that
could be a possible explanation.

In general you should look at the XferLOG files for the backups which
should show you exactly what is happening. These are available through
the client Home page for that particular system in the CGI interface.

Cheers,
Frans Pop


-
This SF.net email is sponsored by: Microsoft Defy all challenges.
Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


conf.pl
Description: conf.pl
-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


[BackupPC-users] Backuppc not backing expected directories on Linux Client

2007-09-28 Thread Snider, Tim
Here's a rookie question - I couldn't find anything in the doc or FAQ.
Why hasn't backuppc (version 2.1.2-6)  has backed all root directories
on my Linux server?
/bin, /dev, /home, /sbin and other directories (shown below) weren't
backed up during a full backup. backuppc is logging in as root on the
client. (I don't expect /dev, /proc ... to be backed up).
 
I don't see this behavior on a windoze client back up. 
 
Root directory contents:
 [EMAIL PROTECTED] /]# ls -alL
total 712
drwxr-xr-x   25 root root   4096 Sep 18 12:56 .
drwxr-xr-x   25 root root   4096 Sep 18 12:56 ..
-rw-r--r--1 root root  0 Sep 18 12:56 .autofsck
drwxr-xr-x2 root root   4096 Jul  4  2005 .automount
drwxr-xr-x2 root root   4096 Apr 13 12:18 bin
drwxr-xr-x3 root root   4096 Sep 24 12:47 boot
drwxr-xr-x8 root root   5480 Sep 21 09:30 dev
drwxr-xr-x  106 root root  12288 Sep 28 04:03 etc
drwxr-xr-x2 root root   4096 Aug 12  2004 home
drwxr-xr-x2 root root   4096 Aug 12  2004 initrd
drwxr-xr-x   10 root root   4096 Aug  7 04:03 lib
drwx--2 root root  16384 Apr 13 02:30 lost+found
drwxr-xr-x4 root root   4096 Sep 18 12:57 media
drwxr-xr-x2 root root   4096 Jul 11  2006 misc
drwxr-xr-x4 root root   4096 Sep 20 12:49 mnt
drwxr-xr-x2 root root   4096 Aug 12  2004 opt
dr-xr-xr-x  314 root root  0 Sep 18 07:55 proc
drwxr-x---   26 root root   4096 Sep 24 11:01 root
drwxr-xr-x2 root root  12288 Aug  7 04:04 sbin
drwxr-xr-x2 root root   4096 Apr 13 07:31 selinux
drwxr-xr-x2 root root   4096 Aug 12  2004 srv
drwxr-xr-x9 root root  0 Sep 18 07:55 sys
drwxr-xr-x3 root root   4096 Apr 13 09:01 tftpboot
drwxrwxrwt6 root root 524288 Sep 28 04:02 tmp
drwxr-xr-x   15 root root   4096 Apr 13 07:43 usr
drwxr-xr-x   25 root root   4096 Apr 13 09:00 var
[EMAIL PROTECTED] /]#

And what backuppc indicates it's backed up from the backup page.
/
|- .automount 
|- boot
|- media
|- misc
|- sys

My conf.pl files don't have any exclude entries in them. 
I've attached conf.pl, environment, and shell settings.

Timothy Snider 
Storage Architect
Strategic Planning, Technology and Architecture

LSI Logic Corporation
3718 North Rock Road
Wichita, KS 67226
(316) 636-8736 
[EMAIL PROTECTED] mailto:[EMAIL PROTECTED]  



conf.pl
Description: conf.pl
-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] failed partial backups deleting files from previous partials

2007-09-28 Thread Rob Owens

Craig Barratt wrote:
 Les writes:
 
 But will you still be able to restore the file state as of each separate
 run?  Sometimes the reason you are restoring is that the current version
 is a mess.  The issue could probably be avoided with a complete tree
 link copy before starting followed by in in-place update, but that is
 probably even less efficient.
 
 Yes, the exact state of each older backup can be re-created.
 
 As each backup occurs, two trees are operated on:
 
   - the in-place update of the newest complete backup
 
   - a new tree is created with the deltas to get to the
 original, now older, backup.  Eg: if a new file replaces
 an old one, the old file is moved to this tree.
 
 This new tree becomes the second-most-recent backup, stored
 as a set of deltas (changes) relative to the newest complete
 backup.
 
 Each older backup in turn would be represented on disk as a set of
 deltas from the more recent one.  The only filled backup is the
 newest one. To view or restore a particular backup, you start with
 the most recent (complete) backup, and successively apply the deltas
 to get to the older backup.
 
 It's just the time-reverse of what BackupPC does already: to view
 or restore an incremental, it starts with the next older full and
 applies the deltas (forward in time) for each incremental level.
 
 The new, proposed, layout would completely decouple the storage
 from the backup type (ie: whether it was an incremental or full
 the storage method would be the same).
 
 As I mentioned earlier, you most often view and restore the
 most recent backup.  So the penalty doing the delta merges
 for old backups is minor.
 
 You can delete the oldest backup at any time, since nothing
 depends on it.  No longer will there be the issue of keeping
 a full around because incrementals depend on it.  There would
 be no difference between expiry of incrementals and fulls - they
 would be treated the same since they are stored the same way.
 
 As discussed earlier, the only tricky part is deleting a backup
 that isn't the oldest (as needed for exponential expiry).  That
 requires you to merge two deltas into one.
 
 Craig
 

Very interesting discussion.  Here's a link to an article on backups
that you've probably all seen, but it's very interesting:
http://www.mikerubel.org/computers/rsync_snapshots/

-Rob

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] NT_STATUS_LOGON_FAILURE

2007-09-28 Thread Regis Gras
Craig Barratt wrote:

Regis writes:

  

I'am using BackupPC-2.1.0

With the same config.pl,
Start Full Backup works fine
Start Inc Backup fails every time with the message NT_STATUS_LOGON_FAILURE



I believe there was a version (or set of samba versions) that had
this bug.  I think it was 3.0.23 or so.

There was a discussion of this around bug a year ago on the mail list.
Google backuppc incremental NT_STATUS_LOGON_FAILURE.

Try an older or newer version fo samba.

Craig

  

I compiled samba-3.0.26, and now,  backuppc incremental works fine.
Thank you for your help

-- 
==
| Régis Gras | http://dcm.ujf-grenoble.fr|
|   D.C.M.   | mailto:[EMAIL PROTECTED] |
| 301, rue de la chimie  | --|
| DU BP 53   | Tel 04 76 51 41 76|
| 38041 Grenoble Cedex 9 | Fax 04 76 51 40 89|
==


-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] NT_STATUS_LOGON_FAILURE

2007-09-28 Thread romain . pichard
Hello,

I'm a new user of BackupPC and I'm trying to configure it correctly.

I've got several default for the moment and I would like to know if you've 
got the solutions to solve my problems.
For your information I'm using samba-client 3.0.24-7 on a Fedora Core 6 in 
order to backup Windows clients.

For example, when I try to do a full backup ths is the message :


Running : /usr/bin/smbclient CLIENT_NAME\\SHARE_NAME -U USER_NAME -E 
-N- d 1 -c tarmode\ full -Tc /data/BackupPC/pc/CLIENT_NAME/BACKUP_NAME.tar 
 - FOLDER_NAME
full backup started for share NAME_SHARE
Xfer PIDs are now 4370, 4369
Domain=[DOMAIN_NAME] OS=Windows 5.1] Server=[Windows 2000 LAN MANAGER]
tarmode is now full, system, hidden, noreset, verbose
NT_STATUS_NO_SUCH_FILE listing \ -
NT_STATUS_NO_SUCH_FILE listing \ FOLDER_NAME
tar : dumped 15 files and directories
Total bytes written : 11550720
tarExtract : Done 0 errors, 0 filesExist, 0 sizeExist, 0 sizeExistComp, 0 
filesTotal, 0 sizeTotal


So, first I would like to know if I obliged to insert in the command 
/data/BackupPC/pc/CLIENT_NAME/BACKUP_NAME.tar because if I don't insert 
this, the backup doesn't work !
With this, the backup works, that is to say files which are on the Windows 
client are copy on the BackupPC server but this severals defaults.

Then, I don't understant why I've got the 2 lines :
NT_STATUS_NO_SUCH_FILE listing \ -
NT_STATUS_NO_SUCH_FILE listing \ FOLDER_NAME

And, finally why I've got every 0 in  Done 0 errors, 0 filesExist, 0 
sizeExist, 0 sizeExistComp, 0 filesTotal, 0 sizeTotal.


Moreover, when I try to do a inc backup with the command line : Running : 
/usr/bin/smbclient/   CLIENT_NAME\\SHARE_NAME  -U  USER_NAME  -E  -N 
-d  1  -c  tarmode\  inc  -TcN 
/data/BackupPC/pc/CLIENT_NAME/timeStamp.level0  -  FOLDER_NAME

I'v got this message : 


Running : /usr/bin/smbclient/   CLIENT_NAME\\SHARE_NAME  -U  USER_NAME 
 -E  -N  -d  1  -c  tarmode\  inc  -TcN 
/data/BackupPC/pc/CLIENT_NAME/timeStamp.level0  -  FOLDER_NAME
incr backup started back to 2007-09-28 13:23:53 (backup#25) for share 
SHARE_NAME
Xfer PIDs are now 4401, 4400
Error settings newer-than time
Usage: smbclient [-?]  ...
...
...
tarExtract: Done: 0 errors, 0filesExists,  0 sizeExist, 0 sizeExistComp, 0 
filesTotal, 0 sizeTotal
Got fatal error durinf xfer ( [-P] - - machine-pass] service 
password)
Backup aborted (  [-P | - - machine-pass]  service password)


I use the same user as the full backup and with full backups, it works 
correctly.
Folder rights are normally good.

Thank you in advance for your help.
Regards,

Romain





Tony Molloy [EMAIL PROTECTED] 
Envoyé par : [EMAIL PROTECTED]
28/09/2007 14:18

A
backuppc-users@lists.sourceforge.net
cc

Objet
Re: [BackupPC-users] NT_STATUS_LOGON_FAILURE






On Friday 28 September 2007 12:23, Regis Gras wrote:
 Craig Barratt wrote:
 Regis writes:
 I'am using BackupPC-2.1.0
 
 With the same config.pl,
 Start Full Backup works fine
 Start Inc Backup fails every time with the message
  NT_STATUS_LOGON_FAILURE
 
 I believe there was a version (or set of samba versions) that had
 this bug.  I think it was 3.0.23 or so.
 
 There was a discussion of this around bug a year ago on the mail list.
 Google backuppc incremental NT_STATUS_LOGON_FAILURE.
 
 Try an older or newer version fo samba.
 
 Craig

 Thank you for your help.
 Indeed, I am using samba-client-3.0.23c-2.el5.2.0.2
 samba-3.0.23c-2.el5.2.0.2
 samba-common-3.0.23c-2.el5.2.0.2

I can confirm this. Ever since upgrading to Centos-5 from Centos-4.5 I've 
been 
experiencing the same problem. Full backups appear to work to the C$ 
share. 
But incremental backups or backups of other shares don't appear to work.

At the moment  for testing, I've got scheduled backups tomorrow morning of 
C$ 
share from one machine and D$ from another. I'll let you know what 
happens.


Tony

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/



 
SC2N -S.A  Siège Social : 2, Rue Andre Boulle - 94000 Créteil  - 327 153 
722 RCS Créteil

 

This e-mail message is intended only for the use of the intended
recipient(s).
The information contained therein may be confidential or privileged, and
its disclosure or reproduction is strictly prohibited.

Re: [BackupPC-users] multiple IP for the same host

2007-09-28 Thread Sébastien Barthélemy

Le vendredi 28 septembre 2007 à 09:55 -0500, Jack Coats a écrit :
 yes, expounded upon :) --- my wife gets on my case for answering the
 question she asked rather than what she meant!
 
 the facility to detect a machine by name rather than just IP works well.
 I would just try it first.  Don't put the laptop in your /etc/hosts
 files, and see it you can get to it from the server using nmblookup. (I
 forget the full syntax right now, sorry)

Thank you for your suggestion,

nmblookup does work for a windows computer but not for my ubuntu laptop
(nor for a MacOS X laptop I tried a few month ago).

Is there a simple and secure way to enable such reply ? (if yes, what is it ?) 
Will it work on mac ?

(I'm rather suspicious on that topic: last time I enabled samba sharing on 
ubuntu, people where able to see all the computer user names)


cheers




-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] multiple IP for the same host

2007-09-28 Thread Jack Coats
You are on the right track.  I would suggest enable samba network
sharing but first, go in and edit your smb.conf (/etc/samba/smb.conf on
my machine) to disable all of the open network sharing. ... share only
what you want, if anything).  The important thing is that samba is
running so it could be detected by your backuppc server.

Other folks, please speak up in case I am spouting information 'from
where I do not know' :) 

On Fri, 2007-09-28 at 19:15 +0200, Sébastien Barthélemy wrote:
 Le vendredi 28 septembre 2007 à 09:55 -0500, Jack Coats a écrit :
  yes, expounded upon :) --- my wife gets on my case for answering the
  question she asked rather than what she meant!
  
  the facility to detect a machine by name rather than just IP works well.
  I would just try it first.  Don't put the laptop in your /etc/hosts
  files, and see it you can get to it from the server using nmblookup. (I
  forget the full syntax right now, sorry)
 
 Thank you for your suggestion,
 
 nmblookup does work for a windows computer but not for my ubuntu laptop
 (nor for a MacOS X laptop I tried a few month ago).
 
 Is there a simple and secure way to enable such reply ? (if yes, what is it 
 ?) Will it work on mac ?
 
 (I'm rather suspicious on that topic: last time I enabled samba sharing on 
 ubuntu, people where able to see all the computer user names)
 
 
 cheers
 
 
 


-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Scripts for VSS backups of XP

2007-09-28 Thread Stephen Joyce

VSS works fine for me on XP pro sp2 (as administrator).

Make sure that MS Software Shadow Copy Provider and Volume Shadow Copy 
are both automatic/started. Also make sure you're using the correct 
vshadow.exe as there are different versions for each OS (ie, 2003 and XP 
are different from each other and different from vista)!


C:\c:\unison\bin\vshadow.exe c:

VSHADOW.EXE 2.2 - Volume Shadow Copy sample client
Copyright (C) 2005 Microsoft Corporation. All rights reserved.


(Option: Create shadow copy set)
(Gathering writer metadata...)
(Waiting for the asynchronous operation to finish...)
Initialize writer metadata ...
Discover directly excluded components ...
- Excluding writer 'MSDEWriter' since it has no selected components for 
restore.


Discover components that reside outside the shadow set ...
... snipped for brevity ...
   - Shadow copy device name: 
\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1

   - Originating machine: (null)
   - Service machine: (null)
   - Not Exposed
   - Provider id: {b5946137-7b9f-4925-af80-51abd60b20d5}
   - Attributes:  Auto_Release

- Mark all writers as succesfully backed up...
Completing the backup (BackupComplete) ...
(Waiting for the asynchronous operation to finish...)
(Waiting for the asynchronous operation to finish...)

Snapshot creation done.
C:\

Cheers, Stephen
--
Stephen Joyce
Systems AdministratorP A N I C
Physics  Astronomy Department Physics  Astronomy
University of North Carolina at Chapel Hill Network Infrastructure
voice: (919) 962-7214and Computing
fax: (919) 962-0480   http://www.panic.unc.edu

 Some people make the world turn and others just watch it spin.
   -- Jimmy Buffet

On Fri, 28 Sep 2007, Tomas Florian wrote:


I was curious about the vshadow functionality in all of this so I tried
it.  It works fine on Windows 2003, but not on Windows XP SP2 (I tried 4
different machines including one installed from scratch in a VM)  I get
the same error for the following commands:

vshadow -q
vshadow -p -nw C:


VSHADOW.EXE 2.2 - Volume Shadow Copy sample client
Copyright (C) 2005 Microsoft Corporation. All rights reserved.


(Option: Query all shadow copies)
- Setting the VSS context to: 0x

ERROR: COM call m_pVssObject-SetContext(dwContext) failed.
- Returned HRESULT = 0xc005
- Error text: Unknown error code
- Please re-run VSHADOW.EXE with the /tracing option to get more details



the tracing option didn't help because all it says is Unknown error
code as well.  All of the XPs I tried it on were professional and some
of them had most recent updates while others did not.

You mentioned having trouble with Vista which is not such a big
surprise, but what about XP?   Or am I missing some prerequisite when
running vshadow by itself?

I saw that my Shadow Service in services wasn't started.  So I started
that on the XP machines but I got the same result.  On Win 2003 SBS
server it worked the first time with no trouble at at all.

Regards,
Tomas


Tomas Florian wrote:

Thanks Rod,

I haven't had a chance to test it yet.  But I looked inside your xp
script - at the way you do the shadow copy.  I've been looking for
something like that for quite a while.  This is really good.

Regards,
Tomas

Rod Dickerson wrote:

Well, I am calling it quits because of Vista. I have a pre-script that works
with WinXP, which does the following:

1. Check to see if client (rsync) is installed, and if not, install it.
2. Open firewall ports for rsync
3. Create a snapshot and mount it to X:
4. Start rsyncd, exporting X:
5. Check to make sure that processes are running
6. Turn over to BPC for backup

After the backup, the post-script does the following:

1. Tear down the snapshot (kill rsync processes)
2. Remove the X: drive
3. Close firewall ports for rsync

The install_client script does the following:

1. Verifies that the target hostname matches the requested hostname. This is
important because of the next step.
2. Establishes a unique password for rsync for the machine. This is put into
rsyncd.secrets and hostname.pl for the client. This is to ensure that BPC
will not back up machine X thinking it is machine Y. I know there is some
name checking when the backup starts, but dynamic name registration is not
always perfect.
3. Copies rsync and other binaries to c:\rsyncd on the client. You need to
put the rsyncd.tar.gz files in /home/backuppc/backuppc_client or some other
directory (but change the scripts accordingly). You also need to get the
rest of rsync from the backuppc.sourceforge.net page; this tar only includes
my config files for reference. I couldn?t include everything because the
file size is too big for the list.
4. Sets the host password in /etc/BackupPC/pc/hostname.pl

You will also need the following things installed on the server, which you
may not have:

Apg ? password generator. Used by install_client. 

Re: [BackupPC-users] Scripts for VSS backups of XP

2007-09-28 Thread Tomas Florian
I was curious about the vshadow functionality in all of this so I tried
it.  It works fine on Windows 2003, but not on Windows XP SP2 (I tried 4
different machines including one installed from scratch in a VM)  I get
the same error for the following commands:

vshadow -q
vshadow -p -nw C:


VSHADOW.EXE 2.2 - Volume Shadow Copy sample client
Copyright (C) 2005 Microsoft Corporation. All rights reserved.


(Option: Query all shadow copies)
- Setting the VSS context to: 0x

ERROR: COM call m_pVssObject-SetContext(dwContext) failed.
- Returned HRESULT = 0xc005
- Error text: Unknown error code
- Please re-run VSHADOW.EXE with the /tracing option to get more details



the tracing option didn't help because all it says is Unknown error
code as well.  All of the XPs I tried it on were professional and some
of them had most recent updates while others did not.

You mentioned having trouble with Vista which is not such a big
surprise, but what about XP?   Or am I missing some prerequisite when
running vshadow by itself?

I saw that my Shadow Service in services wasn't started.  So I started
that on the XP machines but I got the same result.  On Win 2003 SBS
server it worked the first time with no trouble at at all.

Regards,
Tomas


Tomas Florian wrote:
 Thanks Rod,
 
 I haven't had a chance to test it yet.  But I looked inside your xp
 script - at the way you do the shadow copy.  I've been looking for
 something like that for quite a while.  This is really good.
 
 Regards,
 Tomas
 
 Rod Dickerson wrote:
 Well, I am calling it quits because of Vista. I have a pre-script that works
 with WinXP, which does the following:

 1. Check to see if client (rsync) is installed, and if not, install it.
 2. Open firewall ports for rsync
 3. Create a snapshot and mount it to X:
 4. Start rsyncd, exporting X:
 5. Check to make sure that processes are running
 6. Turn over to BPC for backup

 After the backup, the post-script does the following:

 1. Tear down the snapshot (kill rsync processes)
 2. Remove the X: drive
 3. Close firewall ports for rsync

 The install_client script does the following:

 1. Verifies that the target hostname matches the requested hostname. This is
 important because of the next step.
 2. Establishes a unique password for rsync for the machine. This is put into
 rsyncd.secrets and hostname.pl for the client. This is to ensure that BPC
 will not back up machine X thinking it is machine Y. I know there is some
 name checking when the backup starts, but dynamic name registration is not
 always perfect. 
 3. Copies rsync and other binaries to c:\rsyncd on the client. You need to
 put the rsyncd.tar.gz files in /home/backuppc/backuppc_client or some other
 directory (but change the scripts accordingly). You also need to get the
 rest of rsync from the backuppc.sourceforge.net page; this tar only includes
 my config files for reference. I couldn¹t include everything because the
 file size is too big for the list.
 4. Sets the host password in /etc/BackupPC/pc/hostname.pl

 You will also need the following things installed on the server, which you
 may not have:

 Apg ­ password generator. Used by install_client. This should be available
 in apt, yum, etc.
 Winexe ­ allows remote command execution on windows hosts from Linux.
 http://eol.ovh.org/winexe/

 I really wanted to get kerberos authentication working, but this is where I
 got stuck. It seems to work fine when running BackupPC_dump when su¹d to
 backuppc, but when I run it from the daemon (via the CGI page), it hangs
 when running winexe. There seems to be some issue with winexe when it is ran
 this way, and I am not sure if it is the way that Perl calls shell commands,
 or if it is BackupPC. I tried to debug it, but I am not a programmer (as you
 will see in my scripting genius) nor am I am Perl guru. I am submitting my
 work so that others may be able to figure it out, because I am stuck and at
 this point I can¹t continue using BPC (sadly). One main reason why I wanted
 to use Kerberos is because it doesn¹t require the backuppc user¹s
 credentials to be stored in clear text on the file system, and also because
 Vista seems to have some issue with using NTLM. I found that when I tried to
 connect to Vista over a WAN connection using NTLM sometimes it would time
 out, yet using Kerberos (from the shell) worked every time. So there must be
 something strange about how Vista uses NTLM or something, because I never
 had these problems with XP. But if you are going to back up local Vista
 clients and don¹t mind saving credentials on the local filesystem in clear
 text (which, by the way, must have local admin rights on the hosts that it
 backs up), then you can use this with NTLM. If you are in a Windows AD
 environment and want to try Kerberos auth, you will need to use this guide:
 http://technet.microsoft.com/en-us/library/Bb742433.aspx, specifically the
 section about ³creating a service instance account.²

 There is one additional 

Re: [BackupPC-users] Backuppc not backing expected directories onLinux Client

2007-09-28 Thread Frans Pop
On Friday 28 September 2007, Snider, Tim wrote:
 Lost my head and pasted the wrong stuff in the file. Here's the correct
 conf.pl. This is the config file for this host.

It _cannot_ be the config file for this host, because it has:
 $Conf{XferMethod} = 'rsync';

and this backup is clearly using 'tar' as the backup method:

   Running: /usr/bin/ssh -q -x -n -l root 172.22.14.166
 /usr/bin/env LC_ALL=C /bin/tar -c -v -f - -C / --totals .

So basically there must be a separate config file for this host that is 
overruling settings from the master configuration file.
On the other hand, that command looks like it should do a full backup.

From this line it seems that only 38 files are being transfered:

   tarExtract: Done: 0 errors, 38 filesExist, 8780858 sizeExist,
 6906083 sizeExistComp, 38 filesTotal, 8780858 sizeTotal

Does that match the number of files you see in the CGI interface?

I suggest you do some more digging based on this info.

If you do need more help, please send the _full_ relevant information, for 
example in a compressed (!) tar archive. Include at least:
- the _full_ conf.pl
- the _full_ host configuration file
- the _full_ log files for the client

If you want others to analyze your setup partial information only confuses 
things. Make sure you check the files for any confidential information!

Cheers,
Frans Pop

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] restricting cgi users restore to their own files, or how to handle many users.

2007-09-28 Thread Ronny Aasen
On Wed, 2007-09-26 at 04:31 -0700, Craig Barratt wrote:
 Ronny writes:
 
  I am taking backup of a directory /home, containing ~1000 users.
  And i want to allow each of the users access to restore his own files.
  But NOT to read/restore files that he normaly would not.
  
  Example: user1 have a file in /home/user1/private.txt that have 600
  permissions. I dont want user2 to be able to read this thru the backuppc
  cgi.
  
  i have tested this with a line in hosts that say
  server  0   rootuser1,user2
  
  and it seams to me that user2 can read all files of the backup, even
  files he normaly would have no access to.
  
  So how others solve this problem ?
  must you have 1000 lines in hosts, one line for each homedir ?  Or are
  there a different way where i can have backuppc check the orginal
  permissions and deny restore if the user in question dont have the right
  access.
 
 BackupPC doesn't provide a mechanism to have fine-grained
 per-user permissions when browsing backups.  The host file
 users have permissions for the entire host: browsing, editing
 the configuration, starting and canceling backups, etc.
 
 Enforcing permissions is a bit difficult since apache doesn't
 provide the uid and gid - just the username - and the backups
 just contain the client uid/gid.  There is no guarantee that
 user names and uid/gids are common between the server and
 client.

that's not a guarantee, but when you have ldap/sql/nis user-uid
mapping it's quite commonly so. 

I assume one could deny access if the user didn't map to a uid.
mapping a user to the wrong uid would be hard to detect. But it's not
your stock configuration anyway so some prerequisites like common user
database can be expected.

 Perhaps we could have a new config variable which forces the
 browse path for non-admin users, eg:
 
 $Conf{CgiUserBrowseChroot} = {
 'user1' = '/home:/user1',
 'user2' = '/home:/user2',
 };
 
 (/home is the share, and /user1 is the path relative to
 that share)
 
 There could also be a wildcard form that allows any user to
 browse their folder:
 
 $Conf{CgiUserBrowseChroot} = {
 '*' = '/home:/*',
 };
 
 One drawback is this host won't appear in the pulldown in
 the navigation bar, since that is based on the hosts file.
 So the user has to navigate to their host by knowing the
 correct URL.


So there is no way to do this currently. Having 1000 hostlines is not
that big a problem for the user. Since it's the admin that have to live
with a _Lng_ dropdown box. 

would backuppc deal with a hostsfile of ~1000 lines and 1000 files
saying server-user[].pl


-- 
mvh
Ronny Aasen -- 41616155 -- [EMAIL PROTECTED] --
Datapart AS -- 57682100 --  www.datapart-as.no  --



signature.asc
Description: This is a digitally signed message part
-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backup 3 times in a day and 1 times in a 7 days

2007-09-28 Thread rajnish kumar
On 9/28/07, rajnish kumar [EMAIL PROTECTED] wrote:
 On 9/28/07, Ronny Aasen [EMAIL PROTECTED] wrote:
  On Thu, 2007-09-27 at 18:11 +0530, rajnish kumar wrote:
   Dear sir
My question is, I don't required fixed backup only i want
   incremental backup should be run hourly mens per hour basis. Because I
   have seen it says Next Wakeup time is suppose 1:00 pm but backup not
   start at 1:00 Pm
  
   with regd
   rajnish
 
  the wakeup is just to see if any machines need backup yet.
  the variable
  $Conf{IncrPeriod} = 0.97;
  specify how long in days, it should go between each bincremental backup.
 
  if you want to run incremental backup more ofte, reduce this number.
 
  [snip]
 
  Ronny
 
 
 Dear friends
 I have reduce this number like this
 1.  $Conf{IncrPeriod} = 0.40   Result is - bakcup start hourly but
 till 1:00 PM means 10:00 AM, 11:00 AM , 12:00 PM, 1:00 Pm after 1 pm
 bakcup is not happening.

 2. $Conf{IncrPeriod} = 0.1   Result is - bakcup start hourly but till
 1:00 PM means 10:0ou0 AM, 11:00 AM , 12:00 PM, 1:00 Pm after 1 pm
 bakcup is not happening.

 3. $Conf{IncrPeriod} = 0.1   Result is - bakcup start hourly but till
 1:00 PM means 10:00 AM, 11:00 AM , 12:00 PM, 1:00 Pm after 1 pm bakcup
 is not happening

 Any no you have given it backups hourly but till 1:00 PM

 with regd
 rajnish


-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Rsync incremental backup (Exclude)

2007-09-28 Thread Frans Pop
On Thursday 27 September 2007, Egenius wrote:
 As you see, relative ways since absolute to work here are registered
 did not become.
 Absolute path for exclude is: /home/disk1 , /home/fsbackup,
 /home/homes, /home/samba

 Help me! What do I do incorrectly?

Why don't you use the BackupFilesExclude configuration option instead of 
adding these excludes manually?

Should look something like:
$Conf{BackupFilesExclude} = {
'/home' =
['/homes/', '/disk1/', '/samba/', '/fsbackup/'],
'other share' =
[ ... ],
};

Cheers,
Frans Pop

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


[BackupPC-users] BACKUPPC Child exist with XP

2007-09-28 Thread tuxi
hello

after trying to backuped my windows machine , i log a strange thing...
for backup of few files , all work

but for 5112 files (10 GO)

during the manual backuppc_dump i have this error :

Sending empty csums for 
TomTom/HOME/Backups/Palm/Backup01/Storage_1/FR_plus_major_roads_of_WE-Map/RF-050.ov2
Read EOF:
Tried again: got 0 bytes
Child is sending done
Got done from child
Got stats: 0 0 0 0 ('errorCnt' = 0,'ExistFileSize' = 0,'ExistFileCnt' 
= 0,'TotalFileCnt' = 0,'ExistFileCompSize' = 0,'TotalFileSize' = 0)
Child is aborting
Got exit from child
Sending empty csums for 
TomTom/HOME/Backups/Palm/Backup01/Storage_1/FR_plus_major_roads_of_WE-Map/RF-070.bmp
Parent read EOF from child: fatal error!
Sending empty csums for 
TomTom/HOME/Backups/Palm/Backup01/Storage_1/FR_plus_major_roads_of_WE-Map/RF-070.ov2
Sending csums, cnt = 270, phase = 1
Sending empty csums for 
TomTom/HOME/Backups/Palm/Backup01/Storage_1/FR_plus_major_roads_of_WE-Map/RF-080.bmp
Done: 0 files, 0 bytes
Got fatal error during xfer (Child exited prematurely)
cmdSystemOrEval: about to system /bin/ping -c 1 -w 3 vaio
cmdSystemOrEval: finished: got output PING ...


at begin , the system
launch GOT FILE 
after
SENDING EMPTY CSUMS ...

and it expire ... is it a amout of ram problem ? or a bug in backuppc ? 
a timeout problem ?
i backup only windows data ..
i use cp1252 in configuration and my system have 100 MO of RAM and big 
hard drive . i use your rsyncd 2.6.8

the system is in wifi too but , i don't think the problem come's from 
here because other backup work.

Thanks in advance, Best Regards, Olivier






-- 
No virus found in this outgoing message.
Checked by AVG Free Edition. 
Version: 7.5.488 / Virus Database: 269.13.33/1034 - Release Date: 27/09/2007 
17:00


-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


[BackupPC-users] Rsync incremental backup (Exclude)

2007-09-28 Thread Egenius
Hello, backuppc-users!

I have some problem whith rsync backup.
When i run full backup, it's work fine - I have full backup and no
have excluded files.

Then i run incremental backup, I  get copy of all files, that
currently have in full backup and files, that marked as excluded. I
don't need this files, but I get it, and my backup is too large.

This is my hosts conf:

$Conf{RsyncShareName} = ['/etc', '/root', '/var/spool/cron', '/home', 
'/var/streaming', '/boot'];
$Conf{RsyncArgs} = [
'--numeric-ids',
'--perms',
'--owner',
'--group',
'-D',
'--links',
'--times',
'--block-size=2048',
'--recursive',
'--exclude', '/disk1/*',
'--exclude', '/fsbackup/*',
'--exclude', '/homes/*',
'--exclude', '/samba/*',
];

As you see, relative ways since absolute to work here are registered
did not become.
Absolute path for exclude is: /home/disk1 , /home/fsbackup,
/home/homes, /home/samba

Help me! What do I do incorrectly?


-- 
С уважением,
 Egenius  mailto:[EMAIL PROTECTED]


-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backuppc not backing expected directories on Linux Client

2007-09-28 Thread Frans Pop
On Friday 28 September 2007, Snider, Tim wrote:
 My conf.pl files don't have any exclude entries in them.
 I've attached conf.pl, environment, and shell settings.

The config file you include is incomplete: multi line entries are not fully 
included because of the grep you use. So can't tell much from that.
Do you also have a separate config file for the client? If so, that could be 
a possible explanation.

In general you should look at the XferLOG files for the backups which should 
show you exactly what is happening. These are available through the client 
Home page for that particular system in the CGI interface.

Cheers,
Frans Pop

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] multiple IP for the same host

2007-09-28 Thread Jack Coats
yes, expounded upon :) --- my wife gets on my case for answering the
question she asked rather than what she meant!

the facility to detect a machine by name rather than just IP works well.
I would just try it first.  Don't put the laptop in your /etc/hosts
files, and see it you can get to it from the server using nmblookup. (I
forget the full syntax right now, sorry)

On Fri, 2007-09-28 at 11:30 +0200, Sébastien Barthélemy wrote:
 Le jeudi 27 septembre 2007 à 09:54 -0500, Jack Coats a écrit :
 On Thu, 2007-09-27 at 16:41 +0200, Sébastien Barthélemy wrote:
   Hello everybody.
   
   I use backuppc at home to backup 4 computers, including a laptop. All of
   them are on a private network behind a wireless router (provided by my
   ISP). The laptop can be connected by wire (with IP 192.168.15.5) or by
   wifi (with IP 192.168.15.6).
   
   I would like backuppc to backup the laptop when it is on the network,
   whatever it is by wire or wifi.
   
   Is it possible ?
   
   Let's also say that I have no DNS on the private network, I resolve
   names through /etc/hosts
 
  The short answer is, yes.
 
 ok. Thanks Jack !
 
 The next (short) question is how ?
 
 cheers
 
 Sebastien
 
 
 
 -
 This SF.net email is sponsored by: Microsoft
 Defy all challenges. Microsoft(R) Visual Studio 2005.
 http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/backuppc-users
 http://backuppc.sourceforge.net/


-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


[BackupPC-users] If anyone is interested....

2007-09-28 Thread James Kyle
I've completed my script that automatically configures OSX Tiger  
clients for backuppc. I can post it to the list if anyone thinks it'd  
be of use.

-james

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] If anyone is interested....

2007-09-28 Thread James Kyle
BackupPC OSX Tiger Client Auto-Configure script=

The script is below, it assumes an ssh pub/priv key with rsync setup  
for backups. It was written for OSX Tiger clients using system  
installed binaries for portability. Not tested on Panther or earlier,  
YMMV.

*TIP*
This script must be run as root. I highly advise doing a
$ sudo -u root -s

to gain a root shell. I say this because though the script doesn't do  
anything that would make the system broken, if it puts a syntax  
error in /etc/sudoers you must be root to manually correct it.  
Otherwise you have to boot into single user mode mount -o  
remount,rw / and then edit to correct. . . which can be a headache.  
Yeah, it happened to me and I corrected the error...but you never  
know. ;)

Usage: backuppc_config.rb [options]
Specific options:
-h, --help   Display this help/usage message
-r, --revert  Undo the changes made by script


What the script does:

1. Creates a non-admin backuppc user on the client machine that will  
not appear in either the login window or in the Preferences-Accounts  
pane.
2. Creates entry in /etc/sudoers which allows the backuppc user to  
sudo rsync without a password
ckkk:3. creates a backuppc_check.rb script in /usr/local/bin which  
will verify the command source and content for the ssh rsync command  
from the backuppc server and will not allow shell access. (in other  
words *only* the backuppc user from the backuppc machine can use this  
key to execute sudo rsync and nothing else)
4. Can revert all changes done to the system
5. Performs file locking when necessary to ensure no race conditions  
on system files (a la visudo)
6. Creates all necessary directories, sub-directories, and files to  
perform the above tasks with the correct permissions.
7. At each step, error checking is performed and actions verified. If  
anything glitches, the program will raise an error with an  
appropriate message.

TODO:
-) Auto-configure the backuppc server for said client. I think this  
could be done by generating a run once script for each client or  
(this would be more robust), create an OCAML file which would hold  
multiple client information that is then fed to a script on the  
server. The idea would be you could walk around to clients with a usb  
drive and run the config script, each time the client info would be  
added to the OCAML 'db' in a file on the disk, then just copy the  
file to the server, run once, and clients are configured

-) Create more command line options instead of prompting



 #!/usr/bin/ruby -w
 #
 # To customize the below script you must edit values for options:
 # base_network_ip
 # public_key
 # backuppc_ip
 #
 require 'fileutils'
 require 'optparse'
 require 'ostruct'
 require 'pp'

 # set up some defaults
 options = OpenStruct.new
 options.backuppc_ip = 192.168.1.2
 options.public_key = %Q!ssh-dss B3NzaC1kc3MAAACBAIh/m// 
 TVrn72tSBpAv+oCGeVb9DDPAiN7xe4UH955vKThS8p9ZGatS07n1IRQ4qdu 
 +AvElHulMKXw2jS3g5PAc1USvT9z1MJ8D87wBBTBHieSO4CyYX/ 
 9gIEaIbX69cD7lcSHpVAyrv+N87QFUEaVXaqkP/T/ 
 aZhQBDsxbCocNlFQDpwMFEARtPSALRaKKNtc6o0ZRiHwAAAIAmmeQh/vsdZutPQ 
 +lNzo25/L5GKUvwDNxzZLTX7MPeTWV8nAPyIhFj 
 +W3mGFxNcO3IeDplVU6U0og84VXlIrSuVqUVXGHeVqrXOLD7M30YSo4lAcq79c3LFR30fV 
 TRAOoVOlrLU7BG1QfRrbsubznNN+75X2Jm/Qx/nuyLRLEBvQAAAIBgPjsmXh1Pdq0e/ 
 kb8svRL0ALcsiToNvlpCHtFgc6WcIOUSVOnLb68ujzWYF6Ew3s8yG2f5UbAVF2gTPuUIJL 
 2V5H97EotXcCtn91O7zdJMOApxoOlrKgp2LNnq31i9LR3zOgJzH0SEukU0OcHnNJsVdapF 
 23iVatndQSKME/mNw== [EMAIL PROTECTED]
 options.base_network_ip = 192.168.1
 options.verbose = false
 options.revert = false
 options.user = {}
 options.user['name'] = 'backuppc'
 options.admin_id = 80
 options.backup_check = '/usr/local/bin/backup_check.rb'
 options.ip = ''


 ##BEGIN SCRIPT METHODS###
 def create_keys2(user,backuppc_ip,public_key)
   authorized_keys2 = %Q!from=#{backuppc_ip},no-pty,command=/usr/ 
 local/bin/backup_check.rb ssh-dss #{public_key}!
   begin
 if !File.exists?(/Users/#{user['name']}/.ssh) then
   FileUtils.mkdir_p /Users/#{user['name']}/.ssh, :mode = 0700
   File.chown user['uid'], nil, /Users/#{user['name']}/.ssh
 end
   rescue
 raise IOError, Failed to create /Users/#{user['name']}/.ssh
   end

   begin
 File.open(/Users/#{user['name']}/.ssh/authorized_keys2, 'a+')  
 {|f|
   f.puts authorized_keys2
   File.chown user['uid'], nil, /Users/#{user['name']}/.ssh/ 
 authorized_keys2
   f.chmod(0600)
 }
   rescue
 raise IOError, Failed to create /Users/#{user['name']}/.ssh/ 
 authorized_keys2
   end
 end

 def create_backup_check(admin_id,backup_check)
   ssh_verification_script = %Q!#\!/usr/bin/ruby
 #
 command = ENV['SSH_ORIGINAL_COMMAND']

 if command.nil? || command \!~ /^\\/usr\\/bin\\/sudo \\/usr\\/bin\\/ 
 rsync/
   puts Access Denied
 elsif
   system(command)
 end
   !
   begin
 if not File.exists?('/usr/local/bin') then
   FileUtils.mkdir_p 

Re: [BackupPC-users] If anyone is interested....

2007-09-28 Thread Scott

On Sep 28, 2007, at 6:13 PM, James Kyle wrote:

 I've completed my script that automatically configures OSX Tiger
 clients for backuppc. I can post it to the list if anyone thinks it'd
 be of use.

Yes please.

--
Scott [EMAIL PROTECTED]
AIM: BlueCame1


-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Scripts for VSS backups of XP

2007-09-28 Thread Rod Dickerson
Thomas,

I didn¹t have any problems with vshadow on the XP systems that I was using
to test, but that doesn¹t mean there isn¹t something wrong. You can get it
from here:

http://www.microsoft.com/downloads/details.aspx?FamilyID=0b4f56e4-0ccc-4626-
826a-ed2c4c95c871DisplayLang=en

The problems that I had with Vista were with winexe and/or perl/backuppc,
not with vshadow. Although I don¹t know if there is a vshadow for Vista, I
was using vssadmin to discover snapshots and mount the latest one for
backups. Hope this helps..

Rod



On 9/28/07 12:52 PM, Tomas Florian [EMAIL PROTECTED] wrote:

 I was curious about the vshadow functionality in all of this so I tried
 it.  It works fine on Windows 2003, but not on Windows XP SP2 (I tried 4
 different machines including one installed from scratch in a VM)  I get
 the same error for the following commands:
 
 vshadow -q 
 vshadow -p -nw C:
 
 
 VSHADOW.EXE 2.2 - Volume Shadow Copy sample client
 Copyright (C) 2005 Microsoft Corporation. All rights reserved.
 
 
 (Option: Query all shadow copies)
 - Setting the VSS context to: 0x
 
 ERROR: COM call m_pVssObject-SetContext(dwContext) failed.
 - Returned HRESULT = 0xc005
 - Error text: Unknown error code
 - Please re-run VSHADOW.EXE with the /tracing option to get more details
 
 
 
 the tracing option didn't help because all it says is Unknown error
 code as well.  All of the XPs I tried it on were professional and some
 of them had most recent updates while others did not.
 
 You mentioned having trouble with Vista which is not such a big
 surprise, but what about XP?   Or am I missing some prerequisite when
 running vshadow by itself?
 
 I saw that my Shadow Service in services wasn't started.  So I started
 that on the XP machines but I got the same result.  On Win 2003 SBS
 server it worked the first time with no trouble at at all.
 
 Regards, 
 Tomas 
 
 
 Tomas Florian wrote:
  Thanks Rod, 
  
  I haven't had a chance to test it yet.  But I looked inside your xp
  script - at the way you do the shadow copy.  I've been looking for
  something like that for quite a while.  This is really good.
  
  Regards, 
  Tomas 
  
  Rod Dickerson wrote:
  Well, I am calling it quits because of Vista. I have a pre-script that
 works 
  with WinXP, which does the following:
  
  1. Check to see if client (rsync) is installed, and if not, install it.
  2. Open firewall ports for rsync
  3. Create a snapshot and mount it to X:
  4. Start rsyncd, exporting X:
  5. Check to make sure that processes are running
  6. Turn over to BPC for backup
  
  After the backup, the post-script does the following:
  
  1. Tear down the snapshot (kill rsync processes)
  2. Remove the X: drive
  3. Close firewall ports for rsync
  
  The install_client script does the following:
  
  1. Verifies that the target hostname matches the requested hostname. This
 is 
  important because of the next step.
  2. Establishes a unique password for rsync for the machine. This is put
 into 
  rsyncd.secrets and hostname.pl for the client. This is to ensure that BPC
  will not back up machine X thinking it is machine Y. I know there is some
  name checking when the backup starts, but dynamic name registration is
 not 
  always perfect.
  3. Copies rsync and other binaries to c:\rsyncd on the client. You need
 to 
  put the rsyncd.tar.gz files in /home/backuppc/backuppc_client or some
 other 
  directory (but change the scripts accordingly). You also need to get the
  rest of rsync from the backuppc.sourceforge.net page; this tar only
 includes 
  my config files for reference. I couldn¹t include everything because the
  file size is too big for the list.
  4. Sets the host password in /etc/BackupPC/pc/hostname.pl
  
  You will also need the following things installed on the server, which
 you 
  may not have:
  
  Apg ­ password generator. Used by install_client. This should be
 available 
  in apt, yum, etc.
  Winexe ­ allows remote command execution on windows hosts from Linux.
  http://eol.ovh.org/winexe/
  
  I really wanted to get kerberos authentication working, but this is where
I 
  got stuck. It seems to work fine when running BackupPC_dump when su¹d to
  backuppc, but when I run it from the daemon (via the CGI page), it hangs
  when running winexe. There seems to be some issue with winexe when it is
 ran 
  this way, and I am not sure if it is the way that Perl calls shell
 commands, 
  or if it is BackupPC. I tried to debug it, but I am not a programmer (as
 you 
  will see in my scripting genius) nor am I am Perl guru. I am submitting
 my 
  work so that others may be able to figure it out, because I am stuck and
 at 
  this point I can¹t continue using BPC (sadly). One main reason why I
 wanted 
  to use Kerberos is because it doesn¹t require the backuppc user¹s
  credentials to be stored in clear text on the file system, and also
 because 
  Vista seems to have some issue with using NTLM. I found that when I tried
 to 
  connect to 

Re: [BackupPC-users] multiple IP for the same host

2007-09-28 Thread Rod Dickerson
One possible solution is just to hard code your machines¹ IP addresses. You
only have 4, so set them all with static ip addresses and then use
/etc/hosts. Set the wireless and wired IP addresses to be the same; you only
use one at a time, right? I would hate to see you use Samba just for some
sort of name resolution just for 4 machines. Yes, it is a little
inconvenient when traveling with the laptop. My Mac has profiles for
different network locations, not sure if Windows has that or not. Otherwise
you will just have to remember to change it back to DHCP when mobile.

Another possible solution is to set up static arp entries on the backuppc
server, and then still use /etc/hosts. I think that you can have multiple
mac addresses associated with a single IP address (statically), but be
careful because this could cause strange things on the network, especially
if you are doing routing or something else with the BPC server. So what you
would do is something like this:

arp -s 10.0.1.5 00:1b:63:f1:86:3c (wired MAC address)
arp -s 10.0.1.5 00:1b:63:f1:86:5c (wireless MAC address)

Then in /etc/hosts:

10.0.1.5 myLaptop

Then you use the machine name in backuppc for backups. Don¹t forget to make
the arp entries persistent, whether that is with a flag or by setting up a
script that is ran at boot time.

Rod



On 9/28/07 1:27 PM, Jack Coats [EMAIL PROTECTED] wrote:

 You are on the right track.  I would suggest enable samba network
 sharing but first, go in and edit your smb.conf (/etc/samba/smb.conf on
 my machine) to disable all of the open network sharing. ... share only
 what you want, if anything).  The important thing is that samba is
 running so it could be detected by your backuppc server.
 
 Other folks, please speak up in case I am spouting information 'from
 where I do not know' :)
 
 On Fri, 2007-09-28 at 19:15 +0200, Sébastien Barthélemy wrote:
  Le vendredi 28 septembre 2007 à 09:55 -0500, Jack Coats a écrit :
   yes, expounded upon :) --- my wife gets on my case for answering the
   question she asked rather than what she meant!
   
   the facility to detect a machine by name rather than just IP works well.
   I would just try it first.  Don't put the laptop in your /etc/hosts
   files, and see it you can get to it from the server using nmblookup. (I
   forget the full syntax right now, sorry)
  
  Thank you for your suggestion,
  
  nmblookup does work for a windows computer but not for my ubuntu laptop
  (nor for a MacOS X laptop I tried a few month ago).
  
  Is there a simple and secure way to enable such reply ? (if yes, what is it
 ?) Will it work on mac ?
  
  (I'm rather suspicious on that topic: last time I enabled samba sharing on
 ubuntu, people where able to see all the computer user names)
 
  
  
  cheers 
  
  
  
 
 
 -
 This SF.net email is sponsored by: Microsoft
 Defy all challenges. Microsoft(R) Visual Studio 2005.
 http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/backuppc-users
 http://backuppc.sourceforge.net/
 


-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


[BackupPC-users] copy pool to another filesystem formatted differently

2007-09-28 Thread Ben Nickell
I have set up my new backuppc filesystem on an LVM volume and decided on 
using ext3 for the filesytem.  The old filesytem is reiserfs. I 
would like to copy the old filesystem to the new larger volume.  The 
backuppc FAQ states:

The best way to copy a pool file system, if possible, is by copying
the raw device at the block level (eg: using dd). Application level
programs that understand hardlinks include the GNU cp program with
the -a option and rsync -H. However, the large number of hardlinks
in the pool will make the memory usage large and the copy very slow.
Don't forget to stop BackupPC while the copy runs.


At the risk of exposing my ignorance, If I use dd won't it copy 
filesystem information as well?   Maybe I'm just tired and not thinking 
clearly. but would something like this work?

dd if=/var/mapper/vg-reiserfs_volume of=/var/mapper/vg-larger_ext3volume

I know that rsync would take a really long time.  (about 1.1tb of data, 
with all the hardlinks)  Should I just try cp -a?

Any other suggestions?

Thanks,
Ben


-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] filesystem recommendation

2007-09-28 Thread Ben Nickell
Doug Lytle wrote:
 Josh Marshall wrote:
   
   
 
   
 I use xfs on all my installations and feel that's the best mix of 
 performance and reliability. I use the standard mkfs.xfs but I've read 

   
 
 Just a note on this,

 I've recently purchased two 500GB drives that I wanted to add to my XFS 
 LVM.  It turns out that you can't resize an XFS partition.  I ended up 
 having to recreate the LVM.  I moved the data over to 1 of the drives, 
 recreated the LVM using reiserfs, copied the data over to the new LVM.  
 Then I added the 2nd drive and resized the partition.

 Dou

Doug,

Can I ask what method or command you used to copy the data to the new 
LVM?  (see my new thread on this subject for the whole story)

Thanks,
Ben

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/