Re: [BackupPC-users] Backup never completes

2010-06-17 Thread Matthias Meyer
B. Alexander wrote:

 Hey,
 
 I have a single virtual host (OpenVZ) that never completes a backup.
 Neither incrementals nor fulls complete. I don't see any errors in the
 logs:
 
 2010-06-15 18:14:19 incr backup started back to 2010-06-12 07:00:02
 (backup #624) for directory /lib/modules
 2010-06-15 18:14:19 incr backup started back to 2010-06-12 07:00:02
 (backup #624) for directory /home
 2010-06-15 18:14:19 incr backup started back to 2010-06-12 07:00:02
 (backup #624) for directory /etc
 2010-06-15 18:14:21 incr backup started back to 2010-06-12 07:00:02
 (backup #624) for directory /var/backups
 2010-06-15 18:14:22 incr backup started back to 2010-06-12 07:00:02
 (backup #624) for directory /var/cache/apt
 2010-06-15 18:14:34 incr backup started back to 2010-06-12 07:00:02
 (backup #624) for directory /var/lib/apt
 2010-06-15 18:14:43 incr backup started back to 2010-06-12 07:00:02
 (backup #624) for directory /var/lib/dpkg
 2010-06-16 07:04:24 Aborting backup up after signal INT
 2010-06-16 07:04:25 Got fatal error during xfer (aborted by user
 (signal=INT)) 2010-06-16 07:04:34 full backup started for directory
 /lib/modules (baseline backup #624)
 2010-06-16 07:04:35 full backup started for directory /home (baseline
 backup #624)
 2010-06-16 07:04:41 full backup started for directory /etc (baseline
 backup #624)
 2010-06-16 07:04:50 full backup started for directory /var/backups
 (baseline backup #624)
 2010-06-16 07:04:51 full backup started for directory /var/cache/apt
 (baseline backup #624)
 2010-06-16 07:05:02 full backup started for directory /var/lib/apt
 (baseline backup #624)
 2010-06-16 07:05:15 full backup started for directory /var/lib/dpkg
 (baseline backup #624)
 
 /var/lib/dpkg is not empty, and this host has backed up successfully for
 well over a year. The only difference is that I moved it to another
 physical host, however, the other 5 VMs on that machine back up fine.
 
 Any ideas on where to look for the problem? I have rebooted the VM, and
 the backup still stops at the same point.
 
 Thanks,
 --b

Maybe you could tell us which backup method you use!?
- you could increase the log verbosity on the server side (see the sketch below)
- you could try: watch lsof /var/lib/backuppc
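
For example, a minimal sketch for the per-host config file (the path is an
assumption - adjust it to your installation, e.g. /etc/backuppc/<host>.pl):

# raise transfer log detail for this host (0 = quiet; higher values = more output)
$Conf{XferLogLevel} = 5;

Then check the per-host XferLOG after the next backup attempt.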

br
Matthias
-- 
Don't Panic



Re: [BackupPC-users] Retrieve the proper incremental level

2010-06-17 Thread Matthias Meyer
Inno wrote:

 Hello,
 
 I use incremental level (1,2,3,4 correspond to Monday, Tuesday, Wednesday,
 Thursday). But Wednesday and Thursday have bugged last week.It stopped
 at level 2. Am I required to reactivate two incremental to retrieve the
 proper level?
 
 Thanks.
 

The incremental levels don't correspond to weekdays!?
What do you mean by "bugged"? Buggy, failed, ...?
BackupPC would not stop at any level. If a backup fails it will be retried
at the next WakeupSchedule.
But you will get a backup of level 3 on Friday (see the sketch below).
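
For reference, a minimal sketch of how the levels are usually configured (the
values below are only an example, not your actual config):

# successive incrementals cycle through this list, independent of the weekday
$Conf{IncrLevels} = [1, 2, 3, 4];

BackupPC simply applies the next level in the list to the next incremental that
actually runs, so a failed day shifts the sequence rather than skipping a level.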

br
Matthias
-- 
Don't Panic



Re: [BackupPC-users] rsyncd, Parent read EOF from child

2010-06-17 Thread Matthias Meyer
Alexander Moisseev wrote:

 I have BackupPC configured to backup several directories on the same
 server as different BackupPC hosts. It works without a problem more than 2
 years. But now backup of one directory interrupts during transfer when it
 still works normally for other ones.
 

There are a lot of problems with rsyncd on Windows, at least with cygwin
prior to V1.7.
I found a lot of information (but no solution) about this on different
mailing lists within the last two years :-(
So it is very interesting that it did work for you with cygwin-rsyncd-2.6.8_0 on
Windows Server 2003 Std R2.

I use rsync instead of rsyncd and am happy with that (a rough sketch is below).
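
Roughly, the switch looks like this in the host config (the command below is the
stock default from my setup and only an assumption for yours; on a Windows client
you would additionally need ssh/sshd under cygwin):

$Conf{XferMethod} = 'rsync';
# BackupPC starts rsync on the client over ssh instead of talking to a daemon
$Conf{RsyncClientCmd} = '$sshPath -q -x -l root $host $rsyncPath $argList+';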

"Got fatal error during xfer (Child exited prematurely)" means that your
cygwin-rsyncd died.
You should increase the log verbosity of cygwin-rsyncd and check its log
(a sketch follows).
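
A minimal sketch of what I mean, assuming the stock cygwin-rsyncd config file
(path and values are assumptions):

# c:\rsyncd\rsyncd.conf - global section
log file = c:/rsyncd/rsyncd.log
max verbosity = 4
transfer logging = yes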

br
Matthias
-- 
Don't Panic



Re: [BackupPC-users] rsyncd, Parent read EOF from child

2010-06-17 Thread Alexander Moisseev
Matthias Meyer wrote:
 So it is very interesting that it does work with cygwin-rsyncd-2.6.8_0,
 Windows Server 2003 Std R2.

I have no problem with that one at all.
  
 Got fatal error during xfer (Child exited prematurely) means that your
 cygwin-rsyncd die.
 You should increase log verbosity on cygwin-rsyncd and check against this
 log.

I had overlooked these lines in the BackupPC log:

2010-06-11 04:04:13 1s_trade_at_phoenix: Out of memory during large request 
for 16781312 bytes, total sbrk() is 529952768 bytes at 
/usr/local/BackupPC/lib/BackupPC/FileZIO.pm line 203.
2010-06-11 04:04:19 Backup failed on 1s_trade_at_phoenix (Child exited 
prematurely)

It seems that when BackupPC tries to _decompress_ a 342 MB file from the pool it
consumes all available RAM and swap
(I had 1 GB RAM + 1 GB swap). I resolved the problem by adding an extra 512 MB of
RAM.

But the directory that I am backing up is only about 1000 files and 2.5 GB, and
the biggest file is 342 MB. Does BackupPC really need so much memory? Or is it a
memory leak?





Re: [BackupPC-users] rsyncd, Parent read EOF from child

2010-06-17 Thread Les Mikesell
Alexander Moisseev wrote:
 Matthias Meyer wrote:
 So it is very interesting that it does work with cygwin-rsyncd-2.6.8_0,
 Windows Server 2003 Std R2.
 
 I have no problem  with one at all.
   
 Got fatal error during xfer (Child exited prematurely) means that your
 cygwin-rsyncd die.
 You should increase log verbosity on cygwin-rsyncd and check against this
 log.
 
 I had overlook this lines in BackupPC log:
 
 2010-06-11 04:04:13 1s_trade_at_phoenix: Out of memory during large request 
 for 16781312 bytes, total sbrk() is 529952768 bytes at 
 /usr/local/BackupPC/lib/BackupPC/FileZIO.pm line 203.
 2010-06-11 04:04:19 Backup failed on 1s_trade_at_phoenix (Child exited 
 prematurely)
 
 It seems that when BackupPC tries to _decompress_ 342 MB file from pool it 
 consumes all available RAM and swap.
 (I had 1G RAM + 1G swap). I had resolve the problem by adding extra 512 MB of 
 RAM.
 
 But the directory that I am backing up is only about 1000 Files and 2.5 GB 
 and the biggest file is 342 MB. Is BackupPC really needs so much memory? Or 
 it is memory leaks?

Are you running a 64-bit perl on the server?  I think it consumes much more 
memory than a 32 bit instance would.
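
One quick way to check, assuming perl is on the PATH (just a sketch, not the only
way):

perl -V:ptrsize     # ptrsize='8' means a 64-bit perl, ptrsize='4' means 32-bit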

-- 
   Les Mikesell
lesmikes...@gmail.com




[BackupPC-users] File restore integrity

2010-06-17 Thread Jonathan Schaeffer
Hi all,

I'm administering a BackupPC server and I'm concerned about the security of the
whole system.

I configured the Linux clients as unprivileged users doing sudos for rsync to
limit the risk of intrusion from the BackupPC server to the clients, as described
in the FAQ:
http://backuppc.sourceforge.net/faq/ssh.html#how_can_client_access_as_root_be_avoided

But I found a simple way to screw up the client when the BackupPC server is
compromised:

It is easy to empty some (or all) files of a backup:

r...@backuppc:/data/backuppc/pc/172.16.2.44/3/f%2f/fhome/fjschaeff# cat \
  /dev/null > f.bashrc

And then, when the client restores the file, it gets an empty file.

Is there a checking mechanism to ensure the integrity of the restored files?
I.e. can the server check that the files it is about to restore are the same as
the ones it stored previously?

Cheers,

Jonathan


[BackupPC-users] Backuppc over zfs

2010-06-17 Thread Mario Giammarco
Hello,
I have 200 GB of BackupPC backups in a partition. I have told BackupPC to use
compression. Now (as a crazy test) I would like to move all pool data to another
partition that is formatted with ZFS.
But I would like to uncompress the BackupPC files while I move them to the ZFS
partition.

How can I do this?

Thanks!

Mario



Re: [BackupPC-users] File restore integrity

2010-06-17 Thread Jeffrey J. Kosowsky
Jonathan Schaeffer wrote at about 16:29:19 +0200 on Thursday, June 17, 2010:
  Hi all,
  
  I'm administrating a BackupPC server and I'm concerned about the security of 
  the 
  whole system.
  
  I configured the linux clients as unpriviledged users doing sudos for rsyncs 
  to 
  limit the risk of intrusion from the backupPC server to the clients as 
  described 
  in the FAQ : 
  http://backuppc.sourceforge.net/faq/ssh.html#how_can_client_access_as_root_be_avoided
  
  But I found a simple way to screw up the client when the backupPC server is 
  corrupted :
  
  It is easy to empty some (or all) files of a backup :
  
  r...@backuppc:/data/backuppc/pc/172.16.2.44/3/f%2f/fhome/fjschaeff# cat 
  /dev/null > f.bashrc
  
  And then, when the client restores the file, it gets an empty file.
  
  Is there a checking mechanism to ensure the integrity of the restored files 
  ? 
  i.e. the server can check that the files he is about to restore is the same 
  as 
  the one he stored previously ?
  

Not automatically or officially. Though it might be a good feature to
add in the future.

If you use rsync checksum caching, I have written a routine that
allows you to check some or all pool or pc files for consistency between
the full-file MD4 checksum stored by rsync and the actual file
content.

One could also do other checks, such as checking the pool file name
against its contents using the partial-file md5sum that BackupPC
uses, or checking the file size stored in the attrib file
against the actual (uncompressed) size; a rough sketch of the latter is below.
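
For instance, something along these lines could compare the size recorded in a
directory's attrib file against the decompressed content of each file (an
untested sketch only; the compress level, the lib path and the directory argument
are assumptions, and hardlinks/specials are simply skipped):

#!/usr/bin/perl
# Sketch: compare attrib sizes with the actual (decompressed) file contents
# in one backup directory, e.g. TopDir/pc/<host>/<num>/f%2f/fhome/fjschaeff
use strict;
use lib "/usr/share/BackupPC/lib";
use BackupPC::Lib;
use BackupPC::Attrib qw(:all);
use BackupPC::FileZIO;

my $dir = shift @ARGV or die "usage: $0 <backup directory>\n";
my $bpc = BackupPC::Lib->new or die "BackupPC::Lib->new failed\n";
my $compress = 3;    # assumption: use your own $Conf{CompressLevel}

my $attr = BackupPC::Attrib->new({ compress => $compress });
$attr->read($dir) or die "cannot read attrib file in $dir\n";

# attrib entries are keyed by the unmangled file names
foreach my $name ( sort keys %{ $attr->get() } ) {
    my $a = $attr->get($name);
    next unless $a->{type} == BPC_FTYPE_FILE;    # plain files only
    my $mangled = "$dir/" . $bpc->fileNameMangle($name);
    my $fh = BackupPC::FileZIO->open($mangled, 0, $compress);
    if ( !defined($fh) ) {
        print "$name: cannot open $mangled\n";
        next;
    }
    my ($size, $data) = (0, "");
    $size += length($data) while $fh->read(\$data, 65536) > 0;
    $fh->close();
    print "$name: attrib says $a->{size} bytes, content is $size bytes\n"
        if $size != $a->{size};
}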

Here is my routine for verifying the rsync checksum digests:
--
#!/usr/bin/perl
#Validate rsync digest

use strict;
use Getopt::Std;

use lib "/usr/share/BackupPC/lib";
use BackupPC::Xfer::RsyncDigest;
use BackupPC::Lib;
use File::Find;

use constant RSYNC_CSUMSEED_CACHE => 32761;
use constant DEFAULT_BLOCKSIZE => 2048;


my $dotfreq=100;
my %opts;
if ( !getopts("cCpdv", \%opts) || @ARGV != 1
 || ($opts{c} + $opts{C} + $opts{p} > 1)
 || ($opts{d} + $opts{v} > 1)) {
print STDERR <<EOF;
usage: $0 [-c|-C|-p] [-d|-v] [File or Directory]
  Verify Rsync digest in compressed files containing digests.
  Ignores directories and files without digests
  Only prints if digest does not match content unless verbose flag
  (firstbyte = 0xd7)
  Options:
-c   Consider path relative to cpool directory
-C   Entry is a single cpool file name (no path)
-p   Consider path relative to pc directory
-d   Print a '.' for every $dotfreq digest checks
-v   Verbose - print result of each check;

EOF
exit(1);
}

die("BackupPC::Lib->new failed\n") if ( !(my $bpc = BackupPC::Lib->new) );
#die("BackupPC::Lib->new failed\n") if ( !(my $bpc = BackupPC::Lib->new("", "",
#"", 1)) ); #No user check

my $Topdir = $bpc->TopDir();
my $root;
$root = $Topdir . "/pc/" if $opts{p};
$root = "$bpc->{CPoolDir}/" if $opts{c};
$root =~ s|//*|/|g;

my $path = $ARGV[0];
if ($opts{C}) {
$path = $bpc->MD52Path($ARGV[0], 1, $bpc->{CPoolDir});
$path =~ m|(.*/)|;
$root = $1; 
}
else {
$path = $root . $ARGV[0];
}
my $verbose=$opts{v};
my $progress= $opts{d};

die "$0: Cannot read $path\n" unless (-r $path);


my ($totfiles, $totdigfiles, $totbadfiles) = (0, 0 , 0);
find(\&verify_digest, $path);
print "\n" if $progress;
print "Looked at $totfiles files including $totdigfiles digest files of which $totbadfiles have bad digests\n";
exit;

sub verify_digest {
return -200 unless (-f);
$totfiles++;
return -200 unless (-s $_) > 0;
return -201 unless BackupPC::Xfer::RsyncDigest->fileDigestIsCached($_);
#Not cached type (i.e. first byte not 0xd7); 
$totdigfiles++;

my $ret = BackupPC::Xfer::RsyncDigest->digestAdd($_, DEFAULT_BLOCKSIZE, 
RSYNC_CSUMSEED_CACHE, 2);  #2=verify
#Note: setting blocksize=0 also results in using the default blocksize of 2048,
#but it generates an error message.
#Also leave out the final protocol_version input since by leaving it undefined we
#make it read from the digest.
$totbadfiles++ if $ret!=1;

(my $file = $File::Find::name) =~ s|$root||;
if ($progress && !($totdigfiles % $dotfreq)) {
print STDERR ".";
++$|; # flush print buffer
}
if ($verbose || $ret!=1) {
my $inode = (stat($File::Find::name))[1];
print "$inode $ret $file\n";
}
return $ret;
}

# Return codes:
# -100: Wrong RSYNC_CSUMSEED_CACHE or zero file size
# -101: Bad/missing RsyncLib
# -102: ZIO can't open file
# -103: sysopen can't open file
# -104: sysread can't read file
# -105: Bad first byte (not 0x78, 0xd6 or 0xd7)
# -106: Can't seek to end of file
# -107: First byte not 0xd7
# -108: Error on readin digest
# -109: Can't seek when trying to position to rewrite digest data (shouldn't
#       happen if only verifying)
# -110: Can't write digest data (shouldn't happen if only verifying)
# -111: Can't seek looking for extraneous data after 

Re: [BackupPC-users] File restore integrity

2010-06-17 Thread Les Mikesell
On 6/17/2010 9:29 AM, Jonathan Schaeffer wrote:
 Hi all,

 I'm administrating a BackupPC server and I'm concerned about the security of 
 the
 whole system.

It is based on controlling access to root and the backuppc user on the 
server.  I don't see a way around that.

 I configured the linux clients as unpriviledged users doing sudos for rsyncs 
 to
 limit the risk of intrusion from the backupPC server to the clients as 
 described
 in the FAQ :
 http://backuppc.sourceforge.net/faq/ssh.html#how_can_client_access_as_root_be_avoided

 But I found a simple way to screw up the client when the backupPC server is
 corrupted :

 It is easy to empty some (or all) files of a backup :

 r...@backuppc:/data/backuppc/pc/172.16.2.44/3/f%2f/fhome/fjschaeff# cat
 /dev/null > f.bashrc

I think this falls into the "if it hurts, don't do it" category.

 And then, when the client restores the file, it gets an empty file.

 Is there a checking mechanism to ensure the integrity of the restored files ?
 i.e. the server can check that the files he is about to restore is the same as
 the one he stored previously ?

If you are going to corrupt something intentionally and you have root 
access, you would also be able to replace/bypass any such check.  Don't 
give anyone you don't trust root access...

-- 
   Les Mikesell
  lesmikes...@gmail.com



Re: [BackupPC-users] rsyncd, Parent read EOF from child

2010-06-17 Thread Alexander Moisseev
Les Mikesell wrote:
 Are you running a 64-bit perl on the server?  I think it consumes much more
 memory than a 32 bit instance would.

No, my hardware has no 64-bit support at all.
