Re: [BackupPC-users] Solved, again; devs please read (was Re: The one-at-a-time nightly problem returns (was Re: The one-at-a-time nightly problem, debugged; DEVS PLEASE READ; long.))

2011-08-22 Thread Robin Lee Powell
Here's my proposed fix, which is untested (since testing it means
letting things run for a week or so and seeing that they don't
break).

ut00-s3 ~ # diff -u /usr/local/bin/BackupPC /var/tmp/BackupPC
--- /usr/local/bin/BackupPC 2011-08-22 18:51:26.0 +
+++ /var/tmp/BackupPC   2011-08-22 18:17:49.0 +
@@ -507,7 +507,7 @@
            && $bpc->isAdminJob($CmdQueue[0]->{host})
       ) {
         local(*FH);
-        $req = shift(@CmdQueue);
+        $req = pop(@CmdQueue);
 
         $host = $req->{host};
         if ( defined($Jobs{$host}) ) {
@@ -651,7 +651,7 @@
         if ( $nJobs < $Conf{MaxBackups} + $Conf{MaxUserBackups}
                 && @UserQueue > 0 ) {
             $req = pop(@UserQueue);
-            if ( defined($Jobs{$req->{host}}) || $Status{$host}{state} eq 'Status_link_pending' ) {
+            if ( defined($Jobs{$req->{host}}) ) {
                 push(@deferUserQueue, $req);
                 next;
             }
@@ -661,7 +661,7 @@
                 <= $Conf{MaxBackups} + $Conf{MaxPendingCmds}
                 && @BgQueue > 0 ) {
             $req = pop(@BgQueue);
-            if ( defined($Jobs{$req->{host}}) || $Status{$host}{state} eq 'Status_link_pending' ) {
+            if ( defined($Jobs{$req->{host}}) ) {
                 #
                 # Job is currently running for this host; save it for later
                 #
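For illustration, the front-test/back-pop mismatch this patch is about can be reproduced with a toy queue in Python. This is a sketch of the behavior only, not BackupPC code, and the job names are made up:

```python
from collections import deque

# Toy model: the loop condition inspects the FRONT of the queue, but the
# body pops from the BACK, so the job tested is not the job dequeued.
queue = deque(["nightly-admin0", "nightly-admin1", "link-hostA"])

def is_admin(job):
    # Stand-in for $bpc->isAdminJob(): only nightly/admin jobs match.
    return job.startswith("nightly-admin")

dequeued = []
while queue and is_admin(queue[0]):   # test queue[0] (the front)...
    dequeued.append(queue.pop())      # ...but pop the back

print(dequeued[0])   # link-hostA: the untested job came out first
```

Making the test and the pop use the same end of the queue (as the diff above does for one of the two call sites) removes the mismatch.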


-Robin

On Sun, Aug 21, 2011 at 10:11:45AM -0700, Robin Lee Powell wrote:
 
 Ah-*HAH*!  Got it!
 
 So (part of) the problem, *AGAIN*, is the way the CmdQueue tests the
 *front* of the queue, but pulls jobs from the *BACK* of the queue.
 
 Now, what happens if a backup goes for a really long time?  Like,
 more than 24 hours?  Well, even though it's still running, an entry
 for that host is also put in the background queue, which means that
 when the current backup finishes, a new one runs immediately, which
 is fine.
 
 The problem occurs when the current backup finishes *during a
 nightly run*.  This means that the link can't run.  But it gets
 queued on the CmdQueue.  Then the backup on the BgQueue starts.
 
 Then the nightlies end, and the link on the CmdQueue tries to run.
 It refuses, with a "Botch on admin job" message, because a backup
 for that host is running.  This happens many many many many many
 times, until the next nightlies try to run.
 
 So the new nightlies get pushed onto the *FRONT* of the CmdQueue via
 unshift; now the queue is a bunch of nightlies and some link jobs.
 
 Here's the top of the CmdQueue loop:
 
 while ( $CmdJob eq "" && @CmdQueue > 0 && $RunNightlyWhenIdle != 1
         || @CmdQueue > 0 && $RunNightlyWhenIdle == 2
            && $bpc->isAdminJob($CmdQueue[0]->{host})
       ) {
     local(*FH);
     $req = pop(@CmdQueue);
 
     $host = $req->{host};
     if ( defined($Jobs{$host}) ) {
         print(LOG $bpc->timeStamp,
               "Botch on admin job for $host: already in use!!\n");
         #
         # This could happen during normal operation: a user could
         # request a backup while a BackupPC_link is queued from
         # a previous backup.  But it is unlikely.  Just put this
         # request back on the end of the queue.
         #
         unshift(@CmdQueue, $req);
         return;
     }
 
 So the loop can be entered, no problem, because it's nightly time
 ($RunNightlyWhenIdle == 2 is true), the first job in the queue is
 a nightly job ($bpc->isAdminJob($CmdQueue[0]->{host}) is true), and
 there's certainly more than one such job.  So it enters the loop,
 and then *POPS* the *LAST* job off the queue.  This is a *LINK* job,
 not a nightly, and, better still, *it fails*, which means that the
 *front* of the queue now holds a link job.
 
 Now we're back at the top of that while, which works because CmdJob
 (which gets cleared at the end of every successful CmdQueue job
 but does not get set in the failure case) is empty, so we take the
 first branch, since $RunNightlyWhenIdle != 1 is true.  So we kick
 off the nightly that's at the end of the queue (which, since unshift
 was used in order, is the first one).
 
 Now we have a nightly running (CmdJob is not ""), *AND* the first
 job in the queue is a link job (so
 $bpc->isAdminJob($CmdQueue[0]->{host}) is false).  This means we
 can't enter either branch of the CmdQueue loop, and we're stuck
 until the nightly finishes.
 
 This is what the logs look like in this case:
 
 2011-08-21 07:00:02 Running 16 BackupPC_nightly jobs from 0..15 (out of 0..15)
 2011-08-21 07:00:02 Botch on admin job for [host2]: already in use!!
 2011-08-21 07:00:02 Next wakeup is 2011-08-21 08:00:00
 2011-08-21 07:00:04 Running BackupPC_nightly -m 0 15 (pid=17895)
 
 So there's the problem with testing the front of the queue and then
 popping the back of the queue (!!), which I pointed out long ago and
 the devs don't seem to want to fix.  Fixing that would allow

[BackupPC-users] The one-at-a-time nightly problem returns (was Re: The one-at-a-time nightly problem, debugged; DEVS PLEASE READ; long.)

2011-08-21 Thread Robin Lee Powell
So, I haven't figured out why yet, but this keeps happening on my
hosts *even though* I'm not running "BackupPC_serverMesg
BackupPC_nightly run".

As far as I know, other than "BackupPC_serverMesg server reload"
running from a script every once in a while, and having an extremely
large configuration, I'm doing nothing abnormal.

I am now running 3.2.1, so it's not an old bug; this happened to me
on two different hosts within the last couple of days.

What seems to be happening is that after a while I get:

2011-08-19 10:21:34 Botch on admin job for [host1]: already in use!!
2011-08-19 10:21:40 Botch on admin job for [host1]: already in use!!
2011-08-19 10:21:40 Botch on admin job for [host1]: already in use!!
2011-08-19 10:21:40 Botch on admin job for [host1]: already in use!!
2011-08-19 10:21:40 Botch on admin job for [host1]: already in use!!
2011-08-19 10:21:40 Botch on admin job for [host1]: already in use!!
2011-08-19 10:21:43 Botch on admin job for [host1]: already in use!!

and then after that, the nightly jobs only run one at a time until I
completely restart the server.  Which I really don't want to do on
one of the servers because I'm in the middle of a 5 day backup.

This is really bad for my systems, and really frustrating. -_-

-Robin


On Wed, Nov 24, 2010 at 06:11:20PM -0800, Robin Lee Powell wrote:
 
 Figured it out.  The problem was that I have BackupPC set to run 8
 nightlies at once (which usually takes 12 or more hours), but it was
 ending up in a state where only one was running at a time.
 
 This may be the longest, most detailed debugging writeup I've ever
 done in 15 years of being a computer professional; I hope y'all
 appreciate it.  :)  I had to do this to hold all the relevant state
 in my head.
 
 It turns out that the issue occurs when the 24-hour-ly nightlies job
 is already running, and you do
 
sudo -u backuppc BackupPC_serverMesg BackupPC_nightly run
 
 which I've been doing a lot.
 
 Deciding to queue new nightly jobs goes like this:
 
   while ( $CmdJob eq "" && @CmdQueue > 0 && $RunNightlyWhenIdle != 1
           || @CmdQueue > 0 && $RunNightlyWhenIdle == 2
              && $bpc->isAdminJob($CmdQueue[0]->{host}) ) {
 
 We'll be coming back to this a lot.  isAdminJob matches nightly
 jobs only AFAICT.
 
 CmdQueue State: Empty
 CmdJob: Empty
 RunNightlyWhenIdle: 0
 While State: False, since @CmdQueue = 0
 Running Job State: Empty
 Event:
 
   Normal nightly run occurs.  RunNightlyWhenIdle is set to 1, which
   triggers all the nightly jobs getting added to the queue, and
   RunNightlyWhenIdle getting set to 2.
 
 CmdQueue State: 8 nightly jobs
 CmdJob: Empty
 RunNightlyWhenIdle: 2
 isAdminJob Matches First Job: True
 While State: True, via Branch 2
 Running Job State: Empty
 Event:
 
   Nightly jobs get kicked off, all 8 of them.
 
 
 CmdQueue State: Empty
 CmdJob: non-empty; admin7 or similar
 RunNightlyWhenIdle: 2
 isAdminJob Matches First Job: False
 While State: False, since @CmdQueue = 0
 Running Job State: 8 nightly jobs
 Event:
 
   A backup finishes, and queues up a BackupPC_link job.  This
   happens several times, since the nightly jobs take 8+ hours, even
   split into 8 parts, on my machine (4+TiB of backups per backup
   machine).
 
 CmdQueue State: Several link jobs
 CmdJob: non-empty; admin7 or similar
 RunNightlyWhenIdle: 2
 isAdminJob Matches First Job: False; link jobs don't match
 While State: False
 Running Job State: 8 nightly jobs
 Event:
 
   User runs "sudo -u backuppc BackupPC_serverMesg BackupPC_nightly
   run".  This causes RunNightlyWhenIdle to be set to 1, but before
   that hits the while, the jobs are actually queued, *USING
   unshift*, which puts them at the front of the queue.  This is
   where things start to go horribly wrong.
 
 CmdQueue State: 8 nightly jobs, *THEN* Several link jobs
 CmdJob: non-empty; admin7 or similar
 RunNightlyWhenIdle: 2
 isAdminJob Matches First Job: *TRUE*
 While State: True, branch 2
 Running Job State: 8 nightly jobs
 Event:
 
   *Pop* a job from the queue.  This means that even though the
   *test* is for the job from the *front* of the queue, the job that
   actually gets handled is the job at the *end* of the queue.
 
   So, the last job on the queue, a link job, gets run.  THIS SHOULD
   NEVER HAPPEN, as I understand it, because nightly jobs (the first
   set) are still running.  The link job sets CmdJob, but that
   doesn't matter because we're going through the *second* branch of
   the while, which doesn't care about CmdJob.  So, it happily
   launches another link job:
   
 CmdQueue State: 8 nightly jobs, then N-1 link jobs
 CmdJob: hostname non-empty from the last link job
 RunNightlyWhenIdle: 2
 isAdminJob Matches First Job: True
 While State: True, branch 2
 Running Job State: 8 nightly jobs, 1 link job
 Event:
 
   Runs the next link job.  And all the others.  We end up with *all*
   link jobs running at once.  THIS SHOULD NEVER HAPPEN; CmdQueue is
   supposed to be one at a time.
 
   But wait, it gets better!
 
   When each

[BackupPC-users] Solved, again; devs please read (was Re: The one-at-a-time nightly problem returns (was Re: The one-at-a-time nightly problem, debugged; DEVS PLEASE READ; long.))

2011-08-21 Thread Robin Lee Powell

Ah-*HAH*!  Got it!

So (part of) the problem, *AGAIN*, is the way the CmdQueue tests the
*front* of the queue, but pulls jobs from the *BACK* of the queue.

Now, what happens if a backup goes for a really long time?  Like,
more than 24 hours?  Well, even though it's still running, an entry
for that host is also put in the background queue, which means that
when the current backup finishes, a new one runs immediately, which
is fine.

The problem occurs when the current backup finishes *during a
nightly run*.  This means that the link can't run.  But it gets
queued on the CmdQueue.  Then the backup on the BgQueue starts.

Then the nightlies end, and the link on the CmdQueue tries to run.
It refuses, with a "Botch on admin job" message, because a backup
for that host is running.  This happens many many many many many
times, until the next nightlies try to run.

So the new nightlies get pushed onto the *FRONT* of the CmdQueue via
unshift; now the queue is a bunch of nightlies and some link jobs.

Here's the top of the CmdQueue loop:

while ( $CmdJob eq "" && @CmdQueue > 0 && $RunNightlyWhenIdle != 1
        || @CmdQueue > 0 && $RunNightlyWhenIdle == 2
           && $bpc->isAdminJob($CmdQueue[0]->{host})
      ) {
    local(*FH);
    $req = pop(@CmdQueue);

    $host = $req->{host};
    if ( defined($Jobs{$host}) ) {
        print(LOG $bpc->timeStamp,
              "Botch on admin job for $host: already in use!!\n");
        #
        # This could happen during normal operation: a user could
        # request a backup while a BackupPC_link is queued from
        # a previous backup.  But it is unlikely.  Just put this
        # request back on the end of the queue.
        #
        unshift(@CmdQueue, $req);
        return;
    }

So the loop can be entered, no problem, because it's nightly time
($RunNightlyWhenIdle == 2 is true), the first job in the queue is
a nightly job ($bpc->isAdminJob($CmdQueue[0]->{host}) is true), and
there's certainly more than one such job.  So it enters the loop,
and then *POPS* the *LAST* job off the queue.  This is a *LINK* job,
not a nightly, and, better still, *it fails*, which means that the
*front* of the queue now holds a link job.
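As a sketch (a toy model with invented job names, not the daemon's actual data structures), the Botch-then-unshift sequence that leaves a link job at the front looks like this:

```python
from collections import deque

# After "BackupPC_nightly run" unshifts new admin jobs, the queue holds
# nightlies at the front and leftover link jobs at the back.
cmd_queue = deque(["nightly0", "nightly1", "link-hostA"])
running_jobs = {"hostA"}            # a backup for hostA is still running

req = cmd_queue.pop()               # [0] was tested, but "link-hostA" pops
host = req.split("-", 1)[1]
if host in running_jobs:
    # The "Botch on admin job" path: unshift puts the request back at
    # the FRONT, so the isAdminJob($CmdQueue[0]) test fails from now on.
    cmd_queue.appendleft(req)

print(cmd_queue[0])   # link-hostA
```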

Now we're back at the top of that while, which works because CmdJob
(which gets cleared at the end of every successful CmdQueue job
but does not get set in the failure case) is empty, so we take the
first branch, since $RunNightlyWhenIdle != 1 is true.  So we kick
off the nightly that's at the end of the queue (which, since unshift
was used in order, is the first one).

Now we have a nightly running (CmdJob is not ""), *AND* the first
job in the queue is a link job (so
$bpc->isAdminJob($CmdQueue[0]->{host}) is false).  This means we
can't enter either branch of the CmdQueue loop, and we're stuck
until the nightly finishes.
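The two-branch while condition can be modeled as a boolean function to show the stuck state (a hypothetical Python rendering of the Perl condition above, not the real code):

```python
# Toy rendering of the Perl condition:
#   $CmdJob eq "" && @CmdQueue > 0 && $RunNightlyWhenIdle != 1
#   || @CmdQueue > 0 && $RunNightlyWhenIdle == 2
#      && $bpc->isAdminJob($CmdQueue[0]->{host})
def can_dispatch(cmd_job, cmd_queue, run_nightly_when_idle, is_admin):
    branch1 = (cmd_job == "" and len(cmd_queue) > 0
               and run_nightly_when_idle != 1)
    branch2 = (len(cmd_queue) > 0 and run_nightly_when_idle == 2
               and is_admin(cmd_queue[0]))
    return branch1 or branch2

def is_admin(job):
    return job.startswith("admin")

# Stuck state: a nightly is running (CmdJob set) and a link job sits at
# the front of the queue, so neither branch can fire.
stuck_state = can_dispatch("admin7", ["link-hostA", "admin0"], 2, is_admin)
print(stuck_state)   # False -- nothing dispatches until the nightly ends
```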

This is what the logs look like in this case:

2011-08-21 07:00:02 Running 16 BackupPC_nightly jobs from 0..15 (out of 0..15)
2011-08-21 07:00:02 Botch on admin job for [host2]: already in use!!
2011-08-21 07:00:02 Next wakeup is 2011-08-21 08:00:00
2011-08-21 07:00:04 Running BackupPC_nightly -m 0 15 (pid=17895)

So there's the problem with testing the front of the queue and then
popping the back of the queue (!!), which I pointed out long ago and
the devs don't seem to want to fix.  Fixing that would allow the
nightlies to run even when there's a link job mucking things up, but
it wouldn't stop the logs from being spammed with Botch messages.

Here are two options that would:

1.  Don't start a backup, even a bgQueue backup, when a link for
that host is pending.

2.  Don't put a backup for a host on the bgQueue when a backup (or
link job) for that host is currently running, even in QueueAllPCs.
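Option 2 might look roughly like this guard (the helper name and data shapes are illustrative assumptions; the real queueing happens in QueueAllPCs):

```python
# Hypothetical guard for option 2: don't queue a background backup for a
# host that already has a running job or a pending link.
def should_queue_bg_backup(host, jobs, status):
    if host in jobs:                                    # job running
        return False
    if status.get(host, {}).get("state") == "Status_link_pending":
        return False                                    # link pending
    return True

jobs = {"hostA": "backup"}
status = {"hostB": {"state": "Status_link_pending"}}

print(should_queue_bg_backup("hostA", jobs, status))   # False
print(should_queue_bg_backup("hostB", jobs, status))   # False
print(should_queue_bg_backup("hostC", jobs, status))   # True
```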

-Robin

On Sun, Aug 21, 2011 at 08:52:38AM -0700, Robin Lee Powell wrote:
 So, I haven't figured out why yet, but this keeps happening on my
 hosts *even though* I'm not running BackupPC_serverMesg
 BackupPC_nightly run.
 
 As far as I know, other than BackupPC_serverMesg server reload
 running from a script every once in a while, and having an extremely
 large configuration, I'm doing nothing abnormal.
 
 I am now running 3.2.1, so it's not an old bug; this happened to me
 on two different hosts within the last couple of days.
 
 What seems to be happening is that after a while I get:
 
 2011-08-19 10:21:34 Botch on admin job for [host1]: already in use!!
 2011-08-19 10:21:40 Botch on admin job for [host1]: already in use!!
 2011-08-19 10:21:40 Botch on admin job for [host1]: already in use!!
 2011-08-19 10:21:40 Botch on admin job for [host1]: already in use!!
 2011-08-19 10:21:40 Botch on admin job for [host1]: already in use!!
 2011-08-19 10:21:40 Botch on admin job for [host1]: already in use!!
 2011-08-19 10:21:43 Botch on admin job for [host1]: already in use!!
 
 and then after that, the nightly jobs only run one at a time until I
 completely restart the server

[BackupPC-users] Feature request: log both starts.

2011-08-21 Thread Robin Lee Powell

We use dumpPreUserCmd to run database dumps, then back up the
resulting files.  Some of these dumps take a *really* long time.
The annoying part is that the logs don't show the *actual* start
time of the backups (that is, the time when the dumpPreUserCmd is
started); they only show the start of the part after that, the
backup of the actual files, so you get lovely discrepancies like
this:

2011-08-19 09:25:22 incr backup started back to 2011-08-15 00:15:45 (backup #378) for directory /
2011-08-19 10:01:46 incr backup 379 complete, 11 files, 3205841445 bytes, 0 xferErrs (0 bad files, 0 bad shares, 0 other)
2011-08-20 18:10:23 full backup started for directory / (baseline backup #379)
2011-08-20 18:46:34 full backup 380 complete, 12 files, 3285062302 bytes, 0 xferErrs (0 bad files, 0 bad shares, 0 other)

vs. this:

Backup#  Type  Filled  Level  Start Date  Duration/mins  Age/days  Server Backup Path
[snip]
380      full  yes     0      8/19 10:01  1964.7         2.3       /backups/pc/[snip]

The reason it says the full started at 8/19 10:01 is because it
was queued on the BgQueue and started running the dumpPreUserCmd
immediately after the incremental finished.  But the logs don't show
that at all; they say it started at 2011-08-20 18:10:23 (which
is, what, 32 hours later?).
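For what it's worth, the gap can be computed directly from the two timestamps above (a quick Python check):

```python
from datetime import datetime

# Start shown in the backup table vs. start logged for the transfer.
queued_start   = datetime(2011, 8, 19, 10, 1)
transfer_start = datetime(2011, 8, 20, 18, 10)

hours = (transfer_start - queued_start).total_seconds() / 3600
print(hours)   # ~32.15 hours
```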

I would very much like it if the logs included both the very start
and the start of the actual transfer.

Thanks.

-Robin

-- 
http://singinst.org/ :  Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which "this parrot
is dead" is "ti poi spitaki cu morsi", but "this sentence is false"
is "na nei".   My personal page: http://www.digitalkingdom.org/rlp/

--
Get a FREE DOWNLOAD! and learn more about uberSVN rich system, 
user administration capabilities and model configuration. Take 
the hassle out of deploying and managing Subversion and the 
tools developers use with it. http://p.sf.net/sfu/wandisco-d2d-2
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Backup retention and deletion.

2011-05-13 Thread Robin Lee Powell

So, I had FullKeepCntMin set really high, and I lowered it, and it's
been several days, and yet I still have a ton of old fulls lying
around.

My questions:

1.  When and how do deletions of the excess occur?

2.  Why aren't they occurring?  (in particular, is it possible that I
need to completely restart the server for that to work? we *always*
have backups running, and there's still no "stop when all current
backups are complete" option, so I avoid that)

Here's the current config:

$Conf{FullPeriod} = '3.7';
$Conf{IncrPeriod} = '0.9';
$Conf{FullKeepCnt} = '25';
$Conf{FullKeepCntMin} = '5';
$Conf{FullAgeMax} = '60';
$Conf{IncrKeepCnt} = '45';
$Conf{IncrKeepCntMin} = '10';
$Conf{IncrAgeMax} = '60';

And here's the list of backups with types and dates:

# awk -F'   ' '{ print $1, $2, strftime("%F", $3); }' backups
0 full 2011-02-16
4 full 2011-02-21
10 full 2011-02-25
16 full 2011-03-01
22 full 2011-03-05
28 full 2011-03-10
34 full 2011-03-14
40 full 2011-03-18
46 full 2011-03-22
51 full 2011-03-26
52 incr 2011-03-27
53 incr 2011-03-28
54 incr 2011-03-28
55 incr 2011-03-29
56 incr 2011-03-30
57 full 2011-03-31
58 incr 2011-03-31
59 incr 2011-04-01
60 incr 2011-04-02
61 incr 2011-04-03
62 full 2011-04-04
63 incr 2011-04-06
64 incr 2011-04-07
65 incr 2011-04-07
66 full 2011-04-08
67 incr 2011-04-09
68 incr 2011-04-09
69 incr 2011-04-10
70 incr 2011-04-11
71 incr 2011-04-12
72 full 2011-04-12
73 incr 2011-04-13
74 incr 2011-04-14
75 incr 2011-04-15
76 incr 2011-04-15
77 full 2011-04-16
78 incr 2011-04-17
79 incr 2011-04-17
80 incr 2011-04-18
81 incr 2011-04-19
82 full 2011-04-20
83 incr 2011-04-22
84 incr 2011-04-22
85 incr 2011-04-23
86 full 2011-04-24
87 incr 2011-04-26
88 incr 2011-04-27
89 incr 2011-04-27
90 incr 2011-04-28
91 full 2011-04-29
92 incr 2011-04-29
93 incr 2011-04-30
94 incr 2011-05-02
95 full 2011-05-03
96 incr 2011-05-04
97 incr 2011-05-04
98 incr 2011-05-05
99 incr 2011-05-06
100 full 2011-05-07
101 incr 2011-05-08
102 incr 2011-05-09
103 incr 2011-05-10
104 full 2011-05-11
105 incr 2011-05-11
106 incr 2011-05-12
107 incr 2011-05-13
108 full 2011-05-13


Thanks for your help.

-Robin



Re: [BackupPC-users] Backup retention and deletion.

2011-05-13 Thread Robin Lee Powell
On Fri, May 13, 2011 at 10:47:27PM +0200, Matthias Meyer wrote:
 Robin Lee Powell wrote:
 
  
  So, I had FullKeepCntMin set really high, and I lowered it, and it's
  been several days, and yet I still have a ton of old fulls lying
  around.
  
  My questions:
  
  1.  When and how do deletions of the excess occur?
  
  2.  Why aren't they occurring?  (in particular, is it possible that I
  need to completely restart the server for that to work? we *always*
  have backups running, and there's still no "stop when all current
  backups are complete" option, so I avoid that)
  
  Here's the current config:
  
[snip]
  $Conf{FullKeepCnt} = '25';
  $Conf{FullKeepCntMin} = '5';
  $Conf{FullAgeMax} = '60';
[snip]

 Where is your problem? You have 22 full backups already achieved.
 So your server will also keep the next 3 fulls and will remove
 #0 when the 4th full backup occurs. The count of already
 achieved incrementals is 45. Exactly what you want.

I want it to delete all the fulls older than 60 days.
FullKeepCntMin is only 5, so if I have more than 5 fulls (which I
do), then everything older than FullAgeMax should be deleted, no?

-Robin



Re: [BackupPC-users] Backup retention and deletion.

2011-05-13 Thread Robin Lee Powell
On Sat, May 14, 2011 at 04:09:51AM +0200, Holger Parplies wrote:
 config.pl commented on 2010-07-31 19:52 [BackupPC 3.2.0]:
 # Note that $Conf{FullAgeMax} will be increased to $Conf{FullKeepCnt}
 # times $Conf{FullPeriod} if $Conf{FullKeepCnt} specifies enough
 # full backups to exceed $Conf{FullAgeMax}.

Ugh, missed that.  I thought that past FullAgeMax, only
FullKeepCntMin mattered.  In fact, I thought that was the *point* of
FullKeepCntMin.
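Under the rule Holger quotes, the effect with the config posted earlier in this thread can be checked with simple arithmetic (a Python sketch of the documented rule, not the actual Perl):

```python
# config.pl note: FullAgeMax is effectively raised to
# FullKeepCnt * FullPeriod when the keep-count implies older backups.
full_period   = 3.7    # $Conf{FullPeriod}, days
full_keep_cnt = 25     # $Conf{FullKeepCnt}
full_age_max  = 60     # $Conf{FullAgeMax}, days

effective_age_max = max(full_age_max, full_keep_cnt * full_period)
print(effective_age_max)   # ~92.5 days: fulls 60-90 days old survive
```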

-Robin



Re: [BackupPC-users] Distributing user configs from a central host?

2011-02-17 Thread Robin Lee Powell
That was *not* sent to the right mailing list.  Sorry.

-Robin

On Thu, Feb 17, 2011 at 06:34:58AM -0800, Robin Lee Powell wrote:
 I have a central server, that happens to be the puppetmaster, that
 has various users on it.  I would like to copy out their information
 (name, uid, password, .bashrc, etc) to all my other hosts, but I
 want to let the users change their stuff on that host, so I don't
 want to just stick it in puppet.
 
 My inclination is to just make a script that runs through the passwd
 file and generates puppet instructions out, and also copies the user
 files in question into a place in the puppetmaster directories.
 
 Is there a more-idiomatic way to do that?
 
 -Robin
 
 



[BackupPC-users] Distributing user configs from a central host?

2011-02-17 Thread Robin Lee Powell
I have a central server (which happens to be the puppetmaster) that
has various users on it.  I would like to copy out their information
(name, uid, password, .bashrc, etc) to all my other hosts, but I
want to let the users change their stuff on that host, so I don't
want to just stick it in puppet.

My inclination is to just make a script that runs through the passwd
file and generates puppet instructions out, and also copies the user
files in question into a place in the puppetmaster directories.

Is there a more-idiomatic way to do that?
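A minimal sketch of the script described above (the field handling and UID cutoff are assumptions, and "puppet instructions" here means plain user resources):

```python
# Emit Puppet user resources for the non-system accounts in a passwd file.
def passwd_to_puppet(lines, min_uid=1000):
    resources = []
    for line in lines:
        name, _pw, uid, gid, _gecos, home, shell = line.strip().split(":")
        if int(uid) < min_uid:          # skip root and system accounts
            continue
        resources.append(
            'user { "%s":\n'
            '  ensure => present,\n'
            '  uid    => %s,\n'
            '  gid    => %s,\n'
            '  home   => "%s",\n'
            '  shell  => "%s",\n'
            '}' % (name, uid, gid, home, shell)
        )
    return "\n".join(resources)

sample = [
    "root:x:0:0:root:/root:/bin/bash",
    "alice:x:1001:1001:Alice:/home/alice:/bin/bash",
]
print(passwd_to_puppet(sample))
```

The real version would read /etc/passwd and also copy the dotfiles into the puppetmaster's file-serving tree.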

-Robin



Re: [BackupPC-users] Re-request: finish all backups and stop.

2011-02-13 Thread Robin Lee Powell
On Sat, Feb 12, 2011 at 10:29:12PM -0600, Les Mikesell wrote:
 On 2/12/11 9:46 PM, Robin Lee Powell wrote:
 
  Would be Soo handy.
 
 But what would it mean?  And what if one or more of the targets is
 unavailable and you never stop the retries and thus never finish?

I mean finish all *currently running* backups, and do not start any
others at all.  If one currently running ends in failure, that's
fine; don't retry.

The point is to reach the server-off state without losing the 12
hours spent on the current backup, or whatever.

-Robin



[BackupPC-users] Re-request: finish all backups and stop.

2011-02-12 Thread Robin Lee Powell

Would be Soo handy.

-Robin



Re: [BackupPC-users] General questions about the background queue and link commands.

2011-02-03 Thread Robin Lee Powell
On Thu, Feb 03, 2011 at 05:25:48PM +0100, Holger Parplies wrote:
 Hi,
 
 Robin Lee Powell wrote on 2011-02-02 08:12:47 -0800
 [[BackupPC-users] General questions about the background queue and
 link commands.]:
  Let me ask some more general questions:  What does the
  background queue actually *mean*?
 
 from the source (3.2.0beta0, but probably unchanged):
 
 #- @BgQueue is a queue of automatically scheduled backup
 requests.
 
 If my memory serves me correctly, BackupPC queues each host it
 knows about at each wakeup (presumably unless it's already on a
 queue ... see %BgQueueOn, %CmdQueueOn, %UserQueueOn). It then
 proceeds to process these backup requests in the order they're
 in the queue. Most of the time, that will simply mean deciding
 that it's not yet time for the next backup of this host. Now, if
 ...
 
  All but one host (!), 226 out of 227, is in the background
  queue, which seems rather excessive since about half my hosts
  have recent backups.
 
 ... I would guess that the first host is currently doing its
 backup, you've got $Conf{MaxBackups} = 1, 

8 actually.

This isn't happening on most of my hosts; only 2 out of 6.

 and most of the queue will simply disappear once the running
 backup is finished, because BackupPC will decide for each host in
 turn whether its backup is recent enough.
 
 Of course, if your situation is that one backup is always running
 (i.e. your BackupPC server is constantly backing up something),
 you'll see this situation most of the time - all of the time if
 backups take more time than the interval between wakeups.

Since it's every 15 minutes, yeah.  :)

 Well, you obviously won't really have that situation - the example
 is just for illustration. But you'll see this (for a possibly
 short period of time) whenever a backup takes longer than the
 wakeup interval, and whenever the first host to be scheduled is
 actually backed up (for the duration of that backup). That is not
 a problem. It is not an indication of high load.
 
 To sum it up, that a host is on @BgQueue does *not* necessarily
 mean that a backup will be done for this host, just that a check
 will be done whether a backup is needed. If so, the backup will
 also be done, else it's just the check.
 
 Note that this also means that backups may start at any time, not
 just at the times listed in the WakeupSchedule (but you've
 probably already noticed that).

Got it.

OK, so the question becomes, how do I monitor for general backup
queue problems?  I've had situations where something gets stuck,
like a nightly job, and the queue gets backed up, and I want to
detect that.

I guess if I could get access to the backup age that's on the host
summary, from a script, that would do.

-Robin

-- 
http://singinst.org/ :  Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which this parrot
is dead is ti poi spitaki cu morsi, but this sentence is false
is na nei.   My personal page: http://www.digitalkingdom.org/rlp/

--
Special Offer-- Download ArcSight Logger for FREE (a $49 USD value)!
Finally, a world-class log management solution at an even better price-free!
Download using promo code Free_Logger_4_Dev2Dev. Offer expires 
February 28th, so secure your free ArcSight Logger TODAY! 
http://p.sf.net/sfu/arcsight-sfd2d
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] General questions about the background queue and link commands.

2011-02-03 Thread Robin Lee Powell
On Thu, Feb 03, 2011 at 09:22:00AM -0800, Robin Lee Powell wrote:
 OK, so the question becomes, how do I monitor for general backup
 queue problems?  I've had situations where something gets stuck,
 like a nightly job, and the queue gets backed up, and I want to
 detect that.
 
 I guess if I could get access to the backup age that's on the host
 summary, from a script, that would do.

Got it; short version below, remind me to post after the real script
is done:

for num in $(sudo -u backuppc BackupPC_serverMesg status hosts \
             | sed 's/},[{]/\n/g' \
             | grep lastGoodBackupTime \
             | sed 's/.*lastGoodBackupTime => //; s/,.*//')
do
  if [ $(($(date +%s) - num)) -gt 77760 ]  # 77760s = 0.9 days
  then
    echo "needs backup"
  fi
done

-Robin



Re: [BackupPC-users] Why is the backup not being queued?

2011-02-02 Thread Robin Lee Powell
On Wed, Feb 02, 2011 at 04:07:41PM +0100, martin f krafft wrote:
 Hello,
 
 I have a BackupPC configured with FullPeriod at 7.97 and daily
 incremental backups, no blackout periods, hourly wakeup, and
 2 consecutive backups permitted.
 
 One of the hosts' last full backup is now 8.2 days old, while it's
 incremental backup is 0.9 days old.
 
 BackupPC just woke up, but it did not schedule the host, nor any
 other host. Nor did it schedule anything the hour before. But there
 are hosts, whose FullAge and IncrAge are larger than the FullPeriod
 and IncrPeriod settings.
 
 What could be the reason for this?

Most likely the disk is too full.

-Robin



Re: [BackupPC-users] Why is the backup not being queued?

2011-02-02 Thread Robin Lee Powell
On Wed, Feb 02, 2011 at 04:54:57PM +0100, martin f krafft wrote:
 also sprach Robin Lee Powell rlpow...@digitalkingdom.org [2011.02.02.1628 
 +0100]:
   What could be the reason for this?
  
  Most likely the disk is too full.
 
 Nope. The disk is at 67%. And if it were too full, I would have
 received an e-mail.

Next most likely is dumpPre commands failing.

What does the host log say?

-Robin



[BackupPC-users] General questions about the background queue and link commands.

2011-02-02 Thread Robin Lee Powell

Let me ask some more general questions:  What does the background queue
actually *mean*?

What does a green background in the Host Summary mean?

All but one host (!), 226 out of 227, is in the background queue,
which seems rather excessive since about half my hosts have recent
backups.

All the hosts which have recent backups show green in the host
summary.

I sure hope those aren't all pending link commands, because:

$Conf{MaxPendingCmds} = 8;

-Robin



Re: [BackupPC-users] Why is the backup not being queued?

2011-02-02 Thread Robin Lee Powell
On Wed, Feb 02, 2011 at 05:11:50PM +0100, martin f krafft wrote:
 also sprach Robin Lee Powell rlpow...@digitalkingdom.org [2011.02.02.1704 
 +0100]:
  What does the host log say?
 
 Nothing, really:
 
   2011-02-02 15:00:00 Next wakeup is 2011-02-02 16:00:00
   2011-02-02 16:00:00 Next wakeup is 2011-02-02 17:00:00
   2011-02-02 17:00:00 Next wakeup is 2011-02-02 18:00:00
   2011-02-02 17:00:21 Started full backup on seamus.madduck.net (pid=22484, share=/)
 
 Now the backup of the host started. I am trying to understand why it
 wasn't scheduled at 15:00 already. The last full backup age had
 already passed FullPeriod by that time.

Check the server log, see if any nightly jobs were running.

-Robin



Re: [BackupPC-users] Why is the backup not being queued?

2011-02-02 Thread Robin Lee Powell
On Wed, Feb 02, 2011 at 05:53:33PM +0100, martin f krafft wrote:
 also sprach Robin Lee Powell rlpow...@digitalkingdom.org [2011.02.02.1720 
 +0100]:
 2011-02-02 15:00:00 Next wakeup is 2011-02-02 16:00:00
 2011-02-02 16:00:00 Next wakeup is 2011-02-02 17:00:00
 2011-02-02 17:00:00 Next wakeup is 2011-02-02 18:00:00
 2011-02-02 17:00:21 Started full backup on seamus.madduck.net (pid=22484, share=/)
   
   Now the backup of the host started. I am trying to understand why it
   wasn't scheduled at 15:00 already. The last full backup age had
   already passed FullPeriod by that time.
  
  Check the server log, see if any nightly jobs were running.
 
 That *is* the server log, accessed through the website. I am not
 aware of any other logs.

There's a log for each client, which you can see by accessing the
client in the GUI (it'll be the top-most "LOG File" link), and a log
for the server as a whole.

-Robin



Re: [BackupPC-users] Why is the backup not being queued?

2011-02-02 Thread Robin Lee Powell
On Wed, Feb 02, 2011 at 06:20:15PM +0100, martin f krafft wrote:
 also sprach Robin Lee Powell rlpow...@digitalkingdom.org
 [2011.02.02.1757 +0100]:
  There's a log for each client, which you can see by accessing
  the client in the GUI (it'll be the top-most LOG File link)
  and a log for the server as a whole.
 
 Oh, of course, sorry. But there is nothing in that log file
 hinting at any reason why the backup did not get scheduled or
 didn't run at 15:00 or 16:00
 
    2011-02-01 20:41:51 removing full backup 128
    2011-02-02 17:00:21 full backup started for directory / (baseline backup #143)

I got nothin'.  I've often wished for a way to get BackupPC to be
more verbose in the logs.

-Robin



[BackupPC-users] Clients in the background queue that don't actually need backing up?

2011-02-01 Thread Robin Lee Powell

I have one server where the background queue is *huge* (hundreds of
hosts, small segment below with actual host names trimmed, sorry)

backupType = auto,reqTime = 1296592200,user = BackupPC,host = nik[snip]
backupType = auto,reqTime = 1296592200,user = BackupPC,host = med[snip]
backupType = auto,reqTime = 1296592200,user = BackupPC,host = app[snip]
backupType = auto,reqTime = 1296592200,user = BackupPC,host = tm2[snip]
backupType = auto,reqTime = 1296592200,user = BackupPC,host = tra[snip]
backupType = auto,reqTime = 1296592200,user = BackupPC,host = tm2[snip]

Got that from "sudo -u backuppc BackupPC_serverMesg status queues |
sed 's/},{/\n/g' | less", and all those hosts show up green in the
Host Summary.
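For anyone wanting to reproduce this, here's a small sketch of counting those queue entries; the record format (and the "host = " field name) is assumed from the snippet above, and the split logic is factored out so it can be fed any saved dump:

```shell
#!/bin/sh
# Count entries in a "status queues" dump read from stdin, e.g.:
#   sudo -u backuppc BackupPC_serverMesg status queues | count_bg_entries
count_bg_entries() {
    sed 's/},[{]/\n/g' \
      | grep -c 'host = '    # one "host = " field per queue record
}
```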

The thing is, almost none of them actually need to be backed up.
Many of them have been backed up within the last 0.1 days, in fact,
and my period is 0.7.

Any idea what's going on?

-Robin



Re: [BackupPC-users] Is there any danger to removing pool entries with nlink=1

2011-01-20 Thread Robin Lee Powell
On Thu, Jan 20, 2011 at 11:34:01AM -0500, Jeffrey J. Kosowsky wrote:
 If BackupPC is *not* running, is there any danger to removing pool
 entries with just one hard link?

If there are no new/ directories, that should be fine.

-Robin


--
Protect Your Site and Customers from Malware Attacks
Learn about various malware tactics and how to avoid them. Understand 
malware threats, the impact they can have on your business, and how you 
can protect your company and customers by using code signing.
http://p.sf.net/sfu/oracle-sfdevnl


Re: [BackupPC-users] Is there any danger to removing pool entries with nlink=1

2011-01-20 Thread Robin Lee Powell
On Thu, Jan 20, 2011 at 04:50:47PM -0500, Jeffrey J. Kosowsky wrote:
 Robin Lee Powell wrote at about 09:52:30 -0800 on Thursday, January 20, 2011:
   On Thu, Jan 20, 2011 at 11:34:01AM -0500, Jeffrey J. Kosowsky wrote:
If BackupPC is *not* running, is there any danger to removing pool
entries with just one hard link?
   
   If there are no new/ directories, that should be fine.
   

 I'm not sure I understand this. We are talking about the pool.

My point was that things might not have gotten linked into the pool
yet, if link was pending.  But you're right, that wouldn't lead to
an nlink=1 case; nevermind.

-Robin



Re: [BackupPC-users] aborted by signal=PIPE

2011-01-10 Thread Robin Lee Powell
On Mon, Jan 10, 2011 at 01:11:50PM +0330, mohammad tayebi wrote:
 *Hi Backuppc Users*
 
 I have a problem.
 My BackupPC server has RAID 5 and /var/lib/backuppc mounted on it.
 
 Of course, my question is about this log line:
 
 * 2011-01-09 20:00:03 Got fatal error during xfer (aborted by signal=PIPE)*
 
 The backup stops every day at about 20:00 (or 19:58, 19:57). I back up
 my fileserver (Openfiler, a Linux OS) by *rsync*.
 
 I'm confused about this problem; I searched the net but couldn't find any
 specific answer on the web,

That's interesting, because when I google for "backuppc signal=PIPE" I get
many answers.

Short version: it's probably a timeout issue.  Try increasing
$Conf{ClientTimeout}.  Mine is set to 604800, which is one week.
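In config.pl terms, that setting looks like this (the value is just the one I use; size it to your longest expected transfer):

```
# Maximum seconds a transfer may run before BackupPC gives up on the
# client.  Written out for clarity: 7 days = 604800 seconds.
$Conf{ClientTimeout} = 7 * 24 * 60 * 60;
```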

-Robin


--
Gaining the trust of online customers is vital for the success of any company
that requires sensitive data to be transmitted over the Web.   Learn how to 
best implement a security strategy that keeps consumers' information secure 
and instills the confidence they need to proceed with transactions.
http://p.sf.net/sfu/oracle-sfdevnl 


Re: [BackupPC-users] aborted by signal=PIPE

2011-01-10 Thread Robin Lee Powell
On Mon, Jan 10, 2011 at 10:43:50AM -0600, Les Mikesell wrote:
 On 1/10/2011 10:26 AM, Robin Lee Powell wrote:
  On Mon, Jan 10, 2011 at 01:11:50PM +0330, mohammad tayebi wrote:
  *Hi Backuppc Users*
 
  i have problem ? my backupc server has Raid 5 And mounted
  /var/lib/backuppc
 
  ofcource my quesition is : This Log
 
  * 2011-01-09 20:00:03 Got fatal error during xfer (aborted by
  signal=PIPE)*
 
  backuppc serve stop every daily at 20 pm or (19:58 , 19:57 ) i
  backup my fileserver (Openfiler-linux os)by *rsync*.
 
  I confuse about this problem  , i surf to net but dont find any
  specific answer at web ,
 
  That's interesting, because I google for backuppc signal=PIPE
  and I get many answers.
 
  Short version: it's probably a timeout issue.  Try increasing
  $Conf{ClientTimeout}.  Mine is set to 604800, which is one week.
 
 Timeouts would give you an alarm signal. Pipe signals mean the
 program communicating with the target system quit unexpectedly
 which can happen for a lot of reasons.

D'oh!  Yes.

One thing you can do there is get it running and then strace (or
similar) the remote end.

-Robin



[BackupPC-users] Double-checking something: cpool and links.

2011-01-04 Thread Robin Lee Powell

Am I correct in my belief that:

  find /backups/cpool/ -links 1 -ls

should always return nothing, and that I can freely delete anything
it *does* find?
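The reasoning is that every pool file should carry at least one extra hard link from somewhere under pc/, so an nlink of exactly 1 means nothing references it. A throwaway demonstration on a scratch directory (the path is a mktemp sandbox, not a real BackupPC layout):

```shell
#!/bin/sh
# Files with a single hard link have no second reference anywhere,
# so in a pool they would be orphans (assuming BackupPC is stopped
# and no new/ directories are pending a link run).
pool=$(mktemp -d)
echo data > "$pool/orphan"            # nlink == 1: unreferenced
echo data > "$pool/shared"
ln "$pool/shared" "$pool/shared.ref"  # nlink == 2: still referenced
find "$pool" -links 1 -type f         # lists only .../orphan
```

Review the list before swapping in `-delete`.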

-Robin


--
Learn how Oracle Real Application Clusters (RAC) One Node allows customers
to consolidate database storage, standardize their database environment, and, 
should the need arise, upgrade to a full multi-node Oracle RAC database 
without downtime or disruption
http://p.sf.net/sfu/oracle-sfdevnl


Re: [BackupPC-users] Seed Copy

2010-12-20 Thread Robin Lee Powell
On Mon, Dec 20, 2010 at 10:06:53AM -0500, barry wrote:
 Looking for a good way to do a seed copy import into backuppc. I
 want to be able to get the bulk of backup data on a usb disk then
 import it into backuppc from said disk and resume rsyncs without
 copying all the data in over the wire. Is this possible? Thanks

Yes: copy it to your backup server host, and then tell BackupPC to
backup those files *on the server host*.  This will put them in the
pool, which means they won't be retransferred.  After a couple of
backups on the client are done, delete them on the server.
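A minimal sketch of those steps, with hypothetical directory names; step 2 happens inside BackupPC itself (e.g. a full backup of the server host from the web UI), so it appears only as a comment:

```shell
#!/bin/sh
# Stage seed data on the backup server, then clean up once the pool
# has absorbed it.  Paths here are illustrative.
seed_stage() {
    src=$1 staging=$2
    mkdir -p "$staging"
    cp -a "$src"/. "$staging"/   # step 1: bulk copy onto the server
    # step 2: back up "$staging" on the server host so its files
    # enter the pool (e.g. add it to the server-host client's share
    # and run a full backup)
}
seed_cleanup() {
    # step 3: once a couple of real client backups have completed
    # and deduped against the pool, reclaim the staging space
    rm -rf "$1"
}
```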

-Robin


--
Lotusphere 2011
Register now for Lotusphere 2011 and learn how
to connect the dots, take your collaborative environment
to the next level, and enter the era of Social Business.
http://p.sf.net/sfu/lotusphere-d2d


Re: [BackupPC-users] Migrating backup machines

2010-12-15 Thread Robin Lee Powell
On Wed, Dec 15, 2010 at 04:26:07PM +0100, d.davo...@mastertraining.it wrote:
 OK, I know that it's an old topic :)

 I checked the mailing list and wiki but still I can't find the right 
 direction.

 I just need to migrate to a new BackupPC server. I don't need to move the 
 pool because my data are on a NAS. My old BackupPC server is a Linux 
 Debian etch. The new one is a Debian lenny that mounts exactly the same 
 storage as the former backup server, and its BackupPC comes from the 
 standard repository.

 Seeing that I don't want to lose my working pool and I can't duplicate it 
 (I don't have another NAS with 4TB free space), is it enough to copy the 
 configuration from one server to another? Which files?

If the pool/, cpool/, and pc/ directories will be available, and
mounted in the same place, on both machines, you just need to copy
the /etc/backuppc directory.  I would try a couple of test restores
to be sure, but that should be it.

-Robin



Re: [BackupPC-users] Filesystem corruption: consistency of the pool

2010-12-09 Thread Robin Lee Powell
On Thu, Dec 09, 2010 at 09:53:25AM +0100, martin f krafft wrote:
 also sprach Jeffrey J. Kosowsky backu...@kosowsky.org [2010.11.17.0059 
 +0100]:
  I wrote two programs that might be helpful here:
  1. BackupPC_digestVerify.pl
 If you use rsync with checksum caching then this program checks the
 (uncompressed) contents of each pool file against the stored md4
 checksum. This should catch any bit errors in the pool. (Note
 though that I seem to recall that the checksum only gets stored the
 second time a file in the pool is backed up so some pool files may
 not have a checksum included - I may be wrong since it's been a
 while...)
 
 I did a test run of this tool and it took 12 days to run across the
 pool. I cannot take the backup machine offline for so long. Is it
 possible to run this while BackupPC runs in the background?
 
  2. BackupPC_fixLinks.pl
 This program scans through both the pool and pc trees to look for
 wrong, duplicate, or missing links. It can fix most errors.
 
 And this?

I don't know about the first one, but BackupPC_fixLinks.pl can
*definitely* be run while BackupPC runs.

For serious corruption, you may want to grab the patch I posted a
few days ago; it makes the run *much* slower, but on the plus side
it will fix more errors.

OTOH, the errors it fixes only waste disk space; they don't actually
break BackupPC's ability to function at all.

-Robin



Re: [BackupPC-users] Filesystem corruption: consistency of the pool

2010-12-09 Thread Robin Lee Powell
On Thu, Dec 09, 2010 at 03:03:37PM -0500, Jeffrey J. Kosowsky wrote:
 Robin Lee Powell wrote at about 11:20:26 -0800 on Thursday, December 9, 2010:
   On Thu, Dec 09, 2010 at 09:53:25AM +0100, martin f krafft wrote:
also sprach Jeffrey J. Kosowsky backu...@kosowsky.org [2010.11.17.0059 
 +0100]:
 I wrote two programs that might be helpful here:
 1. BackupPC_digestVerify.pl
If you use rsync with checksum caching then this program checks the
(uncompressed) contents of each pool file against the stored md4
checksum. This should catch any bit errors in the pool. (Note
though that I seem to recall that the checksum only gets stored the
second time a file in the pool is backed up so some pool files may
not have a checksum included - I may be wrong since it's been a
while...)

I did a test run of this tool and it took 12 days to run across the
pool. I cannot take the backup machine offline for so long. Is it
possible to run this while BackupPC runs in the background?

 2. BackupPC_fixLinks.pl
This program scans through both the pool and pc trees to look for
wrong, duplicate, or missing links. It can fix most errors.

And this?
   
   I don't know about the first one, but BackupPC_fixLinks.pl can
   *definitely* be run while BackupPC runs.
   
   For serious corruption, you may want to grab the patch I posted a
   few days ago; it makes the run *much* slower, but on the plus side
   it will fix more errors.
 
 I would suggest instead using the version I posted last night...
 It should be much faster though still slow and may avoid some
 issues...

Well, I meant that version *plus* my patch. :D

Will your new version catch the "this has multiple hard links but
not into the pool" error I was seeing?  (If so yay! and thank you!)

-Robin



Re: [BackupPC-users] Filesystem corruption: consistency of the pool

2010-12-09 Thread Robin Lee Powell
On Thu, Dec 09, 2010 at 03:15:41PM -0500, Jeffrey J. Kosowsky wrote:
 Robin Lee Powell wrote at about 12:06:24 -0800 on Thursday,
 December 9, 2010:
   On Thu, Dec 09, 2010 at 03:03:37PM -0500, Jeffrey J. Kosowsky
   wrote:
Robin Lee Powell wrote at about 11:20:26 -0800 on Thursday,
December 9, 2010:
  On Thu, Dec 09, 2010 at 09:53:25AM +0100, martin f krafft
  wrote:
   also sprach Jeffrey J. Kosowsky backu...@kosowsky.org
   [2010.11.17.0059 +0100]:
I wrote two programs that might be helpful here:
1. BackupPC_digestVerify.pl
   If you use rsync with checksum caching then this
   program checks the (uncompressed) contents of each
   pool file against the stored md4 checksum. This
   should catch any bit errors in the pool. (Note
   though that I seem to recall that the checksum only
   gets stored the second time a file in the pool is
   backed up so some pool files may not have a
   checksum included - I may be wrong since it's been
   a while...)
   
   I did a test run of this tool and it took 12 days to run
   across the pool. I cannot take the backup machine
   offline for so long. Is it possible to run this while
   BackupPC runs in the background?
   
2. BackupPC_fixLinks.pl
   This program scans through both the pool and pc
   trees to look for wrong, duplicate, or missing
   links. It can fix most errors.
   
   And this?
  
  I don't know about the first one, but BackupPC_fixLinks.pl
  can *definitely* be run while BackupPC runs.
  
  For serious corruption, you may want to grab the patch I
  posted a few days ago; it makes the run *much* slower, but
  on the plus side it will fix more errors.

I would suggest instead using the version I posted last
night... It should be much faster though still slow and may
avoid some issues...
   
   Well, I meant that version *plus* my patch. :D
 
 My version does what your patch posted a couple of days ago does, only
 faster and probably better (i.e. your version may miss some cases
 where there are pool dups and unlinked pc files with multiple
 links).

I repeat my assertion that you are my hero.  :)

-Robin



Re: [BackupPC-users] Bizarre form of cpool corruption.

2010-12-09 Thread Robin Lee Powell
On Thu, Dec 09, 2010 at 12:05:01AM -0500, Jeffrey J. Kosowsky wrote:
 Anyway here is the diff. I have not had time to check it much
 beyond verifying that it seems to run -- SO I WOULD TRULY
 APPRECIATE IT IF YOU CONTINUE TO TEST IT AND GIVE ME FEEDBACK.
 Also, it would be great if you would let me know approximately
 what speedup you achieved with this code vs. your original.

Yeah, I can do that.  You mind sending me a completely updated
version privately?  i.e. what you'd post to the wiki once it was
tested?

-Robin



Re: [BackupPC-users] Another jLib/fixLinks issue.

2010-12-09 Thread Robin Lee Powell
On Tue, Dec 07, 2010 at 02:27:46PM -0500, Jeffrey J. Kosowsky wrote:
 Robin Lee Powell wrote at about 11:05:46 -0800 on Tuesday, December 7, 2010:
   On Tue, Dec 07, 2010 at 01:58:28PM -0500, Jeffrey J. Kosowsky wrote:
Robin Lee Powell wrote at about 15:40:04 -0800 on Monday, December 6, 
 2010:
  This is *fascinating*.
  
  From the actually-fixing-stuff part of the run, I get:
  
ERROR: 
 tm50-s00292__nfs/68/f%2f/fshared/fthepoint/fsite_images/f0042/f4097/fMULTI_medium.jpg
  - Too many links if added to 59c43b51dbdd9031ba54971e359cdcec
  
  to which I say "lolwut?" and investigate.
  
  $ ls -li /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec*
  2159521202 -rw-r- 31999 backuppc backuppc 76046 Oct  7 08:29 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec
  2670969865 -rw-r- 31999 backuppc backuppc 76046 Oct 16 15:15 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_0
79561977 -rw-r- 31999 backuppc backuppc 76046 Oct 22 22:07 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_1
   156369809 -rw-r- 31999 backuppc backuppc 76046 Oct 31 09:06 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_2
  3389777838 -rw-r- 31999 backuppc backuppc 76046 Nov  7 09:10 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_3
   106188559 -rw-r- 31999 backuppc backuppc 76046 Nov 13 15:10 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_4
   247044591 -rw-r- 31999 backuppc backuppc 76046 Nov 19 17:20 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_5
   293083240 -rw-r- 31999 backuppc backuppc 76046 Nov 26 06:14 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_6
   513555136 -rw-r- 31999 backuppc backuppc 76046 Dec  1 19:37 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_7
52908307 -rw-r-  7767 backuppc backuppc 76046 Dec  4 10:37 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_8
  $ ls -li 
 /backups/pc/tm50-s00292__nfs/68/f%2f/fshared/fthepoint/fsite_images/f0042/f4097/fMULTI_medium.jpg
 374791856 -rw-r- 1 backuppc backuppc 76046 Dec  4 08:03 
 /backups/pc/tm50-s00292__nfs/68/f%2f/fshared/fthepoint/fsite_images/f0042/f4097/fMULTI_medium.jpg
  
  That's a bunch of files with *thirty two thousand* hard links.
  Apparently that's a limit of some kind.  BackupPC handles this by
  adding new copies, a hack that BackupPC_fixLinks is apparently
  unaware of.

BackupPC_fixLinks does know about the limit and in fact is careful
not to exceed it (using the same hack) when it combines/rewrites
links. Other than that, I'm not sure where you think
BackupPC_fixLinks needs to be aware of it?
   
   I would expect it to not emit an ERROR there?  :)  Shouldn't it move
   to the next file, and the next, and so on, until it finds one it
   *can* link to?
   
   It emitted thousands of such ERROR lines; surely that's not good
   behaviour.
 
 Well, it was designed (and tested) for the use case where this was
 a *rare* event so that it would be interesting to signal it.
 Perhaps even then a WARN or NOTICE would have been better than
 ERROR. Indeed, that would be a good change (and you could always
 'grep -v' it out of your results).
 
 My thinking was that, in the case of a messed-up pool, knowing that
 some files had 32000 links would be worthy of notice... of course,
 it seems like for you this is a non-noteworthy occurrence.
 
 Now per my comments in the code, this doesn't break anything, it
 only means that the links can't be combined and so pool usage
 can't be freed up for that file. 
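The link ceiling under discussion is a filesystem property (on ext3 a single inode can carry roughly 32000 hard links, which is why the listing above shows counts of 31999); the count itself is easy to inspect with ordinary tools. A scratch-file illustration, nothing BackupPC-specific:

```shell
# Every hard link is another directory entry for the same inode; the
# filesystem caps how many one inode may have (~32000 on ext3), which
# is why BackupPC falls back to starting a new _N pool copy.
f=$(mktemp)
ln "$f" "$f.link2"
ln "$f" "$f.link3"
stat -c %h "$f"        # prints 3: three names, one inode
rm -f "$f" "$f.link2" "$f.link3"
```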

I'm worried we're talking past each other, so be gentle if I'm
confused.  :)

If I have thousands of such files, each copy takes up the usual
amount of space.  They *should* be linked into the pool, so as to
take up 32k times less space.  The reason I ran it in the first
place was to link unlinked files like this into the pool; in this
case, unless I'm missing something, they stayed unlinked.

Since my goal was to free up space, it's important to me.

I agree it's something of an edge case, though, and if you don't
want to fix it I'd totally understand.

-Robin



Re: [BackupPC-users] Another jLib/fixLinks issue.

2010-12-09 Thread Robin Lee Powell
On Thu, Dec 09, 2010 at 12:35:46AM -0500, Jeffrey J. Kosowsky wrote:
 Jeffrey J. Kosowsky wrote at about 13:58:28 -0500 on Tuesday, December 7, 
 2010:
   Robin Lee Powell wrote at about 15:40:04 -0800 on Monday, December 6, 2010:
 This is *fascinating*.
 
 From the actually-fixing-stuff part of the run, I get:
 
   ERROR: 
 tm50-s00292__nfs/68/f%2f/fshared/fthepoint/fsite_images/f0042/f4097/fMULTI_medium.jpg
  - Too many links if added to 59c43b51dbdd9031ba54971e359cdcec
 
 to which I say lolwut? and investigate.
 
 $ ls -li /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec*
 2159521202 -rw-r- 31999 backuppc backuppc 76046 Oct  7 08:29 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec
 2670969865 -rw-r- 31999 backuppc backuppc 76046 Oct 16 15:15 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_0
   79561977 -rw-r- 31999 backuppc backuppc 76046 Oct 22 22:07 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_1
  156369809 -rw-r- 31999 backuppc backuppc 76046 Oct 31 09:06 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_2
 3389777838 -rw-r- 31999 backuppc backuppc 76046 Nov  7 09:10 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_3
  106188559 -rw-r- 31999 backuppc backuppc 76046 Nov 13 15:10 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_4
  247044591 -rw-r- 31999 backuppc backuppc 76046 Nov 19 17:20 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_5
  293083240 -rw-r- 31999 backuppc backuppc 76046 Nov 26 06:14 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_6
  513555136 -rw-r- 31999 backuppc backuppc 76046 Dec  1 19:37 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_7
   52908307 -rw-r-  7767 backuppc backuppc 76046 Dec  4 10:37 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_8
 $ ls -li 
 /backups/pc/tm50-s00292__nfs/68/f%2f/fshared/fthepoint/fsite_images/f0042/f4097/fMULTI_medium.jpg
 374791856 -rw-r- 1 backuppc backuppc 76046 Dec  4 08:03 
 /backups/pc/tm50-s00292__nfs/68/f%2f/fshared/fthepoint/fsite_images/f0042/f4097/fMULTI_medium.jpg
 
 That's a bunch of files with *thirty two thousand* hard links.
 Apparently that's a limit of some kind.  BackupPC handles this by
 adding new copies, a hack that BackupPC_fixLinks is apparently
 unaware of.
   
   BackupPC_fixLinks does know about the limit and in fact is careful not
   to exceed it (using the same hack) when it combines/rewrites links.
   Other than that, I'm not sure where you think BackupPC_fixLinks needs
   to be aware of it?
   
   To be fair, since I don't have any systems with that many hard links,
   I have not tested that use case so perhaps my code is missing
   something (I haven't looked through the logic of how BackupPC_fixLinks
   traverses chains in a while so maybe there is something there that
   needs to be adjusted for your use case but again since I haven't
   encountered it I probably have not given it enough thought)
   
 
 Robin, can you let me know in what way you think BackupPC misses
 here? It seems to me that my program does the following:

 1. It avoids calling a pool element a duplicate if the sum of the
 number of links in the duplicates exceeds the maximum link number
 (i.e. the pool duplicate is justified)
 
 2. When it fixes/combines links, it avoids exceeding the maximum
 link number and creates a new element of the md5sum chain instead.
 
 Is there any other way that maxlinks comes into play that I am
 missing?

*blink*

I was under the impression that it did *not* do "creates a new
element of the md5sum chain instead".

I took the error to mean "I see too many links to this file already,
so screw it, I'm giving up and leaving this file alone."

If the file *does* get linked in despite the error, then yeah,
that's totally fine, although I'd change the wording.  I read "Too
many links if added" to mean "so I'm not going to add it."

-Robin



Re: [BackupPC-users] Bizarre form of cpool corruption.

2010-12-09 Thread Robin Lee Powell
On Thu, Dec 09, 2010 at 06:41:22PM -0500, Jeffrey J. Kosowsky wrote:
 Robin Lee Powell wrote at about 15:24:30 -0800 on Thursday, December 9, 2010:
   On Thu, Dec 09, 2010 at 12:05:01AM -0500, Jeffrey J. Kosowsky wrote:
Anyway here is the diff. I have not had time to check it much
beyond verifying that it seems to run -- SO I WOULD TRULY
APPRECIATE IT IF YOU CONTINUE TO TEST IT AND GIVE ME FEEDBACK.
Also, it would be great if you would let me know approximately
what speedup you achieved with this code vs. your original.
   
   Yeah, I can do that.  You mind sending me a completely updated
   version privately?  i.e. what you'd post to the wiki once it was
   tested?
   
 Sure...

Well, initially:


ut00-s8 pc # sudo -u backuppc /var/tmp/BackupPC_fixLinks
Subroutine jlink redefined at /var/tmp/BackupPC_fixLinks line 597.
Subroutine junlink redefined at /var/tmp/BackupPC_fixLinks line 603.
Use of uninitialized value in numeric eq (==) at /var/tmp/BackupPC_fixLinks line 99.


The first two seem deliberate, but are surprising.

Oh, hey, a request: can you add $|=1; to your scripts?  I end up
adding it regularly because I want to save the output but I also
want to see that it's doing something, so I do things like:

  $ sudo -u backuppc /var/tmp/BackupPC_fixLinks | tee /tmp/fix.out

which appears to do nothing for ages due to buffering.

I have a super-giant run going now; I'll let you know how it goes.
It will likely take many days.

-Robin



Re: [BackupPC-users] Status and stop backup from command line

2010-12-08 Thread Robin Lee Powell
On Wed, Dec 08, 2010 at 05:14:51PM +, Keith Edmunds wrote:
 I need to be able to stop backups running during the working day.
 I'm aware of BlackoutPeriods and, mostly, that manages to achieve
 what I need. However, there are times when, for whatever reason,
 backups overrun. What I want to do is have a cron job that can run
 at the start of the day and:
 
  - list any running backups

sudo -u backuppc BackupPC_serverMesg status queues 

I also append: | sed 's/,/,\n/g' | less

  - stop them

sudo -u backuppc BackupPC_serverMesg stop [hostname]

  - notify by email of the actions taken

mailx

See /usr/local/bin/BackupPC, the Main_Check_Client_Messages
subroutine, for all the commands it'll take.

-Robin



Re: [BackupPC-users] Bizarre form of cpool corruption.

2010-12-07 Thread Robin Lee Powell
On Tue, Dec 07, 2010 at 01:07:50PM -0500, Jeffrey J. Kosowsky wrote:
 2. More importantly, there is no fast way

No, there really isn't.

 that I know of checking whether a pc entry with more than one link
 is in the pool. You first need to read in the 1st MB of each file,
 calculate the partial file md5sum, then see if it is present in
 the pool and then if there is a chain of files with the same
 partial md5sum, you need to compare the files individually. All
 the pieces to do that are in my various posted routines and code
 snippets, it's just not wrapped with one big for loop to go
 through the pc directory.
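The lookup being described could be sketched roughly as below. Note this only mirrors the prose above: BackupPC's real partial-md5 recipe (which also folds in the file length) lives in its Lib module, and the cpool path layout line is an assumption:

```shell
# Hash the first 1 MiB of a candidate file, then derive the pool
# subdirectory from the digest's first three hex digits, where any
# chain of same-hash pool files would be compared byte-for-byte.
f=$(mktemp); printf 'candidate file contents\n' > "$f"
partial=$(head -c 1048576 "$f" | md5sum | awk '{print $1}')
subdir=$(printf '%s' "$partial" | sed 's|^\(.\)\(.\)\(.\).*|\1/\2/\3|')
echo "would check /backups/cpool/$subdir/$partial*"   # hypothetical chain glob
rm -f "$f"
```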

BackupPC_fixLinks is very close; I posted my changes separately.  To
productionalize them they should be tied to an option flag, because
yeah, *SLOW*.  But I've apparently lost some hundreds of GiB this
way (because that's how big the tarPCCopy file was when I ran out of
space -_-), and I want it back.

 Still, blindly traversing the entire pc tree will generally be an
 order of magnitude slower or more than traversing the pool
 assuming that you have a lot of current and incrementals. If you
 are satisfied just going through the latest backup then it might
 be more manageable...

Unfortunately, no.  And yes, just doing this one customer is going
to take days.  Plural.

 In fact, it was for applications like this that I had suggested a
 while back adding the partial md5sum to the attrib file so that
 the reverse lookup can be done more cheaply 

That would, in fact, be fantastic.

 (the need for all of this will be obviated when Craig finishes the
 next version :P )

Oh?  How's that looking?

-Robin



[BackupPC-users] Feature? Bug? Something. BackupPC_tarPCCopy and fixing links.

2010-12-07 Thread Robin Lee Powell

So, I've made BackupPC_fixLinks do what I want in terms of fixing
the problems that caused it to emit hundreds of:

  Can't find redbubble--tm50-e00145--tm50-s00339---shared/40/f%2f/fshared/fredbubble/fpurchase_order_assets/fbatch_7749/faddresses_domestic_20101026T01.csv in pool, will copy file

and fill up my drive with extra data.

Then I realized something.

tarPCCopy does exactly the same thing that I just made
BackupPC_fixLinks do, as far as I can tell: it does all the md5sum
calculations and checks the pool, it just doesn't *fix* the problems
it finds.  Which seems *really* unfortunate, having done all that
work!

So.  I would love it if tarPCCopy fixed these problems.

I also echo Jeffrey's request for the md5sum to be stored in the
attrib file; that would make tarPCCopy so very, very fast.

-Robin



Re: [BackupPC-users] Bizarre form of cpool corruption.

2010-12-07 Thread Robin Lee Powell
On Tue, Dec 07, 2010 at 01:16:32PM -0500, Jeffrey J. Kosowsky wrote:
 Robin, can you just clarify the context. Did this apparent pool
 corruption only occur after running BackupPC_tarPCCopy or did it
 occur in the course of normal backuppc running.

I honestly don't know; I've done so much on these servers over the
last few months.

 Because if the second then I can think of only 2 ways that you would
 have pc files with more than one link but not in the pool:

 1. File system corruption

I can't take the machine down long enough to check.

 2. Something buggy with BackupPC_nightly

I have killed BackupPC_nightly processes many times on various
hosts.  I assumed the next run would fix it.  It's possible that's
the issue.

If one isn't allowed to kill BackupPC_nightly, how *does* one stop
backuppc cleanly when it's running?

-Robin



Re: [BackupPC-users] jLib bug with md5sum stuff

2010-12-07 Thread Robin Lee Powell
On Tue, Dec 07, 2010 at 01:34:19PM -0500, Jeffrey J. Kosowsky wrote:
 OK I was confused for a second there since usually the convention
 is to use '---' as the original code and '+++' as the new code...

Sorry, musta put the diff in the wrong order.

 Well actually, I had found and fixed that bug a *long* time ago
 (Dec 2009) or earlier but the new version was not updated on the
 Wiki.

Yeah, about that: is it really that hard to get wiki access?  I've
made a lot of updates to various scripts, and I've seen many scripts
posted that never get updated on the wiki.

Do you have access?  Seems like you should.

Can I get access?

 Actually, my correction is IMO simpler and less kludgey. 

Agreed.

-Robin



Re: [BackupPC-users] Another jLib/fixLinks issue.

2010-12-07 Thread Robin Lee Powell
On Tue, Dec 07, 2010 at 01:58:28PM -0500, Jeffrey J. Kosowsky wrote:
 Robin Lee Powell wrote at about 15:40:04 -0800 on Monday, December 6, 2010:
   This is *fascinating*.
   
   From the actually-fixing-stuff part of the run, I get:
   
 ERROR: 
 tm50-s00292__nfs/68/f%2f/fshared/fthepoint/fsite_images/f0042/f4097/fMULTI_medium.jpg
  - Too many links if added to 59c43b51dbdd9031ba54971e359cdcec
   
   to which I say lolwut? and investigate.
   
   $ ls -li /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec*
   2159521202 -rw-r- 31999 backuppc backuppc 76046 Oct  7 08:29 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec
   2670969865 -rw-r- 31999 backuppc backuppc 76046 Oct 16 15:15 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_0
 79561977 -rw-r- 31999 backuppc backuppc 76046 Oct 22 22:07 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_1
156369809 -rw-r- 31999 backuppc backuppc 76046 Oct 31 09:06 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_2
   3389777838 -rw-r- 31999 backuppc backuppc 76046 Nov  7 09:10 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_3
106188559 -rw-r- 31999 backuppc backuppc 76046 Nov 13 15:10 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_4
247044591 -rw-r- 31999 backuppc backuppc 76046 Nov 19 17:20 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_5
293083240 -rw-r- 31999 backuppc backuppc 76046 Nov 26 06:14 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_6
513555136 -rw-r- 31999 backuppc backuppc 76046 Dec  1 19:37 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_7
 52908307 -rw-r-  7767 backuppc backuppc 76046 Dec  4 10:37 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_8
   $ ls -li 
 /backups/pc/tm50-s00292__nfs/68/f%2f/fshared/fthepoint/fsite_images/f0042/f4097/fMULTI_medium.jpg
 374791856 -rw-r- 1 backuppc backuppc 76046 Dec  4 08:03 
 /backups/pc/tm50-s00292__nfs/68/f%2f/fshared/fthepoint/fsite_images/f0042/f4097/fMULTI_medium.jpg
   
   That's a bunch of files with *thirty two thousand* hard links.
   Apparently that's a limit of some kind.  BackupPC handles this by
   adding new copies, a hack that BackupPC_fixLinks is apparently
   unaware of.
 
 BackupPC_fixLinks does know about the limit and in fact is careful
 not to exceed it (using the same hack) when it combines/rewrites
 links. Other than that, I'm not sure where you think
 BackupPC_fixLinks needs to be aware of it?

I would expect it to not emit an ERROR there?  :)  Shouldn't it move
to the next file, and the next, and so on, until it finds one it
*can* link to?

It emitted thousands of such ERROR lines; surely that's not good
behaviour.

-Robin



Re: [BackupPC-users] Don't use rsync -H!! (was Re: copying the pool to a new filesystem?!)

2010-12-07 Thread Robin Lee Powell
On Tue, Dec 07, 2010 at 02:18:51PM -0500, Dan Pritts wrote:
 umount /var/lib/backuppc
 dd if=/dev/onedisk of=/dev/someotherdisk bs=1M

Only works if you have identical disks, which is hard when you've
got a few TiB on a SAN.

 In practice, rsync -H is the reasonable way to do what you're
 after, EXCEPT that there are just too many hard links on a
 backuppc data store for this to work. 

And there is a solution to this problem: BackupPC_tarPCCopy.

Please read my other post in this thread for full details.

-Robin



[BackupPC-users] It's not rsync -H as such, it's BackupPC data.

2010-12-07 Thread Robin Lee Powell
On Tue, Dec 07, 2010 at 11:44:38AM -0800, Robin Lee Powell wrote:
 On Tue, Dec 07, 2010 at 02:18:51PM -0500, Dan Pritts wrote:
  In practice, rsync -H is the reasonable way to do what you're
  after, EXCEPT that there are just too many hard links on a
  backuppc data store for this to work. 
 
 And there is a solution to this problem: BackupPC_tarPCCopy.

To be clear: there's nothing wrong with rsync -H for *most* data,
it's just a very very bad choice for BackupPC data and the way it is
interlinked.  Again, see my other post in this thread for details of
how this works and why BackupPC_tarPCCopy is better.

-Robin



Re: [BackupPC-users] Don't use rsync -H!! (was Re: copying the pool to a new filesystem?!)

2010-12-07 Thread Robin Lee Powell
On Tue, Dec 07, 2010 at 06:19:45PM -0600, Les Mikesell wrote:
 On 12/7/10 5:19 PM, Tyler J. Wagner wrote:
  On Tue, 2010-12-07 at 14:18 -0500, Dan Pritts wrote:
  Not exactly an answer to your question, but i would do this:
 
  umount /var/lib/backuppc
  dd if=/dev/onedisk of=/dev/someotherdisk bs=1M
 
  Yes, that's a good way to copy the filesystem. But if you want
  to move the files to another filesystem, such as to upgrade to
  ext4 with extents, you need another method.
 
 Or just start over, keeping the old instance around until the new
 one builds a reasonable amount of history.

Yeah, I've done that as well.

-Robin



Re: [BackupPC-users] Don't use rsync -H!! (was Re: copying the pool to a new filesystem?!)

2010-12-07 Thread Robin Lee Powell
On Wed, Dec 08, 2010 at 12:06:40AM -0500, Dan Pritts wrote:
 If I had that big a pool, I think I'd not be using a single
 backuppc instance, but that's just me.  I'm guessing it works well
 for you, I'm glad it does.

We aren't; we have 6.

The *smallest* is about 4 TiB of disk, most of it in use.

-Robin



[BackupPC-users] Don't use rsync -H!! (was Re: copying the pool to a new filesystem?!)

2010-12-06 Thread Robin Lee Powell
On Thu, Dec 02, 2010 at 01:27:17PM +0100, Oliver Freyd wrote:
 Hello,
 
 I'm a happy user of BackupPC since a few years,
 running an old installation of backuppc that was created
 on some version of SuSE linux, then ported over to debian lenny.
 
 The pool is a reiserfs3 on LVM, about 300GB size, but with a lot of 
 hardlinks...
 Now I'm trying to put the pool onto a new filesystem, so I created an 
 XFS on a striped RAID0 of 3 disks (to speed up copying), and use
 rsync -aHv to copy everything including the hardlinks.
 The cpool itself took about a day, and now it is running for 6 days and
 maybe it has done 70% of the work. BTW, a copy with dd takes about 2 hours.
 
 I've tried to do this with BackupPC_TarPCCopy, but it does not seem to 
 be any faster.

IME it's *much* faster that way; you do BackupPC_TarPCCopy, and then
rsync the cpool *without -H*.  It shouldn't take any longer than the
actual data transfer time itself.

I've moved user backups of a TiB and larger in a day or two this
way.

If you try to copy BackupPC data with rsync -H, you're doing it
wrong.
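The sequence being recommended, sketched as a script. Paths are examples, and the tar invocation follows the usage shown in the BackupPC documentation; verify the flags against your version before trusting real data to it:

```shell
#!/bin/sh
# Sketch of a pool migration without rsync -H, per the advice above.
# src and dst are BackupPC top directories (example paths only).
migrate_pool() {
    src=$1 dst=$2
    # 1. Copy pool and cpool plainly; no -H needed, since the pc/ tree's
    #    hard links into the pool are recreated in step 2.
    rsync -a "$src/cpool/" "$dst/cpool/"
    rsync -a "$src/pool/"  "$dst/pool/"
    # 2. Rebuild pc/ as hard links into the new pool from a tar stream.
    mkdir -p "$dst/pc"
    ( cd "$dst/pc" && BackupPC_tarPCCopy "$src/pc" | tar xPf - )
}
# Example (not run here): migrate_pool /backups /newdisk/backups
```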

-Robin



[BackupPC-users] Feature request: finish what you're doing and then stop

2010-12-06 Thread Robin Lee Powell

It would be really nice to be able to tell the backuppc server to
finish all current backups without queuing any others, and then
stop/exit completely.

I know I can sort-of do this by disabling backups for each host, but
that's a really big pain, and from the reading of the queuing system
I did a couple of weeks ago, I think this would be pretty easy to
add.

-Robin



[BackupPC-users] jLib.pm question

2010-12-06 Thread Robin Lee Powell

I've locally modified BackupPC_fixLinks to check files even if they
have more than one link.  I'm getting a lot of errors like this:

  Can't read size of 
/backups/pc/foo--tm50-e00145--tm50-s00339---shared/47/f%2f/fshared/flogs/fapplication.log-20100830.gz
 from attrib file so calculating manually

Would it be possible for jLib to fix that, so it doesn't happen
again on subsequent runs?  Or is that something that should be
fixed?  I'm not really clear on what the problem is.

-Robin



Re: [BackupPC-users] Bizarre form of cpool corruption.

2010-12-06 Thread Robin Lee Powell
On Mon, Dec 06, 2010 at 10:37:52AM -0800, Robin Lee Powell wrote:
 
 So I'm writing a script to transfer a client from one host to
 another, using tarPCCopy, and I'm getting messages like this:
 
   Can't find 
 foo--tm50-e00145--tm50-s00339---shared/47/f%2f/fshared/ffoo/fpurchase_order_assets/fbatch_7813/f7105620_done.txt
  in pool, will copy file
 
 which is fascinating because the first column in ls -l is *3*. -_-
 
 The tarPCCopy tar file therefore ends up becoming really large
 (hundreds of gibibytes) with files that already exist in the pool,
 presumably.
 
 I've tried running md5sum on that file; can't find that in the pool.
 I've tried BackupPC_zcat | md5sum; can't find that in the pool.

I see that that's not how BackupPC md5sums work.
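
For readers following along: BackupPC pool names are not a plain `md5sum` of the file; the digest mixes in the file length and hashes only part of the (uncompressed) contents, so huge files hash quickly. The sketch below is a rough Python model of that idea only; it is *not* the exact BackupPC algorithm (the real chunk boundaries and length handling live in BackupPC::Lib's File2MD5 and Jeffrey's zFile2MD5), and `partial_md5` is an illustrative name.

```python
import hashlib

CHUNK = 128 * 1024  # reads happen in 128KB units, as in jLib's _128KB


def partial_md5(data: bytes) -> str:
    """Toy 'partial file' digest: hash the length plus only the head
    (and, for larger inputs, the tail) of the contents.  A model of
    the concept, not BackupPC's actual pool-name digest."""
    m = hashlib.md5()
    m.update(str(len(data)).encode())   # length is part of the digest
    if len(data) <= 2 * CHUNK:
        m.update(data)                  # small file: hash it all
    else:
        m.update(data[:CHUNK])          # big file: head ...
        m.update(data[-CHUNK:])         # ... plus tail only
    return m.hexdigest()
```

This is why piping a pool file through `md5sum` (or `BackupPC_zcat | md5sum`) never matches the pool name.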

 BackupPC_fixLinks, from the wiki, doesn't see the problem at all,
 which I'd *very* much like to fix.

I've locally modified my copy to check all files.  Trying it out
now.  I'd still like to know what the hell happened here.

-Robin



[BackupPC-users] jLib bug with md5sum stuff

2010-12-06 Thread Robin Lee Powell

substr outside of string at /usr/local/lib64/BackupPC/jLib.pm line 162.
Use of uninitialized value in concatenation (.) or string at 
/usr/local/lib64/BackupPC/jLib.pm line 162.

The line(s) in question:

$datalast = substr($data[($i-1)%2], $rsize, _128KB-$rsize)
. substr($data[$i%2], 0 ,$rsize);

in zFile2MD5

I have no idea how that could happen.  -_-  Will help debug any way
requested.

-Robin



Re: [BackupPC-users] jLib.pm question

2010-12-06 Thread Robin Lee Powell
On Mon, Dec 06, 2010 at 03:33:52PM -0500, Jeffrey J. Kosowsky wrote:
 Robin Lee Powell wrote at about 11:28:56 -0800 on Monday, December 6, 2010:
   
   I've locally modified BackupPC_fixLinks to check files even if
   they have more than one link. 

 I'm not sure what you mean... the program should work with any
 number of links. If there is a missing use case, please explain so
 I can fix it.

I have files with 3+ links that, according to BackupPC_tarPCCopy,
aren't actually linked into the pool.  See my other post.

  I'm getting a lot of errors like this:
   
 Can't read size of 
 /backups/pc/foo--tm50-e00145--tm50-s00339---shared/47/f%2f/fshared/flogs/fapplication.log-20100830.gz
  from attrib file so calculating manually
   
   Would it be possible for jLib to fix that, so it doesn't happen
   again on subsequent runs? 
  
 It's been a while since I wrote the program... but I think that
 not being able to read the size is a true error case and I would
 wonder why the attrib file is not including the filesize entry. As
 far as I recall, every file backed up should be listed in the
 attrib file and the size is one of the basic features that should
 always be recorded.

Yeah, these backups are broken somehow; see above.

 Are your attrib files corrupted?

I don't know?  What should they look like?

 Can you look at the attrib file and see what might be wrong and/or
 different about that file?

/me finds BackupPC_attribPrint :)

I'm afraid it looks like a bug in your code.  :(

Example:

  Can't read size of 
/backups/pc/customer--tm50-e00145--tm50-s00339---shared/47/f%2f/fshared/fbubblevision/freports/fproduct_sales/fproduct_sales_2010-04-05_2334.zip
 from attrib file so calculating manually

Attrib file:

  $ ls -l 
/backups/pc/customer--tm50-e00145--tm50-s00339---shared/47/f%2f/fshared/fbubblevision/freports/fproduct_sales/attrib
  -rw-r- 12 backuppc backuppc 86 Oct 12 18:04 
/backups/pc/customer--tm50-e00145--tm50-s00339---shared/47/f%2f/fshared/fbubblevision/freports/fproduct_sales/attrib

Contents thereof:

  $ sudo -u backuppc BackupPC_attribPrint 
/backups/pc/customer--tm50-e00145--tm50-s00339---shared/47/f%2f/fshared/fbubblevision/freports/fproduct_sales/attrib
  $VAR1 = {
    'product_sales_2010-04-05_2333.zip' => {
      'uid' => 1000,
      'mtime' => 1270535656,
      'mode' => 33188,
      'size' => 3527103,
      'sizeDiv4GB' => 0,
      'type' => 0,
      'gid' => 415,
      'sizeMod4GB' => 3527103
    },
    'product_sales_2010-04-05_2330.zip' => {
      'uid' => 1000,
      'mtime' => 1270535436,
      'mode' => 33188,
      'size' => 3527103,
      'sizeDiv4GB' => 0,
      'type' => 0,
      'gid' => 415,
      'sizeMod4GB' => 3527103
    },
    'product_sales_2010-04-05_2325.zip' => {
      'uid' => 1000,
      'mtime' => 1270535162,
      'mode' => 33188,
      'size' => 3527103,
      'sizeDiv4GB' => 0,
      'type' => 0,
      'gid' => 415,
      'sizeMod4GB' => 3527103
    },
    'product_sales_2010-04-05_2334.zip' => {
      'uid' => 1000,
      'mtime' => 1270535712,
      'mode' => 33188,
      'size' => 3527103,
      'sizeDiv4GB' => 0,
      'type' => 0,
      'gid' => 415,
      'sizeMod4GB' => 3527103
    }
  };

As you can see, fproduct_sales_2010-04-05_2334.zip *does* have an
associated size.

As a random side comment, it's amazing how much interesting stuff
I'm finding.  :)  I wonder if our BackupPC install is now the
largest in the world; just under 25 TiB across 5 hosts and thousands
of clients.  I suspect I'm hitting so much stuff because no-one has
tried to productionalize it at this scale before.

-Robin



Re: [BackupPC-users] Bizarre form of cpool corruption.

2010-12-06 Thread Robin Lee Powell
On Mon, Dec 06, 2010 at 03:48:04PM -0500, Jeffrey J. Kosowsky wrote:
 Robin Lee Powell wrote at about 10:37:52 -0800 on Monday, December 6, 2010:
   
   So I'm writing a script to transfer a client from one host to
   another, using tarPCCopy, and I'm getting messages like this:
   
 Can't find 
 foo--tm50-e00145--tm50-s00339---shared/47/f%2f/fshared/ffoo/fpurchase_order_assets/fbatch_7813/f7105620_done.txt
  in pool, will copy file
   
   which is fascinating because the first column in ls -l is *3*. -_-
   
   The tarPCCopy tar file therefore ends up becoming really large
   (hundreds of gibibytes) with files that already exist in the pool,
   presumably.
   
   I've tried running md5sum on that file; can't find that in the pool.
   I've tried BackupPC_zcat | md5sum; can't find that in the pool.
 
 Well the 'md5sum' used in pool naming is only a partial file md5sum.
 I wrote (and posted) a routine to calculate and optionally test for
 existence of the md5sum pool name corresponding to any pc tree
 file. I will attach a copy to the end of this post.
 
   BackupPC_fixLinks, from the wiki, doesn't see the problem at all,
   which I'd *very* much like to fix.
 
 First check to make sure there really is a problem with the pool...
 Then, we need to figure out whether there is a problem with tarcopy or
 with my program BackupPC_fixLinks etc.

Before I go off testing that, I just want to mention that you're my
hero.  :D  Next to the actual authors of BackupPC, of course.

I've made so much use of your scripts it's not even funny; BackupPC
is great but it breaks down when you're trying to juggle super-large
clients (I have *one BackupPC client* with 2.3TiB in the pool)
around between various BackupPC hosts, and other crazy large-scale
crap.  Your scripts have been invaluable.

-Robin



Re: [BackupPC-users] Bizarre form of cpool corruption.

2010-12-06 Thread Robin Lee Powell
On Mon, Dec 06, 2010 at 03:48:04PM -0500, Jeffrey J. Kosowsky wrote:
 Robin Lee Powell wrote at about 10:37:52 -0800 on Monday, December 6, 2010:
   
   So I'm writing a script to transfer a client from one host to
   another, using tarPCCopy, and I'm getting messages like this:
   
 Can't find 
 foo--tm50-e00145--tm50-s00339---shared/47/f%2f/fshared/ffoo/fpurchase_order_assets/fbatch_7813/f7105620_done.txt
  in pool, will copy file
   
   which is fascinating because the first column in ls -l is *3*. -_-
   
   The tarPCCopy tar file therefore ends up becoming really large
   (hundreds of gibibytes) with files that already exist in the pool,
   presumably.
   
   I've tried running md5sum on that file; can't find that in the pool.
   I've tried BackupPC_zcat | md5sum; can't find that in the pool.
 
 Well the 'md5sum' used in pool naming is only a partial file md5sum.
 I wrote (and posted) a routine to calculate and optionally test for
 existence of the md5sum pool name corresponding to any pc tree
 file. I will attach a copy to the end of this post.
 
   BackupPC_fixLinks, from the wiki, doesn't see the problem at all,
   which I'd *very* much like to fix.
 
 First check to make sure there really is a problem with the pool...
 Then, we need to figure out whether there is a problem with tarcopy or
 with my program BackupPC_fixLinks etc.

$ ls -l 
/backups/pc/foo--tm50-e00145--tm50-s00339---shared/47/f%2f/fshared/ffoo/fpurchase_order_assets/fbatch_7813/f7105620_done.txt
-rw-r- 3 backuppc backuppc 27 Nov 24 09:59 
/backups/pc/foo--tm50-e00145--tm50-s00339---shared/47/f%2f/fshared/ffoo/fpurchase_order_assets/fbatch_7813/f7105620_done.txt

$ perl /tmp/bpctest.pl 
/backups/pc/foo--tm50-e00145--tm50-s00339---shared/47/f%2f/fshared/ffoo/fpurchase_order_assets/fbatch_7813/f7105620_done.txt
15c0e4b08058ef3704b8fc24887e2bcc  
/backups/pc/foo--tm50-e00145--tm50-s00339---shared/47/f%2f/fshared/ffoo/fpurchase_order_assets/fbatch_7813/f7105620_done.txt

$ ls -l /backups/cpool/1/5/c/15c0e4b08058ef3704b8fc24887e2bcc
-rw-r- 3 backuppc backuppc 27 Nov 22 19:33 
/backups/cpool/1/5/c/15c0e4b08058ef3704b8fc24887e2bcc

ls -li /backups/cpool/1/5/c/15c0e4b08058ef3704b8fc24887e2bcc 
/backups/pc/foo--tm50-e00145--tm50-s00339---shared/47/f%2f/fshared/ffoo/fpurchase_order_assets/fbatch_7813/f7105620_done.txt

*BUT*.  Not linked.

$ ls -li /backups/cpool/1/5/c/15c0e4b08058ef3704b8fc24887e2bcc 
/backups/pc/foo--tm50-e00145--tm50-s00339---shared/47/f%2f/fshared/ffoo/fpurchase_order_assets/fbatch_7813/f7105620_done.txt
 255523133 -rw-r- 3 backuppc backuppc 27 Nov 22 19:33 
/backups/cpool/1/5/c/15c0e4b08058ef3704b8fc24887e2bcc
2376493624 -rw-r- 3 backuppc backuppc 27 Nov 24 09:59 
/backups/pc/foo--tm50-e00145--tm50-s00339---shared/47/f%2f/fshared/ffoo/fpurchase_order_assets/fbatch_7813/f7105620_done.txt

Isn't that fascinating, boys and girls?

Let's check another. 

$ ls -l 
/backups/pc/foo--tm50-e00145--tm50-s00339---shared/47/f%2f/fshared/ffoo/fpurchase_order_assets/fbatch_7809/fposter/fposter_234517.zip
-rw-r- 3 backuppc backuppc 8510861 Nov 24 09:14 
/backups/pc/foo--tm50-e00145--tm50-s00339---shared/47/f%2f/fshared/ffoo/fpurchase_order_assets/fbatch_7809/fposter/fposter_234517.zip

$ perl /tmp/bpctest.pl 
/backups/pc/foo--tm50-e00145--tm50-s00339---shared/47/f%2f/fshared/ffoo/fpurchase_order_assets/fbatch_7809/fposter/fposter_234517.zip
42a13e7f5875b2d8ff79ae54e2cb41a9  
/backups/pc/foo--tm50-e00145--tm50-s00339---shared/47/f%2f/fshared/ffoo/fpurchase_order_assets/fbatch_7809/fposter/fposter_234517.zip

$ ls -l /backups/cpool/4/2/a/42a13e7f5875b2d8ff79ae54e2cb41a9
-rw-r- 3 backuppc backuppc 8510861 Nov 22 18:53 
/backups/cpool/4/2/a/42a13e7f5875b2d8ff79ae54e2cb41a9

$ ls -li /backups/cpool/4/2/a/42a13e7f5875b2d8ff79ae54e2cb41a9 
/backups/pc/foo--tm50-e00145--tm50-s00339---shared/47/f%2f/fshared/ffoo/fpurchase_order_assets/fbatch_7809/fposter/fposter_234517.zip
3696447185 -rw-r- 3 backuppc backuppc 8510861 Nov 22 18:53 
/backups/cpool/4/2/a/42a13e7f5875b2d8ff79ae54e2cb41a9
 145635130 -rw-r- 3 backuppc backuppc 8510861 Nov 24 09:14 
/backups/pc/foo--tm50-e00145--tm50-s00339---shared/47/f%2f/fshared/ffoo/fpurchase_order_assets/fbatch_7809/fposter/fposter_234517.zip

So, yeah.  More than one link, matches something in the pool, but
not actually linked to it.  Isn't that *awesome*?  ;'(

I very much want BackupPC_fixLinks to deal with this, and I'm trying
to modify it to do that now.

-Robin



[BackupPC-users] Another jLib/fixLinks issue.

2010-12-06 Thread Robin Lee Powell
This is *fascinating*.

From the actually-fixing-stuff part of the run, I get:

  ERROR: 
tm50-s00292__nfs/68/f%2f/fshared/fthepoint/fsite_images/f0042/f4097/fMULTI_medium.jpg
 - Too many links if added to 59c43b51dbdd9031ba54971e359cdcec

to which I say "lolwut?" and investigate.

$ ls -li /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec*
2159521202 -rw-r- 31999 backuppc backuppc 76046 Oct  7 08:29 
/backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec
2670969865 -rw-r- 31999 backuppc backuppc 76046 Oct 16 15:15 
/backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_0
  79561977 -rw-r- 31999 backuppc backuppc 76046 Oct 22 22:07 
/backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_1
 156369809 -rw-r- 31999 backuppc backuppc 76046 Oct 31 09:06 
/backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_2
3389777838 -rw-r- 31999 backuppc backuppc 76046 Nov  7 09:10 
/backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_3
 106188559 -rw-r- 31999 backuppc backuppc 76046 Nov 13 15:10 
/backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_4
 247044591 -rw-r- 31999 backuppc backuppc 76046 Nov 19 17:20 
/backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_5
 293083240 -rw-r- 31999 backuppc backuppc 76046 Nov 26 06:14 
/backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_6
 513555136 -rw-r- 31999 backuppc backuppc 76046 Dec  1 19:37 
/backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_7
  52908307 -rw-r-  7767 backuppc backuppc 76046 Dec  4 10:37 
/backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_8
$ ls -li 
/backups/pc/tm50-s00292__nfs/68/f%2f/fshared/fthepoint/fsite_images/f0042/f4097/fMULTI_medium.jpg
374791856 -rw-r- 1 backuppc backuppc 76046 Dec  4 08:03 
/backups/pc/tm50-s00292__nfs/68/f%2f/fshared/fthepoint/fsite_images/f0042/f4097/fMULTI_medium.jpg

That's a bunch of files with *thirty-two thousand* hard links.
Apparently that's a limit of some kind.  BackupPC handles this by
adding new copies, a hack that BackupPC_fixLinks is apparently
unaware of.
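
The `_N` chain behavior can be modeled simply: once a pool file's link count reaches the ceiling (the `$Conf{HardLinkMax}` setting, which defaults to 31999 in BackupPC 3.x, matching the 31999-link files above), BackupPC starts a fresh copy named `digest_0`, then `digest_1`, and so on. Here is a hedged in-memory sketch of picking the pool file a new link should attach to; `pool_name_for` and the `{name: nlink}` dict are illustrative, not BackupPC code:

```python
HARD_LINK_MAX = 31999  # BackupPC's $Conf{HardLinkMax} default


def pool_name_for(digest: str, nlinks: dict) -> str:
    """Return the pool file name a new hardlink should attach to:
    walk the digest, digest_0, digest_1, ... chain until we find a
    name that is unused or still below the link ceiling."""
    name = digest
    suffix = 0
    while name in nlinks and nlinks[name] >= HARD_LINK_MAX:
        name = f"{digest}_{suffix}"
        suffix += 1
    return name
```

Any tool that walks the pool by digest alone, without following the `_N` chain, will conclude files like these aren't pooled, which is presumably what trips up BackupPC_fixLinks here.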

Did I mention that my installation is large?  :D

-Robin



Re: [BackupPC-users] Another jLib/fixLinks issue.

2010-12-06 Thread Robin Lee Powell
On Mon, Dec 06, 2010 at 03:40:04PM -0800, Robin Lee Powell wrote:
 This is *fascinating*.
 
 From the actually-fixing-stuff part of the run, I get:
 
   ERROR: 
 tm50-s00292__nfs/68/f%2f/fshared/fthepoint/fsite_images/f0042/f4097/fMULTI_medium.jpg
  - Too many links if added to 59c43b51dbdd9031ba54971e359cdcec
 
 to which I say lolwut? and investigate.
 
 $ ls -li /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec*
 2159521202 -rw-r- 31999 backuppc backuppc 76046 Oct  7 08:29 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec
 2670969865 -rw-r- 31999 backuppc backuppc 76046 Oct 16 15:15 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_0
   79561977 -rw-r- 31999 backuppc backuppc 76046 Oct 22 22:07 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_1
  156369809 -rw-r- 31999 backuppc backuppc 76046 Oct 31 09:06 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_2
 3389777838 -rw-r- 31999 backuppc backuppc 76046 Nov  7 09:10 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_3
  106188559 -rw-r- 31999 backuppc backuppc 76046 Nov 13 15:10 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_4
  247044591 -rw-r- 31999 backuppc backuppc 76046 Nov 19 17:20 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_5
  293083240 -rw-r- 31999 backuppc backuppc 76046 Nov 26 06:14 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_6
  513555136 -rw-r- 31999 backuppc backuppc 76046 Dec  1 19:37 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_7
   52908307 -rw-r-  7767 backuppc backuppc 76046 Dec  4 10:37 
 /backups/cpool/5/9/c/59c43b51dbdd9031ba54971e359cdcec_8
 $ ls -li 
 /backups/pc/tm50-s00292__nfs/68/f%2f/fshared/fthepoint/fsite_images/f0042/f4097/fMULTI_medium.jpg
 374791856 -rw-r- 1 backuppc backuppc 76046 Dec  4 08:03 
 /backups/pc/tm50-s00292__nfs/68/f%2f/fshared/fthepoint/fsite_images/f0042/f4097/fMULTI_medium.jpg
 
 That's a bunch of files with *thirty two thousand* hard links.
 Apparently that's a limit of some kind.  BackupPC handles this by
 adding new copies, a hack that BackupPC_fixLinks is apparently
 unaware of.

I have many pages of these errors, btw; thousands, I think.

-Robin



Re: [BackupPC-users] Bizarre form of cpool corruption.

2010-12-06 Thread Robin Lee Powell
On Mon, Dec 06, 2010 at 01:17:43PM -0800, Robin Lee Powell wrote:
 
 So, yeah.  More than one link, matches something in the pool, but
 not actually linked to it.  Isn't that *awesome*?  ;'(
 
 I very much want BackupPC_fixLinks to deal with this, and I'm
 trying to modify it to do that now.

Seems to be working; here's the diff.  Feel free to drop the print
statements.  :)

For all I know this will eat your dog; I have no idea what else I
broke.  I *do* know that it should be a flag, because I expect that
checksumming *everything* takes a very, very long time.

-Robin

--- /usr/local/bin/BackupPC_fixLinks    2010-12-05 10:38:42.028375381 +0000
+++ /tmp/BackupPC_fixLinks  2010-12-06 22:53:00.174783302 +0000
@@ -358,6 +358,7 @@
 sub BadOrMissingLinks {
     my $matchpath = $_[0];
     (my $matchname = $matchpath) =~ s|^$pc/*||; # Delete leading path directories (up to machine)
+    print "file: $matchpath\n";
 
     my $rettype;
     my $matchtype;
@@ -368,7 +369,7 @@
         warnerr "Can't stat: $matchpath\n";
         return;
     }
-    if ($nlinkM == 1 && $sizeM > 0) { # Non-zero file with no link to pool
+    if ($sizeM > 0) { # Non-zero file with no link to pool
         my $matchbyte = firstbyte($matchpath);
         my $comparflg = 'x';  # Default if no link to pool
         my $matchtype = NewFile; # Default if no link to pool
@@ -386,6 +387,11 @@
             my $md5sumpath = my $md5sumpathbase = $bpc->MD52Path($md5sum, 0, $thepooldir);
             my $i;
             for ($i=-1; -f $md5sumpath ; $md5sumpath = $md5sumpathbase . '_' . ++$i) {
+                my $md5sumpathinode = (stat($md5sumpath))[1];
+                # This is actually a correct and matching link; do nothing
+                if( $md5sumpathinode == $inoM ) {
+                    return -1;
+                }
                 #Again start at the root, try to find best match in pool...
                 if ((my $cmpresult = compare_files($matchpath, $md5sumpath, $cmprsslvl)) == 0) { #match found
@@ -405,6 +411,7 @@
                     $matchtype = NewLink;
                     $totnewlinks++;
                     $rettype=2; #NewLink
+                    print "NEW LINK files: " . `/bin/ls -li $matchpath $md5sumpath` . "\n";
                     goto match_return;
                 } #Otherwise, continue to move up the chain looking for a pool match...
             }
@@ -420,6 +427,7 @@
             $md5sumhash{$fullmd5sum} = $md5sum;
             $rettype=3; #NewFile-x
         }
+        print "NO MATCH files: " . `/bin/ls -li $matchpath $md5sumpath` . "\n";
 
   match_return:
         @MatchA = ($matchname, $inoM, $md5sum, $matchtype, $thepool, ${comparflg}.${matchbyte}.${md5sumbyte}, $nlinkM, $sizeM);
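
The core of the patch is the inode comparison: a pc-tree file is genuinely pooled only if it shares an inode with one of the files in its pool chain, regardless of what its link count says. A standalone sketch of that check (the function name is mine, not from the script):

```python
import os


def is_linked_to_pool(pc_path: str, pool_path: str) -> bool:
    """True when the two paths are the same underlying file, i.e.
    same device and same inode number -- the stat()-based check the
    patch above adds inside the pool-chain loop."""
    a, b = os.stat(pc_path), os.stat(pool_path)
    return (a.st_dev, a.st_ino) == (b.st_dev, b.st_ino)
```

Two files with identical contents and matching digests can still fail this test, which is exactly the corruption shown earlier in the thread.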




Re: [BackupPC-users] jLib bug with md5sum stuff

2010-12-06 Thread Robin Lee Powell
On Mon, Dec 06, 2010 at 12:32:01PM -0800, Robin Lee Powell wrote:
 
 substr outside of string at /usr/local/lib64/BackupPC/jLib.pm line 162.
 Use of uninitialized value in concatenation (.) or string at 
 /usr/local/lib64/BackupPC/jLib.pm line 162.
 
 The line(s) in question:
 
 $datalast = substr($data[($i-1)%2], $rsize, _128KB-$rsize)
 . substr($data[$i%2], 0 ,$rsize);
 
 in zFile2MD5
 
 I have no idea how that could happen.  -_-  Will help debug any way
 requested.

Had to figure it out sooner rather than later.  :)

It happens when a file is > 128KiB but < 256KiB, so only one buffer
is full.  Fix:

--- /tmp/jLib.pm    2010-12-07 05:47:42.086646699 +0000
+++ /usr/local/lib64/BackupPC/jLib.pm   2010-12-07 05:46:55.108231972 +0000
@@ -159,8 +159,8 @@
     while ( ($rsize = $fh->read(\$data[(++$i)%2], _128KB)) == _128KB
             && ($totsize += $rsize) < _1MB) {}
     $totsize += $rsize if $rsize < _128KB; # Add back in partial read
-    $datalast = ( $data[($i-1)%2] ? substr($data[($i-1)%2], $rsize, _128KB-$rsize) : '' )
-        . ( $data[$i%2] ? substr($data[$i%2], 0, $rsize) : '' );
+    $datalast = substr($data[($i-1)%2], $rsize, _128KB-$rsize)
+        . substr($data[$i%2], 0, $rsize);
 }
 $filesize = $totsize if $totsize < _1MB; #Already know the size because read it all
 if ($filesize == 0) { # Try to find size from attrib file
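
To make the failure mode concrete, here is a small Python model of zFile2MD5's alternating two-buffer read (with a tiny chunk size standing in for 128KB; `last_window` is an illustrative name, not jLib code). The trailing window of the stream straddles the previous buffer's tail and the final short read; when the loop exits after a single short read, there *is* no previous buffer, which is what the unguarded substr() tripped over:

```python
import io

CHUNK = 4  # tiny stand-in for jLib's _128KB read size


def last_window(stream) -> bytes:
    """Return the trailing CHUNK-sized window of a stream, filling two
    buffers alternately and stopping on the first short (or empty)
    read, like the Perl while-loop above."""
    data = [b"", b""]
    i = -1
    while True:
        i += 1
        data[i % 2] = stream.read(CHUNK)
        rsize = len(data[i % 2])
        if rsize < CHUNK:
            break
    # Guard: with only one (short) read, the "previous" buffer was
    # never filled -- indexing into it is the original bug.
    prev = data[(i - 1) % 2] if i > 0 else b""
    return prev[rsize:] + data[i % 2][:rsize]
```

Dropping the `if i > 0` guard reproduces the "uninitialized value" symptom for inputs smaller than one chunk.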


-Robin



Re: [BackupPC-users] Bizarre form of cpool corruption.

2010-12-06 Thread Robin Lee Powell
On Mon, Dec 06, 2010 at 10:46:12PM -0800, Craig Barratt wrote:
 Robin,
 
  ls -li /backups/cpool/1/5/c/15c0e4b08058ef3704b8fc24887e2bcc 
  /backups/pc/foo--tm50-e00145--tm50-s00339---shared/47/f%2f/fshared/ffoo/fpurchase_order_assets/fbatch_7813/f7105620_done.txt
  
  *BUT*.  Not linked.
  
  $ ls -li /backups/cpool/1/5/c/15c0e4b08058ef3704b8fc24887e2bcc 
  /backups/pc/foo--tm50-e00145--tm50-s00339---shared/47/f%2f/fshared/ffoo/fpurchase_order_assets/fbatch_7813/f7105620_done.txt
   255523133 -rw-r- 3 backuppc backuppc 27 Nov 22 19:33 
  /backups/cpool/1/5/c/15c0e4b08058ef3704b8fc24887e2bcc
  2376493624 -rw-r- 3 backuppc backuppc 27 Nov 24 09:59 
  /backups/pc/foo--tm50-e00145--tm50-s00339---shared/47/f%2f/fshared/ffoo/fpurchase_order_assets/fbatch_7813/f7105620_done.txt
 
 You need to check all files with that digest (since there could be
 collisions):
 
 $ ls -li /backups/cpool/1/5/c/15c0e4b08058ef3704b8fc24887e2bcc*
 
 I would expect one of them will have an inode of 2376493624.

Nope; I'm a big tab completion user, so I would have seen it.  ;)

$ ls -li /backups/cpool/1/5/c/15c0e4b08058ef3704b8fc24887e2bcc*
255523133 -rw-r- 3 backuppc backuppc 27 Nov 22 19:33 
/backups/cpool/1/5/c/15c0e4b08058ef3704b8fc24887e2bcc
$

 I'm happy to help you look at the original problem with
 BackupPC_tarPCCopy if you can't make progress using Jeffrey's
 scripts.

I honestly don't know how this happened, but the next time I use
BackupPC_tarPCCopy I'll see if it gets weird.

-Robin

-- 
http://singinst.org/ :  Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which "this parrot
is dead" is "ti poi spitaki cu morsi", but "this sentence is false"
is "na nei".   My personal page: http://www.digitalkingdom.org/rlp/



Re: [BackupPC-users] Bizarre form of cpool corruption.

2010-12-06 Thread Robin Lee Powell
On Mon, Dec 06, 2010 at 11:33:15PM -0800, Craig Barratt wrote:
 Robin,
 
  Nope; I'm a big tab completion user, so I would have seen it.  ;)
  
  $ ls -li /backups/cpool/1/5/c/15c0e4b08058ef3704b8fc24887e2bcc*
  255523133 -rw-r- 3 backuppc backuppc 27 Nov 22 19:33 
  /backups/cpool/1/5/c/15c0e4b08058ef3704b8fc24887e2bcc
 
 Hmmm.  I'm not familiar with Jeffrey's zLib code.  What happens
 when you compute the digest of this pool file?  Ie:
 
 perl /tmp/bpctest.pl /backups/cpool/1/5/c/15c0e4b08058ef3704b8fc24887e2bcc

$ perl /tmp/bpctest.pl /backups/cpool/1/5/c/15c0e4b08058ef3704b8fc24887e2bcc
15c0e4b08058ef3704b8fc24887e2bcc  
/backups/cpool/1/5/c/15c0e4b08058ef3704b8fc24887e2bcc

 Here is another script that computes the BackupPC digest.  You
 might need to modify the library paths.

$ sudo -u backuppc perl /tmp/bpctest2.pl 
/backups/pc/foo--tm50-e00145--tm50-s00339---shared/47/f%2f/fshared/ffoo/fpurchase_order_assets/fbatch_7813/f7105620_done.txt
15c0e4b08058ef3704b8fc24887e2bcc 
/backups/pc/foo--tm50-e00145--tm50-s00339---shared/47/f%2f/fshared/ffoo/fpurchase_order_assets/fbatch_7813/f7105620_done.txt

So, yeah, that's really it.  They're both really there, and that's
the right md5sum, and both the pool file and the original file have
more than 1 hardlink count, and there's no inode match.
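The inode comparison being done here can be reproduced in isolation. This is a minimal sketch using throwaway temp files (not real BackupPC pool paths): two hardlinked names share one inode, while a byte-identical copy gets a fresh inode — which is exactly the symptom above, where pool file and pc/ file match in content and both have link counts above 1, yet their inodes differ.

```shell
# Hardlinked names share an inode; an identical-content copy does not.
# All paths below are temporary demo files, not BackupPC paths.
dir=$(mktemp -d)
echo "same content" > "$dir/pool_file"
ln "$dir/pool_file" "$dir/linked_name"   # hardlink: same inode, link count 2
cp "$dir/pool_file" "$dir/copied_name"   # copy: same bytes, new inode

ino() { ls -i "$1" | awk '{print $1}'; }

[ "$(ino "$dir/pool_file")" = "$(ino "$dir/linked_name")" ] \
    && echo "hardlink: same inode"
[ "$(ino "$dir/pool_file")" != "$(ino "$dir/copied_name")" ] \
    && echo "copy: different inode"
rm -rf "$dir"
```

So a pool entry with matching digest and content but a different inode behaves like the `cp` case, not the `ln` case, and dedup hardlinking is silently lost.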

No idea how this happened, but I have a lot of them.

-Robin



Re: [BackupPC-users] The one-at-a-time nightly problem, debugged; DEVS PLEASE READ; long.

2010-11-30 Thread Robin Lee Powell
On Tue, Nov 30, 2010 at 03:18:46PM -0500, Timothy J Massey wrote:
 Robin Lee Powell rlpow...@digitalkingdom.org wrote on 11/25/2010
 01:12:50 PM:
 
  The problem is that calling BackupPC_serverMesg
  "BackupPC_nightly run" when the regular nightlies are already
  running, or calling it twice in quick succession, doesn't result
  in the scheduler restarting the nightlies run from scratch
  (GOOD) or queuing up a second nightlies run when the first
  finishes (not great, but OK), it results in the scheduler eating
  its own face (BAD).
 
 Given your use case (free up space immediately), your GOOD and
 not great options are effectively identical.  The extant nightly
 will be some random amount through its process when you submit the
 second nightly.  It will then free up the space for the remaining
 amount of the pool.  Once it finishes and starts a new nightly,
 the new nightly will start at the beginning of the pool and make
 its way through it, freeing up the part that the existing one
 missed.

But it will take a lot longer.

 The only part of the process that is wasted when you run the
 nightly twice is the part where the second nightly job covers the
 part of the pool that was already processed by the first one after
 you submitted the second nightly job.  Restarting the nightly job
 from scratch will not change the amount of time that it takes to
 free up your disk space--not one instant. It will, however, cause
 the nightly job to process some of your pool twice, *after* the
 disk space is freed up.

Yeah, and the hosts can barely keep up with the backups required of
them as it is; I need that time.

But I understand your point, and had missed it before; thanks.

-Robin



Re: [BackupPC-users] The one-at-a-time nightly problem, debugged; DEVS PLEASE READ; long.

2010-11-29 Thread Robin Lee Powell
On Sun, Nov 28, 2010 at 10:35:20PM -0800, Robin Lee Powell wrote:
 On Sun, Nov 28, 2010 at 07:01:45PM -0800, Craig Barratt wrote:
  Robin,
  
  Thanks for the detailed analysis.  I agree - it's pretty broken.
  
  Preventing queuing a second nightly request when one is running
  will at least avoid the problem.  However, I don't recommend
  killing the current running nightly.
 
 When I ask it to run a nightly, it's because I've just manually
 deleted stuff and I really need to free up disk space ASAP.  This
 happens regularly around here.
 
 Just so you're aware of the use case.

In case you decide to go with a solution like mine, or someone else
wants it, here's the actually-tested version:

- -

ut00-s3 ~ # diff -uw /var/tmp/ /usr/local/bin/BackupPC
--- /var/tmp/BackupPC   2010-11-25 01:52:30.0 +
+++ /usr/local/bin/BackupPC 2010-11-29 18:18:22.0 +
@@ -522,6 +522,15 @@
 $req = pop(@CmdQueue);

 $host = $req->{host};
+
+   if ( $BackupPCNightlyJobs > 0 && ! $bpc->isAdminJob($host) )
+   {
+print(LOG $bpc->timeStamp, "Tried to run ".$req->{cmd}." on $host when there are nightlies running.  That's bad.\n");
+
+unshift(@CmdQueue, $req);
+return;
+   }
+
 if ( defined($Jobs{$host}) ) {
 print(LOG $bpc->timeStamp,
"Botch on admin job for $host: already in use!!\n");
@@ -1362,7 +1371,31 @@
 } elsif ( $cmd =~ /^backup all$/ ) {
 QueueAllPCs();
 } elsif ( $cmd =~ /^BackupPC_nightly run$/ ) {
+print(LOG $bpc->timeStamp,
+ "Running nightlies at user request.\n");
+   foreach my $host (keys %Jobs) {
+ if ( $bpc->isAdminJob($host) ) {
+   my $pid = $Jobs{$host}{pid};
+   kill($bpc->sigName2num("INT"), $pid);
+   delete $Jobs{$host};
+   print(LOG $bpc->timeStamp,
+ "Killing nightly job $host with PID $pid to make way for manual run.\n");
+ }
+   }
+   # Clear all traces of nightly jobs by name, just in case
+   for ( my $i = 0 ; $i < $Conf{MaxBackupPCNightlyJobs} ; $i++ ) {
+ my $host = $bpc->adminJob($i);
+ if ( exists $Jobs{$host} ) {
+   my $pid = $Jobs{$host}{pid};
+   kill($bpc->sigName2num("INT"), $pid);
+   delete $Jobs{$host};
+   print(LOG $bpc->timeStamp,
+ "Killing nightly job $host with PID $pid to make way for manual run.\n");
+ }
+   }
 $RunNightlyWhenIdle = 1;
+   $BackupPCNightlyJobs = 0;
+   $BackupPCNightlyLock = 0;
 } elsif ( $cmd =~ /^backup (\S+)\s+(\S+)\s+(\S+)\s+(\S+)/ ) {
 my $hostIP = $1;
 $host  = $2;


- -

Perhaps some redundancy/overkill there, but I've seen enough failure
modes here that I want to be careful.  :)

-Robin



Re: [BackupPC-users] The one-at-a-time nightly problem, debugged; DEVS PLEASE READ; long.

2010-11-28 Thread Robin Lee Powell
On Sun, Nov 28, 2010 at 07:01:45PM -0800, Craig Barratt wrote:
 Robin,
 
 Thanks for the detailed analysis.  I agree - it's pretty broken.
 
 Preventing queuing a second nightly request when one is running
 will at least avoid the problem.  However, I don't recommend
 killing the current running nightly.

When I ask it to run a nightly, it's because I've just manually
deleted stuff and I really need to free up disk space ASAP.  This
happens regularly around here.

Just so you're aware of the use case.

-Robin



Re: [BackupPC-users] The one-at-a-time nightly problem, debugged; DEVS PLEASE READ; long.

2010-11-25 Thread Robin Lee Powell
On Thu, Nov 25, 2010 at 09:29:34AM +, Tyler J. Wagner wrote:
 Robin,
 
 Thank you for the awesome write-up.
 
 On Wed, 2010-11-24 at 18:11 -0800, Robin Lee Powell wrote:
  1.  Don't run a non-nightly job from the CmdQueue when there are
  nightly jobs running, *EVER*.
 
 Unfortunately, that's already well-documented. NEVER, EVER call
 BackupPC_nightly while another is running. Let BackupPC's
 scheduler do its job.

Yeah, I didn't realize at first that that was what was causing it.

Regardless, though, if you're going to have the serverMesg call, and
scripts that make use of it, either the serverMesg call needs to
check that nightly is already running and bail, or (better, IMO) it
needs to restart nightly from the beginning.

If there wasn't a reason to start it from the beginning, I wouldn't
be asking.

-Robin



Re: [BackupPC-users] The one-at-a-time nightly problem, debugged; DEVS PLEASE READ; long.

2010-11-25 Thread Robin Lee Powell
On Thu, Nov 25, 2010 at 09:29:34AM +, Tyler J. Wagner wrote:
 Robin,
 
 Thank you for the awesome write-up.
 
 On Wed, 2010-11-24 at 18:11 -0800, Robin Lee Powell wrote:
  1.  Don't run a non-nightly job from the CmdQueue when there are
  nightly jobs running, *EVER*.
 
 Unfortunately, that's already well-documented. NEVER, EVER call
 BackupPC_nightly while another is running. Let BackupPC's
 scheduler do its job.

I just realized there's been a miscommunication here:

1.  My exhortation you quoted wasn't directed at the *user*, it was
directed at the BackupPC code.  The patch enforces said exhortation.

2.  I *did not* call BackupPC_nightly directly, I was fully aware of
that issue.  I called BackupPC_serverMesg "BackupPC_nightly run", as
I had been told was safe, which means I was trying to let the
scheduler do its job, just as you said.  The problem is that calling
BackupPC_serverMesg "BackupPC_nightly run" when the regular
nightlies are already running, or calling it twice in quick
succession, doesn't result in the scheduler restarting the nightlies
run from scratch (GOOD) or queuing up a second nightlies run when
the first finishes (not great, but OK), it results in the scheduler
eating its own face (BAD).

-Robin



[BackupPC-users] The one-at-a-time nightly problem, debugged; DEVS PLEASE READ; long.

2010-11-24 Thread Robin Lee Powell

Figured it out.  The problem was that I have BackupPC set to run 8
nightlies at once (which usually takes 12 or more hours), but it was
ending up in a state where only one was running at a time.

This may be the longest, most detailed debugging writeup I've ever
done in 15 years of being a computer professional; I hope y'all
appreciate it.  :)  I had to do this to hold all the relevant state
in my head.

It turns out that the issue occurs when the 24-hour-ly nightlies job
is already running, and you do

   sudo -u backuppc BackupPC_serverMesg "BackupPC_nightly run"

which I've been doing a lot.

Deciding to queue new nightly jobs goes like this:

  while ( $CmdJob eq "" && @CmdQueue > 0 && $RunNightlyWhenIdle != 1
|| @CmdQueue > 0 && $RunNightlyWhenIdle == 2
   && $bpc->isAdminJob($CmdQueue[0]->{host}) ) {

We'll be coming back to this a lot.  isAdminJob matches nightly
jobs only AFAICT.

CmdQueue State: Empty
CmdJob: Empty
RunNightlyWhenIdle: 0
While State: False, since @CmdQueue = 0
Running Job State: Empty
Event:

  Normal nightly run occurs.  RunNightlyWhenIdle is set to 1, which
  triggers all the nightly jobs getting added to the queue, and
  RunNightlyWhenIdle getting set to 2

CmdQueue State: 8 nightly jobs
CmdJob: Empty
RunNightlyWhenIdle: 2
isAdminJob Matches First Job: True
While State: True, via Branch 2
Running Job State: Empty
Event:

  Nightly jobs get kicked off, all 8 of them.


CmdQueue State: Empty
CmdJob: non-empty; admin7 or similar
RunNightlyWhenIdle: 2
isAdminJob Matches First Job: False
While State: False, since @CmdQueue = 0
Running Job State: 8 nightly jobs
Event:

  A backup finishes, and queues up a BackupPC_link job.  This
  happens several times, since the nightly jobs take 8+ hours, even
  split into 8 parts, on my machine (4+TiB of backups per backup
  machine).

CmdQueue State: Several link jobs
CmdJob: non-empty; admin7 or similar
RunNightlyWhenIdle: 2
isAdminJob Matches First Job: False; link jobs don't match
While State: False
Running Job State: 8 nightly jobs
Event:

  User runs sudo -u backuppc BackupPC_serverMesg "BackupPC_nightly
  run".  This causes RunNightlyWhenIdle to be set to 1, but before
  that hits the while, the jobs are actually queued, *USING
  unshift*, which puts them at the front of the queue.  This is
  where things start to go horribly wrong.

CmdQueue State: 8 nightly jobs, *THEN* Several link jobs
CmdJob: non-empty; admin7 or similar
RunNightlyWhenIdle: 2
isAdminJob Matches First Job: *TRUE*
While State: True, branch 2
Running Job State: 8 nightly jobs
Event:

  *Pop* a job from the queue.  This means that even though the
  *test* is for the job from the *front* of the queue, the job that
  actually gets handled is the job at the *end* of the queue.

  So, the last job on the queue, a link job, gets run.  THIS SHOULD
  NEVER HAPPEN, as I understand it, because nightly jobs (the first
  set) are still running.  The link job sets CmdJob, but that
  doesn't matter because we're going through the *second* branch of
  the while, which doesn't care about CmdJob.  So, it happily
  launches another link job:
  
CmdQueue State: 8 nightly jobs, then N-1 link jobs
CmdJob: hostname non-empty from the last link job
RunNightlyWhenIdle: 2
isAdminJob Matches First Job: True
While State: True, branch 2
Running Job State: 8 nightly jobs, 1 link job
Event:

  Runs the next link job.  And all the others.  We end up with *all*
  link jobs running at once.  THIS SHOULD NEVER HAPPEN; CmdQueue is
  supposed to be one at a time.

  But wait, it gets better!

  When each link job starts, it sets $CmdJob to its own host name;
  this means that at the end of the run through the queue, it's set
  to the last link job that ran, like this:

CmdQueue State: 8 nightly jobs
CmdJob: hostname from the last link job
RunNightlyWhenIdle: 2
isAdminJob Matches First Job: True
While State: True, branch 2
Running Job State: 8 nightly jobs, N link jobs
Event:

  So, from here, it tries to run the last (remember, pop) nightly
  job, but it can't, because any given nightly segment can only run
  one at a time, because they are named to prevent duplicates
  (leading to the "Botch on admin job for admin7 : already in use!!"
  log messages: that means the 8th nightly job is running, so you
  can't start it again).

  Having failed to run the nightly job, it unshifts it onto the
  front of the CmdQueue.

  It runs through all the queued nightly jobs in this way.

  Eventually, a link job finishes.

CmdQueue State: 8 nightly jobs
CmdJob: hostname from the last link job
RunNightlyWhenIdle: 2
isAdminJob Matches First Job: True
While State: True, branch 2
Running Job State: 8 nightly jobs, N-1 link jobs
Event:

  When each link job finishes, this test runs:

if ( $CmdJob eq $host || $bpc->isAdminJob($host) ) {

  This will only match the last link job.  If it doesn't match, the
  host is tested for whether it needs linking, and if so, it
  requeues the (already completed) link, *at the front of the
  queue*.  
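The core of the walkthrough above — the guard inspects the *front* of the queue while the dispatcher pops the *back* — can be reduced to a few lines. This is a toy model, not BackupPC code: the queue contents and names are invented, and Python's list `pop()` stands in for Perl's `pop(@CmdQueue)`.

```python
# Hypothetical miniature of the CmdQueue loop. Nightly ("admin") jobs
# were unshifted onto the FRONT; link jobs already sat at the BACK.
cmd_queue = ["admin0", "admin1", "link-hostA", "link-hostB"]

def is_admin_job(name):
    # Stand-in for $bpc->isAdminJob: matches only nightly jobs.
    return name.startswith("admin")

started = []
# Buggy shape: the guard tests the FRONT element...
while cmd_queue and is_admin_job(cmd_queue[0]):
    # ...but the job actually dispatched comes off the BACK.
    started.append(cmd_queue.pop())

# The guard saw "admin0" every iteration, so the link jobs (and then
# the nightly jobs) were all dispatched under the admin-only branch.
print(started)   # ['link-hostB', 'link-hostA', 'admin1', 'admin0']
```

With the nightly jobs at the front, the admin-only test stays true until the queue drains, so every queued link job runs concurrently — exactly the "all link jobs at once" state described above.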

Re: [BackupPC-users] Huge remote directory (20GB): how will it be transferred?

2010-11-09 Thread Robin Lee Powell
On Tue, Nov 09, 2010 at 04:11:15PM +0100, Boniforti Flavio wrote:
 Hello Pavel.
 
  for huge dirs with millions of files we got almost an order of
  magnitude faster runs with the tar mode instead of rsync (which
  eventually consumed all the memory anyways :) )
 
 How would I be able to use tar over a remote DSL connection?

Well, by hand you'd do:

ssh host 'tar -czvf - /dir' > /backups/foo.tgz

BackupPC's tar method seems to be over ssh by default; just set up a
passwordless root key and you should be fine.

20GB over DSL is going to take many days, though, no matter what you
do.

Note that once the initial backup is run you almost certainly do
*not* want tar transport, as this implies copying the entire 20GB
on every full.

-Robin



Re: [BackupPC-users] Huge remote directory (20GB): how will it be transferred?

2010-11-09 Thread Robin Lee Powell
On Tue, Nov 09, 2010 at 09:37:01AM -0600, Richard Shaw wrote:
 On Tue, Nov 9, 2010 at 9:27 AM, Robin Lee Powell
 rlpow...@digitalkingdom.org wrote:
  On Tue, Nov 09, 2010 at 04:11:15PM +0100, Boniforti Flavio
  wrote:
  Hello Pavel.
 
   for huge dirs with millions of files we got almost an order
   of magnitude faster runs with the tar mode instead of rsync
   (which eventually consumed all the memory anyways :) )
 
  How would I be able to use tar over a remote DSL connection?
 
  Well, by hand you'd do:
 
  ssh host 'tar -czvf - /dir' > /backups/foo.tgz
 
  BackupPC's tar method seems to be over ssh by default; just set
  up a passwordless root key and you should be fine.
 
  20GB over DSL is going to take many days, though, no matter what
  you do.
 
  Note that once the initial backup is run you almost certainly do
  *not* want tar transport, as this implies copying the entire
  20GB on every full.
 
 Popping into this conversation with a couple of questions
 
 1. Can you switch methods without consequence? In other words can
 you switch back and forth without any side effects?

I've not tried, but knowing the internals as I unfortunately do, I'd
be *very* surprised if there were any consequences.

 2. Does the timeout apply if you run the backup command from the
 cli? 

How do you mean "run from the CLI"?  If it goes through the
BackupPC server, the timeout applies.

 Is it easy to remove or override the timeout from the command
 line?

I doubt it.  Easy enough to temporarily override it in the host's
config file.

-Robin



Re: [BackupPC-users] Huge remote directory (20GB): how will it be transferred?

2010-11-09 Thread Robin Lee Powell
On Tue, Nov 09, 2010 at 05:13:58PM +0100, Boniforti Flavio wrote:
 
  Well, by hand you'd do:
  
  ssh host 'tar -czvf - /dir' > /backups/foo.tgz
 
 But wouldn't this create *one huge tarball*??? That's not what I'd
 like to get...

That was a by-hand example for your future reference; it has nothing
to do with BackupPC.  For BackupPC, look at the host's config file
and change its backup method.

To solve that problem, by the way:

ssh host 'tar -czvf - /dir' | tar -C /dir -xzvf -
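The same pipe pattern can be tried locally without ssh or a remote host. This sketch uses throwaway temp directories in place of the real source and destination paths: it streams a tree out of one directory and unpacks it into another with no intermediate tarball, which is what the piped form buys you.

```shell
# Local dry run of the tar pipe above: archive to stdout, extract
# from stdin, never materializing foo.tgz on disk.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/sub"
echo "hello" > "$src/sub/file.txt"

tar -C "$src" -czf - . | tar -C "$dst" -xzf -

cat "$dst/sub/file.txt"   # prints "hello"
rm -rf "$src" "$dst"
```

Swap `tar -C "$src" -czf - .` for `ssh host 'tar -czf - /dir'` and you have the remote form.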

-Robin



Re: [BackupPC-users] Welcome to the BackupPC-users mailing list (Digest mode)

2010-11-07 Thread Robin Lee Powell
On Sun, Nov 07, 2010 at 08:41:10AM -0800, auto316...@hushmail.com
wrote:
 I am new to backuppc and would appreciate some pointers on how to
 get started.  I don't see an executable file in the distribution
 -- just a zip file.  And I would like to know what is the first
 step that I need to do so that I can backup Windows XP.  Will
 Version 3.02 of backup PC work with Windows 7?

Setting up a Windows XP *client* goes something like this:
http://taksuyama.com/?page_id=8

Setting up a Windows XP *server*, I've no idea.

-Robin



Re: [BackupPC-users] brackup?

2010-11-05 Thread Robin Lee Powell
On Tue, Nov 02, 2010 at 09:39:22AM -0500, Les Mikesell wrote:
 Has anyone run across 'brackup' 
 (http://search.cpan.org/~bradfitz/Brackup-1.10/lib/Brackup.pm)?  It is 
 just a command line tool, not much like backuppc, but it appears to have 
 some very interesting concepts for the backend storage, chunking and 
 encrypting the files and then is able to store them on an assortment of 
 cloud/cluster systems like riak or amazon's s3 storage as well as normal 
 filesystems or ftp servers.

Just so you know people saw this: no, I've not played with it.
Curious as to how it goes if you get a chance.

-Robin



Re: [BackupPC-users] MaxBackupPCNightlyJobs being ignored, but only on one host??

2010-10-28 Thread Robin Lee Powell
On Fri, Oct 15, 2010 at 12:48:07PM -0700, Robin Lee Powell wrote:
 It happened again last night during the actual run:
 
 2010-10-15 01:00:00 Running 8 BackupPC_nightly jobs from 0..15 (out of 0..15)
 2010-10-15 01:00:01 Running BackupPC_nightly -m 0 31 (pid=30718)
 2010-10-15 07:56:33 BackupPC_nightly now running BackupPC_sendEmail
 2010-10-15 08:00:58 Finished  admin  (BackupPC_nightly -m 0 31)
 2010-10-15 08:00:58 Pool nightly clean removed 0 files of size 0.00GB
 2010-10-15 08:00:58 Pool is 0.00GB, 0 files (0 repeated, 0 max chain, 0 max 
 links), 1 directories
 2010-10-15 08:00:58 Cpool nightly clean removed 0 files of size 0.00GB
 2010-10-15 08:00:58 Cpool is 335.44GB, 2491927 files (66 repeated, 39 max 
 chain, 31999 max links), 547 directories
 2010-10-15 08:00:58 Running BackupPC_nightly 32 63 (pid=27724)
 
 I can't restart backuppc because I've got a multi-day backup that
 isn't finished.

This is still happening: it's launching only the first nightly part,
waiting until that's done, and then launching the next part.  Any
suggestions as to what/how to debug this issue?

-Robin



Re: [BackupPC-users] Access Backuppc outside the office ??

2010-10-24 Thread Robin Lee Powell
On Sun, Oct 24, 2010 at 09:59:16PM -0400, southasia wrote:
 Finally installed BackupPC successfully. I can access the web
 interface inside the office. If I want to access the office
 BackupPC from home over the Internet, how should I configure it?
 Please advise me, or point me to a link that explains it. Thanks.

That's a webserver config issue, not a BackupPC issue.  If you want
us to help, you're going to at least need to past your current
webserver config for BackupPC.

-Robin



Re: [BackupPC-users] MaxBackupPCNightlyJobs being ignored, but only on one host??

2010-10-15 Thread Robin Lee Powell
On Thu, Oct 14, 2010 at 10:04:27PM -0700, Craig Barratt wrote:
 Robin writes:
 
  I have four hosts with identical configuration, as far as I know.
  All of them have:
  
 $Conf{MaxBackupPCNightlyJobs} = 8;
 
  On one, and only one as far as I can tell, running:
  
sudo -u backuppc BackupPC_serverMesg "BackupPC_nightly run"
  
  results in:
  
$ ps -aef | grep -i nigh
backuppc  6375  2788  0 11:39 ?00:00:12 /usr/bin/perl 
  /usr/local/bin/BackupPC_nightly -m 0 31
 
 As you noted, the 0 31 argument means it has read the config
 correctly.
 
 Is it possible that on this host the pool is relatively empty so
 the other 7 children finish quickly?  

BWAHAHAHAHAHA!!!

Ahem.

Sorry.

There are over a hundred hosts being backed up, with almost no
duplication between them.  There is 3.5TiB on the BackupPC
partition.

 The first one (with -m) does some extra work, so it usually takes
 longer.  

The second one starts after the first finishes, as I said before.

 Look in the LOG file - there should be an entry for each child
 starting and finishing.

The actual nightly run:

2010-10-14 01:00:00 Running 8 BackupPC_nightly jobs from 0..15 (out of 0..15)
2010-10-14 01:00:00 Running BackupPC_nightly -m 0 31 (pid=18304)
2010-10-14 01:00:00 Running BackupPC_nightly 32 63 (pid=18305)
2010-10-14 01:00:00 Running BackupPC_nightly 64 95 (pid=18306)
2010-10-14 01:00:00 Running BackupPC_nightly 96 127 (pid=18307)
2010-10-14 01:00:00 Running BackupPC_nightly 128 159 (pid=18308)
2010-10-14 01:00:00 Running BackupPC_nightly 160 191 (pid=18309)
2010-10-14 01:00:00 Running BackupPC_nightly 192 223 (pid=18310)
2010-10-14 01:00:00 Running BackupPC_nightly 224 255 (pid=18311)
2010-10-14 07:11:56 Finished  admin2  (BackupPC_nightly 64 95)
2010-10-14 07:12:15 Finished  admin5  (BackupPC_nightly 160 191)
2010-10-14 07:12:15 Finished  admin4  (BackupPC_nightly 128 159)
2010-10-14 07:12:15 Finished  admin6  (BackupPC_nightly 192 223)
2010-10-14 07:12:16 Finished  admin7  (BackupPC_nightly 224 255)
2010-10-14 07:12:19 BackupPC_nightly now running BackupPC_sendEmail
2010-10-14 07:12:20 Finished  admin3  (BackupPC_nightly 96 127)
2010-10-14 07:12:22 Finished  admin1  (BackupPC_nightly 32 63)
2010-10-14 07:16:46 Finished  admin  (BackupPC_nightly -m 0 31)
2010-10-14 07:16:46 Pool nightly clean removed 0 files of size 0.00GB
2010-10-14 07:16:46 Cpool nightly clean removed 618357 files of size 5.94GB
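
As an aside, the "0 31", "32 63", ... arguments in the log above line
up with the 256 top-level pool subdirectories being split evenly
across the 8 configured nightly jobs.  A throwaway sketch of that
arithmetic (not the actual BackupPC source):

```shell
# Sketch only: derive the per-job pool ranges seen in the log above,
# assuming 256 pool buckets divided among $Conf{MaxBackupPCNightlyJobs}.
nightly_ranges() {
    njobs=$1
    per=$((256 / njobs))
    i=0
    while [ "$i" -lt "$njobs" ]; do
        start=$((i * per))
        # each worker gets a contiguous, non-overlapping bucket range
        echo "BackupPC_nightly $start $((start + per - 1))"
        i=$((i + 1))
    done
}

nightly_ranges 8   # first line: "BackupPC_nightly 0 31", last: "BackupPC_nightly 224 255"
```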

The manually kicked off run:

2010-10-14 11:39:03 Running 8 BackupPC_nightly jobs from 0..15 (out of 0..15)
2010-10-14 11:39:03 Running BackupPC_nightly -m 0 31 (pid=6375)
2010-10-14 18:50:09 BackupPC_nightly now running BackupPC_sendEmail
2010-10-14 18:54:56 Finished  admin  (BackupPC_nightly -m 0 31)
2010-10-14 18:54:56 Pool nightly clean removed 0 files of size 0.00GB
2010-10-14 18:54:56 Cpool nightly clean removed 128 files of size 0.00GB
2010-10-14 18:54:56 Running BackupPC_nightly 32 63 (pid=4868)

And it's still going.  I'm going to kill it; it's just delaying
backups to no purpose.  Maybe the nightly run will work.

-Robin



Re: [BackupPC-users] General Praise.

2010-10-15 Thread Robin Lee Powell
On Thu, Oct 07, 2010 at 11:55:41AM -0400, Dan Pritts wrote:
 I agree with your general praise, BackupPC works very well for us
 in our environment, which is maybe half your size.  Due to your
 large size, I'll leave you with one thought:
 
 One concern I've always had with backuppc is what would happen if
 i had a disaster and had to restore everything from backuppc.  
 
 It would take absolutely forever to do this, because backuppc has
 to seek the disks so much (due to the effects of all those hard
 links).

I don't know of anything that would be faster, though.

-Robin



Re: [BackupPC-users] General Praise.

2010-10-15 Thread Robin Lee Powell
On Fri, Oct 15, 2010 at 08:57:52AM +0100, James Wells wrote:
 I find that BackupPC is good for general data retention, but bad
 for bare metal restores where you have to reinstall the OS first
 and then get data back on there. 

Fortunately, I have no need for that.  :)

-Robin



Re: [BackupPC-users] MaxBackupPCNightlyJobs being ignored, but only on one host??

2010-10-15 Thread Robin Lee Powell
It happened again last night during the actual run:

2010-10-15 01:00:00 Running 8 BackupPC_nightly jobs from 0..15 (out of 0..15)
2010-10-15 01:00:01 Running BackupPC_nightly -m 0 31 (pid=30718)
2010-10-15 07:56:33 BackupPC_nightly now running BackupPC_sendEmail
2010-10-15 08:00:58 Finished  admin  (BackupPC_nightly -m 0 31)
2010-10-15 08:00:58 Pool nightly clean removed 0 files of size 0.00GB
2010-10-15 08:00:58 Pool is 0.00GB, 0 files (0 repeated, 0 max chain, 0 max 
links), 1 directories
2010-10-15 08:00:58 Cpool nightly clean removed 0 files of size 0.00GB
2010-10-15 08:00:58 Cpool is 335.44GB, 2491927 files (66 repeated, 39 max 
chain, 31999 max links), 547 directories
2010-10-15 08:00:58 Running BackupPC_nightly 32 63 (pid=27724)

I can't restart backuppc because I've got a multi-day backup that
isn't finished.

I'm going to have to manually run BackupPC_nightly, because I'm
about to run out of disk space.

Is there any way to do that safely?

-Robin



[BackupPC-users] MaxBackupPCNightlyJobs being ignored, but only on one host??

2010-10-14 Thread Robin Lee Powell

I have four hosts with identical configuration, as far as I know.
All of them have:

  $Conf{MaxBackupPCNightlyJobs} = 8;

On one, and only one as far as I can tell, running:

  sudo -u backuppc BackupPC_serverMesg BackupPC_nightly run

results in:

  $ ps -aef | grep -i nigh
  backuppc  6375  2788  0 11:39 ?00:00:12 /usr/bin/perl 
/usr/local/bin/BackupPC_nightly -m 0 31

On all three others, I get:

  $ ps -aef | grep -i nigh
  backuppc 30013  2856  1 12:00 ?00:00:00 /usr/bin/perl 
/usr/local/bin/BackupPC_nightly -m 0 31
  backuppc 30014  2856  0 12:00 ?00:00:00 /usr/bin/perl 
/usr/local/bin/BackupPC_nightly 32 63
  backuppc 30015  2856  1 12:00 ?00:00:00 /usr/bin/perl 
/usr/local/bin/BackupPC_nightly 64 95
  backuppc 30016  2856  1 12:00 ?00:00:00 /usr/bin/perl 
/usr/local/bin/BackupPC_nightly 96 127
  backuppc 30017  2856  0 12:00 ?00:00:00 /usr/bin/perl 
/usr/local/bin/BackupPC_nightly 128 159
  backuppc 30018  2856  1 12:00 ?00:00:00 /usr/bin/perl 
/usr/local/bin/BackupPC_nightly 160 191
  backuppc 30019  2856  0 12:00 ?00:00:00 /usr/bin/perl 
/usr/local/bin/BackupPC_nightly 192 223
  backuppc 30020  2856  1 12:00 ?00:00:00 /usr/bin/perl 
/usr/local/bin/BackupPC_nightly 224 255

when I do that.

I have run the admin reload.

I have my config in an unusual place, but AFAICT that's handled:

  /usr/local/lib/BackupPC/Lib.pm:ConfDir => $confDir eq "" ? '/engineyard/etc/backuppc' : $confDir,

All of them are running:

  # Version 3.2.0beta1, released 24 Jan 2010.

Any ideas as to what might be going on?

-Robin



Re: [BackupPC-users] MaxBackupPCNightlyJobs being ignored, but only on one host??

2010-10-14 Thread Robin Lee Powell
On Thu, Oct 14, 2010 at 12:06:40PM -0700, Robin Lee Powell wrote:
 
 I have four hosts with identical configuration, as far as I know.
 All of them have:
 
   $Conf{MaxBackupPCNightlyJobs} = 8;
 
 On one, and only one as far as I can tell, running:
 
   sudo -u backuppc BackupPC_serverMesg BackupPC_nightly run
 
 results in:
 
   $ ps -aef | grep -i nigh
   backuppc  6375  2788  0 11:39 ?00:00:12 /usr/bin/perl 
 /usr/local/bin/BackupPC_nightly -m 0 31

You know what the weirdest part there is?  It's clearly divided it
up into 8 runs, it just isn't running them *at the same time*.

And it's not like it's trying to do the one portion every night
thing, either; when that job finishes, the next one runs
immediately.  ... the hell?

Also, $Conf{BackupPCNightlyPeriod} = 1; just for the record.

-Robin



Re: [BackupPC-users] MaxBackupPCNightlyJobs being ignored, but only on one host??

2010-10-14 Thread Robin Lee Powell
On Thu, Oct 14, 2010 at 03:19:31PM -0500, Les Mikesell wrote:
 On 10/14/2010 2:06 PM, Robin Lee Powell wrote:
 
  I have four hosts with identical configuration, as far as I know.
  All of them have:
 
 $Conf{MaxBackupPCNightlyJobs} = 8;
 
  On one, and only one as far as I can tell, running:
 
 sudo -u backuppc BackupPC_serverMesg BackupPC_nightly run
 
  results in:
 
 $ ps -aef | grep -i nigh
 backuppc  6375  2788  0 11:39 ?00:00:12 /usr/bin/perl 
  /usr/local/bin/BackupPC_nightly -m 0 31
 
  On all three others, I get:
 
 $ ps -aef | grep -i nigh
 backuppc 30013  2856  1 12:00 ?00:00:00 /usr/bin/perl 
  /usr/local/bin/BackupPC_nightly -m 0 31
 backuppc 30014  2856  0 12:00 ?00:00:00 /usr/bin/perl 
  /usr/local/bin/BackupPC_nightly 32 63
 backuppc 30015  2856  1 12:00 ?00:00:00 /usr/bin/perl 
  /usr/local/bin/BackupPC_nightly 64 95
 backuppc 30016  2856  1 12:00 ?00:00:00 /usr/bin/perl 
  /usr/local/bin/BackupPC_nightly 96 127
 backuppc 30017  2856  0 12:00 ?00:00:00 /usr/bin/perl 
  /usr/local/bin/BackupPC_nightly 128 159
 backuppc 30018  2856  1 12:00 ?00:00:00 /usr/bin/perl 
  /usr/local/bin/BackupPC_nightly 160 191
 backuppc 30019  2856  0 12:00 ?00:00:00 /usr/bin/perl 
  /usr/local/bin/BackupPC_nightly 192 223
 backuppc 30020  2856  1 12:00 ?00:00:00 /usr/bin/perl 
  /usr/local/bin/BackupPC_nightly 224 255
 
  when I do that.
 
  I have run the admin reload.
 
  I have my config in an unusual place, but AFAICT that's handled:
 
 /usr/local/lib/BackupPC/Lib.pm:ConfDir => $confDir eq "" ? '/engineyard/etc/backuppc' : $confDir,
 
  All of them are running:
 
 # Version 3.2.0beta1, released 24 Jan 2010.
 
  Any ideas as to what might be going on?
 
 Are the servers all the same hardware?  

3/4 are 2 CPU, including this one, the other is 1 CPU.  They are
otherwise identical.

-Robin



Re: [BackupPC-users] BackupPC_DeleteFile requirement jLib.pm not working

2010-10-10 Thread Robin Lee Powell
On Sun, Oct 10, 2010 at 11:36:02AM -0400, Carl T. Miller wrote:
 I found what looks like an excellent tool for managing
 backuppc pools.  It allows the deletion of files or
 directories from archives.
 
 http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=BackupPC_DeleteFile
 
 The instructions say to install the jLib.pm module,
 but don't tell how to install it.
 
 When I run BackupPC_DeleteFile without the module it says:
 Can't locate BackupPC/jLib.pm in @INC (@INC contains:
 /usr/share/BackupPC/lib ...
 
 So I copied jLib.pm to /usr/share/BackupPC/lib/BackupPC.
 Now when I runBackupPC_DeleteFile it says:
 
 syntax error at /usr/local/bin/BackupPC_DeleteFile line
 1055, near package BackupPC::jLib

You need to trim the jLib.pm part out of your copy of
/usr/local/bin/BackupPC_DeleteFile; that's a line from jLib.pm, not
from BackupPC_DeleteFile.

-Robin



[BackupPC-users] Whoops. Updated version ( was Re: Fork-or-something of BackupPC_delete)

2010-10-09 Thread Robin Lee Powell

Whoops.  More testing, new script.  :)

-Robin

#! /bin/bash
#this script contributed by Matthias Meyer
#note that if your $Topdir has been changed, the script will ask you
#the new location.
#
# Significant modifications by Robin Lee Powell, aka
# rlpow...@digitalkingdom.org, all of which are placed into the public domain.
#
usage="\
Usage: $0 -c <client> [-d <backupnumber> | -b <before date> [-f] [-n]] | [-l]

Delete specified backups.
Attention!
 If a full backup is deleted, all incremental backups
 that depend on it will also be deleted.

-c <client>    - client machine for which the backup was made
-d <number>    - backup number to delete; if this is a full, deletes all
                 dependent incrementals.  Conflicts with -b
-b <date>      - delete all backups before this date (YYYY-MM-DD); will
                 only remove fulls if all dependent incrementals are gone.
                 Conflicts with -d
-f             - run BackupPC_nightly afterwards to clean up the pool
-l             - list all backups for client
-n | --dry-run - Don't actually do anything, just say what would be done
-h             - this help

Example:
list backups of client
 $0 -c <name of the client which was backed up> -l

remove backup #3 from client
 $0 -c <name of the client which was backed up> -d 3

remove all backups before 2007-07-02 from client
 $0 -c <name of the client which was backed up> -b 2007-07-02
"


typeset -i len

while test $# -gt 0; do
    case "$1" in
        -c | --client )
            shift; client=$1; shift;;
        -b | --before )
            shift; bDate=$1; shift;;
        -d | --delete )
            shift; bNumber=$1; shift;;
        -f | --force )
            nightly=true; shift;;
        -n | --dry-run )
            dryRun=true; shift;;
        -l | --list )
            list=true; shift;;
        * | -h | --help)
            echo "$usage"
            exit 0
            ;;
    esac
done

if [ -z "$client" ] || [ -z "$list" ] && [ -z "$bNumber" ] && [ -z "$bDate" ]
then
    echo "$usage"
    exit 0
fi

if [ "$bNumber" -a "$bDate" ]
then
    echo "Please use either a specific number or a date, not both."
    echo "$usage"
    exit 0
fi

if [ -e /engineyard/etc/backuppc/config.pl ]
then
    TopDir=`grep '$Conf{TopDir}' /engineyard/etc/backuppc/config.pl | awk '{print $3}'`
    len=${#TopDir}-3
    TopDir=${TopDir:1:len}
else
    echo "/engineyard/etc/backuppc/config.pl not found"
    exit 1
fi

ls $TopDir/pc > /dev/null 2>&1
while [ $? != 0 ]
do
    read -p "examined $TopDir seems wrong. What is TopDir ? " TopDir
    ls $TopDir/pc > /dev/null 2>&1
done

ls $TopDir/pc/$client > /dev/null 2>&1
if [ $? != 0 ]
then
    echo "$client has no backups"
    exit 1
fi

if [ ! -z "$list" ]
then
    while read CLine
    do
        BackupNumber=`echo $CLine | awk '{print $1}'`
        BackupType=`echo $CLine | awk '{print $2}'`
        BackupTime=$(date -d "@$(echo $CLine | awk '{ print $4 }')")
        echo "BackupNumber $BackupNumber - $BackupType-Backup from $BackupTime"
    done < $TopDir/pc/$client/backups
    exit 0
fi

if [ ! -z "$bNumber" ] && [ ! -e $TopDir/pc/$client/$bNumber ]
then
    echo "Backup Number $bNumber does not exist for client $client"
    exit 1
fi

LogDir=`grep '$Conf{LogDir}' /engineyard/etc/backuppc/config.pl | awk '{print $3}'`
len=${#LogDir}-3
LogDir=${LogDir:1:len}

rm -f $TopDir/pc/$client/backups.new > /dev/null 2>&1

#**
# Two Processes
#
# Deleting a single backup is very different from deleting
# everything before a date.
#
# If the user specifies a backup number, and the backup is a full,
# well, the user said to delete it, so delete it and everything that
# depends on it.  This means walking the list forwards deleting
# everything until we get to the next full.
#
# On the other hand, if the user asks to delete everything before a
# particular date, and that date comes just after a full, deleting
# the full and all the incrementals is not the expected behaviour at
# all.
#
# As an example: If the first backup is a full on the 5th, and an
# incremental for every day from then on, and it's the 30th, and the
# user says to delete everything older than the 6th, deleting the
# full *and all the incrementals* up to today (i.e. all the
# backups!!) is probably not what they had in mind.
#
# This means that for -b we walk backwards in time so we know if the
# fulls are still needed.
#
# So the two versions actually walk the backup list in opposite
# directions.
#
# -_-
#
# My (Robin Lee Powell) apologies for the resulting code
# duplication.  It's hard to abstract a lot of things in bash.
#
#**

delete_dir() {
    dir=$1

    if [ "$dryRun" ]
    then
        echo "not actually removing $dir, in dry run mode"
    else
        echo "remove $dir"
        echo "`date +\"%Y-%m-%d %T\"` BackupPC_deleteBackup delete $dir" >> $LogDir/LOG
        rm -fr $dir > /dev/null 2>&1
        echo "`date +\"%Y-%m-%d %T\"` BackupPC_deleteBackup $dir deleted" >> $LogDir/LOG
    fi
}

swap_backups_file

[BackupPC-users] Fork-or-something of BackupPC_delete

2010-10-08 Thread Robin Lee Powell

I've made a *bunch* of changes to BackupPC_delete from
http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=How_to_delete_backups
, including integration of the given patch.  If someone could post
this version, that would be swell.

I AM NOT THE ORIGINAL AUTHOR.  Hopefully the original author doesn't
mind.

Changes:

1.  -b mode doesn't delete a full and all dependents; it deletes the
full IFF all dependents are gone.  From the comments:

  As an example: If the first backup is a full on the 5th, and an
  incremental for every day from then on, and it's the 30th, and the
  user says to delete everything older than the 6th, deleting the
  full *and all the incrementals* up to today (i.e. all the
  backups!!) is probably not what they had in mind.

(this is what motivated my modifications: I lost a bunch of data
that way)

2.  Now has -n: the dry run flag

-Robin


Re: [BackupPC-users] The total field in the backups file.

2010-10-07 Thread Robin Lee Powell
Is the format written up anywhere?

-Robin

On Tue, Oct 05, 2010 at 11:36:08AM -0700, Robin Lee Powell wrote:
 
 I couldn't find anything on the wiki describing the fields of that
 file, so:
 
 If I want to know the total on-disk space used by a particular host,
 do I want to add up the total column in the backups file (i.e.
 field 6, counting from 1) or the total column minus the existing
 column (column 8), or something else?
 
 Both ways seem to give me incorrect results, frankly, but with
 300GiB unaccounted for I suppose that's to be expected.
 
 Thanks.
 
 -Robin
 



[BackupPC-users] How to get total on-disk backup size from the backups file? ( was Re: The total field in the backups file. )

2010-10-07 Thread Robin Lee Powell
On Thu, Oct 07, 2010 at 10:45:42AM -0700, Craig Barratt wrote:
 Robin,
 
  Is the format written up anywhere?
 
 Yes, it's in the documentation:
 
 http://backuppc.sourceforge.net/faq/BackupPC.html#storage_layout
 
 Scroll down to backups.

*blink*

I *swear* I looked.  -_-

Unfortunately, it doesn't really answer my question.

I *think* to get the amount of on-disk space used by each backup, I
want size minus sizeExist.  Anyone have knowledge to the contrary?
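
A throwaway sketch of that guess -- summing column 6 (size) minus
column 8 (sizeExist), 1-indexed and tab-separated.  The column
positions are my reading of the storage-layout docs, so treat them as
an assumption, not gospel:

```shell
# Hedged sketch: approximate on-disk GiB for one host from its
# pc/<host>/backups file as sum(size - sizeExist).  Columns 6 and 8
# (1-indexed) are assumed to be size and sizeExist per the 3.x docs.
ondisk_gib() {
    awk -F'\t' '{ total += $6 - $8 } END { printf "%.2f\n", total / 1024 / 1024 / 1024 }'
}

# usage: ondisk_gib < /backups/pc/somehost/backups
```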

-Robin



Re: [BackupPC-users] What's using all this disk space?

2010-10-05 Thread Robin Lee Powell
On Mon, Oct 04, 2010 at 03:32:23PM -0400, Timothy J Massey wrote:
 Robin Lee Powell rlpow...@digitalkingdom.org wrote on 10/04/2010
 03:28:23 PM:
 
  On Mon, Oct 04, 2010 at 03:25:03PM -0400, Timothy J Massey
  wrote:
   Robin Lee Powell rlpow...@digitalkingdom.org wrote on
   10/04/2010 03:15:29 PM:
   
How do I find out which backups are using a lot of disk?
We'd like to see if there's a problem with our retention
policy, especially on database servers, but I've no insight
at all into where all this disk is *going*.

Anyone got a script for this?
   
   I don't have a script for this, but if you look at the host
   page for each server, examine the New Files section.  This
   will tell you which backups are consuming a lot of space (i.e.
   aren't pooling well).
  
  We have 200+ servers getting backed up on here.  :)
 
 Well, then, you'll want to parse the pc/hostname/backups file.
 The 9th (New Files Count) and 10th (New Files Size) field (AFAICT)
 are what you're looking for.
 
 Sorry, no script.

I've got one.  Attached.  Specialized for our environment, not
productionalized or anything, but it works.

-Robin

#!/usr/bin/perl


use strict;
use warnings;
use Data::Dumper;

chdir '/backups/pc';

opendir(DIR, '.') || die "can't opendir: $!";
my @dirs = grep { ! /^\./ } readdir(DIR);
closedir DIR;

my %values;

foreach my $dir (@dirs) {
    open( BACKUPS, "$dir/backups" ) || do {
        print "Could not open file /backups/pc/$dir/backups\n";
        next;
    };

    my $total_total=0;
    my $new_total=0;
    my $num_backups=0;
    while( <BACKUPS> ) {
        my @fields = split( /\t/ );

        if( $fields[5] =~ /^\d+$/ && $fields[9] =~ /^\d+$/ ) {
            # The total size for this backup
            $total_total += $fields[5];
            # Minus what was already there
            $total_total -= $fields[7];
            # The new/additional size for this backup
            $new_total += $fields[9];
            $num_backups++;
        }
    }
    close( BACKUPS );

    $values{$dir} = {
        total => $total_total,
        new => $new_total,
        num => $num_backups,
    }
}

print q{
Backup Name                      Total Size        Total New Size    Number Of Backups   Average New Size Per Backup
};

foreach my $key (sort { $values{$b}->{total} <=> $values{$a}->{total} } keys %values) {
#   printf( "Backup $key has %20.2d GiB of backups on disk total.\n", ( ($values{$key}->{total}) / 1024 / 1024 / 1024 ) );
    my $total_gib=( ($values{$key}->{total}) / 1024 / 1024 / 1024 );
    my $new_gib=( ($values{$key}->{new}) / 1024 / 1024 / 1024 );
    my $num_backups=$values{$key}->{num};
    my $avg_new_mib=( ( ($values{$key}->{new}) / $num_backups ) / 1024 / 1024 / 1024 );
format STDOUT =
@<<<<<<<<<<<<<<<<<<<<<<<<<<<<  @########.## GiB  @########.## GiB  @################  @####.#### GiB
$key, $total_gib, $new_gib, $num_backups, $avg_new_mib
.
    write;
}

--
Beautiful is writing same markup. Internet Explorer 9 supports
standards for HTML5, CSS3, SVG 1.1, ECMAScript5, and DOM L2 & L3.
Spend less time writing and  rewriting code and more time creating great
experiences on the web. Be a part of the beta today.
http://p.sf.net/sfu/beautyoftheweb
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] The total field in the backups file.

2010-10-05 Thread Robin Lee Powell

I couldn't find anything on the wiki describing the fields of that
file, so:

If I want to know the total on-disk space used by a particular host,
do I want to add up the total column in the backups file (i.e.
field 6, counting from 1) or the total column minus the existing
column (column 8), or something else?

Both ways seem to give me incorrect results, frankly, but with
300GiB unaccounted for I suppose that's to be expected.

Thanks.

-Robin

-- 
http://singinst.org/ :  Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which this parrot
is dead is ti poi spitaki cu morsi, but this sentence is false
is na nei.   My personal page: http://www.digitalkingdom.org/rlp/



Re: [BackupPC-users] copying the pool

2010-10-04 Thread Robin Lee Powell
On Mon, Oct 04, 2010 at 08:56:49AM -0400, Chris Purves wrote:
 I recently copied the pool to a new hard disk following the
 Copying the pool instructions from the main documentation.  The
 documentation says to copy the 'cpool', 'log', and 'conf'
 directories using any technique and the 'pc' directory using
 BackupPC_tarPCCopy; however, there is no mention of what to do
 with the 'pool' directory.  I thought it might be created
 automatically when the nightly cleanup runs, but three days later
 and still no 'pool' directory.
 
 Is this an oversight in the documentation or is the 'pool'
 directory not needed?  I am using BackupPC 3.1.0.

Unless you have compression turned off, the pool directory should be
totally empty.

If you have compression turned off, the cpool directory should be
totally empty.

Whichever one is empty can be ignored.
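[Editorial note: a quick hedged sketch for checking which directory is
the unused one. The `find_empty_pool` name is made up; point the
argument at your BackupPC data directory, e.g. /var/lib/backuppc.]

```shell
# Report which of DIR/pool and DIR/cpool contains no files at all;
# per the advice above, the empty one is the unused pool and can be
# ignored when copying.
find_empty_pool() {
  for d in "$1/pool" "$1/cpool"; do
    [ -d "$d" ] || continue
    if [ -z "$(find "$d" -type f | head -n 1)" ]; then
      echo "${d##*/} is empty"
    fi
  done
}
# e.g.  find_empty_pool /var/lib/backuppc
```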

-Robin

-- 
http://singinst.org/ :  Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which this parrot
is dead is ti poi spitaki cu morsi, but this sentence is false
is na nei.   My personal page: http://www.digitalkingdom.org/rlp/



[BackupPC-users] What's using all this disk space?

2010-10-04 Thread Robin Lee Powell

OK, so, BackupPC says it's using 5590.63GB.  This number is rapidly
growing.  (It's all wrong; we're actually using 5848.6GB, but that's
not what this mail is about.)

How do I find out which backups are using a lot of disk?  We'd like
to see if there's a problem with our retention policy, especially on
database servers, but I've no insight at all into where all this
disk is *going*.

Anyone got a script for this?

-Robin

-- 
http://singinst.org/ :  Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which this parrot
is dead is ti poi spitaki cu morsi, but this sentence is false
is na nei.   My personal page: http://www.digitalkingdom.org/rlp/



Re: [BackupPC-users] What's using all this disk space?

2010-10-04 Thread Robin Lee Powell
On Mon, Oct 04, 2010 at 03:25:03PM -0400, Timothy J Massey wrote:
 Robin Lee Powell rlpow...@digitalkingdom.org wrote on 10/04/2010
 03:15:29 PM:
 
  How do I find out which backups are using a lot of disk?  We'd
  like to see if there's a problem with our retention policy,
  especially on database servers, but I've no insight at all into
  where all this disk is *going*.
  
  Anyone got a script for this?
 
 I don't have a script for this, but if you look at the host page
 for each server, examine the New Files section.  This will tell
 you which backups are consuming a lot of space (i.e. aren't
 pooling well).

We have 200+ servers getting backed up on here.  :)

Good to know where to look, though.

 Database servers have the same problem that mail servers have:
 large files that change each and every single day, and therefore
 consume their full amount of space for each backup you keep.

*nod*

-Robin

-- 
http://singinst.org/ :  Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which this parrot
is dead is ti poi spitaki cu morsi, but this sentence is false
is na nei.   My personal page: http://www.digitalkingdom.org/rlp/



Re: [BackupPC-users] What's using all this disk space?

2010-10-04 Thread Robin Lee Powell
On Mon, Oct 04, 2010 at 03:32:23PM -0400, Timothy J Massey wrote:
 Robin Lee Powell rlpow...@digitalkingdom.org wrote on 10/04/2010
 03:28:23 PM:
 
  On Mon, Oct 04, 2010 at 03:25:03PM -0400, Timothy J Massey
  wrote:
   Robin Lee Powell rlpow...@digitalkingdom.org wrote on
   10/04/2010 03:15:29 PM:
   
How do I find out which backups are using a lot of disk?
We'd like to see if there's a problem with our retention
policy, especially on database servers, but I've no insight
at all into where all this disk is *going*.

Anyone got a script for this?
   
   I don't have a script for this, but if you look at the host
   page for each server, examine the New Files section.  This
   will tell you which backups are consuming a lot of space (i.e.
   aren't pooling well).
  
  We have 200+ servers getting backed up on here.  :)
 
 Well, then, you'll want to parse the pc/hostname/backups file.
 The 9th (New Files Count) and 10th (New Files Size) field (AFAICT)
 are what you're looking for.

*Ooooh*.  I thought I had to write Perl to talk to the
server.  That's *easy*.

Thank you!
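[Editorial note: the parsing suggested above can be sketched as a small
shell function. This is a hedged sketch, not from the thread; it assumes
the field layout described there (tab-separated, 10th field = "New Files
Size" in bytes), and the `sum_new_gib` name is made up for illustration.]

```shell
# Sum the per-backup "new files" size (10th tab-separated field, assumed
# to be bytes) in a BackupPC 3.x pc/HOST/backups file, report GiB.
sum_new_gib() {
  awk -F'\t' '$10 ~ /^[0-9]+$/ { total += $10 }
              END { printf "%.2f GiB\n", total / 1024 / 1024 / 1024 }' "$1"
}
# e.g.  sum_new_gib /backups/pc/somehost/backups
```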

-Robin

-- 
http://singinst.org/ :  Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which this parrot
is dead is ti poi spitaki cu morsi, but this sentence is false
is na nei.   My personal page: http://www.digitalkingdom.org/rlp/



Re: [BackupPC-users] New release of BackupPC_deleteBackup - who can put it into the wiki?

2010-10-04 Thread Robin Lee Powell
On Sun, Dec 06, 2009 at 01:13:57AM +0100, Matthias Meyer wrote:
 Hi,
 
 I have a new release of the BackupPC_deleteBackup script.
 Unfortunately I can't put it into the wiki
 (http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=How_to_delete_backups).
 Jeffrey would do that but I didn't reach him via email :-(
 
 Anybody else here would put it into the wiki?

Did this ever get done?  I'm guessing not, given the last-modified
date.  Could you send the new version to the list in the meantime?

-Robin

-- 
http://singinst.org/ :  Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which this parrot
is dead is ti poi spitaki cu morsi, but this sentence is false
is na nei.   My personal page: http://www.digitalkingdom.org/rlp/



Re: [BackupPC-users] Troubles restoring via SSH Tunnel

2010-10-01 Thread Robin Lee Powell
On Fri, Oct 01, 2010 at 11:52:18AM +0200, Boniforti Flavio wrote:
 Hello list.
 
 I tried to put back on a remote server, via SSH tunnel and rsync, some
 files. What I got was this:

Can we get the complete config for this host?  And any global rsync
or ssh options?  And the contents of the tunnel script?

-Robin

-- 
http://singinst.org/ :  Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which this parrot
is dead is ti poi spitaki cu morsi, but this sentence is false
is na nei.   My personal page: http://www.digitalkingdom.org/rlp/



Re: [BackupPC-users] Add NAS to LWM

2010-09-30 Thread Robin Lee Powell
On Thu, Sep 30, 2010 at 06:55:18PM +0200, Leif Gunnar Einmo wrote:
 I have searched the forum for a solution, but couldn't find it :(
 
 I have a running BackupPC that has around 400 GB of data, 90%
 loaded.  Now I need to expand the storage with a NAS, as the
 server is full.  I have mounted the NAS as /mnt/nas over a Gb
 NIC and have access to the space there.  The BackupPC server is
 running on 6 * 146 GB disks in RAID on LVM.
 
 Can anyone help me expand this LVM with the space on the NAS,
 if possible?

You'll need a raw device to run pvcreate on, as far as I know.  IOW,
a SAN would work, but I don't think a NAS will?

-Robin

-- 
http://singinst.org/ :  Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which this parrot
is dead is ti poi spitaki cu morsi, but this sentence is false
is na nei.   My personal page: http://www.digitalkingdom.org/rlp/



Re: [BackupPC-users] Automated config reload?

2010-09-29 Thread Robin Lee Powell
On Tue, Sep 28, 2010 at 11:16:51PM -0700, Craig Barratt wrote:
 Robin writes:
 
  We add a lot of stuff automatically to our backuppc configs, and
  manually going into the UI and doing the config reload is easy
  to forgot.  Can it be done on the command line without breaking
  any backups (i.e. without restarting)?
 
 Run this command:
 
 INSTALLDIR/bin/BackupPC_serverMesg server reload

Yay!  Thanks.  Can someone update
http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=ServerMesg_commands
?  I don't seem to have access.
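[Editorial note: for unattended use, the `server reload` message above
can simply be scheduled. A hedged sketch; the install path, schedule,
and `backuppc` user are assumptions to adjust for your install.]

```
# Hypothetical crontab fragment: reload BackupPC's config nightly,
# after automated config generation runs, without restarting the daemon.
15 2 * * *  backuppc  /usr/local/BackupPC/bin/BackupPC_serverMesg server reload
```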

-Robin

-- 
http://singinst.org/ :  Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which this parrot
is dead is ti poi spitaki cu morsi, but this sentence is false
is na nei.   My personal page: http://www.digitalkingdom.org/rlp/



[BackupPC-users] rsync --fuzzy ?

2010-09-29 Thread Robin Lee Powell

rsync --fuzzy seems to break BackupPC; fileListReceived breaks.
Could that be fixed easily?

-Robin

-- 
http://singinst.org/ :  Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which this parrot
is dead is ti poi spitaki cu morsi, but this sentence is false
is na nei.   My personal page: http://www.digitalkingdom.org/rlp/



Re: [BackupPC-users] rsync --fuzzy ?

2010-09-29 Thread Robin Lee Powell
On Wed, Sep 29, 2010 at 03:47:25PM -0700, Craig Barratt wrote:
 Robin writes:
 
  rsync --fuzzy seems to break BackupPC; fileListReceived
  breaks. Could that be fixed easily?
 
 Sorry, but no.  In 4.x I hope to implement a solution that will be
 better than even --fuzzy, although it might not be included in the
 initial release.  Since 4.x uses (almost) native rsync on the
 server side, --fuzzy should work, although the performance might
 not be very good.

That's great to hear.  Thank you!

-Robin

-- 
http://singinst.org/ :  Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which this parrot
is dead is ti poi spitaki cu morsi, but this sentence is false
is na nei.   My personal page: http://www.digitalkingdom.org/rlp/



[BackupPC-users] Automated config reload?

2010-09-28 Thread Robin Lee Powell

We add a lot of stuff automatically to our backuppc configs, and
manually going into the UI and doing the config reload is easy to
forget.  Can it be done on the command line without breaking any
backups (i.e. without restarting)?

-Robin

-- 
http://singinst.org/ :  Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which this parrot
is dead is ti poi spitaki cu morsi, but this sentence is false
is na nei.   My personal page: http://www.digitalkingdom.org/rlp/



Re: [BackupPC-users] Restore takes way too long and the fails

2010-09-23 Thread Robin Lee Powell
On Thu, Sep 23, 2010 at 05:52:26PM +0200, Marcus Hardt wrote:
 [..]
 
  I think it does the basic permissions that map to unix
  equivalents.  It doesn't preserve acls, nor does it have any way
  to work around the existing ones - so you may have files that
  you can read in the backups but can't write back over the
  existing copy
 
 Right. There might already be files present from the image
 restoration done in an earlier step.
 
 
 Would something like this work:
 
 1: Restore a half-year-old image using dd (for the partition table
 and MBR's sake)
 2: Mount it
 3: rm -rf it
 4: Copy the backup
 
 Or would this kill the windows installation at some point?

0.o

I really don't think that would work.

The big thing here is that you *can't modify open files in Windows*.
That includes all of the system libraries.  This is probably the
source of a lot of your trouble.

So you can't rm -rf the OS (and even if you could, yes, everything
would break as soon as you hit the wrong library).

It sounds like you're trying to restore the Windows *OS*, rather
than just the data.  This strikes me as a very bad idea.  Install
the OS normally, and restore just the data files.

-Robin

-- 
http://singinst.org/ :  Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which this parrot
is dead is ti poi spitaki cu morsi, but this sentence is false
is na nei.   My personal page: http://www.digitalkingdom.org/rlp/



Re: [BackupPC-users] Jeff, script question (was Re: How to run night manually?)

2010-09-23 Thread Robin Lee Powell

Another question:  The script, running with -c, failed eventually
at this line:

my $err = $bpc->ServerConnect($Conf{ServerHost}, $Conf{ServerPort});

My serverport is set to -1; does it need to be set for this to work?
Could you not just call BackupPC_ServerMesg instead?  Perhaps I
should hack my copy locally to do that...

-Robin

-- 
http://singinst.org/ :  Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which this parrot
is dead is ti poi spitaki cu morsi, but this sentence is false
is na nei.   My personal page: http://www.digitalkingdom.org/rlp/



Re: [BackupPC-users] Restore takes way too long and the fails

2010-09-22 Thread Robin Lee Powell
There is something *very* wrong with either the tar used to make the
archive, or the tar used to restore.  I wouldn't trust anything it
outputs at all.

What version of tar on both ends?

Have you tried getting a zip archive from the GUI instead?  Or using
BackupPC_zipCreate on the CLI?

-Robin

On Tue, Sep 14, 2010 at 03:22:28PM +0200, Marcus Hardt wrote:
 Update:
 
 On Tuesday 14 September 2010 13:16:01 Marcus Hardt wrote:
  Update:
  
  tar xf restore.tar  will fail if restore.tar is pretty big
 
 fails
 
  cat restore.tar | tar x   seems to work
 
 fails
 
 But:
 using the 'i' option for 
  -i, --ignore-zeros
ignore zeroed blocks in archive (means EOF)
 
 makes tar wander through the archive even though it might have detected EOF 
 markers (i.e. two consecutive zero-filled records according to the 
 wikipedia page of the tar format)
 
 I observed several warnings in my cmdline:
 
   tar tfi restore.tar |wc -l
 tar: Skipping to next header
 tar: Skipping to next header
 tar: Skipping to next header
 tar: Skipping to next header
 tar: Skipping to next header
 tar: Skipping to next header
 tar: Skipping to next header
 tar: Skipping to next header
 tar: Exiting with failure status due to previous errors
 387781
 
 I can only hope this works and helps others.
 
 M.
 
 
  And I thought windows was terrible...
  
  M.
  
  On Monday 13 September 2010 23:26:42 Les Mikesell wrote:
   On 9/13/2010 10:49 AM, Marcus Hardt wrote:
Hi,

btw:  this problem seems to be client unspecific. I see the same
errors using smbclient and rsync via ssh.
   
   But windows specific?  Are you sure the windows user has write access
   and the file isn't locked by something else having it open?
   
And, of course I'm in deep shit now, since I told everone how super
great backuppc was...
   
   There is at least the option of downloading an archive file through a
   browser and restoring from that.
 -- 
 M.
 

-- 
http://singinst.org/ :  Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which this parrot
is dead is ti poi spitaki cu morsi, but this sentence is false
is na nei.   My personal page: http://www.digitalkingdom.org/rlp/
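[Editorial note: the `--ignore-zeros` behavior discussed in the quoted
message can be demonstrated directly. A hedged sketch using plain tar
on throwaway files, behavior as in GNU tar: two archives concatenated
back to back, where plain `tar -t` stops at the first end-of-archive
marker (two zeroed blocks) while `tar -ti` reads past it.]

```shell
# Build two one-member tar archives and concatenate them.
tmp=$(mktemp -d)
echo one > "$tmp/a"
echo two > "$tmp/b"
tar -C "$tmp" -cf "$tmp/1.tar" a
tar -C "$tmp" -cf "$tmp/2.tar" b
cat "$tmp/1.tar" "$tmp/2.tar" > "$tmp/both.tar"

tar -tf  "$tmp/both.tar" | wc -l   # stops at the first EOF marker: 1 member
tar -tif "$tmp/both.tar" | wc -l   # --ignore-zeros reads both: 2 members
```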



Re: [BackupPC-users] Jeff, script question (was Re: How to run night manually?)

2010-09-22 Thread Robin Lee Powell
On Wed, Sep 22, 2010 at 02:31:29AM -0400, Jeffrey J. Kosowsky wrote:
 Honestly, I never really looked into whether it enforces the
 constraints you mention above. But looking at the code, it seems
 that these constraints are enforced only by the BackupPC main
 routine (i.e. daemon) itself (which then after checking such
 constraints uses the same server messaging system to actually call
 the BackupPC_nightly routine). Therefore my standalone routine
 would NOT enforce such constraints.

*Fantastic*.  Thank you!  That's exactly what I wanted.

-Robin

-- 
http://singinst.org/ :  Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which this parrot
is dead is ti poi spitaki cu morsi, but this sentence is false
is na nei.   My personal page: http://www.digitalkingdom.org/rlp/


