Re: [BackupPC-users] Stop TrashClean / Return directories backup list

2013-03-20 Thread backuppc
Holger Parplies wrote at about 03:50:34 +0100 on Thursday, March 21, 2013:
 > Ideally, one of us would write "BackupPC_ls" for you, so you could list
 > hosts and backups from the command line. Jeffrey, are there any
 > volunteers? ;-)
 > Actually, I seem to have written something vaguely similar in 2007 ... a
 > quick hack it was then, and it still is now, and it's almost certainly
 > untested with BackupPC 3.x, but you could give it a try. Should list the
 > hosts and backups BackupPC is aware of. Seems non-destructive enough, in
 > any case.

I tend to just manually 'read' the 'backups' file for each
host... it's plain text with tab-separated values... and I pretty much
know what fields I am interested in... (alternatively, I just look at
the numerically-named subdirectories if I just want to know what
backups I have)... so who needs anything fancy? :P
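For anyone who wants the same thing scripted, here is a minimal sketch along those lines. The path and the field positions are assumptions based on BackupPC 3.x (check the BackupFields list in BackupPC::Lib for your version):

```shell
#!/bin/sh
# Summarize a per-host 'backups' file: tab-separated, one row per backup.
# Assumed field positions (BackupPC 3.x): 1=num 2=type 3=startTime 6=size(bytes)
summarize_backups() {
    awk -F'\t' '{
        printf "backup %s (%s) %.0f MB\n", $1, $2, $6 / 1048576
    }' "$1"
}

# e.g.: summarize_backups /backup/pc/somehost/backups
```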

--
Everyone hates slow websites. So do we.
Make your web apps faster with AppDynamics
Download AppDynamics Lite for free today:
http://p.sf.net/sfu/appdyn_d2d_mar
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Stop TrashClean / Return directories backup list

2013-03-20 Thread Adam Goryachev
On 21/03/13 12:28, Phil Kennedy wrote:
> On 3/20/2013 9:12 PM, Holger Parplies wrote:
>> I've had that happen (except that I noticed before a drive broke) at least
>> once, and I remember that Les has also. From what I remember of his
>> explanation (please correct me if I'm wrong), two physical disks concurrently
>> positioning their heads can disturb each other (through vibration) in such a
>> way that one of them returns a read or write error and is kicked out of the
>> array without the drive actually being in any way defective. I *would*
>> consider this a shortcoming of Linux software RAID-1.
>>
>> As Adam wrote, you can easily monitor that. It still is a nuisance, though.
> As an aside, i've seen drives in other backuppc / software RAID 
> instances fail for no good reason, to the point that they pass long 
> smartctl test, yet mdadm is still convinced that the drive is bad. 
> Perhaps the vibration issue you've described was the culprit then?

This is perhaps getting a little off-topic for this list, but if you are
interested in these issues, I would suggest the linux-raid list has a
lot of very knowledgeable people with a lot to say about these sorts of
problems.
As just one possible explanation: you may be using "cheap" (desktop)
drives without properly configuring them.
That is, if the drive has a problem reading a sector, it will keep
trying to read it (try really hard). What usually happens is that the
controller or Linux driver times out while waiting, asks the drive to
reset, etc. etc... eventually it decides the drive is not responding
(because it is still trying to read the sector it had a problem with),
and so it is kicked from the array as a failed drive. There are two ways
to resolve this: either tell the drive to give up on a failed read much
more quickly (usually about 7 seconds or less), or tell Linux not to be
so impatient and to wait much longer (a number of minutes) for the drive
to return the failed read. From memory, the former works if the drive
supports ERC (Error Recovery Control). On "RAID" or "Enterprise" drives,
the default is usually to time out a failed read within a few seconds,
because the RAID can then simply read that data from another drive.
Linux software RAID will notice the read failure and attempt to re-write
the failed sector using data from the other drives. The sector will
either re-write successfully, or be transparently relocated by the
drive. Only if the write also fails is the drive kicked from the array.
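For reference, a hedged sketch of those two mitigations. The device name and values are examples, and not every drive supports SCT ERC, so check yours first:

```shell
# Option 1: tell the drive to give up on a failing sector after 7 seconds.
# ERC values are in tenths of a second: 70 = 7.0 s (read,write).
smartctl -l scterc,70,70 /dev/sda

# Option 2: for desktop drives with no ERC support, instead make the kernel
# wait longer than the drive's internal retries before resetting it (180 s):
echo 180 > /sys/block/sda/device/timeout
```

Neither setting survives a reboot on most systems, so they usually go in a boot script.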

Search for keywords like URE (Unrecoverable Read Error) or ERC/TLER, or
just check the linux-raid mailing list; an email about this issue turns
up frequently.

I've *never* had drives being randomly kicked from an array except where
either the above was happening or there were SATA driver issues. In any
case, with proper monitoring, this is almost a non-event.

I'm not suggesting this was your issue, nor anybody else's; I'm just
suggesting that it appears to be a much more common cause of perfectly
good drives being randomly kicked from a RAID array than "vibration"
issues.

Regards,
Adam

-- 
Adam Goryachev
Website Managers
www.websitemanagers.com.au




Re: [BackupPC-users] Stop TrashClean / Return directories backup list

2013-03-20 Thread Holger Parplies
Hi,

Phil Kennedy wrote on 2013-03-20 21:28:52 -0400 [Re: [BackupPC-users] Stop 
TrashClean / Return directories backup list]:
> On 3/20/2013 9:12 PM, Holger Parplies wrote:
> > [...]
> That's very odd, but a possibility. My thought was that the mdadm.conf 
> was rebuilt when the Promise array was brought online, and the previous 
> admin simply omitted the old RAID confs for sda and sdb. From that point 
> on, sda booted as a normal drive rather than a RAID member and no one 
> was the wiser.

I'm not sure under which conditions the kernel automatically assembles an
array. Concerning the root FS, the kernel command line parameter should give a
reasonable hint. It's either 'root=/dev/md0' (or similar), then booting should
fail if it's not (and can't be) assembled, or it's 'root=/dev/sda1', which
should be a problem even *if* the array is assembled :-).

> As an aside, i've seen drives in other backuppc / software RAID 
> instances fail for no good reason, to the point that they pass long 
> smartctl test, yet mdadm is still convinced that the drive is bad. 
> Perhaps the vibration issue you've described was the culprit then?

In my case, I could simply re-add the device to the array. It wouldn't be
re-added automatically in any case.

> > Do the *hosts* show up in the web interface? If not, look at your hosts file
> > (/etc/BackupPC/hosts or something like that). If so, it could be
> They do now. One of the first things I did was to rebuild the hosts file 
> based on the information in /backup/pc.

What I meant is: were you describing an *empty* host page (i.e. host page with
no backups) or a *missing* host page (i.e. host unknown to BackupPC)?

> > That should be unnecessary as long as BackupPC is not running. Err, does the
> > web interface work without a running BackupPC daemon?
> 
> No, backuppc doesn't work without the daemon running.

That's funny. I consider "BackupPC" to be the daemon, not the web interface :-).

> The web interface 
> makes troubleshooting a little easier for me, especially since I have a 
> number of hosts to verify.

Yes, I can see that. I wouldn't want to interpret backup data from the command
line either.

> I can change the config files from terminal, but the web just makes it far
> prettier / simpler.

The thing is, it needs to work. Perhaps the daemon's understanding of which
backups exist and which don't is just fine (*), and it's only the web interface
that has problems. But since either a problem with BackupPC or a problem with
the web interface would make the web interface fail, you would have to debug
both at once rather than one at a time. Ideally, one of us would write
"BackupPC_ls" for you, so
you could list hosts and backups from the command line. Jeffrey, are there any
volunteers? ;-)
Actually, I seem to have written something vaguely similar in 2007 ... a quick
hack it was then, and it still is now, and it's almost certainly untested with
BackupPC 3.x, but you could give it a try. Should list the hosts and backups
BackupPC is aware of. Seems non-destructive enough, in any case.

Regards,
Holger

(*) Actually, the daemon doesn't care about existing backups; I should
probably write BackupPC::Lib instead. The web interface uses that, too,
but there's additionally the issue of the correct UID and perhaps
SELinux implications.
#!/usr/bin/perl --  -*- quick-hack -*-
#
# generate CSV output (actually, I'll use "|" as separator) for something
# like:
# hostname|date/time|size in MB|level|duration(min)
# almost as requested by John Rouillard.
# You might also want the backup number. See the comment below.

use lib '/usr/share/backuppc/lib'; # change to match your installation
use BackupPC::Lib;
use POSIX;

my $bpc = new BackupPC::Lib ('', '', '', 0)
  or die "Can't create BackupPC object!\n";
my @hosts; # array of hosts
my $hostinfo;  # pointer to hash of per host information
my @backups;   # info on all backups of one host
my $dt;# output fields for loop iteration: date/time
my $size;  # ... size
my $level; # ... level
my $duration;  # ... duration in minutes

$hostinfo = $bpc->HostInfoRead ();
@hosts = sort keys %$hostinfo;
# print 'hosts =>', (join '<, >', @hosts), "<=\n";

host:
foreach my $host (@hosts) {
  @backups = $bpc -> BackupInfoRead ($host)
or warn "Invalid hostname '$host' or other error!\n";
  foreach my $backup (@backups ) {
#  [-1]   <- add that in the line above for only
#the most recent backup of each host
# exploring the data structure:
# print "$host=>", join (',', map { "$_=$backup->{$_}" } sort keys %$backup), "<=\n";
    $dt   = POSIX::strftime ('%Y-%m-%d %H:%M',
                             localtime $backup -> {startTime});
    $size = int ($backup -> {size} / 1024 / 1024 + 0.5); # MB, rounded
    $level    = $backup -> {level};
    # the archived message breaks off above; the remaining lines are an
    # obvious reconstruction: duration in minutes, then one record per backup
    $duration = int (($backup -> {endTime} - $backup -> {startTime}) / 60 + 0.5);
    print join ('|', $host, $dt, $size, $level, $duration), "\n";
  }
}

Re: [BackupPC-users] Stop TrashClean / Return directories backup list

2013-03-20 Thread Phil Kennedy
On 3/20/2013 9:12 PM, Holger Parplies wrote:
> Hi,
>
>
> I've had that happen (except that I noticed before a drive broke) at least
> once, and I remember that Les has also. From what I remember of his
> explanation (please correct me if I'm wrong), two physical disks concurrently
> positioning their heads can disturb each other (through vibration) in such a
> way that one of them returns a read or write error and is kicked out of the
> array without the drive actually being in any way defective. I *would*
> consider this a shortcoming of Linux software RAID-1.
>
> As Adam wrote, you can easily monitor that. It still is a nuisance, though.

That's very odd, but a possibility. My thought was that the mdadm.conf 
was rebuilt when the Promise array was brought online, and the previous 
admin simply omitted the old RAID confs for sda and sdb. From that point 
on, sda booted as a normal drive rather than a RAID member and no one 
was the wiser.

As an aside, i've seen drives in other backuppc / software RAID 
instances fail for no good reason, to the point that they pass long 
smartctl test, yet mdadm is still convinced that the drive is bad. 
Perhaps the vibration issue you've described was the culprit then?
>
> Do the *hosts* show up in the web interface? If not, look at your hosts file
> (/etc/BackupPC/hosts or something like that). If so, it could be
They do now. One of the first things I did was to rebuild the hosts file 
based on the information in /backup/pc.

>
> That should be unnecessary as long as BackupPC is not running. Err, does the
> web interface work without a running BackupPC daemon?

No, backuppc doesn't work without the daemon running. The web interface 
makes troubleshooting a little easier for me, especially since I have a 
number of hosts to verify. I can change the config files from terminal, 
but the web just makes it far prettier / simpler.
> Yes. As Jeffrey wrote, there should be a file "backupInfo" in the directory
> which should contain the information you need (including the correct backup
> number).
>
> Hope that helps.
It does, thank you.
~Phil




Re: [BackupPC-users] Stop TrashClean / Return directories backup list

2013-03-20 Thread backuppc
Holger Parplies wrote at about 02:12:16 +0100 on Thursday, March 21, 2013:
 > That should be unnecessary as long as BackupPC is not running. Err, does the
 > web interface work without a running BackupPC daemon?
 > 
I don't believe it does... but it's been a while since I tried that...



Re: [BackupPC-users] Stop TrashClean / Return directories backup list

2013-03-20 Thread Holger Parplies
Hi,

Phil Kennedy wrote on 2013-03-20 19:41:07 -0400 [Re: [BackupPC-users] Stop 
TrashClean / Return directories backup list]:
> [...]
> The OS drives were supposed to have been configured as a software RAID 1 
> but (and for the life of me, I cannot figure out how this could happen 
> aside from malice or gross incompetence) the secondary drive (/dev/sdb) 
> apparently hadn't synced with the primary drive (/dev/sda) since August 
> of 2009.

I've had that happen (except that I noticed before a drive broke) at least
once, and I remember that Les has also. From what I remember of his
explanation (please correct me if I'm wrong), two physical disks concurrently
positioning their heads can disturb each other (through vibration) in such a
way that one of them returns a read or write error and is kicked out of the
array without the drive actually being in any way defective. I *would*
consider this a shortcoming of Linux software RAID-1.

As Adam wrote, you can easily monitor that. It still is a nuisance, though.

> Now, the folders within the directories are there. There are directories 
> under /backup/pc/hostname/ but those directories do not show in the menu 
> when you try to browse via the web interface.

Do the *hosts* show up in the web interface? If not, look at your hosts file
(/etc/BackupPC/hosts or something like that). If so, it could be

* corrupt backups file (/backup/pc/hostname/backups)
* incorrect setting of $Conf {TopDir}
* SELinux problems
* ownership of /backup - if your /etc/passwd is ancient, maybe the UID of
  the backup user was changed for some reason?
* web server setup?
* the BackupPC_Admin script is not setuid backuppc
* patched BackupPC scripts (patches done after August 2009) - though your
  BackupPC version does suggest it must have been stored somewhere unaffected
* ...
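To illustrate, some hedged one-liners for checking several of these. The paths are guesses based on this system's layout and a typical install; adjust them for yours:

```shell
# Is $Conf{TopDir} really /backup?
grep -i topdir /etc/BackupPC/config.pl

# Is the per-host backups file present and readable (tab-separated)?
head -1 /backup/pc/hostname/backups

# Do the numeric UIDs on the pool match the backup user?
ls -ldn /backup /backup/pc
id backuppc

# Is SELinux enforcing?
getenforce 2>/dev/null
```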

> I'll set TrashCleanSleepSec to a ridiculous number as was suggested. 

That should be unnecessary as long as BackupPC is not running. Err, does the
web interface work without a running BackupPC daemon?

> If I can ID which system the data 
> came from, I can probably just move the data back under its host 
> directory, correct? under ../trash the directories are named something 
> like 1363794493_24518_0, if I move them under ../pc/hostname/ and give 
> them a name like 100, it should show as backup 100, correct? (assuming 
> all permissions are correct?)

Yes. As Jeffrey wrote, there should be a file "backupInfo" in the directory
which should contain the information you need (including the correct backup
number).

Hope that helps.

Regards,
Holger



Re: [BackupPC-users] Stop TrashClean / Return directories backup list

2013-03-20 Thread backuppc
Phil Kennedy wrote at about 19:41:07 -0400 on Wednesday, March 20, 2013:
 > Self-replying to add a little detail here;
 > 
 > The system that failed is a Red Hat Enterprise system (5.9 at the 
 > moment.) The system has the backuppc (3.2.0) pool living on a Promise 
 > vTrak system in a Software RAID 10 (softlinked to a partition called 
 > /backup). That part is fine.
 > 
 > The OS drives were supposed to have been configured as a software RAID 1 
 > but (and for the life of me, I cannot figure out how this could happen 
 > aside from malice or gross incompetence) the secondary drive (/dev/sdb) 
 > apparently hadn't synced with the primary drive (/dev/sda) since August 
 > of 2009. Literally everything (passwd, group, grub, fstab, the works) 
 > there was almost four years old. Unfortunately for me, the primary drive 
 > failed taking the current (though supposedly, mirrored) configs with it. 
 > The system obviously has undergone a great deal of expansion and 
 > tweaking in the interim. Thus, the config files in /etc/backuppc were 
 > essentially the defaults.
 > 
 > Now, the folders within the directories are there. There are directories 
 > under /backup/pc/hostname/ but those directories do not show in the menu 
 > when you try to browse via the web interface.

As I pointed out (and I believe Holger also concurs), this does not
seem to be consistent with the known workings of BackupPC -- unless
there is a permissions or SELinux issue, or unless perhaps the
'backups' file is missing, corrupted, or inconsistent with the actual
backups present. Or perhaps TopDir doesn't point to /backup (that is
not the 'standard' location, and perhaps your config file doesn't
point there).

Rather than simply restating what you said before, it might actually
be helpful to give some detail of what actually is in
/backup/pc/hostname and what shows up in the web interface. For
example, do any of the backups show up at all? Is the 'backups' file
there? The backupInfo files? The share subdirectories? What if you run
a new backup (or even a small test backup)? Does it show up in the web
interface?

 > I'll set TrashCleanSleepSec to a ridiculous number as was suggested. 
 > Obviously, once I realized that the system may have been eating or had 
 > eaten data, I stopped the backuppc service. There is some data currently 
 > in /backup/trash (though Murphy's law says it won't be any of the more 
 > important data that may be missing). If I can ID which system the data 
 > came from, I can probably just move the data back under its host 
 > directory, correct? under ../trash the directories are named something 
 > like 1363794493_24518_0, if I move them under ../pc/hostname/ and give 
 > them a name like 100, it should show as backup 100, correct? (assuming 
 > all permissions are correct?)

You would also need to run BackupPC_fixupBackupSummary as Holger
pointed out...
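Put together, the recovery might look something like this sketch. The hostname, backup number, trash directory name and install path are examples from this thread, not known-good values:

```shell
# Move a trash directory back under its host and give it a backup number:
cd /backup/pc/hostname
mv ../../trash/1363794493_24518_0 100
chown -R backuppc:backuppc 100

# backupInfo inside the directory records the original backup number etc.:
cat 100/backupInfo

# Then rebuild the per-host backups file, running as the backuppc user:
su -s /bin/sh backuppc -c \
    '/usr/share/backuppc/bin/BackupPC_fixupBackupSummary hostname'
```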

 > Thanks for the pointers. This event has furthered my belief that 
 > software RAID is crap.
 > ~Phil

If you really want to solve your problem, I would suggest doing some
more troubleshooting on your end and providing details so people can
do more than guess at what the root problem may be...


 > On 3/20/2013 12:37 PM, Phil Kennedy wrote:
 > > Hi,
 > > I recently had a somewhat odd system failure (poorly configured software
 > > RAID) that led to a *very* old set of BackupPC config files being
 > > loaded. On one windows machine (possibly more), the default SMB share
 > > was reset to C$ instead of E$. The full count keep, plus the min keep
 > > values were also set lower than we wanted them. BackupPC naturally
 > > marked all the old E$ directories as trash, and has removed them from
 > > the browse backup list.
 > >
 > > The good news (besides the fact that the previous admin of this box is
 > > several states away and out of arm's reach) is that the backup
 > > directories and their data still exist under the
 > > /var/lib/backuppc/pool/hostname/backup number/ directory. They just
 > > don't show up when you browse the backups. My question is twofold:
 > >
 > > 1. How do I get the directories back in the list? (I'm assuming this
 > > involves one of the .rrd files?) Again, the data *is* there, it's just
 > > not web accessible.
 > >
 > > 2. How can I tell TrashClean to take a couple days vacation while I sort
 > > out the consistency of the other 130 machines?
 > >
 > > Thanks,
 > > ~Phil
 > >

Re: [BackupPC-users] Stop TrashClean / Return directories backup list

2013-03-20 Thread Phil Kennedy
On 3/20/2013 7:57 PM, Adam Goryachev wrote:
> On 21/03/13 10:41, Phil Kennedy wrote:
>> Self-replying to add a little detail here;
>>
>> The system that failed is a Red Hat Enterprise system (5.9 at the
>> moment.) The system has the backuppc (3.2.0) pool living on a Promise
>> vTrak system in a Software RAID 10 (softlinked to a partition called
>> /backup). That part is fine.
>>
>> The OS drives were supposed to have been configured as a software RAID 1
>> but (and for the life of me, I cannot figure out how this could happen
>> aside from malice or gross incompetence) the secondary drive (/dev/sdb)
> gross incompetence on whose part? The sysadmin that left some time ago,
> or the sysadmin that has been responsible for the system since the
> previous admin left?

In this case, the previous admin has been gone less than a year. As the 
year has progressed, I've found that a lot of the stuff he "documented", 
he did by dumping out a stream of consciousness the day he put things 
together, and then never updated it as the system grew. The longer I go, 
the more things I find that were ticking time bombs waiting to go off.
>> apparently hadn't synced with the primary drive (/dev/sda) since August
>> of 2009. Literally everything (passwd, group, grub, fstab, the works)
>> there was almost four years old. Unfortunately for me, the primary drive
>> failed taking the current (though supposedly, mirrored) configs with it.
>> The system obviously has undergone a great deal of expansion and
>> tweaking in the interim. Thus, the config files in /etc/backuppc were
>> essentially the defaults.
>>
>> Now, the folders within the directories are there. There are directories
>> under /backup/pc/hostname/ but those directories do not show in the menu
>> when you try to browse via the web interface.
>>
>> I'll set TrashCleanSleepSec to a ridiculous number as was suggested.
>> Obviously, once I realized that the system may have been eating or had
>> eaten data, I stopped the backuppc service. There is some data currently
>> in /backup/trash (though Murphy's law says it won't be any of the more
>> important data that may be missing). If I can ID which system the data
>> came from, I can probably just move the data back under its host
>> directory, correct? under ../trash the directories are named something
>> like 1363794493_24518_0, if I move them under ../pc/hostname/ and give
>> them a name like 100, it should show as backup 100, correct? (assuming
>> all permissions are correct?)
> That sounds right to me, technically you should also rebuild the backups
> file, but this is not required to be able to browse the backup (in my
> experience).
>> Thanks for the pointers. This event has furthered my belief that
>> software RAID is crap.
> Interesting, in my opinion, it would further my belief that software
> RAID is fantastic. Hardware RAID usually uses different tools for each
> RAID controller brand (or model), which can be frustratingly difficult
> to get a proper current status from it. In addition, they often (in my
> experience) can have a failed drive without anyone becoming aware of it
> (no user close enough to hear the alarm, no alarm sounding, hard to get
> status, etc), and as always, if the controller fails you are SOL as far
> as getting a working system again.
>
> A simple cat /proc/mdstat would have shown the current status of your
> software RAID array, installing mdadm and configuring would allow it to
> automatically send you alert emails when any drive was missing from the
> array, etc.
We're using mdadm on the array holding the pool. Six months ago, that 
array failed when three drives failed, two of which the hardware RAID 
believed had failed at the same time. The previous admin believed 
(adamantly) that firmware updates were a waste of time. In this case, a 
year after the array was installed, Promise released firmware that would 
have mitigated the failure we suffered.
>
> Personally, I use a complete (free open source) monitoring system
> (www.xymon.com) with a plugin which will monitor my software raid array,
> this then alerts me via SMS of any failures. It may be overkill in your
> situation, but I would strongly suggest a minimum of mdadm to send
> emails on failure, though you should also consider other failures that
> might bite you in the future (such as backuppc dying and never running a
> backup, not discovered until you need to restore something, or many
> other possible issues). IMHO, a server which is not monitored is a
> disaster that you just don't know about yet.
>
> PS, that is not to say that you will never experience a disaster just
> because you monitor a system, there is always some new way things can
> break which the monitoring system did not test for, but these are much
> more rare, and once you experience them once, you can write the
> monitoring script for it (very easy with xymon).
>
> Regards,
> Adam

Previous admin had us using monitoring with Zenoss (with SNMP hardwar

Re: [BackupPC-users] Stop TrashClean / Return directories backup list

2013-03-20 Thread Adam Goryachev
On 21/03/13 10:41, Phil Kennedy wrote:
> Self-replying to add a little detail here;
>
> The system that failed is a Red Hat Enterprise system (5.9 at the 
> moment.) The system has the backuppc (3.2.0) pool living on a Promise 
> vTrak system in a Software RAID 10 (softlinked to a partition called 
> /backup). That part is fine.
>
> The OS drives were supposed to have been configured as a software RAID 1 
> but (and for the life of me, I cannot figure out how this could happen 
> aside from malice or gross incompetence) the secondary drive (/dev/sdb) 
gross incompetence on whose part? The sysadmin that left some time ago,
or the sysadmin that has been responsible for the system since the
previous admin left?
> apparently hadn't synced with the primary drive (/dev/sda) since August 
> of 2009. Literally everything (passwd, group, grub, fstab, the works) 
> there was almost four years old. Unfortunately for me, the primary drive 
> failed taking the current (though supposedly, mirrored) configs with it. 
> The system obviously has undergone a great deal of expansion and 
> tweaking in the interim. Thus, the config files in /etc/backuppc were 
> essentially the defaults.
>
> Now, the folders within the directories are there. There are directories 
> under /backup/pc/hostname/ but those directories do not show in the menu 
> when you try to browse via the web interface.
>
> I'll set TrashCleanSleepSec to a ridiculous number as was suggested. 
> Obviously, once I realized that the system may have been eating or had 
> eaten data, I stopped the backuppc service. There is some data currently 
> in /backup/trash (though Murphy's law says it won't be any of the more 
> important data that may be missing). If I can ID which system the data 
> came from, I can probably just move the data back under its host 
> directory, correct? under ../trash the directories are named something 
> like 1363794493_24518_0, if I move them under ../pc/hostname/ and give 
> them a name like 100, it should show as backup 100, correct? (assuming 
> all permissions are correct?)
That sounds right to me, technically you should also rebuild the backups
file, but this is not required to be able to browse the backup (in my
experience).
> Thanks for the pointers. This event has furthered my belief that 
> software RAID is crap.
Interesting; in my opinion, it would further my belief that software
RAID is fantastic. Hardware RAID usually uses different tools for each
RAID controller brand (or model), from which it can be frustratingly
difficult to get a proper current status. In addition, they often (in my
experience) can have a failed drive without anyone becoming aware of it
(no user close enough to hear the alarm, no alarm sounding, hard to get
status, etc.), and as always, if the controller fails you are SOL as far
as getting a working system again.

A simple cat /proc/mdstat would have shown the current status of your
software RAID array, and installing and configuring mdadm would have let
it automatically send you alert emails whenever any drive was missing
from the array, etc.
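As a minimal sketch of that (the mail address is a placeholder):

```shell
# Current status of all software RAID arrays at a glance:
cat /proc/mdstat
mdadm --detail /dev/md0

# In /etc/mdadm.conf (or /etc/mdadm/mdadm.conf on Debian):
#   MAILADDR root@example.com
# ...then run the monitor so a failed/missing drive triggers a mail:
mdadm --monitor --scan --daemonise
```

Most distributions ship an init script or systemd unit that starts the monitor for you once MAILADDR is set.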

Personally, I use a complete (free, open source) monitoring system
(www.xymon.com) with a plugin that monitors my software RAID array and
alerts me via SMS of any failures. It may be overkill in your situation,
but I would strongly suggest, at a minimum, configuring mdadm to send
emails on failure. You should also consider other failures that might
bite you in the future (such as backuppc dying and never running a
backup, not discovered until you need to restore something, or many
other possible issues). IMHO, a server which is not monitored is a
disaster that you just don't know about yet.

PS: that is not to say that you will never experience a disaster just
because you monitor a system; there is always some new way things can
break which the monitoring system did not test for. But these are much
rarer, and once you have experienced one, you can write a monitoring
script for it (very easy with xymon).

Regards,
Adam

-- 
Adam Goryachev
Website Managers
www.websitemanagers.com.au




Re: [BackupPC-users] Stop TrashClean / Return directories backup list

2013-03-20 Thread Phil Kennedy
Self-replying to add a little detail here;

The system that failed is a Red Hat Enterprise system (5.9 at the 
moment.) The system has the backuppc (3.2.0) pool living on a Promise 
vTrak system in a Software RAID 10 (softlinked to a partition called 
/backup). That part is fine.

The OS drives were supposed to have been configured as a software RAID 1 
but (and for the life of me, I cannot figure out how this could happen 
aside from malice or gross incompetence) the secondary drive (/dev/sdb) 
apparently hadn't synced with the primary drive (/dev/sda) since August 
of 2009. Literally everything (passwd, group, grub, fstab, the works) 
there was almost four years old. Unfortunately for me, the primary drive 
failed taking the current (though supposedly, mirrored) configs with it. 
The system obviously has undergone a great deal of expansion and 
tweaking in the interim. Thus, the config files in /etc/backuppc were 
essentially the defaults.

Now, the folders within the directories are there. There are directories 
under /backup/pc/hostname/ but those directories do not show in the menu 
when you try to browse via the web interface.

I'll set TrashCleanSleepSec to a ridiculous number as was suggested. 
Obviously, once I realized that the system may have been eating or had 
eaten data, I stopped the backuppc service. There is some data currently 
in /backup/trash (though Murphy's law says it won't be any of the more 
important data that may be missing). If I can identify which system the 
data came from, I can probably just move the data back under its host 
directory, correct? Under ../trash the directories are named something 
like 1363794493_24518_0; if I move them under ../pc/hostname/ and give 
them a name like 100, it should show as backup 100, correct? (assuming 
all permissions are correct?)
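The move described above can be sketched as follows, demonstrated on a throw-away directory tree (the host name, backup number, and paths are examples, not real values from this system):

```shell
# Throw-away stand-in for the real $TopDir (/backup in this case):
TOPDIR=$(mktemp -d)
HOST=somehost
mkdir -p "$TOPDIR/trash/1363794493_24518_0" "$TOPDIR/pc/$HOST"

NUM=100   # must not collide with an existing backup number under pc/$HOST
mv "$TOPDIR/trash/1363794493_24518_0" "$TOPDIR/pc/$HOST/$NUM"
ls "$TOPDIR/pc/$HOST"   # -> 100

# On the real pool you would also fix ownership, and then rebuild the
# 'backups' file from each backup's backupInfo (BackupPC 3.x ships
# BackupPC_fixupBackupSummary for exactly that):
#   chown -R backuppc:backuppc /backup/pc/$HOST/$NUM
#   su -s /bin/sh backuppc -c 'BackupPC_fixupBackupSummary'
```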

Thanks for the pointers. This event has furthered my belief that 
software RAID is crap.
~Phil


On 3/20/2013 12:37 PM, Phil Kennedy wrote:
> Hi,
> I recently had a somewhat odd system failure (poorly configured software
> RAID) that lead to a *very* old set of BackupPC config files being
> loaded. On one windows machine (possibly more), the default SMB share
> was reset to C$ instead of E$. The full count keep, plus the min keep
> values were also set lower than we wanted them. BackupPC naturally
> marked all the old E$ directories as trash, and has removed them from
> the browse backup list.
>
> The good news (besides the fact that the previous admin of this box is
> several states away and out of arms reach) is that the backup
> directories and their data still exist under the
> /var/lib/backuppc/pool/hostname/backup number/ directory. They just
> don't show up when you browse the backups. My question is two fold;
>
> 1. How do I get the directories back in the list? (I'm assuming this
> involves one of the .rrd files?) Again, the data *is* there, it's just
> not web accessible.
>
> 2. How can I tell TrashClean to take a couple days vacation while I sort
> out the consistency of other 130 machines?
>
> Thanks,
> ~Phil
>




Re: [BackupPC-users] Need guidance for backing up remote Windows PC

2013-03-20 Thread Holger Parplies
Hi,

Les Mikesell wrote on 2013-03-20 16:19:23 -0500 [Re: [BackupPC-users] Need 
guidance for backing up remote Windows PC]:
> On Wed, Mar 20, 2013 at 4:00 PM, Jeff Boyce  wrote:
> > [...]
> > Local Network
> >   Sequoia = Samba (and WINS server) and OpenVPN server (192.168.112.50)
> >   Taxa = DNSmasq (dns and dhcp server) (192.168.112.51)
> >   Bacteria = BackupPC server (192.168.112.52)
> >   Network IP = 192.168.112.0/24

ok.

> > Remote Windows Box
> >   Computer Name = jks-e6500
> >   Remote LAN IP = unknown
> >   Remote WAN IP = dynamic
> >   OpenVPN Common Name = jkssequoiaclient

None of these matter for the question at hand.

> >   OpenVPN IP = static, 10.9.8.10
> >   OpenVPN routed network

> [...]
> If you manage local dns you can add the target name with the VPN IP
> and everything should work the same as locally.  Alternatively, you
> could set ClientNameAlias to the VPN IP in the backuppc config.

In particular, you can choose whatever name for the client suits your
purposes. Usually, you will want to use just one name for one machine, but
since you've used a different one in the OpenVPN certificate, I thought I'd
mention it. The name in the certificate is really only used for selecting the
clients/ file (in OpenVPN), which usually defines the IP used. It does *not* 
magically set up some sort of name resolution for that name. I would have used
"jks-e6500" to match the host name, but it doesn't really make any difference.

Adding something like

10.9.8.10   jks-e6500

to a hosts-type file (/etc/hosts on the BackupPC server or better a hosts file
served by your DNSmasq server) should do the trick.

Talking of hosts files, the DHCP flag in BackupPC's hosts file should be 0 :-).
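For reference, a matching line in BackupPC's own hosts file (columns: host, dhcp flag, owner; the owner name here is a made-up example) would be:

```
jks-e6500   0   jeff
```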

> > My thinking is that since the remote Windows box can connect and browse the
> > Samba shares on Sequoia via the VPN, then obviously Samba knows how to
> > communicate with this remote client.

At the TCP level, the Samba server doesn't really need to know anything.
There's an incoming connection from an IP it can route reply packets to.
Fine. Samba itself might require more, in order to determine whether to
allow access or not. The remote machine might register itself with the
Samba WINS server. But it's the remote machine that initiates the connection.

> No, that's not entirely obvious unless the backuppc server is also the
> VPN server.   Sometimes VPN servers are configured to NAT to their
> ethernet interfaces to provide LAN connectivity for the remote
> clients.

That's a good point. If that were the case, you'd need to rethink things.

> In your case you need routing  from the backuppc server to
> the client IP which may or may not be present.  Can you connect with
> smbclient to the 10.9.8.10 IP?

If your VPN server is not NATting and it's not the default gateway, then you'd
need either a host or probably better a network route (on your BackupPC
server):

# route add -host 10.9.8.10 gw sequoia
or
# route add -net 10.9.8.0/24 gw sequoia

Additionally, if sequoia was not previously routing traffic, you might need to

# echo 1 > /proc/sys/net/ipv4/ip_forward

(on sequoia) which you'd want to do automatically on reboot by adding (or
uncommenting)

net.ipv4.ip_forward=1

in /etc/sysctl.conf. For IPv6, see the comments in sysctl.conf.

Regards,
Holger



Re: [BackupPC-users] Stop TrashClean / Return directories backup list

2013-03-20 Thread Holger Parplies
Hi,

backu...@kosowsky.org wrote on 2013-03-20 15:35:08 -0400 [Re: [BackupPC-users] 
Stop TrachClean / Return directories backup list]:
> Holger Parplies wrote at about 19:56:26 +0100 on Wednesday, March 20, 2013:
>  > [...]
>  >This means that all backups no longer
>  >to be kept will be moved to $TopDir/trash. As I read the code, they
>  >will get a name consisting of time, process id, and counter. That means
>  >you will have a hard time identifying where they came from (because
>  >neither host name nor backup number will be visible any longer). The
>  >modification date of the subdirectories might give you a hint
>  >concerning the order they belong in. 
> 
> You should be able to completely recreate the numbering plus the info
> in the 'backups' file using the backupInfo file.

presuming it's 3.x ... yes, I should have thought of that. I even mentioned
BackupPC_fixBackupSummary ...

Still, I would prefer not moving backups that are to be kept to the trash in
the first place :). If nothing else, it should avoid needing to reconstruct
the backups file.

I'd like to add that it might be hard to tell which backup (if any) trashClean 
might have started deleting. Backups will usually not be moved to the trash
unless there is a trashClean process running, after all.

>  >Revoking write permission on $Topdir/trash itself
>  >should also stop trashClean, but I believe the code moving
>  >things there in the first place would then resort to
>  >deleting the trees instead.
> 
> Yes - anything not moved will be deleted directly by RmTreeDefer

Which means you shouldn't do that. Change permissions on subdirectories of
trash/ (if you need to), not on the directory itself. Or see below.

>  > Presuming that is not possible, I'd set $Conf{BackupsDisable} = 2
>  > in the main config.pl to disable all backups by default, and then re-enable
>  > them one by one for each host I have checked and corrected in the host.pl file.
>  > Note that I have not tested this. Perhaps someone could confirm that backups
>  > are not expired for hosts with BackupsDisable'd ...
> 
> Note that BackupPC_dump code has the following
> usage comment:
> # -e   Just do an dump expiry check for the client.  Don't do anything
> #  else.  This is used periodically by BackupPC to make sure that
> #  dhcp hosts have correctly expired old backups.  Without this,
 ^^
> #  dhcp hosts that are no longer on the network will not expire
> #  old backups.
> 
> So, expiry is still called even if Backups are disabled...

Yes, but only for DHCP hosts, I believe, and, as you say,

> the code for BackupPC_dump seems to exit before expiry if
> BackupsDisable == 2 so you should be ok...

The main config file contains the following comment explaining
$Conf{BackupsDisable}:
# Disable all full and incremental backups.  These settings are
# useful for a client that is no longer being backed up
# (eg: a retired machine), but you wish to keep the last
# backups available for browsing or restoring to other machines.

It would make sense not to do expiry for retired machines. Still, I'd hate for
you to rely on our opinion and find your backups disappearing.

So, I'd probably set both BackupsDisable *and* TrashCleanSleepSec - just in
case.

Jeffrey wrote on Wed, 20 Mar 2013 15:16:58 -0400:
> In fact, if /var/lib/backuppc/pool/hostname/backup number/ truly

Host names are still not pooled ;-).

> exist, then they should all show up in the browser, provided that the
> 'backups' file in each 'hostname' directory hasn't been
> corrupted.

Thank you for clarifying that. I meant to imply it, but I don't think I
actually said it.

> [...]
> So, I guess I'm not sure why they would still be in the pc tree but
> [not] browsable. Perhaps there is a permissions issue?

My first guess is that the observation is inaccurate. It *is* rather hard to
tell just from the backup numbers (unless, of course, your schedule said to
keep 100 backups and the incorrect one now limits it to 7).

Other than that - SELinux perhaps? What do you see in the browser, no backups
at all, or only a few backups?

You (Phil, that is) weren't explicit on the nature of the failure. I'm
guessing it affected your root FS (/etc/backuppc) and not your BackupPC pool
(/var/lib/backuppc) - is that correct?

> [...]
> If it's already in the trash... well you have to do something a little
> kludgey... I probably would just change the permissions on the trash
> directory so that user backuppc can't go there... or change the
> ownership of the subdirectories...

Sometimes it's even more simple, though I obviously missed it the first time
round, too:
% mkdir $TopDir/saved-from-trash
% mv $TopDir/trash/* $TopDir/saved-from-trash

(this should probably even stop an active trashClean from deleting much more
than what it's currently working on).

> Alternatively one could 

Re: [BackupPC-users] Need guidance for backing up remote Windows PC

2013-03-20 Thread Les Mikesell
On Wed, Mar 20, 2013 at 4:00 PM, Jeff Boyce  wrote:
>
> I am trying to figure out if my objective is possible.  I want to be able to
> backup a remote Window box that connects to the local network via OpenVPN.
> I have scanned through the archives and have seen some discussion of similar
> things, but nothing that really gives me good overall direction on whether
> it will work, or how to get it to work, with my network configuration.

There's no real difference as long as it is up at a reachable IP address.

> I am
> using BackupPC to backup the local Windows boxes, and would like to add a
> remote one.  I am not that concerned about the time it would take to
> complete a backup over the WAN, as I can configure it to work at night.
>
> Local Network
>   Sequoia = Samba (and WINS server) and OpenVPN server (192.168.112.50)
>   Taxa = DNSmasq (dns and dhcp server) (192.168.112.51)
>   Bacteria = BackupPC server (192.168.112.52)
>   Network IP = 192.168.112.0/24
>
> Remote Windows Box
>   Computer Name = jks-e6500
>   Remote LAN IP = unknown
>   Remote WAN IP = dynamic
>   OpenVPN Common Name = jkssequoiaclient
>   OpenVPN IP = static, 10.9.8.10
>   OpenVPN routed network
>
> I have BackupPC configured to connect to the local Window boxes via SMB, as
> I didn't care for the cygwin and rsync implementation on windows when I used
> it in the past.  Besides, I already have Samba configured and running just
> fine, so why not just use it.

The big difference would be bandwidth usage after the initial copy.
Every smb full is going to send all the data.  Another difference is
that smb incrementals are based on the file timestamps and won't track
files added in ways that keep an old timestamp, old files in their new
position under a renamed directory, or deletions.   You might like the
cwrsync or deltacopy variations of rsync - still cygwin based but
packaged in a windows installer.

> I seem to have both DNS and netbios name
> resolution working properly for the local LAN, but don't know how the remote
> box fits into that when it connects to Samba via a VPN network.

If you manage local dns you can add the target name with the VPN IP
and everything should work the same as locally.  Alternatively, you
could set ClientNameAlias to the VPN IP in the backuppc config.
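For the second approach, the per-host override would look something like this in the client's config file (using the VPN IP from above):

```perl
$Conf{ClientNameAlias} = '10.9.8.10';
```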

> My thinking is that since the remote Windows box can connect and browse the
> Samba shares on Sequoia via the VPN, then obviously Samba knows how to
> communicate with this remote client.

No, that's not entirely obvious unless the backuppc server is also the
VPN server.   Sometimes VPN servers are configured to NAT to their
ethernet interfaces to provide LAN connectivity for the remote
clients.   In your case you need routing  from the backuppc server to
the client IP which may or may not be present.  Can you connect with
smbclient to the 10.9.8.10 IP?

-- 
   Les Mikesell
 lesmikes...@gmail.com



[BackupPC-users] Need guidance for backing up remote Windows PC

2013-03-20 Thread Jeff Boyce
Greetings -

I am trying to figure out if my objective is possible.  I want to be able to 
back up a remote Windows box that connects to the local network via OpenVPN. 
I have scanned through the archives and have seen some discussion of similar 
things, but nothing that really gives me good overall direction on whether 
it will work, or how to get it to work, with my network configuration.  I am 
using BackupPC to back up the local Windows boxes, and would like to add a 
remote one.  I am not that concerned about the time it would take to 
complete a backup over the WAN, as I can configure it to work at night.

Local Network
  Sequoia = Samba (and WINS server) and OpenVPN server (192.168.112.50)
  Taxa = DNSmasq (dns and dhcp server) (192.168.112.51)
  Bacteria = BackupPC server (192.168.112.52)
  Network IP = 192.168.112.0/24

Remote Windows Box
  Computer Name = jks-e6500
  Remote LAN IP = unknown
  Remote WAN IP = dynamic
  OpenVPN Common Name = jkssequoiaclient
  OpenVPN IP = static, 10.9.8.10
  OpenVPN routed network

I have BackupPC configured to connect to the local Windows boxes via SMB, as 
I didn't care for the cygwin and rsync implementation on Windows when I used 
it in the past.  Besides, I already have Samba configured and running just 
fine, so why not just use it.  I seem to have both DNS and NetBIOS name 
resolution working properly for the local LAN, but don't know how the remote 
box fits into that when it connects to Samba via a VPN network.

My thinking is that since the remote Windows box can connect and browse the 
Samba shares on Sequoia via the VPN, then obviously Samba knows how to 
communicate with this remote client.  Somehow I need to understand how that 
is occurring (what name or what IP address is Samba referencing for the 
remote box?) and make that information known to the BackupPC server 
(possibly via the DNSmasq server?) so that it could initiate a backup.

Any suggestions on a general approach to evaluating how to achieve my 
objective would be appreciated.  Please CC me directly as I only get the 
mailing list via the daily digest.  Thanks.

Jeff Boyce
Meridian Environmental
www.meridianenv.com




Re: [BackupPC-users] Stop TrachClean / Return directories backup list

2013-03-20 Thread backuppc
Holger Parplies wrote at about 19:56:26 +0100 on Wednesday, March 20, 2013:
 > Hi,
 > 
 > Phil Kennedy wrote on 2013-03-20 12:37:10 -0400 [[BackupPC-users] Stop 
 > TrachClean / Return directories backup list]:
 > > Hi,
 > > I recently had a somewhat odd system failure (poorly configured software 
 > > RAID) that lead to a *very* old set of BackupPC config files being 
 > > loaded. On one windows machine (possibly more), the default SMB share 
 > > was reset to C$ instead of E$. The full count keep, plus the min keep 
 > > values were also set lower than we wanted them. BackupPC naturally 
 > > marked all the old E$ directories as trash, and has removed them from 
 > > the browse backup list.
 > > 
 > > The good news (besides the fact that the previous admin of this box is 
 > > several states away and out of arms reach) is that the backup 
 > > directories and their data still exist under the 
 > > /var/lib/backuppc/pool/hostname/backup number/ directory.
 > 
 > You probably mean "pc", not "pool".
 > 
 > > They just don't show up when you browse the backups.
 > 
 > First of all, let's imagine what happens in the case you described.
 
 > 2.) The history settings (FullKeepCnt, IncrKeepCnt, etc.) are changed.
 > => On the next invocation of BackupPC_dump (with the correct options) for
 >the host, normally on the next backup *attempt*, backups will be expired
 >as defined by the new settings. This means that all backups no longer
 >to be kept will be moved to $TopDir/trash. As I read the code, they
 >will get a name consisting of time, process id, and counter. That means
 >you will have a hard time identifying where they came from (because
 >neither host name nor backup number will be visible any longer). The
 >modification date of the subdirectories might give you a hint
 >concerning the order they belong in. 

You should be able to completely recreate the numbering plus the info
in the 'backups' file using the backupInfo file. Indeed, I believe the
plaintext hash that it represents includes all the necessary fields:
  'noFill'
  'nFilesNew'
  'num'
  'size'
  'endTime'
  'fillFromNum'
  'xferErrs'
  'xferMethod'
  'startTime'
  'sizeNewComp'
  'mangle'
  'version'
  'nFilesExist'
  'charset'
  'tarErrs'
  'xferBadShare'
  'sizeExist'
  'level'
  'nFiles'
  'compress'
  'sizeExistComp'
  'type'
  'xferBadFile'
  'sizeNew'
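As an illustration of how mechanically recoverable those fields are, here is a crude shell sketch that extracts values from a backupInfo file. The sample content and the exact "'key' => value," layout are assumptions about the 3.x Data::Dumper-style output, so treat it as a sketch only:

```shell
# Invented sample of a 3.x backupInfo file (a small Perl hash dump):
cat > /tmp/backupInfo.sample <<'EOF'
%backupInfo = (
  'num' => 42,
  'type' => 'full',
  'startTime' => 1363794493,
);
EOF

# Pull one field out by name (handles both quoted and bare values):
get_field() {
    sed -n "s/^ *'$1' => '\{0,1\}\([^',]*\)'\{0,1\},.*/\1/p" "$2"
}

get_field num /tmp/backupInfo.sample    # -> 42
get_field type /tmp/backupInfo.sample   # -> full
```

In practice you would loop over pc/hostname/*/backupInfo and emit one tab-separated 'backups' line per backup, or simply let BackupPC_fixupBackupSummary do it.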


 >Note, though, that BackupPC_trashClean will become active by
 >default every 5 minutes and delete everything in
 >$TopDir/trash. You can increase the interval by setting
 >$Conf {TrashCleanSleepSec}, but you should note that upon
 >startup trashClean will empty the trash once before
 >sleeping.  You should be able to protect items already in
 >the trash by setting permissions accordingly. trashClean
 >just tries to unlink/rmdir the items, so if you 'chmod a=
 >...' a non-empty directory, it will simply fail and leave
 >the directory where it is (for a file, this would obviously
 >not work). Revoking write permission on $Topdir/trash itself
 >should also stop trashClean, but I believe the code moving
 >things there in the first place would then resort to
 >deleting the trees instead.

Yes - anything not moved will be deleted directly by RmTreeDefer
 > > 2. How can I tell TrashClean to take a couple days vacation while I sort 
 > > out the consistency of other 130 machines?
 > 
 > Well, as I said, you could set $Conf{TrashCleanSleepSec} to a high value, but
 > the point really is that you want to avoid the backups being trashed in the
 > first place. What I'd probably do is stop BackupPC completely while I sort
 > things out. Presuming that is not possible, I'd set $Conf{BackupsDisable} = 2
 > in the main config.pl to disable all backups by default, and then re-enable
 > them one by one for each host I have checked and corrected in the host.pl file.
 > Note that I have not tested this. Perhaps someone could confirm that backups
 > are not expired for hosts with BackupsDisable'd ...

Note that BackupPC_dump code has the following
usage comment:
# -e   Just do an dump expiry check for the client.  Don't do anything
#  else.  This is used periodically by BackupPC to make sure that
#  dhcp hosts have correctly expired old backups.  Without this,
#  dhcp hosts that are no longer on the network will not expire
#  old backups.

So, expiry is still called even if Backups are disabled... however,
the code for BackupPC_dump seems to exit before expiry if
BackupsDisable == 2 so you should be ok... but I would check/test as
Holger suggests...


Re: [BackupPC-users] Stop TrachClean / Return directories backup list

2013-03-20 Thread backuppc
Phil Kennedy wrote at about 12:37:10 -0400 on Wednesday, March 20, 2013:
 > Hi,
 > I recently had a somewhat odd system failure (poorly configured software 
 > RAID) that lead to a *very* old set of BackupPC config files being 
 > loaded. On one windows machine (possibly more), the default SMB share 
 > was reset to C$ instead of E$. The full count keep, plus the min keep 
 > values were also set lower than we wanted them. BackupPC naturally 
 > marked all the old E$ directories as trash, and has removed them from 
 > the browse backup list.
 > 
 > The good news (besides the fact that the previous admin of this box is 
 > several states away and out of arms reach) is that the backup 
 > directories and their data still exist under the 
 > /var/lib/backuppc/pool/hostname/backup number/ directory. They just 
 > don't show up when you browse the backups. My question is two fold;
 > 
 > 1. How do I get the directories back in the list? (I'm assuming this 
 > involves one of the .rrd files?) Again, the data *is* there, it's just 
 > not web accessible.

I don't have any 'rrd' files on my system at all... I don't believe
rrdtool (and its ilk) has anything to do with regular browsing of
backups.

In fact, if /var/lib/backuppc/pool/hostname/backup number/ truly
exist, then they should all show up in the browser, provided that the
'backups' file in each 'hostname' directory hasn't been
corrupted. Indeed, as soon as backups are identified for expire (in
BackupPC_dump) they are moved to trash (using RmTreeDefer) and then
immediately deleted from the 'backups' file. Only later does
BackupPC_trashClean come along to actually perform the deletions of
the backup trees from the trash...

So, I guess I'm not sure why they would still be in the pc tree but
not browsable. Perhaps there is a permissions issue?

 > 
 > 2. How can I tell TrashClean to take a couple days vacation while I sort 
 > out the consistency of other 130 machines?

Well if it's not in the trash yet, then probably the cleanest way
would be to increase the config variables FullKeepCnt and IncrKeepCnt
to some large number.

If it's already in the trash... well you have to do something a little
kludgey... I probably would just change the permissions on the trash
directory so that user backuppc can't go there... or change the
ownership of the subdirectories... this will give some log error
messages (I know this since I once unintentionally had a permission
error in my trash that prevented its deletion and led to just such
error messages).

Alternatively one could temporarily set BackupPC_trashClean to
/bin/true or something like that...
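That swap is easy to make reversible with a symlink. A sketch on a scratch directory standing in for the real BackupPC bin directory (its location varies by distribution, so locate the real BackupPC_trashClean first):

```shell
BINDIR=$(mktemp -d)                   # stand-in for the real bin directory
touch "$BINDIR/BackupPC_trashClean"   # stand-in for the real script

# Park the real script and point the name at /bin/true:
mv "$BINDIR/BackupPC_trashClean" "$BINDIR/BackupPC_trashClean.real"
ln -s /bin/true "$BINDIR/BackupPC_trashClean"
"$BINDIR/BackupPC_trashClean"         # now a harmless no-op, exit status 0

# To undo later:
#   rm BackupPC_trashClean && mv BackupPC_trashClean.real BackupPC_trashClean
```

Remember to restart (or at least restart trashClean from) the BackupPC daemon so the no-op takes effect, and to undo the swap afterwards.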



Re: [BackupPC-users] Stop TrachClean / Return directories backup list

2013-03-20 Thread Holger Parplies
Hi,

Phil Kennedy wrote on 2013-03-20 12:37:10 -0400 [[BackupPC-users] Stop 
TrachClean / Return directories backup list]:
> Hi,
> I recently had a somewhat odd system failure (poorly configured software 
> RAID) that lead to a *very* old set of BackupPC config files being 
> loaded. On one windows machine (possibly more), the default SMB share 
> was reset to C$ instead of E$. The full count keep, plus the min keep 
> values were also set lower than we wanted them. BackupPC naturally 
> marked all the old E$ directories as trash, and has removed them from 
> the browse backup list.
> 
> The good news (besides the fact that the previous admin of this box is 
> several states away and out of arms reach) is that the backup 
> directories and their data still exist under the 
> /var/lib/backuppc/pool/hostname/backup number/ directory.

You probably mean "pc", not "pool".

> They just don't show up when you browse the backups.

First of all, let's imagine what happens in the case you described.

1.) The backup definition (SmbShareName, BackupFilesOnly, BackupFilesExclude)
is changed.
=> There should be no effect on existing backups. Future backups will be
   done according to the new definition. This will involve backing up the
   wrong data (meaning you get a somewhat bogus backup). tar/smb
   incrementals won't work well, because they use a reference date,
   meaning you only get files in the new set that were changed since the
   reference backup was done. rsync will tend to transfer large amounts
   of data. If you do a *full* backup with the changed definition
   and then change back to the correct definition, you will, again,
   transfer large amounts of data. IncrLevels may make that even more
   complicated.
   In any case, I would recommend forcing a full backup after changing
   back to the correct settings and not doing any backups with the
   incorrect settings if possible. See $Conf{BackupsDisable}, which should
   also avoid expiring backups.

2.) The history settings (FullKeepCnt, IncrKeepCnt, etc.) are changed.
=> On the next invocation of BackupPC_dump (with the correct options) for
   the host, normally on the next backup *attempt*, backups will be expired
   as defined by the new settings. This means that all backups no longer
   to be kept will be moved to $TopDir/trash. As I read the code, they
   will get a name consisting of time, process id, and counter. That means
   you will have a hard time identifying where they came from (because
   neither host name nor backup number will be visible any longer). The
   modification date of the subdirectories might give you a hint
   concerning the order they belong in. Note, though, that
   BackupPC_trashClean will become active by default every 5 minutes and
   delete everything in $TopDir/trash. You can increase the interval by
   setting $Conf{TrashCleanSleepSec}, but you should note that upon
   startup trashClean will empty the trash once before sleeping.
   You should be able to protect items already in the trash by setting
   permissions accordingly. trashClean just tries to unlink/rmdir the
   items, so if you 'chmod a= ...' a non-empty directory, it will simply
   fail and leave the directory where it is (for a file, this would
   obviously not work). Revoking write permission on $Topdir/trash itself
   should also stop trashClean, but I believe the code moving things there
   in the first place would then resort to deleting the trees instead.

> My question is two fold;
> 
> 1. How do I get the directories back in the list? (I'm assuming this 
> involves one of the .rrd files?) Again, the data *is* there, it's just 
> not web accessible.

No. The rrd-files are a Debian add-on, I believe. They don't influence
BackupPC operation at all. You might be looking for
'BackupPC_fixupBackupSummary'. Which version of BackupPC are you using (and,
for that matter, which Linux distribution)?
And which data is where? What you are describing does not seem to be
consistent with what should be happening. Can you confirm that under
/var/lib/backuppc/pc/* the numbered backup directories you are expecting
are really all still there?

> 2. How can I tell TrashClean to take a couple days vacation while I sort 
> out the consistency of other 130 machines?

Well, as I said, you could set $Conf{TrashCleanSleepSec} to a high value, but
the point really is that you want to avoid the backups being trashed in the
first place. What I'd probably do is stop BackupPC completely while I sort
things out. Presuming that is not possible, I'd set $Conf{BackupsDisable} = 2
in the main config.pl to disable all backups by default, and then re-enable
them one by one for each host I have checked and corrected in the host.pl file.
Note that I have not tested this. Perhaps someone could confirm that backups
are not expired for hosts with BackupsDisable'd ...

Re: [BackupPC-users] Lost /var/lib/backuppc

2013-03-20 Thread Curtis Vaughan
Really odd, but I finally got it running based on the advice everyone gave.
Thanks!




Re: [BackupPC-users] Running commands from the command line

2013-03-20 Thread Holger Parplies
Hi,

Zach La Celle wrote on 2013-03-20 09:53:04 -0400 [[BackupPC-users] Running 
commands from the command line]:
> I'm trying to manually run commands like BackupPC_nightly and a custom 
> BackupPC_deleteFiles.pl script from the command line.  The reason is 
> that I accidentally backed up a large amount of data that should not be 
> backed up.
> 
> When I try to run these commands, I get the error "No language setting" 
> inside of the BackupPC/Lib.pm perl module.

you're lucky there, because you *never run BackupPC_nightly from the command
line*. Instead, let the server run it when appropriate, taking into account
running instances of BackupPC_link (in particular, not starting new ones while
BackupPC_nightly is running):

% BackupPC_serverMesg BackupPC_nightly run

Remember to run that *as the backuppc user*. Depending on how your system is
set up, you might have to use something like 'su -s /bin/bash - backuppc'. The
same is true of any command that uses Lib.pm, but there should be an error
message to that effect ...

Regards,
Holger



[BackupPC-users] Stop TrashClean / Return directories backup list

2013-03-20 Thread Phil Kennedy
Hi,
I recently had a somewhat odd system failure (poorly configured software 
RAID) that led to a *very* old set of BackupPC config files being 
loaded. On one windows machine (possibly more), the default SMB share 
was reset to C$ instead of E$. The full keep count and the minimum keep 
values were also set lower than we wanted. BackupPC naturally 
marked all the old E$ directories as trash, and has removed them from 
the browse backup list.

The good news (besides the fact that the previous admin of this box is 
several states away and out of arms reach) is that the backup 
directories and their data still exist under the 
/var/lib/backuppc/pc/hostname/backup number/ directory. They just 
don't show up when you browse the backups. My question is twofold:

1. How do I get the directories back in the list? (I'm assuming this 
involves one of the .rrd files?) Again, the data *is* there, it's just 
not web accessible.

2. How can I tell TrashClean to take a couple days vacation while I sort 
out the consistency of the other 130 machines?

Thanks,
~Phil



[BackupPC-users] Running commands from the command line

2013-03-20 Thread Zach La Celle
I'm trying to manually run commands like BackupPC_nightly and a custom 
BackupPC_deleteFiles.pl script from the command line.  The reason is 
that I accidentally backed up a large amount of data that should not be 
backed up.

When I try to run these commands, I get the error "No language setting" 
inside of the BackupPC/Lib.pm perl module.

The service runs and functions fine.

How do I set up my environment to be able to run BackupPC scripts from 
the command line?  Does it involve somehow reading in the config.pl script?

-Zach



Re: [BackupPC-users] Lost /var/lib/backuppc

2013-03-20 Thread Tyler J. Wagner
On 2013-03-19 23:46, Curtis Vaughan wrote:
> So, I had a separate drive dedicated to /var/lib/backuppc where all the 
> backups were stored and other obviously important directories. But that 
> drive experienced total failure and I've replaced it. What do I need to do 
> to get backuppc to reconfigure that drive with all the directories it 
> needs. When I try to start backuppc it complains that it can't create a 
> test hardlink between files in directories thereunder.

Hi Curtis,

You should take the others' advice first and see what's there. Never
blindly run commands people like me put in email unless you understand what
they'll do. That said, here are the quick setup instructions, assuming a
normal Ubuntu install with a "backuppc" user:

1. Make sure the new filesystem is a Linux filesystem with hardlink
support, like ext4, reiserfs, or xfs. If not, reformat.

2. Run as root:

cd /var/lib/backuppc
mkdir cpool log pc pool trash
chown -R backuppc:backuppc /var/lib/backuppc
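
To double-check the result, here is a sketch of the hardlink test BackupPC
performs at startup (this is an illustration, not BackupPC's actual code).
A temp directory stands in for the real pool mount point; on the real
system, run the same check inside /var/lib/backuppc:

```shell
# Create a stand-in for the pool filesystem and try a cross-directory
# hardlink, which is what BackupPC's startup check needs to succeed.
pooldir=$(mktemp -d)
mkdir -p "$pooldir/pool" "$pooldir/cpool"
touch "$pooldir/pool/testA"
if ln "$pooldir/pool/testA" "$pooldir/cpool/testB"; then
    status=OK      # hardlinks work on this filesystem
else
    status=FAIL    # wrong filesystem type or mount options
fi
echo "hardlink support: $status"
rm -rf "$pooldir"
```

If this prints FAIL, the filesystem (or a mount option like a network
share without hardlink support) is the problem, not BackupPC.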

Regards,
Tyler

-- 
"Anyone who truly understands UI design realizes that every preference
option is an admission of defeat: it's there because you couldn't just
get it right the first time."
   -- Jamie Zawinski



Re: [BackupPC-users] Lost /var/lib/backuppc

2013-03-20 Thread ashka
Do you have a more detailed piece of the log?
AFAIK, just create the cpool, log, pc, pool and trash folders, and maybe 
create a pc/xxx folder per host, where xxx is the host's name as defined in 
BackupPC.
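
Those steps can be sketched as follows; a temp directory stands in for
/var/lib/backuppc, and host1/host2 are placeholder names (on a real system,
take them from the first column of the BackupPC hosts file and chown the
tree to the backuppc user afterwards):

```shell
# Recreate the top-level pool tree plus one pc/<host> directory per host.
topdir=$(mktemp -d)    # stand-in for /var/lib/backuppc
for d in cpool log pc pool trash; do
    mkdir -p "$topdir/$d"
done
for host in host1 host2; do    # placeholder host names
    mkdir -p "$topdir/pc/$host"
done
ls "$topdir"
```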

On 20.03.2013 00:46, Curtis Vaughan wrote:
> So, I had a separate drive dedicated to /var/lib/backuppc where all the
> backups were stored and other obviously important directories. But that
> drive experienced total failure and I've replaced it. What do I need to do
> to get backuppc to reconfigure that drive with all the directories it
> needs. When I try to start backuppc it complains that it can't create a
> test hardlink between files in directories thereunder.
>
> Thanks!
>
>

