Re: [BackupPC-users] Dump Databases

2014-11-20 Thread Carl Cravens
On 11/20/2014 07:03 AM, Fanny Alexandra Oyarzún Bórquez wrote:
> I have postgres and mysql databases to back up, and I wonder if it is possible
> to add two scripts to DumpPreUserCmd? E.g.
> DumpPreUserCmd $sshPath -q -x -l root $host 
> /usr/local/sbin/automysqlbackup.sh;$sshPath -q -x -l root $host 
> /usr/local/sbin/autopgsqlbackup.sh

What we do is call a single script named 'prebackup', which then runs all the 
scripts it finds in /usr/local/prebackup.d/ using 'run-parts'.

#!/bin/bash

LOCKFILE="/var/lock/local-prebackup.lock"

# Serialize runs so overlapping backups can't fire the dump scripts twice.
dotlockfile -l -p "$LOCKFILE"

# Run every executable in /usr/local/prebackup.d, stopping at the first failure.
run-parts --exit-on-error /usr/local/prebackup.d
status=$?

dotlockfile -u "$LOCKFILE"

# Pass run-parts' result back so BackupPC can notice a failed pre-dump script.
exit $status
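
With that in place, the per-host config collapses to a single remote call (the 
/usr/local/sbin path is just an example; put the wrapper wherever you keep 
local scripts):

DumpPreUserCmd $sshPath -q -x -l root $host /usr/local/sbin/prebackup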

-- 
Carl D Cravens (ra...@phoenyx.net)
Talk is cheap because supply inevitably exceeds demand.



Re: [BackupPC-users] lastlog

2014-10-30 Thread Carl Cravens
Why not dump the SQL data live to a text file and back that up?  That's what 
mysqldump is for... you can back up everything you need to restore an entire 
database without ever taking down the mysql server.  And this makes restores 
much simpler... if a developer drops a table by mistake, you don't have to 
restore the *entire* database back to a previous checkpoint, you can just 
extract the plain-text records needed to recreate that table and its data.

Check the options... the default settings don't capture everything, but with 
the right settings you can capture users, permissions etc.
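
Something along these lines covers most of it (a sketch; add credentials and 
adjust paths for your site):

mysqldump --all-databases --single-transaction --routines --triggers --events \
    --flush-privileges > /var/backups/mysql/all-databases.sql

--all-databases pulls in the mysql system database (users and grants), and 
--single-transaction gives a consistent dump of InnoDB tables without locking 
everything.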

On 10/30/2014 01:46 PM, Moorcroft, Mark (ARC-TS)[ERC, Inc.] wrote:
>
>
>> Mine 'have' /var/log/lastlog.  Just not the problem with TB+ apparent
>> sizes. Do you have a uid of -1  - or very, very large uids?
>
> Every one of my systems lists lastlog at 505GB with a vanilla "ls" and all
> are root:root.
>
>
>> Maybe that was when everyone else took the -1 uid out...
>
> ?
>
>> The straightforward approach is to have script on the host that runs
>> the mysqldump command so you can use a DumpPreUserCmd like:
>
>> $sshPath -q -x -l root $host /path/to/script.
>> and then back up the dump output instead of the live database.  Or if
>> downtime is OK you can stop the database, then copy its files.
>
>
> Thanks, I am now stopping mysql to back up the files. I just restrict the
> backup to the window when mysql is stopped.  I could use the
> pre/post commands, but I have enough latitude to be confident about
> completion.
>
> --
> Les Mikesell
>   lesmikes...@gmail.com
>
>
>
>
>


-- 
Carl D Cravens (ra...@phoenyx.net)
A bit of tolerance is worth a megabyte of flaming.



Re: [BackupPC-users] Running BackupPC and Nagios web interfaces from the same box

2014-10-14 Thread Carl Cravens
On 10/14/2014 02:53 PM, xpac wrote:
> Is there a way to make it so that the BackupPC interface doesn't have to use 
> "backuppc" as the user in httpd.conf?  Or some other way I can do this?

Take a look at the suexec Apache module.  It lets you specify which user:group 
each virtual host runs as.  I use this frequently.

-- 
Carl D Cravens (ra...@phoenyx.net)
A man about to speak the truth should keep one foot in the stirrup.



Re: [BackupPC-users] QNX 4.23A

2014-09-19 Thread Carl Cravens
Can you compile rsync from source?  That's what I used to have to do on HP-UX 
and other systems that didn't come with GNU tools... install GCC and then 
compile everything I wanted.  (Heck, that's what I used to do with everything 
back on BSD/OS.  I don't miss the days of having to compile Perl and Apache 
from scratch for every upgrade.)

I'd compile rsync and ssh from source and go from there.
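
The build itself is the usual routine (sketched with a then-current version 
number; whether rsync's configure script copes with QNX 4 is another question):

gunzip -c rsync-3.1.1.tar.gz | tar xf -
cd rsync-3.1.1
./configure --prefix=/usr/local
make && make install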

On 09/17/2014 01:58 PM, Gerald Brandt wrote:
> On 2014-09-17 1:53 PM, Dimitri Maziuk wrote:
>> On 09/17/2014 01:50 PM, Gerald Brandt wrote:
>>
>>> never mind, just found rshd and ftpd as well.  Tucked away in a subdir.
>> You should be able to substitute rsh for ssh after editing .rhosts.
>>
> still no rsync though.
>

-- 
Carl D Cravens (ccrav...@excelii.com), Ext 228 (620.327.1228)
Lead System Architect



Re: [BackupPC-users] 504 Gateway Time-out on Backuppc web interface

2014-09-03 Thread Carl Cravens
I'm assuming Ubuntu is close enough to stock Debian.  The main BackupPC log is 
in /var/lib/backuppc/log.  If you need to look at compressed logs, use 
BackupPC_zcat, as the developer didn't use standard gzip-compatible compression.
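
For example (paths per the stock Debian/Ubuntu package; LOG.0.z and friends are 
the rotated, compressed logs):

/usr/share/backuppc/bin/BackupPC_zcat /var/lib/backuppc/log/LOG.0.z | less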

My startup section looks like this:

2014-09-03 14:00:48 Reading hosts file
2014-09-03 14:00:48 BackupPC started, pid 19467
2014-09-03 14:00:48 Running BackupPC_trashClean (pid=19485)
2014-09-03 14:00:48 Next wakeup is 2014-09-04 00:00:00

I expect that if BackupPC is hitting an error, you'll see it in this log.

If it's failing silently, try starting it outside the init script so 
start-stop-daemon doesn't eat its output...

sudo -u backuppc /usr/share/backuppc/bin/BackupPC

...and see what output you get.

-- 
Carl D Cravens (ccrav...@excelii.com), Ext 228 (620.327.1228)
Lead System Architect



Re: [BackupPC-users] 504 Gateway Time-out on Backuppc web interface

2014-09-03 Thread Carl Cravens
# ps -ef |grep BackupPC
root      7014  6189  0 08:31 pts/0    00:00:00 grep BackupPC
backuppc 14760     1  0  2013 ?        00:37:44 /usr/bin/perl /usr/share/backuppc/bin/BackupPC -d
backuppc 14791 14760  0  2013 ?      1-17:52:08 /usr/bin/perl /usr/share/backuppc/bin/BackupPC_trashClean

If you don't see output like this, then BackupPC isn't running.

Considering it didn't find a valid pid in /var/run/backuppc/BackupPC.pid when 
you tried to restart, I'd say BackupPC wasn't running.
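
A quick way to confirm is to check whether the pidfile even points at a live 
process:

cat /var/run/backuppc/BackupPC.pid
ps -fp "$(cat /var/run/backuppc/BackupPC.pid)"

If the second command shows nothing but the header line, the daemon is gone and 
"service backuppc start" is the right move.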

On 09/03/2014 06:40 AM, Tom Fallon wrote:
> Hi Kenneth
>
> thanks for taking the time to reply. Hard to tell if backuppc is
> running or not. I've tried service backuppc status but that does not
> appear to be a valid command.
>
> Checking the backuppc documentation I should be able to run
> /usr/share/backuppc/bin/BackupPC_serverMesg status info but this
> appears to do nothing. No error response and no output. Same command
> on a working backuppc server shows wall of text which per the
> documentation "If it looks cryptic and confusing, and doesn't look
> like an error message, then all is ok."
>
> If I try restarting backuppc via service backuppc reload or service
> backuppc restart I get the following error:
>
> No process in pidfile '/var/run/backuppc/BackupPC.pid' found running;
> none killed.
>
> If I do service backuppc start it "seems" to start - * Starting
> backuppc...
>
> regards Tom
>
> On 30/08/2014 22:03, Kenneth Porter wrote:
>> --On Saturday, August 30, 2014 2:18 PM +0100 Tom Fallon
>>   wrote:
>>
>>> The web interface on one of our backuppc servers is not
>>> responding – it gives error The gateway did not receive a timely
>>> response from the upstream server or application.
>> Is the backuppc service running?
>>

-- 
Carl D Cravens (ccrav...@excelii.com), Ext 228 (620.327.1228)
Lead System Architect



Re: [BackupPC-users] Queue Order / Queue Pools

2014-08-28 Thread Carl Cravens
I can't complete all the backups in a day with a maxbackups of 1.  It's 
unreasonable to micromanage the blackout periods for 70+ hosts to keep certain 
ones from overlapping.

I've coped with the problem, but it's something that an 
only-one-host-at-a-time-from-this-group feature would make a lot easier to deal 
with.

On 08/28/2014 09:45 AM, Les Mikesell wrote:
> On Thu, Aug 28, 2014 at 8:11 AM, Carl Cravens  wrote:
>> This is a feature I've wanted as well.  I have virtual machines running on 
>> multiple hypervisors, and I'd prefer to only run one backup per hypervisor 
>> at a time.  If I could do that, I could get away with backups during the 
>> day... but if I back up the main file server and the groupware server at the 
>> same time, tech support starts getting calls about things being slow.
>>
>
> Have you tried taking MaxBackups down to 1 to see if backups will
> still complete in the available time?   Or skewing the blackout
> periods for the host and guest systems?
>
-- 
Carl D Cravens (ccrav...@excelii.com), Ext 228 (620.327.1228)
Lead System Architect



Re: [BackupPC-users] Queue Order / Queue Pools

2014-08-28 Thread Carl Cravens
This is a feature I've wanted as well.  I have virtual machines running on 
multiple hypervisors, and I'd prefer to only run one backup per hypervisor at a 
time.  If I could do that, I could get away with backups during the day... but 
if I back up the main file server and the groupware server at the same time, 
tech support starts getting calls about things being slow.

On 08/28/2014 08:03 AM, Les Mikesell wrote:
> On Thu, Aug 28, 2014 at 7:01 AM, Andreas Schnederle-Wagner -
> Futureweb.at  wrote:
>>
>> Is it possible that backuppc allows you to put your hosts into groups (aka
>> queues) so that you could limit the number of backup jobs running from any
>> specific queue/host group?
>>
>
> No you can't do that by groups, but unless you have a mix of fast and
> slow networks/hosts you will probably get the best performance by
> setting the overall concurrency (MaxBackups) to 2 or so since you have
> the same problem server-side.   You can have some control over timing
> by setting the blackout periods differently and you can initially skew
> them by doing a 'stop' (even if a backup isn't running) and setting a
> number of hours to defer the next run.  The approximately 24-hour
> scheduling will maintain the skew afterward.
>
-- 
Carl D Cravens (ccrav...@excelii.com), Ext 228 (620.327.1228)
Lead System Architect



Re: [BackupPC-users] Problem: infinite backup on some hosts

2014-06-27 Thread Carl Cravens
Is the directory it hangs on empty?  I've had BackupPC/rsync hang when /usr/src 
is empty on a couple of machines.

On 06/25/2014 07:03 AM, Alexander Mikhnovets wrote:
> Hello to everyone!
>
> Sorry for my English.
>
> We have a lot of backuppc server installations. And thousands of backup jobs.
> But sometimes a backup job runs indefinitely and is killed by backuppc after some 
> time (the timeout option in the config file).
> In the end we have a partial backup with some dirs and files, but not all.
> Log records:
> 2014-06-23 13:32:40 full backup started for directory /home/httpd/wgw-eu/logs
> 2014-06-23 14:27:18 full backup started for directory 
> /home/httpd/wgw-eu/wg_wallet
> 2014-06-24 16:27:19 Aborting backup up after signal ALRM
> 2014-06-24 16:27:20 Got fatal error during xfer (aborted by signal=ALRM)
> 2014-06-24 16:27:20 Saved partial dump 0
> 2014-06-25 03:00:01 full backup started for directory /home/httpd/wgw-eu/logs
> 2014-06-25 03:32:58 full backup started for directory 
> /home/httpd/wgw-eu/wg_wallet
>
> Deleting and adding backup job didn't help.
>
>
> Server side environment:
>
> OS: Ubuntu 12.04.4 (precise) LTS (kernel: 3.2.0-56-generic)
> backuppc: 3.2.1-2ubuntu1.1 (in Ubuntu notation)
> openssh-client: 1:5.9p1-5ubuntu1.4 (in Ubuntu notation)
> rsync: 3.0.9-1ubuntu1 (in Ubuntu notation), 3.0.9  protocol version 30 (in 
> rsync --version)
> rsync capabilities:
>  64-bit files, 64-bit inums, 64-bit timestamps, 64-bit long ints,
>  socketpairs, hardlinks, symlinks, IPv6, batchfiles, inplace,
>  append, ACLs, xattrs, iconv, symtimes
>
> Process list part (ps -eFH):
> backuppc 61504 1  0 18908 19712  10 Apr11 ?00:54:16   
> /usr/bin/perl /usr/share/backuppc/bin/BackupPC -d
> backuppc 61507 61504  0 14098 13324  10 Apr11 ?09:28:39 
> /usr/bin/perl /usr/share/backuppc/bin/BackupPC_trashClean
> backuppc 15459 61504  0 32526 66276  15 03:00 ?00:03:27 
> /usr/bin/perl /usr/share/backuppc/bin/BackupPC_dump jobname
> backuppc 15512 15459  0 0 0   2 03:00 ?00:00:58   [ssh] 
> 
> backuppc 15637 15459  0 0 0  13 03:00 ?00:04:00   
> [BackupPC_dump] 
> backuppc 19583 15459  0 11786  6508   2 03:32 ?00:00:00   
> /usr/bin/ssh -q -x somehost.com  /usr/bin/sudo 
> /usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D 
> --links --hard-links --times --block-size=2048 --recursive 
> --checksum-seed=32761 --ignore-times . /home/httpd/wgw-eu/wg_wallet/
> backuppc 19584 15459  0 42870 102052  4 03:32 ?00:00:18   
> /usr/bin/perl /usr/share/backuppc/bin/BackupPC_dump jobname
>
> Traces parts (strace -f -p <pid>):
>
> /usr/bin/perl /usr/share/backuppc/bin/BackupPC_dump jobname (PID: 15459):
> Process 15459 attached - interrupt to quit
> select(8, [6], NULL, [6], NULL
>
> /usr/bin/perl /usr/share/backuppc/bin/BackupPC_dump jobname (PID: 19584):
> Process 19584 attached - interrupt to quit
> select(16, [10], NULL, [10], NULL
>
> /usr/bin/ssh -q -x somehost.com  /usr/bin/sudo 
> /usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D 
> --links --hard-links --times --block-size=2048 --recursive 
> --checksum-seed=32761 --ignore-times . /home/httpd/wgw-eu/wg_wallet/ (PID: 
> 19583):
> Process 19583 attached - interrupt to quit
> select(7, [3 4], [], NULL, NULL
>
>
>
>
> Client side environment:
>
> OS: CentOS 5.10
> openssh-server: 0:4.3p2-82.el5.x86_64 (in CentOS notation)
> sudo: 0:1.7.2p1-29.el5_10.x86_64 (in CentOS notation), 1.7.2p1 (in sudo -V)
> rsync: 0:3.0.6-4.el5_7.1.x86_64 (in CentOS notation), 3.0.6  protocol version 
> 30 (in rsync --version)
> rsync capabilities:
>  64-bit files, 64-bit inums, 64-bit timestamps, 64-bit long ints,
>  socketpairs, hardlinks, symlinks, IPv6, batchfiles, inplace,
>  append, ACLs, xattrs, iconv, no symtimes
>
> Process list part (ps -eFH):
> root  6460 1  0 16202  1212  28 Apr15 ?00:00:00   
> /usr/sbin/sshd
> root  5111  6460  0 22719  3320  15 03:32 ?00:00:00 sshd: bpc 
> [priv]
> bpc   5113  5111  0 22755  1868  31 03:32 ?00:00:00   sshd: 
> bpc@notty
> root  5114  5113  0 26640  2252   7 03:32 ?00:00:00 
> /usr/bin/sudo /usr/bin/rsync --server --sender --numeric-ids --perms --owner 
> --group -D --links --hard-links --times --block-size=2048 --recursive 
> --checksum-seed=32761 --ignore-times . /home/httpd/wgw-eu/wg_wallet/
> root  5115  5114  0 16597  1376   8 03:32 ?00:00:00   
> /usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D 
> --links --hard-links --times --block-size=2048 --recursive 
> --checksum-seed=32761 --ignore-times . /home/httpd/wgw-eu/wg_wallet/
> Traces parts (strace -f -p <pid>):
>
> sshd: bpc [priv] (PID 5111):
> Process 5111 attached - interrupt to quit
> read(6,
>
> sshd: bpc@notty (PID 5113):
> Process 5113 attached - interrupt to quit
> select(12, [3 6 9 11], [], NU

Re: [BackupPC-users] Restore from commandline

2014-06-11 Thread Carl Cravens
On 06/11/2014 10:41 AM, Kenneth Bergholm wrote:
> I have a problem regarding restoring files. If I try to restore from
> the browser it times out when it tries to list the files; there are over
> 40 files in the folder (oden)...

What errors or behavior are you seeing?  Using a '.' or '/' doesn't matter in 
my experience on Linux, as you're specifying a path or filename to restore.  
('.' means "the current directory", which would be relative to the share 
specified.)

-s is technically not a folder, it's the *share* name.  If you are backing up 
(on Linux) /, /boot, /usr and /home (listed as separate shares in the config), 
and you want to restore Fred Fish's directory "foo" from his home directory 
(which is /home/fredfish), your command would look like this:

BackupPC_tarCreate -h oden -n -1 -s /home fredfish/foo > restore.tar

If you're just backing up one large share, "/" (the root filesystem that 
contains everything on Linux), your command would look like this...

BackupPC_tarCreate -h oden -n -1 -s / home/fredfish/foo > restore.tar

Add a -l to the command to just list the matched filenames... what does it 
return?

(I do restores from the command-line all the time, using netcat to stream 
directly to the target machine.)
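
The netcat trick looks roughly like this (hostname and port are made up, and 
some netcat variants spell the listen flags differently):

# on the target machine, a listener that unpacks whatever arrives
nc -l -p 9000 | tar xpf -

# on the BackupPC server, stream the archive straight into it
BackupPC_tarCreate -h oden -n -1 -s /home fredfish/foo | nc target.example.com 9000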

-- 
Carl D Cravens (ccrav...@excelii.com), Ext 228 (620.327.1228)
Lead System Architect



[BackupPC-users] rsync file list "myth"?

2014-05-15 Thread Carl Cravens
> rsync will load the whole directory tree at both ends before starting
> to walk for the comparison

I keep seeing statements to this effect, but it hasn't been true for rsync 
defaults for years.  From the rsync manpage:

    Beginning with rsync 3.0.0, the recursive algorithm used is now an
    incremental scan that uses much less memory than before and begins the
    transfer after the scanning of the first few directories have been
    completed.  This incremental scan only affects our recursion algorithm,
    and does not change a non-recursive transfer.  It is also only possible
    when both ends of the transfer are at least version 3.0.0.

3.0.0 was released in March of 2008.

--no-inc-recursive will turn this off, but I don't see that (or any flags that 
assume that) in the default configuration.

This algorithm does mean that hard-linked files may result in rsync 
transferring data that already exists on the target because it hasn't 
discovered the hard-link yet.
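
If the old behavior matters to you (say, so hard links are discovered before 
anything is transferred), the flag goes on the command line like any other (a 
plain rsync sketch, not BackupPC's own invocation):

rsync -aH --no-inc-recursive /srcdir/ backuphost:/dstdir/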

-- 
Carl D Cravens (ccrav...@excelii.com), Ext 228 (620.327.1228)
Lead System Architect



Re: [BackupPC-users] Have to set Apache User to BackupPC?

2014-05-15 Thread Carl Cravens
Debian does it by making index.cgi setuid backuppc.  Now that's a binary and 
not a script, and I don't know if that's the standard BackupPC or if the Debian 
maintainer has written a setuid wrapper.

Another way to do it is to set up a separate virtual host for BackupPC and use 
suexec to run the entire virtual host as the backuppc user.  I use this a lot 
on my webservers.


 ServerName backuppc.sampledomain.net

 SuexecUserGroup backuppc backuppc

...



suexec is an Apache module, so you have to install/enable it first.
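
On CentOS the module usually ships with httpd already; a quick check (with the 
output you'd hope to see) looks like this:

httpd -M 2>/dev/null | grep suexec
 suexec_module (shared)

On Debian/Ubuntu it's a separate package: install apache2-suexec (the exact 
package name varies a bit by release), then "a2enmod suexec" and restart Apache.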


On 05/15/2014 02:52 AM, 李欣 wrote:
> Hi,
>
> I am trying to install and configure BackupPC on a CentOS 6.3 server by 
> following this link:
>
> http://wiki.centos.org/HowTos/BackupPC
>
> In Configure Apache section,
> it says I have to change User *apache *to User *backupPC*.
>
> Do I have to set User to backupPC?
> Is there any way to get around it?
>
> I am trying to backup a local website server,
> and it has to run the website by User apache.
>
> Thanks in advance.
> LI Xin


-- 
Carl D Cravens (ra...@phoenyx.net)
ANY system works with enough hammer thumps.



Re: [BackupPC-users] Backuppc and KVM virtual machine

2014-01-03 Thread Carl Cravens
The VM isn't an issue (my BackupPC has run in a KVM guest for over two years), 
but using "file-based" storage (a filesystem inside a file stored on another 
filesystem) is extremely inefficient (we recently ran benchmarks on file versus 
raw device... the performance is awful in comparison).

Use a raw device (raw disk, LVM volume, etc) with the virtio driver.  Your OS 
and backup storage can reside on the same raw device, or they can be separate 
raw devices, or if you insist, the OS can be virtual file storage.  We just use 
raw devices (LVM logical volumes) for everything.

My disk configuration looks like this (virsh dumpxml <domain>):

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/diskbackup/diskbackup'/>
      <target dev='vdb' bus='virtio'/>
    </disk>

"/dev/diskbackup/diskbackup" is a logical volume named "diskbackup" in a volume 
group named "diskbackup".  (I didn't configure this... traditionally, this 
would be "/dev/vgdiskbackup/lvdiskbackup" or such.)
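
Setting one of these up from scratch is roughly (volume group, size and guest 
name invented for the example):

lvcreate -L 2T -n diskbackup vg0
virsh attach-disk backuppc-guest /dev/vg0/diskbackup vdb --targetbus virtio --persistent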

-- 
Carl D Cravens (ccrav...@excelii.com), Ext 228 (620.327.1228)
Lead System Architect



Re: [BackupPC-users] BackupPC for laptops?

2013-12-19 Thread Carl Cravens
On 12/04/2013 01:34 PM, Russell R Poyner wrote:
> In my new position I'm looking for options for backing up laptops and
> tablets. Most of these machines rarely connect to our wired network or
> vpn.

We solved this by creating a script on the client side that rsync's the user's 
data to a central file server (which is backed up by BackupPC).  That way the 
client has control over when the backup runs.  The user actually has control... 
they get a pop-up when the client wants to do a backup, and they have the 
option of postponing it, in case they're on a slow connection.  They can 
manually launch it at any time, as well.  (And they get a warning... it's on 
their own head if they postpone the backups indefinitely and then want us to 
recover a file they never allowed to be backed up.)

We opened an alternate port in the firewall for rsync (via ssh) to connect to 
the file server, and the authorized_keys entry restricts the connection to 
running rsync in server mode against a specific directory.  So the laptops 
come through the firewall and don't have to be connected to VPN to do a backup.
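
The authorized_keys restriction is roughly this pattern (a sketch; the 
validator path, drop directory and key are placeholders, not our production 
setup).  The forced command on the key:

command="/usr/local/bin/validate-rsync",no-pty,no-port-forwarding,no-agent-forwarding ssh-rsa AAAA... laptop-backup

...and validate-rsync only lets rsync server mode through, pinned to the drop 
directory:

#!/bin/sh
case "$SSH_ORIGINAL_COMMAND" in
    "rsync --server "*" /srv/laptop-backups/"*)
        exec $SSH_ORIGINAL_COMMAND
        ;;
    *)
        echo "rejected: $SSH_ORIGINAL_COMMAND" >&2
        exit 1
        ;;
esac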

I can share the code for all that if anybody's interested.


For Android tablets, I use FolderSync, by Tacit Dynamics.  (It's not doing an 
"Android" backup.  If you want that, there are specific apps for that.  I use 
My Backup Pro, by Rerware.)

FolderSync supports Dropbox, Sugar, etc, but I use it over SFTP to our own 
server.  It supports two-way sync, and I use it to sync a data directory.  I 
can drop files on the "network" side and they appear on my tablet 
automatically, and vice-versa.  Sadly, it doesn't yet support rsync, but the 
author is planning to add it.  It's pretty good, but it can't do file 
comparisons... it transfers the whole file if any meta-data changes.

For iOS... well, those users are SOL.  IT never said we'd support them.

-- 
Carl D Cravens (ra...@phoenyx.net)
Talk is cheap because supply inevitably exceeds demand.



Re: [BackupPC-users] first time full and then only increamental backup

2013-12-19 Thread Carl Cravens
It took me two years of using BackupPC to realize this: BackupPC doesn't treat 
"full" and "incremental" quite the way you expect.

When BackupPC does an rsync incremental, it compares to the last full for 
purposes of deciding what to download and *then* does deduplication against the 
pool.

If you do a full on Sunday, and then a user saves a 100G file to the client, 
every rsync incremental will transfer the entirety of that 100G of data, over 
and over, until that file is backed up in the next full.  So if you take one 
full and never take a full again, your incremental sizes will keep growing, and 
BackupPC will keep downloading the same data over and over.

If you do what you want, you'll actually be creating the problem you're trying 
to avoid.

rsync full backups don't download all the raw data every time... the main 
reason they take longer is running checksums against every file instead of just 
the files that have changed meta-data.  If a full transferred all the data 
every time, full backups of my remote server would take hours instead of 30 
minutes.  It's still taking advantage of rsync to not transfer bytes it doesn't 
need.

--
Carl D Cravens (ra...@phoenyx.net)
"Then I will smite thee with adware!" - Bucky Katt




Re: [BackupPC-users] Filesystem?

2013-12-02 Thread Carl Cravens
My experience troubleshooting I/O performance over iSCSI is that Ext4 
journaling has a much higher CPU overhead than XFS does.  Papers I've read show 
evidence that "modern" XFS journaling scales better than 
Ext4 as disks grow larger.  http://lwn.net/Articles/476263/

As a sysadmin, I like XFS management tools better than I do Ext4's.

On 12/02/2013 01:15 PM, Hans Kraus wrote:
> Am 02.12.2013 16:00, schrieb absolutely_f...@libero.it:
>> Hi,
>> I'm using BackupPC 3.2.1-4 (official Debian 7 package).
>> I'm going to configure an external storage (Coraid) in order to backup 
>> several
>> server (mostly Linux).
>> What kind of file system do you suggest?
>> Array is 7 TB large (raid6).
>> Thank you very much
>
> Hi,
> I've chosen Ext4 when facing the same problem some months ago.  The
> reason behind this decision was that Ext4 seemed the best 'general
> purpose' FS. Maybe one of the developers can shed more light on this.
>
> Regards, Hans
>

-- 
Carl D Cravens (ccrav...@excelii.com), Ext 228 (620.327.1228)
Lead System Architect



Re: [BackupPC-users] Starting with backuppc

2013-11-11 Thread Carl Cravens
JLuc,

Did you run the init.d script as root?  The best way to start a process 
normally started at boot on Debian or a derivative (such as Ubuntu) is with...

$ sudo invoke-rc.d backuppc start

This ensures that it starts in the same environment that it does at boot, and 
doesn't pick up any oddities from your own environment.

In most cases, the package manager (APT) will start a newly-installed process 
automatically.

$ ps -ef |grep BackupPC

will show if BackupPC is running:

$ ps -ef |grep BackupPC
raven      770   590  0 11:05 pts/3    00:00:00 grep BackupPC
backuppc  2437     1  0 Nov09 ?        00:00:02 /usr/bin/perl /usr/share/backuppc/bin/BackupPC -d
backuppc  2464  2437  0 Nov09 ?        00:00:52 /usr/bin/perl /usr/share/backuppc/bin/BackupPC_trashClean

BackupPC's "config.pl" is not the same as other "configure.pl" scripts... it's 
not a script you run, but a configuration file that BackupPC loads when it 
starts.  (BackupPC stores configuration and statistic data in Perl scripts that 
it can just "run" to load.)  If you were to run it by hand, nothing would 
happen, because it just sets a bunch of variables and exits.

Assuming BackupPC is running, I suspect the core problem is likely your 
webserver configuration or access.  You said "my computer" so I assume BackupPC 
is installed on your personal workstation.  Try...

http://127.0.0.1/backuppc/
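
If a browser isn't handy, curl takes the guesswork out of it:

curl -sI http://127.0.0.1/backuppc/ | head -n1

A 401 there is actually good news (that's the authentication prompt); a 403, 
404 or 500 points back at the Apache configuration.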

If you don't get prompted for a username and password, what happens instead?  If 
you get an error, report the error in detail and that will get us further along.

-- 
Carl D Cravens (ccrav...@excelii.com), Ext 228 (620.327.1228)
Lead System Architect



Re: [BackupPC-users] BackupFilesExclude

2013-02-15 Thread Carl Cravens
You don't need the trailing /* with rsync.  I use it in a few places, but only 
because it excludes everything under the directory while still backing up the 
directory entry itself, which is useful for restoring directory meta-data.

Note that the excludes are simply passed to the underlying transport (rsync, 
tar, etc)... see "INCLUDE/EXCLUDE PATTERN RULES" in the rsync man page for how 
they will get handled.

Mine typically looks like this (which is what the examples in the docs look 
like):

$Conf{BackupFilesExclude} = {
 '/mydata' => [
   '/vmware',
   '/xen',
   '/kvm'
 ]
};
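
Since they're passed straight through, you can sanity-check a pattern with a 
plain rsync dry run outside BackupPC (using the paths from the example above):

rsync -avn --exclude=/vmware --exclude=/xen --exclude=/kvm /mydata/ /tmp/exclude-test/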

-- 
Carl D Cravens (ccrav...@excelii.com), Ext 228 (620.327.1228)
Lead System Architect



Re: [BackupPC-users] Some usefull features for the future

2013-02-13 Thread Carl Cravens
I've built up quite a list of features I could really use...

* prioritization (the ERP system gets top priority every 24 hours, no matter 
how many hosts are further past-due)
* exclusive groups (don't back up more than one host in group "virtual host 1" 
at a time)
* profiles (this is a Windows machine, use these defaults instead of the global 
defaults, and profiles can apply to groups)
* additive/subtractive include/exclude (instead of overriding the global 
exclude list, modify it, so that changes to the global get inherited)
* notes (ability to add notes to host entries in the web interface)

I've seriously considered hacking on BackupPC myself.  Seeing that development 
on the current version is stalled and "substantial rewrites" tend to be 
still-born in my experience, I become more and more inclined to do that.

On 01/16/2013 02:32 PM, backu...@kosowsky.org wrote:
> Jonathan Schaeffer wrote at about 10:19:52 +0100 on Wednesday, January 16, 
> 2013:
>   > Hi list,
>   >
>   > I was recently asked by some coworkers during a backuppc presentation
>   > for some usefull functionalities missing in the current stable release.
>   > Those requests are :
>   > - managing roles for the users connected to the web UI and assigning
>   > clients to a group so that users belonging to a role are able to operate
>   > only on some of the clients
>   > - being able to see the size of a directory in the web UI when browsing
>   > a client's backups.
>   >
>   > What do you think about it, does it sound feasable ?
>
> Not a question of technical feasibility but rather a question about
> the state of active development
>
> No further development is occurring on 3.x. The last release was a
> bug fix release a few years back.
>
> The lead (and sole) developer was reportedly working on a substantial
> re-write on a 4.x tree; however, the last communication from him on
> this topic was almost 2 years ago and he has rarely even participated
> in general on the list since then. So, not clear that there is any
> active development going on now. Of course, for all we know, he could
> be plowing ahead in silence...
>
> So, I guess unless you are planning on adding such changes, it is most
> unlikely to happen anytime soon if at all. If you do implement it,
> there really isn't any established process now for merging your
> additions into new releases.
>

-- 
Carl D Cravens (ccrav...@excelii.com), Ext 228 (620.327.1228)
Lead System Architect



Re: [BackupPC-users] BackupPC-Visualize - visual plot of host backup durations

2013-02-11 Thread Carl Cravens
Also merged.  Not a common occurrence, but it would cause the script to keep 
failing until a successful backup occurred.  Thanks.

On 02/06/2013 05:04 AM, Jonathan Schaeffer wrote:
> avoid the error when parsing a host that has never been backed up

-- 
Carl D Cravens (ccrav...@excelii.com), Ext 228 (620.327.1228)
Lead System Architect



Re: [BackupPC-users] BackupPC-Visualize - visual plot of host backup durations

2013-02-11 Thread Carl Cravens
Thanks.  That was dumb of me... I've done a lot of Perl coding (since Perl 4), 
but I'm a little rusty.  I don't use host sorting much, so I hadn't noticed it 
wasn't working right.

On 02/02/2013 08:19 PM, The Lunatic wrote:
>
>
> On 01/28/2013 11:46, Carl Cravens wrote:
>> I've written a little tool to help analyze BackupPC "scheduling"
>> that I thought others might find useful.  It is meant to generate
>> plots that can be viewed from the web, but it's just as easily used
>> from the command line.
>>
>> https://github.com/ravenx99/backuppc-visualize/
>>
>
> This is interesting... though the sort by host doesn't work the way
> I expect... so I changed it
>
> --- bpcviz-gatherdata.orig   2013-02-02 18:59:43.227136515 -0600
> +++ bpcviz-gatherdata        2013-02-02 19:51:38.473136369 -0600
> @@ -104,7 +104,7 @@
>  if ( $sorttype eq 'time' ) {
>      @data = sort { $b->[3] <=> $a->[3] } @data;
>  } else {
> -    @data = sort { $b->[0] <=> $a->[0] } @data;
> +    @data = sort { $a->[0] cmp $b->[0] } @data;
>  }
>
> I then hacked something up in a small html page that shows the
> image, and mod'd CGI/View.pm to show additional types using the same
> process as docs.
>
> I opted to add the extra links this way, because I wanted the plots
> to be considered internal to the BackupPC system and work whether
> the viewer is going direct or through a reverse proxy
>
> Host sorting is helpful, because on some of my bigger hosts, I break
> up what is backed up and then use lockfile so that some of the
> portions are mutually exclusive.  And, now I can see the impact of a
> big full
>

-- 
Carl D Cravens (ccrav...@excelii.com), Ext 228 (620.327.1228)
Lead System Architect



[BackupPC-users] BackupPC-Visualize - visual plot of host backup durations

2013-01-28 Thread Carl Cravens
I've written a little tool to help analyze BackupPC "scheduling" that I thought 
others might find useful.  It is meant to generate plots that can be viewed 
from the web, but it's just as easily used from the command line.

https://github.com/ravenx99/backuppc-visualize/

Over the past year+ that we've been using BackupPC at my company, we've 
struggled with getting all our machines backed up every 24 hours.  We have over 
60 Linux and Windows hosts, and over half of those are virtual machines all 
attached to a single EMC storage server, and a single full backup of all the 
hosts is at 2 TB of data after dedupe.

Because of the load backups create on our VMs, our default is to blackout 6 AM 
to 6 PM, and then allow only trivial machines to back up during the day.  We 
restrict the number of concurrent backups to 3, due to the serious load it 
creates on a VM host when more than one of its guests is being backed up at the 
same time.

This has caused a problem with getting backups of critical machines on a 
24-hour cycle.  In order to see how the backups are interacting, I wrote 
BackupPC-Visualize to create a broken-bar plot (a "timeline") indicating the 
time during which each backup ran over the last N days.  This aggregate view of 
the data quickly identified our problem machines.  (Our solution has been to 
use cron to force the full backups of problem machines to run on the weekend.  
A kludge, and I'd like to find a more elegant solution.)

bpcviz consists of a simple Perl script that gathers and massages the 
/var/lib/backuppc/pc/<host>/backups data, and a Ploticus plot script to convert 
the data to visual form.  I tried to use Gnuplot initially, but it won't do 
horizontal plots, and it handles multiple-interval time data very poorly (via a 
kludge).

Let me know if you find it useful, and I'm open to considering feature 
requests... you can open an "issue" on Github tagged as "enhancement", or just 
drop me an email if Github is a barrier for you.

(I hope the author of BackupPC doesn't object to the name... it's the most 
obvious way to make it clear that this tool works with BackupPC.)

-- 
Carl D Cravens (ccrav...@excelii.com), Ext 228 (620.327.1228)
Lead System Architect



Re: [BackupPC-users] Offsite copy

2012-06-26 Thread Carl Cravens
I manage my offsite disaster-recovery backup (which is rsync'd over the net) by:

+ Creating a directory of links that point only to the most recent full and 
incremental for each host.  (Incrementals always go against the last full.)
+ rsync'ing each host directory individually (roughly sketched below).  This 
breaks dedupe between hosts, but it's a compromise against rsync not handling 
the huge file list well.
+ Making a tarball of the current backuppc config and copying it to the offsite 
box, so there's always a copy of the config "in the clear", with documentation 
on how to bring backuppc up on the remote box.
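
The per-host sync is, very roughly, this shape (a sketch with invented paths; 
the real scripts do more bookkeeping):

for host in $(ls /var/lib/backuppc/offsite-links); do
    rsync -aH --delete \
        /var/lib/backuppc/offsite-links/$host/ \
        offsite:/var/lib/backuppc/pc/$host/
done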

So my offsite backup always contains the most recent backup image.  It's not a 
"drop in place" snapshot, but it's up to date and contains all the information 
necessary to quickly bring backuppc up on the offsite box, without carrying 
unnecessary incrementals or older fulls.  My offsite is about 1.8 TB, my local 
is 2.6 TB.

I have scripts I can share.

On 06/26/2012 12:33 PM, Timothy J Massey wrote:
> shorvath  wrote on 06/26/2012 01:02:00 PM:
>
>  > I wouldn't  want to rsync  /var/lib/backuppc as this is not in a
>  > format that can be readily used.
>  > What I'm after is a ready  to use snapshot, As it looks on the
>  > server I'm backing up or what it would look like if using the
>  > archive host feature but just not in tar format.
>
> While I'm at it, another point:  you say that you want a snapshot.  Great:  I 
> think you should have a snapshot, too.
>
> Unfortunately, BackupPC does not provide snapshots.  It provides files.  
> Under most circumstances (and under Windows, under *all* circumstances), 
> BackupPC is not going to allow you to do a bare-metal restore, and certainly 
> not easily.  Frankly, asking it to is a lot to ask.  But it does a tremendous 
> job of allowing you to back up huge collections of individual files, as well 
> as keeping an entire history of those files, all in a space-optimized way.
>
> There *are* tools that do a great job of snapshots, including bare-metal 
> restore.  However, they won't easily allow you to restore a single file, and 
> won't allow you to keep multiple copies of your data efficiently.  Clonezilla 
> is a good one to look at.  Using LVM snapshots is another.  If you're using 
> virtualized hosts, your virtualization engine should have snapshot 
> capabilities already (and snapshots are one of the best reasons to be using 
> virtualization in the first place).
>
> File-level and snapshot backups are complementary.  There is some overlap, 
> but one does not cover everything that the other does.  One is a hammer, one 
> is a screwdriver.  Use the right tool for the job.
>
> Tim Massey

-- 
Carl D Cravens (ccrav...@excelii.com), Ext 228 (620.327.1228)
Lead System Administrator
