On 11/20/2014 07:03 AM, Fanny Alexandra Oyarzún Bórquez wrote:
I have postgres and mysql databases to back up, and I wonder if it is
possible to add two scripts to DumpPreUserCmd, e.g.
DumpPreUserCmd $sshPath -q -x -l root $host
/usr/local/sbin/automysqlbackup.sh;$sshPath -q -x -l root $host
Why not dump the SQL data live to a text file and back that up? That's what
mysqldump is for... you can back up everything you need to restore an entire
database without ever taking down the mysql server. And this makes restores
much simpler... if a developer drops a table by mistake, you
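Concretely, a minimal sketch of that setup (the wrapper name, dump paths,
and dump flags here are illustrative, not from the original posts). Note
that BackupPC executes the *Cmd settings directly rather than through a
shell, so chaining two commands with ';' won't work; put both dumps in
one wrapper script on the client:

# In config.pl, one pre-dump command per host:
$Conf{DumpPreUserCmd} = '$sshPath -q -x -l root $host /usr/local/sbin/predump.sh';

#!/bin/sh
# /usr/local/sbin/predump.sh on the client (hypothetical wrapper):
# dump both databases live to text files, which BackupPC then backs
# up like any other files.
mysqldump --all-databases --single-transaction \
    > /var/backups/mysql-all.sql || exit 1
su - postgres -c 'pg_dumpall' > /var/backups/pgsql-all.sql || exit 1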
On 10/14/2014 02:53 PM, xpac wrote:
Is there a way to make it so that the BackupPC interface doesn't have to
run as user backuppc in httpd.conf? Or some other way I can do this?
Take a look at the suexec Apache module. It lets you specify which user:group
each virtual host runs as. I
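A sketch of what that can look like in httpd.conf (server name and paths
are placeholders; mod_suexec must be loaded, and suexec's compiled-in
document root and ownership checks still apply):

<VirtualHost *:80>
    ServerName backuppc.example.com
    # Run this vhost's CGI programs as backuppc:backuppc via suexec
    SuexecUserGroup backuppc backuppc
    ScriptAlias /backuppc /usr/share/backuppc/cgi-bin/index.cgi
</VirtualHost>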
Can you compile rsync from source? That's what I used to have to do on HP-UX
and other systems that didn't come with GNU tools... install GCC and then
compile everything I wanted. (Heck, that's what I used to do with everything
back on BSD/OS. I don't miss the days of having to compile Perl
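The dance is the same everywhere; the version number and prefix below are
just examples (rsync tarballs live under download.samba.org):

wget https://download.samba.org/pub/rsync/src/rsync-3.1.1.tar.gz
tar xzf rsync-3.1.1.tar.gz
cd rsync-3.1.1
./configure --prefix=/usr/local
make
make install    # as root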
# ps -ef | grep BackupPC
root      7014  6189  0 08:31 pts/0      00:00:00 grep BackupPC
backuppc 14760     1  0  2013 ?          00:37:44 /usr/bin/perl
/usr/share/backuppc/bin/BackupPC -d
backuppc 14791 14760  0  2013 ?        1-17:52:08 /usr/bin/perl
/usr/share/backuppc/bin/BackupPC_trashClean
If
I'm assuming Ubuntu is close enough to stock Debian. The main BackupPC log is
in /var/lib/backuppc/log. If you need to look at compressed logs, use
BackupPC_zcat, as the developer didn't use standard gzip-compatible compression.
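For example, with the Debian-style paths above (the log file name will
vary with rotation):

/usr/share/backuppc/bin/BackupPC_zcat /var/lib/backuppc/log/LOG.0.z | less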
My startup section looks like this:
2014-09-03 14:00:48 Reading
This is a feature I've wanted as well. I have virtual machines running on
multiple hypervisors, and I'd prefer to only run one backup per hypervisor at a
time. If I could do that, I could get away with backups during the day... but
if I back up the main file server and the groupware server at
easier to deal
with.
On 08/28/2014 09:45 AM, Les Mikesell wrote:
On Thu, Aug 28, 2014 at 8:11 AM, Carl Cravens ccrav...@excelii.com wrote:
This is a feature I've wanted as well. I have virtual machines running on
multiple hypervisors, and I'd prefer to only run one backup per hypervisor
Debian does it by making index.cgi setuid backuppc. Now that's a binary and
not a script, and I don't know if that's the standard BackupPC or if the Debian
maintainer has written a setuid wrapper.
Another way to do it is to set up a separate virtual host for BackupPC and use
suexec to run the
rsync will load the whole directory tree at both ends before starting
to walk for the comparison
I keep seeing statements to this effect, but it hasn't been true for rsync
defaults for years. From the rsync manpage:
Beginning with rsync 3.0.0, the recursive algorithm used is now an
incremental scan that uses much less memory than before and begins the
transfer after the scanning of the first few directories have been
completed.
The VM isn't an issue (my BackupPC has run in a KVM guest for over two years),
but using file-based storage (a filesystem inside a file stored on another
filesystem) is extremely inefficient (we recently ran benchmarks on file versus
raw device... the performance is awful in comparison).
Use a
It took me two years of using BackupPC to realize this... BackupPC
doesn't treat fulls and incrementals quite the way you'd expect.
When BackupPC does an rsync incremental, it compares to the last full for
purposes of deciding what to download and *then* does deduplication
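In config.pl terms (these are the stock defaults), the schedule knobs
look like this; the point is that a long FullPeriod means every
incremental keeps re-downloading whatever changed since that full:

$Conf{FullPeriod} = 6.97;  # do a full roughly weekly
$Conf{IncrPeriod} = 0.97;  # daily incrementals, each compared against the last full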
On 12/04/2013 01:34 PM, Russell R Poyner wrote:
In my new position I'm looking for options for backing up laptops and
tablets. Most of these machines rarely connect to our wired network or
vpn.
We solved this by creating a script on the client side that rsyncs the
user's data to a central
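A minimal sketch of such a client-side push (the host name and staging
path are made up here):

#!/bin/sh
# Run from cron or a network-up hook on the laptop: push the user's
# data to a staging tree on a central server that BackupPC backs up
# on its normal schedule.
rsync -a --delete "$HOME/" \
    "backup@central.example.com:/srv/laptop-staging/$(hostname)/"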
My experience troubleshooting I/O performance over iSCSI is that Ext4
journaling has a much higher CPU overhead than XFS does. Papers I've read show
evidence that modern XFS journaling scales better (better performance) than
Ext4 as disks grow larger. http://lwn.net/Articles/476263/
As a
JLuc,
Did you run the init.d script as root? The best way to start a process
normally started at boot on Debian or a derivative (such as Ubuntu) is with...
$ sudo invoke-rc.d backuppc start
This ensures that it starts in the same environment that it does at boot, and
doesn't pick up any
You don't need the trailing /* with rsync. I use it in a few places, but
only because it excludes everything under the directory while still
backing up the directory itself. Useful for restoring directory metadata.
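In config.pl the two styles look like this (the paths are only examples):

$Conf{BackupFilesExclude} = {
    '*' => [
        '/tmp',          # skip the directory and everything in it
        '/var/cache/*',  # keep the directory itself, skip its contents
    ],
};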
Note that the excludes are simply passed to the underlying transport (rsync,
tar,
I've built up quite a list of features I could really use...
* prioritization (the ERP system gets top priority every 24 hours, no matter
how many hosts are further past-due)
* exclusive groups (don't back up more than one host in the group
"virtual host 1" at a time)
* profiles (this is a Windows
Thanks. That was dumb of me... I've done a lot of Perl coding (since Perl 4),
but I'm a little rusty. I don't use host sorting much, so I hadn't noticed it
wasn't working right.
On 02/02/2013 08:19 PM, The Lunatic wrote:
On 01/28/2013 11:46, Carl Cravens wrote:
I've written a little tool
Also merged. Not a common occurrence, but it would cause the script to keep
failing until a successful backup occurred. Thanks.
On 02/06/2013 05:04 AM, Jonathan Schaeffer wrote:
avoid the error when parsing a host that has never been backed up
--
Carl D Cravens (ccrav...@excelii.com), Ext
I've written a little tool to help analyze BackupPC scheduling that I thought
others might find useful. It is meant to generate plots that can be viewed
from the web, but it's just as easily used from the command line.
https://github.com/ravenx99/backuppc-visualize/
Over the past year+ that
I manage my offsite disaster-recovery backup (which is rsync'd over the net) by:
+ Creating a directory of links that point only to the most recent full and
incremental for each host. (Incrementals always go against the last full.)
+ rsync'ing each host directory individually. This breaks dedupe
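Roughly, the link-building half looks like the sketch below (simplified:
it just links the newest two backups per host, where the real script
would read the host's backups file to pick the latest full plus its
incremental):

#!/bin/sh
# Rebuild a tree of symlinks, one directory per host, pointing at
# that host's most recent backup trees; the offsite job then rsyncs
# $OUT one host directory at a time.
TOP=/var/lib/backuppc/pc
OUT=/var/lib/backuppc/offsite
rm -rf "$OUT" && mkdir -p "$OUT"
for dir in "$TOP"/*/; do
    host=$(basename "$dir")
    mkdir -p "$OUT/$host"
    for num in $(ls "$dir" | grep -E '^[0-9]+$' | sort -n | tail -2); do
        ln -s "$TOP/$host/$num" "$OUT/$host/$num"
    done
done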