On May 27, 2008, at 4:55 PM, Kurt Jasper wrote:
Jonathan Dill wrote:
SyncBackSE can make use of Shadow Copy and can usually back up files
that would be skipped by rsync; it is also very efficient, especially
when using FastBackup mode.
Thanks for mentioning FastBackup.
I won't use
On Apr 20, 2008, at 5:39 PM, Benjamin Staffin wrote:
I've run into a frustrating problem that affects only one of my backup
clients. This particular client is configured as though it is two
hosts to backuppc, such that part of its contents go to a compressed
pool and the rest goes to an
On Apr 18, 2008, at 11:32 AM, Tony Schreiner wrote:
And yes, I struggle with what needs to be backed up. The users
(bioinformatics research) can generate a couple of hundred GB of data
every day, some of it very large files, some of it hundreds of
thousands of small files, some of which needs to be saved,
On Apr 12, 2008, at 2:10 AM, Beth Morgan wrote:
What factors most influence the speed of backups? The machine I'm
using has a 1200 MHz Celeron processor with 2 GB of RAM. The OS is
CentOS. BackupPC version 3.1.0.
So far, the best success I've had is to only allow one machine to
backup at a
On Apr 18, 2008, at 2:34 PM, Tony Schreiner wrote:
I don't want to impose quotas, as appealing an idea as that sounds.
The machines are for grant funded work by a relatively small number
of total users. There are legitimate reasons for them to be
generating the amount of data that they are.
On Apr 15, 2008, at 4:54 PM, Tim Hall wrote:
Hi, can anyone comment on backup jobs with lots
of files affecting transfer time?
I have 2 big backup jobs which are taking too
long over a WAN link. Would it be advisable to
break the jobs up into many smaller jobs with
fewer files / job? Would
partimage is another efficient tool that I have used for cloning disks;
it skips blocks that the filesystem thinks are free / not in use and
supports a network mode, but the filesystem can't be in use, although it
could be read-only. If you use xfs, you could use xfs_freeze to freeze
the
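The freeze/image/thaw sequence can be sketched like this (the mount point
/data and device /dev/sdb1 are assumptions for illustration; all three
commands need root, and the device must not be imaged while writable):

```
xfs_freeze -f /data                                # flush the log and block new writes
partimage save /dev/sdb1 /backups/data.partimg.gz  # image the quiesced device
xfs_freeze -u /data                                # thaw the filesystem
```

Keep the freeze window short: any process that writes to /data will block
until the unfreeze.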
Hmm, interesting, basically an rsync wrapper. I was also thinking: How
about an unprivileged account with sudo access to run rsync as root? I
found this discussion:
http://lists.samba.org/archive/rsync/2004-August/010439.html
Turns out that is also in the BackupPC FAQ:
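The approach from that FAQ entry can be sketched as a sudoers rule plus a
tweaked per-host client command (the account name "backuppc" and the paths
are assumptions; tighten the sudo pattern to the exact rsync argument list
BackupPC sends on your setup):

```
# /etc/sudoers on the client: let the unprivileged account run only
# rsync in server/sender mode, with no password prompt.
backuppc ALL = NOPASSWD: /usr/bin/rsync --server --sender *

# Per-host BackupPC config: log in as that account and prefix rsync
# with sudo ($Conf{RsyncClientCmd} is the standard BackupPC 3.x setting).
$Conf{RsyncClientCmd} = '$sshPath -q -x -l backuppc $host sudo $rsyncPath $argList+';
```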
Joe Krahn wrote:
I did some searching and found that several people have expressed
interest in a block-device feature for rsync, but nothing has come of it
yet. I also found DRBD (Distributed Replicated Block Device), which
probably does exactly what you want.
I just found this interesting
Nils Breunese (Lemonbit) wrote:
Jonathan Dumaresq wrote:
2 hdd for the os (Mirror RAID-0?)
RAID-0 is not a mirror, RAID-1 is.
I took it to mean that he was referring to RAID 0+1 (two RAID-0 striped
sets that are mirrors of each other). In any case, RAID 1+0 would be
safer
Sean Carolan wrote:
We are using backuppc in our production environment and it's working
well. I would like to start doing regular backups of the BackupPC
server to external USB drives. The problem we've run into is that
there are so many files, rsync dies with an 'out of memory' error.
I
Hello folks,
For Linux / Unix servers, it is kind of tempting to just change system
time to UTC, forget about future modifications to DST, and get used to
making sense of the logs in UTC time. However, things like cron jobs
and BackupPC blackout times are now going to be time shifted as if
Paul Fox wrote:
why would any of this be easier than letting the system track DST
itself? even on my ancient RH7.2-based server, updating the
zoneinfo files took me all of 5 minutes. and for anything more
modern, a semi-automatic upgrade (i.e. apt-get update; apt-get
upgrade) took care of
Maybe this is a shot in the dark, I have already asked on the Linux
PowerEdge mailing list, just hoping that someone has had a very similar
problem and can help narrow down all of the possibilities.
I have a PowerEdge 1900 with Ubuntu Dapper 6.06.1 LTS x86_64 with dual
Xeon 5110 processors,
Guus Houtzager wrote:
Ok, more details please, you're being too vague. Is your Linux box
crashing (kernel oops, freeze, spontaneous reboot) or is it just the
backup failing? What version of BackupPC are you running? If you get an
oops, can you post it here? What are the last lines of the LOG file
Paul Coughlin wrote:
Has anyone tried (hopefully successfully!) to put BackupPC on a host
like 1and1.com?
I do not have root access, but I do have SSH and full permissions to
my partition.
I don't know about 1and1 specifically, but I back up my Dreamhost.com
web hosting
I just upgraded apache2 (on Debian Etch) and it looks like the way some
of the authz modules are handled has changed. First, it complained about
AuthGroupFile, so I made a symbolic link from mods-available to
mods-enabled for authz_groupfile.load, and that fixed that error. Now
it's complaining
Jonathan Dill wrote:
Starting web server (apache2)...Syntax error on line 10 of
/etc/backuppc/apache.conf:
Invalid command 'AuthUserFile', perhaps misspelled or defined by a
module not included in the server configuration
failed!
Got it: I also needed to link authn_file.load
Jonathan
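On Debian the same fix can be done with a2enmod, which creates those
mods-enabled symlinks for you (module names as shipped with Etch's
apache2.2; both commands need root):

```
a2enmod authz_groupfile
a2enmod authn_file
/etc/init.d/apache2 reload
```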
Alex Schaft wrote:
I've just installed BackupPC to replace a simple script that tarred
two folders to a remote machine. The script took about 3 hours for
40 GB of data. BackupPC has now been running for almost 7 hours (1 AM
to 7 AM), and hasn't finished yet.
Was the tar