On Sat, Nov 13, 2010 at 3:20 AM, Tyler J. Wagner wrote:
> On Fri, 2010-11-12 at 21:09 -0500, B. Alexander wrote:
>> I have also found that a clean install of Debian plus an apt-get
>> dselect-upgrade (I capture the package lists daily) takes about the
>> same amount of time
On Fri, Nov 12, 2010 at 5:21 PM, Tyler J. Wagner wrote:
> B,
>
> You are doing two things differently from my own config:
>
> 1. You're using lots of RsyncShareNames, rather than just / and using
> BackupFilesOnly and BackupFilesExclude. In fact, I've never used more
> than one RsyncShareName. Do
I am using "--checksum-seed=32761" in RsyncArgs,
which enables it for everything.
--b
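[Editorial sketch: for context, here is roughly where that flag sits in a BackupPC config.pl. The surrounding arguments are typical 3.x defaults, not the poster's actual configuration; 32761 is the special seed value rsync recognizes as a request for cached checksums.]

```perl
# Hedged sketch of RsyncArgs with checksum caching enabled.
# --checksum-seed=32761 is the magic value that turns on cached
# block/file checksums (requires checksum-caching support in the
# rsync / File::RsyncP on both ends).
$Conf{RsyncArgs} = [
    '--numeric-ids', '--perms', '--owner', '--group', '-D',
    '--links', '--hard-links', '--times', '--block-size=2048',
    '--recursive',
    '--checksum-seed=32761',
];
```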
On Fri, Nov 12, 2010 at 3:10 AM, Tyler J. Wagner wrote:
> On Thu, 2010-11-11 at 17:26 -0500, B. Alexander wrote:
>> I don't think so, Les. I have been watching the backup as it runs
Nope. All hosts are Debian Linux; the backup machine runs unstable.
On Thu, Nov 11, 2010 at 4:33 PM, Jeffrey J. Kosowsky
wrote:
> Are you backing up a Windows client?
> Under cygwin 1.5 rsync, there used to be problems with it hanging in
> mid-backup.
>
> B. Alexander wrote at
I don't think so, Les. I have been watching the backup as it runs (as
Tyler suggested earlier in the thread), and if I change the order of
the directories in RsyncShareName, the last file that gets backed up
changes, but it is the same file, whether during an incremental or
full.
--b
On Thu, Nov
ael wrote:
>
>
> On Fri, Nov 12, 2010 at 4:31 AM, B. Alexander wrote:
>>
>> Okay, I set the $Conf{PartialAgeMax} to 0, rebooted the box (to clear
>> out the zombie BackupPC_dump processes and cleared out that partial
>> backup...
>>
>> Ran a full, stil
Okay, I set the $Conf{PartialAgeMax} to 0, rebooted the box (to clear
out the zombie BackupPC_dump processes and cleared out that partial
backup...
Ran a full, still hung at the same place. So I stopped that (remember,
at this point, there should not be a partial that it is running
against, unless
_dump processes) and
maybe clear out all of the backups stored for this server (since I
migrate them off of the box anyway) and see what happens.
--b
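[Editorial sketch: the setting mentioned above, as it would appear in config.pl. My reading of the value, not output from this system: with 0, no saved partial backup is ever young enough to be resumed.]

```perl
# Effectively disable resuming from a saved partial backup, so a
# possibly corrupt partial can be ruled out as the cause of a hang.
$Conf{PartialAgeMax} = 0;
```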
On Wed, Nov 10, 2010 at 9:51 AM, B. Alexander wrote:
> On Wed, Nov 10, 2010 at 9:22 AM, Tyler J. Wagner wrote:
>
>> One thing I do to identi
On Wed, Nov 10, 2010 at 9:22 AM, Tyler J. Wagner wrote:
> One thing I do to identify a file causing problems with a backup is to
> use the "tree" command (see your package manager) on the running backup:
>
> cd /var/lib/backuppc/pc/HOSTNAME
> tree NEW
>
> The last file will be the file currently
I'm seeing something sort of weird on my backup machine. I have a host
set up for the backup machine itself. I have set $Conf{BackupsDisable}
= '1'; so that it does not back up regularly, and also does not
(obviously) try to back up the pool.
I run a backup of it every 30-60 days, which is about
Hi,
I have several questions regarding backing up over a wireless lan.
This started as a specific question, but I am going to do a brain
dump, because I'm not sure what is interrelated. I beg your
indulgence. :)
To give some background, I have a Nokia N810 and a Nokia N900. Both
devices are Linu
What I do is use openvpn (http://openvpn.net) to access services behind my
firewall. This gives me very granular control, since clients require a
certificate signed by the right (internal) CA cert.
At that point, the device with vpn access becomes an extension of the
internal network and can acces
Timothy,
It's really fairly simple to set up an rpm build environment. First, you need
a .rpmmacros file that will live in your home directory. Say, for example, your
home directory is /home/tomer; your .rpmmacros file would look like:
%_topdir /home/tomer/rpm
%_tmppath /var/tmp
#%debug_package %{ni
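[Editorial sketch: the setup described above, runnable anywhere. A scratch directory stands in for the real home directory (/home/tomer in the example), and the directory names are the standard rpmbuild tree under %_topdir.]

```shell
# Use a scratch directory in place of the real home directory.
HOMEDIR="$(mktemp -d)"
TOPDIR="$HOMEDIR/rpm"

# Create the standard rpmbuild directory tree under %_topdir.
mkdir -p "$TOPDIR/BUILD" "$TOPDIR/RPMS" "$TOPDIR/SOURCES" \
         "$TOPDIR/SPECS" "$TOPDIR/SRPMS"

# Write a .rpmmacros pointing rpmbuild at that tree.
cat > "$HOMEDIR/.rpmmacros" <<EOF
%_topdir $TOPDIR
%_tmppath /var/tmp
EOF
```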
I build a native package for a couple of reasons:
- So that the package database stays up to date, and you don't
accidentally install a conflicting package down the road that might break it;
- So that the package manager tells me, before I try to build, if I am
missing anything.
Buildin
Well, it should be fairly easy to roll an rpm for the new version,
especially if you can modify the spec file from 3.1.0...
Is this not an option?
--b
On Fri, Oct 15, 2010 at 9:44 AM, Timothy Omer wrote:
> CentOS release 5.4 (Final)
> BackupPC 3.1.0 via Yum
>
> Hey all,
>
> I installed BackupP
Was there a major change to the way that pooling is handled from 3.1.2 to
3.2.0? Rangel's email about the graphs disappearing made me look at mine. I
have them, but I noticed that the pool size dropped significantly, from
325GB to 250GB during week 37, about the time I installed 3.2.0. I'm not
comp
> though, considering that only squid and iptables are the main software
> used? Of course, I have some tiny tools like webalizer (which involves the
> use of /var/www) and webmin (which I hope has all its configs in /etc/).
> Also crontabs are vital for me...
>
> Many thanks,
> F.
>
Hi Flavio,
If I am honest, I only back up critical files on the system (/boot,
/lib/modules, /home, /root, /etc, /usr/local, etc) using backuppc, because I
can regenerate a Debian base install without packages in about 15 minutes,
and with a relatively small package list like is generally on a fir
I have been running backuppc on Debian sid for over five years without a
problem. I don't upgrade it daily, mainly when a high-priority or
security-related fix is released or when backuppc is upgraded.
IMHO, unstable tends to be more stable than testing. It also tends to be a
rolling release, which i
Thanks. I'll have a look.
On Wed, Sep 8, 2010 at 10:58 AM, Royden Yates wrote:
>
> - Original message -
> > Has anyone tried the deb that I rolled? I haven't had a chance to install
> > myself, but was mildly surprised to get no feedback. I figure that means
> > either it worked or nobody
Has anyone tried the deb that I rolled? I haven't had a chance to install
myself, but was mildly surprised to get no feedback. I figure that means
either it worked or nobody's tried it. :)
--b
It was attached to my last email in this thread.
--b
On Mon, Aug 30, 2010 at 8:58 AM, Saturn2888 <
backuppc-fo...@backupcentral.com> wrote:
> Ok then, how do I get a hold of this version?
>
>
> B. Alexander wrote:
> >
> > > From what I saw (I'm running rea
Ubuntu Lucid on that machine so I don't know if it'd be compatible.
> I'm also not really liking to test things when they're associated with my
> backup machine, haha. Test is bad, working is good.
>
>
> B. Alexander wrote:
> > Here is the one I built. Note that I
They are for me. I use rsync and my FullKeepCnt is set to:
$Conf{FullKeepCnt} = [ 1, 0, 1, 0, 0, 1 ];
which nets me 5 incrementals, 1 weekly and a yearly, which works out well
for me.
Is that what you are asking?
--b
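[Editorial sketch: my reading of how BackupPC interprets that array. Element i keeps that many full backups at an age of roughly 2^i full periods; the ages below assume $Conf{FullPeriod} near 7 days, and are annotations, not output from this configuration.]

```perl
# Exponentially spaced full-backup retention:
$Conf{FullKeepCnt} = [ 1, 0, 1, 0, 0, 1 ];
#  index 0: 1 full at  1 x FullPeriod  (~1 week, the most recent)
#  index 2: 1 full at  4 x FullPeriod  (~4 weeks, the "weekly/monthly")
#  index 5: 1 full at 32 x FullPeriod  (~7-8 months, the long-term one)
```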
On Thu, Aug 26, 2010 at 2:59 AM, Innop wrote:
> Hello,
>
> Is it that incr
I have emailed the Debian maintainer for backuppc, Ludovic Drolez, to see
what his plans are for backuppc. I have rolled my own 3.2.0 deb, but haven't
installed it yet. If I don't hear from him or if he has no plans, I may try
to do a Non-maintainer upload.
More information once I hear from him.
-
Actually (and I just realized that I needed to update this), the "Child is
aborting" message came because I rebooted the client.
I don't know why, but everything started behaving itself this week... I had
a bunch of "urgency=high" Debian updates, so both the server and the client
were upgraded. I
Hey folks,
I've been using BackupPC to back up my network for something like 4 years. I
am quite comfortable with that part. This query is more general. All of the
machines on my network are Linux, so my experience is very... one-sided.
When I back up a Linux box, my goal is to preserve not only t
Sorry about that. Backup is done using rsync.
I bumped the verbosity to 4 and perused the logs, but didn't see
anything glaringly wrong. The end of the last bad xferlog showed:
Skipping info/zenoss-stack.prerm (same attr)
Skipping info/zlib1g.list (same attr)
Skipping info/zlib1g.md5s
Hey,
I have a single virtual host (OpenVZ) that never completes a backup. Neither
incrementals nor fulls complete. I don't see any errors in the logs:
2010-06-15 18:14:19 incr backup started back to 2010-06-12 07:00:02
(backup #624) for directory /lib/modules
2010-06-15 18:14:19 incr backup start
(Damn, I thought I had posted it to the list, but I can't find it)
I was thinking about setting up something like this by editing the config.cf.
I was more concerned with dividing by distribution, but I guess you could do
it by function. What I was thinking of doing is setting a variable that
cont
Or if you are into overkill (or have other tasks that you want to manage
semi-centrally), you might want to take a look at cfengine. For those that
are not familiar, cfengine is less of a configuration management tool than
it is a "promise engine." By that, I mean that you give it promises, such as
I do something similar in cfengine. In essence (I can post the files here if
there is interest), ensure the backuppc user is created and make sure it has
keys. I also use the backuppc user to perform backups (backuppc -> root just
had too many security implications), so I make sure that the necess
Actually, I haven't ever used the gui other than to start/stop backups and
for status. All of the config file munging I do is on the command line.
I'll start playing with this and report back.
--b
On Fri, May 14, 2010 at 9:07 AM, Bowie Bailey wrote:
> If you are maintaining your config files
I just wanted to thank Craig and the crew who wrote backuppc, since it saved
me yet again. The hard drive in my daughter's machine died (and died hard;
nothing was recoverable). I was able to throw another hard drive in it,
build from a Debian CD and restore in less than 2 hours.
One thing that I
For me, I use a backup client mainly for unique apps and data. Since most of
my boxes run Debian, I have pretty much figured out the directories to
backup/restore to save the box.
I back up the following Debian-related directories:
/var/backups
/var/cache/apt (less /var/cache/apt/archives)
/var/l
That's funny. The first thing that popped into my mind is something a former
coworker used to say: "the beatings will continue until morale improves."
Seriously, though, without turning this into a hard drive flame war (I know
we all have our war stories), is there a particular brand
The easiest way to get exim working is to
dpkg-reconfigure exim4-config
then answer the questions. For the record, I run backuppc on Debian/sid, and
have been running it for about 4 years.
--b
On Thu, Apr 29, 2010 at 3:42 PM, Eddie Gonzales wrote:
> My issues were mainly due to my lack of Linux
Thanks Tyler,
However, for snapshotting to work, don't you have to have at least as much
space for the snapshot as you do for the original partition? I currently am
using nearly 60% of the VG just for backups, which is the root of the
problem. It wasn't apparent in my scanning of the article (I'll
If this is covered somewhere in the wiki or elsewhere, please point me to
it.
With the problems I had with disk space on my backup server recently, I am
considering moving it to a machine that has SATA and a 1.5TB drive. However,
I would rather not start over again; I want to migrate the data over
On Sun, Apr 18, 2010 at 11:40 PM, Les Mikesell wrote:
> B. Alexander wrote:
> > The main problem with this machine is that it has two IDE drive
> > slots. Anymore, the largest IDE drives I have been able to find are
> > 500GB. So I have a 500 and a 250 in the machine.
>
the mount point, would doing an rsync -avH
copy everything over correctly?
Thanks again,
--b
On Sun, Apr 18, 2010 at 3:58 PM, Matthias Meyer wrote:
> B. Alexander wrote:
>
> > Hi all,
> >
> > I shot myself in the foot, and need to pick your brains about how to
> >
Hi all,
I shot myself in the foot, and need to pick your brains about how to
recover. My backup machine has a 500GB drive using LVM for my backup
partition. I managed to fill it up. Currently, backups are not running
(which makes me nervous), and the backuppc partition on this machine is at:
/dev