Hello,
After a longer downtime, my backuppc daemon came back online and
started a full backup of all hosts, which was quite
a resource-intensive process.
In 6.97 days there will again be a full backup of all hosts. I wonder
how I could go about distributing that over the period of the week
so tha
Hi Martin,
Sure. Take a look at
http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=Distribute_Full_Backups
Cheers,
Stephen
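In case that wiki page moves, the basic idea can be sketched as follows (a sketch only,
not the page verbatim): kick off the first full for each host on a different day, and the
default $Conf{FullPeriod} of 6.97 days keeps the offsets roughly in place afterwards.
The path and host name below are assumptions for a Debian-style install:

    # Force an immediate full for one host today, another tomorrow, etc.
    # (final argument: 1 = full, 0 = incremental)
    sudo -u backuppc /usr/share/backuppc/bin/BackupPC_serverMesg "backup hostA hostA backuppc 1"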
On Thu, 11 Nov 2010, martin f krafft wrote:
> Hello,
>
> After a longer downtime, my backuppc daemon came back online and
> started a full backup of all hosts,
I start the backups with the option "full backup" manually, and then the
next full backup will be in 6.97 days ;-).
But I don't think that the next full backup will involve such a heavy
workload.
Are there many changing files? On my servers, the changes are within
5 GB each. Bu
Okay, I set $Conf{PartialAgeMax} to 0, rebooted the box (to clear
out the zombie BackupPC_dump processes), and cleared out that partial
backup...
Ran a full, still hung at the same place. So I stopped that (remember,
at this point, there should not be a partial that it is running
against, unless
On Fri, Nov 12, 2010 at 4:31 AM, B. Alexander wrote:
> Okay, I set the $Conf{PartialAgeMax} to 0, rebooted the box (to clear
> out the zombie BackupPC_dump processes and cleared out that partial
> backup...
>
> Ran a full, still hung at the same place. So I stopped that (remember,
> at this point
It's a reiserfs3 filesystem, which I have been using for as long as I
have been using backuppc. Unfortunately, it doesn't use inodes in the
traditional way. However, when I ran an fsck.reiserfs3, I did not see
any errors like that jump out at me.
What's more, if it were filling up all its re
Paranoid person that I am, I followed the instructions here (
http://backuppc.sourceforge.net/faq/localhost.html) to run tar in a way that
would not allow the backuppc user to become root.
I tried wrapping tar up using the script in the documentation:
#!/bin/sh -f
exec /bin/tar -c $*
And that wor
>> you want rsync -H
>> I've used rsync -qPHSa with some success. However, if you have lots of
>> hardlinks and not terribly much memory, rsync gobbles memory in proportion to
>> how many hardlinks it's trying to match up. So, ironically, I use
>> storebackup to make an offsite copy of my backuppc
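For concreteness, the kind of pool copy being described would look roughly like this
(the paths and destination host are made up, not from the thread; -H preserves the
hardlinks that make a BackupPC pool space-efficient, -S handles sparse files):

    rsync -qPHSa /var/lib/backuppc/ offsite-host:/srv/backuppc-copy/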
On 11/11 02:12 , Rob Poe wrote:
> Memory use is not an issue, as the backup servers are dedicated machines
> at each site (2 sites).
Even with dedicated machines, memory usage can still grow to intractable
levels. If you're only dealing with a few tens of GB and a few million
files, it's not
On Thu, Nov 11, 2010 at 02:43:12PM -0500, Frank J. Gómez wrote:
> Paranoid person that I am, I followed the instructions here (
> http://backuppc.sourceforge.net/faq/localhost.html) to run tar in a way that
> would not allow the backuppc user to become root.
>
> I tried wrapping tar up using the s
Are you backing up a Windows client?
Under cygwin 1.5 rsync, there used to be problems with it hanging in
mid-backup.
B. Alexander wrote at about 13:40:53 -0500 on Thursday, November 11, 2010:
> It's a reiserfs3 filesystem, which I have been using for as long as I
> have been using backuppc. Unf
On 11/11/2010 3:33 PM, Jeffrey J. Kosowsky wrote:
> Are you backing up a Windows client?
> Under cygwin 1.5 rsync, there used to be problems with it hanging in
> mid-backup.
Or if it is Linux/unix, could you have hit a sparse (dbm type) file that
appears large when you read it even though it does
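A quick way to check for that kind of sparse file (the path here is just an example)
is to compare the apparent size with the blocks actually allocated:

    du -h --apparent-size /var/lib/some.dbm   # logical size, as a backup tool reads it
    du -h /var/lib/some.dbm                   # blocks actually allocated on disk

If the first number is far larger than the second, the file is sparse and will read
as mostly zeros during the backup.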
On 11/11 02:43 , Frank J. Gómez wrote:
> Paranoid person that I am, I followed the instructions here (
> http://backuppc.sourceforge.net/faq/localhost.html) to run tar in a way that
> would not allow the backuppc user to become root.
Is there a reason you're using tar rather than rsync? I run rsyn
I don't think so, Les. I have been watching the backup as it runs (as
Tyler suggested earlier in the thread), and if I change the order of
the directories in RsyncShareName, the last file that gets backed up
changes, but it is the same file, whether during an incremental or
full.
--b
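For anyone following along: one way to watch a backup as it runs, assuming a
Debian-style install path, is to start the dump by hand in the foreground:

    # -v prints per-file progress, -f forces a full
    sudo -u backuppc /usr/share/backuppc/bin/BackupPC_dump -v -f hostname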
On Thu, Nov
Nope. All hosts are Debian Linux; the backup machine runs unstable.
On Thu, Nov 11, 2010 at 4:33 PM, Jeffrey J. Kosowsky
wrote:
> Are you backing up a Windows client?
> Under cygwin 1.5 rsync, there used to be problems with it hanging in
> mid-backup.
>
> B. Alexander wrote at about 13:40:53 -050
Hm. You know, I thought I'd tried that. Maybe I left out the quotes. That
did the trick. Thanks!
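For the archives, the fix being referred to is presumably quoting the argument
expansion in the wrapper from the FAQ, i.e. something like:

    #!/bin/sh -f
    # "$@" keeps each argument intact (including ones with spaces),
    # unlike an unquoted $*
    exec /bin/tar -c "$@"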
On Thu, Nov 11, 2010 at 2:58 PM, John Rouillard wrote:
> On Thu, Nov 11, 2010 at 02:43:12PM -0500, Frank J. Gómez wrote:
> > Paranoid person that I am, I followed the instructions here (
> > http
Tar just seemed simpler.
On Thu, Nov 11, 2010 at 4:59 PM, Carl Wilhelm Soderstrom <
chr...@real-time.com> wrote:
> On 11/11 02:43 , Frank J. Gómez wrote:
> > Paranoid person that I am, I followed the instructions here (
> > http://backuppc.sourceforge.net/faq/localhost.html) to run tar in a way
>
Sorry to be banging on the list so much lately, but I've got another issue I
don't understand...
I've modified a copy of BackupPC_archiveHost, with the changes being:
- I'm piping output to gpg after compression and before splitting
- I want my archive filenames in this format: 0.$host.tar.
On 11/11/2010 5:21 PM, Frank J. Gómez wrote:
> Sorry to be banging on the list so much lately, but I've got another
> issue I don't understand...
>
> I've modified a copy of BackupPC_archiveHost, with the changes being:
>
> * I'm piping output to gpg after compression and before splitting
>
That's basically what I've done... the command that should run is:
BackupPC_tarCreate -t -h $host -n $bkupNum -s $share . | /bin/gzip
| /usr/bin/gpg -r $gpgUser --encrypt | /usr/bin/split -b 65 -
$outLoc/0.$host.tar$fileExt.
For some reason I'm getting stuff on that split error, though. I'll
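For readability, the intended pipeline amounts to something like the following sketch:
$gpgUser is the variable added for the encryption step, the other variables come from
the stock BackupPC_archiveHost, the tarCreate path assumes a Debian install, and the
split size in the quoted command looks truncated, so the value here is only a placeholder:

    /usr/share/backuppc/bin/BackupPC_tarCreate -t -h "$host" -n "$bkupNum" -s "$share" . \
        | /bin/gzip \
        | /usr/bin/gpg -r "$gpgUser" --encrypt \
        | /usr/bin/split -b 650M - "$outLoc/0.$host.tar$fileExt."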
Hi
I have a machine that has about 100 GB of data to back up via
ssh+rsync, but its network connection is only about 1 Mbit/s, so it will take
ages to do even the first backup.
I already have one old backup of that machine on the backuppc server,
done via plain rsync (not backuppc).
Take the machine to the remote and pre-load from local???
On 11/11/2010 7:39 PM, higuita wrote:
Hi
i have a machine that have about 100GB of data to backup, via
ssh+rsync, but its network connection is about 1Mb, so it will take ages
to do even the first backup.
i already have
On Fri, Nov 12, 2010 at 01:39, higuita wrote:
> Hi
>
> i have a machine that have about 100GB of data to backup, via
> ssh+rsync, but its network connection is about 1Mb, so it will take ages
> to do even the first backup.
>
> i already have on the backuppc server one old backup of
>
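One concrete form of the "pre-load locally" suggestion, sketched under the assumption
that the client (or the backup server) can temporarily sit on the same LAN: point
BackupPC at a local address for the first full via the per-host config (the path and
address below are made up), then remove the override once the initial transfer is done.

    # e.g. in /etc/backuppc/pc/remotehost.pl (Debian-style path, assumed)
    $Conf{ClientNameAlias} = '192.168.1.50';   # hypothetical temporary LAN IP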