On 19/09/2012 11:59, Mark Coetser wrote:
> On 19/09/2012 11:21, Mark Coetser wrote:
>>
>> sent 150123974 bytes received 1745633674066 bytes 13566176.70 bytes/sec
>> total size is 2663529076313 speedup is 1.53
>>
real    2144m46.748s
user    144m11.80
On 19/09/2012 11:21, Mark Coetser wrote:
> On 18/09/2012 17:34, Timothy J Massey wrote:
>> Mark Coetser wrote on 09/18/2012 10:21:42 AM:
>>
>> > I am busy running a full clean rsync to time exactly how long it will
>> > take and will post results com
On 18/09/2012 17:34, Timothy J Massey wrote:
> Mark Coetser wrote on 09/18/2012 10:21:42 AM:
>
> > I am busy running a full clean rsync to time exactly how long it will
> > take and will post results compared to a clean full backup with
> > backuppc, I can tell you th
On 17/09/2012 17:16, Mark Coetser wrote:
> On 17/09/2012 17:01, Les Mikesell wrote:
>> On Mon, Sep 17, 2012 at 7:59 AM, Mark Coetser wrote:
>>
>>> Surely disk io would affect normal rsync as well? Normal rsync and even
>>> nfs get normal transfer speeds its o
On 17/09/2012 17:01, Les Mikesell wrote:
> On Mon, Sep 17, 2012 at 7:59 AM, Mark Coetser wrote:
>
>> Surely disk io would affect normal rsync as well? Normal rsync and even
>> nfs get normal transfer speeds; it's only rsync within backuppc that is slow.
>>
>
On 17/09/2012 14:50, Tim Fletcher wrote:
> You are being hit by disk io speeds; check you don't have atime turned on
> on the fs. Also it's worth considering tar instead of rsync for this sort
> of workload.
>
> --
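The atime check suggested above can be sketched as follows; the mount point `/var/lib/backuppc` is an assumption for illustration, and the helper function just classifies a mount-option string:

```shell
# has_noatime: report whether a mount-option string avoids per-read
# atime writes (noatime, or the cheaper relatime default).
has_noatime() {
  case ",$1," in
    *,noatime,*|*,relatime,*) echo yes ;;
    *) echo no ;;
  esac
}

# Check the backup filesystem (mount point is an assumption):
MNT=/var/lib/backuppc
OPTS=$(awk -v m="$MNT" '$2 == m {print $4}' /proc/mounts)
echo "$MNT options: ${OPTS:-not a separate mount}"
echo "atime cost mitigated: $(has_noatime "$OPTS")"
```

If the answer is no, adding `noatime` to the filesystem's `/etc/fstab` entry and remounting avoids one metadata write per file read during backups.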
Hi
Surely disk io would affect normal rsync as well? Normal rsync and even
nfs g
Hi
backuppc 3.1.0-9.1
rsync 3.0.7-2
OK I have a fairly decent spec backup server with 2 gigabit e1000 nics
bonded together and running in bond mode 0, all working 100%. If I run
plain rsync between the backup server a
On 10/02/2012 15:39, Les Mikesell wrote:
> On Fri, Feb 10, 2012 at 1:27 AM, Mark Coetser wrote:
>>
>> Not that I am aware of, I am using the full path to all commands, and if
>> run as "su - backuppc" "/full/path/to/script" as defined in the host.pl
>
Thank you,
Mark Adrian Coetser
m...@tux-edo.co.za
On 10/02/2012 09:10, Les Mikesell wrote:
> On Fri, Feb 10, 2012 at 12:01 AM, Mark Coetser wrote:
>> Hi
>>
>> debian squeeze
>> backuppc 3.1.0-9
>>
>>
>> If I run my bash script from the conso
Hi
debian squeeze
backuppc 3.1.0-9
If I run my bash script from the console as the backuppc user it runs
100% and works as expected. If it is run from backuppc while running the
backup, the script is run and can be seen in ps, but it just never exits
unless I kill the process, and then the backup c
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:backuppc-users-
> [EMAIL PROTECTED] On Behalf Of Mark Coetser
> Sent: 01 July 2008 05:57 PM
> To: backuppc-users@lists.sourceforge.net
> Subject: [BackupPC-users] recover from failure
>
> Hi all
>
> I h
Hi all
I have a box that was running sme server 7.x with backuppc storing backups
onto an external USB drive. The server had a drive failure and I am trying
to restore the data through a Debian box running backuppc. I have the USB
drive mounted under /var/lib/backuppc and I have chowned the whole
> 'mt status' should show your current block size and
> 'mt setblock' should let you change it. Tar's default
> is normally 10kb. I'm not sure what backuppc uses when
> writing the archive. Linux systems usually default to
> 512 byte tape blocks and work as long as the reads/writes
> divide even
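The block-size relationship described above can be sketched like this; the device name /dev/st0 and a 64 KB block are illustrative assumptions:

```shell
# tar's -b option counts 512-byte records, so a tape written with 64 KB
# hardware blocks needs a blocking factor of 65536 / 512 = 128.
BLOCK_BYTES=65536
FACTOR=$((BLOCK_BYTES / 512))
echo "blocking factor for $BLOCK_BYTES-byte blocks: $FACTOR"

# Against a real drive one would run:
#   mt -f /dev/st0 status               # shows the current block size
#   mt -f /dev/st0 setblk $BLOCK_BYTES  # 'setblk' on Linux mt-st
#   tar -b "$FACTOR" -xvf /dev/st0      # read with a matching factor
```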
Hi People
I am testing my archive to tape; when I tried to restore the archive I got
the following error.
tar xvf /dev/st0
tar: /dev/st0: Cannot read: Cannot allocate memory
tar: At beginning of tape, quitting now
tar: Error is not recoverable: exiting now
anyone know why, I think it may have
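"Cannot allocate memory" from tar on a tape device typically means the read buffer is smaller than the block size the archive was written with. A hedged diagnostic sketch (the device name /dev/st0 is an assumption):

```shell
# With Linux mt-st, switching the driver to variable-block mode lets tar
# read whatever block size the archive was written with:
#   mt -f /dev/st0 rewind
#   mt -f /dev/st0 setblk 0      # 0 = variable block size
#   tar -xvf /dev/st0
#
# Alternatively, probe the written block size with one raw read; a single
# read with a generous buffer returns exactly one tape block:
PROBE_CMD="dd if=/dev/st0 bs=1M count=1 of=/tmp/first.blk"
echo "probe: $PROBE_CMD && stat -c '%s' /tmp/first.blk"
```

The size reported by `stat` is the tape's block size; dividing it by 512 gives the `tar -b` factor to use.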
Hi Ppl
I was running a backup by mounting a share via nfs onto the backuppc host,
then using the rsync method via backuppc to do the backup. The only problem
with this was that if the nfs mount fell over, the backups stopped.
I changed the config to use smb and connect directly to the client host, I
t
> Hi
>
> I'm currently giving BackupPC a try and I've just set up my first rsyncd
> backup. What I don't get is how to set, on a per-host basis, the path where
> to
> store the backed-up data. By default (on my Gentoo system) it goes
> in /var/lib/backuppc/pc/$HOST and I would like it to be in anothe
It's probably a permissions/ownership problem; you need to ensure that the
permissions of /data/backuppc are the same as what /var/lib/backuppc used to
be.
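A minimal sketch of the fix; the ownership and mode shown are what Debian's backuppc package normally uses, but verify them against your own system before running anything as root:

```shell
# Print the commands that make the new pool location match what
# /var/lib/backuppc had (backuppc:backuppc with mode 750 is the usual
# Debian default -- an assumption, so check first).
NEWPOOL=/data/backuppc
echo "chown -R backuppc:backuppc $NEWPOOL"
echo "chmod 750 $NEWPOOL"
# Compare against the old location with:
#   stat -c '%U:%G %a' /var/lib/backuppc
```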
Thank you,
Mark Adrian Coetser
[EMAIL PROTECTED]
http://www.tux-edo.co.za, http://www.thummb.com
cel: +27 76 527 8789
Hi Ppl
I am trying to write a small bash script that I can use to archive certain
hosts to tape using cron. How would I get the command line to update the CGI
interface so that users can see whether the job completed successfully or
not, and also whether the job is currently running?
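One approach (paths and the archive host name 'archive' are assumptions; check your install) is BackupPC 3.x's BackupPC_archiveStart, which asks the running server to perform the archive, so the job shows up in the CGI status and host summary like a web-started one:

```shell
# Sketch of a cron-driven archive that stays visible in the CGI.
# Invoking the server-side starter, rather than BackupPC_tarCreate
# directly, is what makes the status pages track the job.
BINDIR=/usr/share/backuppc/bin
ARCHIVE_HOST=archive
CMD="$BINDIR/BackupPC_archiveStart $ARCHIVE_HOST backuppc host1 host2"
echo "$CMD"
# crontab entry for the backuppc user, e.g. every Saturday at 02:00:
#   0 2 * * 6  /usr/share/backuppc/bin/BackupPC_archiveStart archive backuppc host1 host2
```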
Hi All
I have created a little bash script that sits in /usr/local/sbin and is
owned by backuppc:backuppc. I copied the command that is used to run the
archive through the cgi, but I think I am missing something; following is the
command that I am running, but it seems to try and back up * in the pwd
> -Original Message-
> From: Les Mikesell [mailto:[EMAIL PROTECTED]
> Sent: 26 June 2006 06:30 PM
> To: Mark Coetser
> Cc: backuppc-users@lists.sourceforge.net
> Subject: Re: [BackupPC-users] archive error
>
> On Mon, 2006-06-26 at 18:02 +0200, Mark Coetser wrote
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:backuppc-users-
> [EMAIL PROTECTED] On Behalf Of Les Mikesell
> Sent: 26 June 2006 05:46 PM
> To: Mark Coetser
> Cc: backuppc-users@lists.sourceforge.net
> Subject: Re: [BackupPC-users] archive error
>
>
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:backuppc-users- [EMAIL PROTECTED] On Behalf Of
> Oliver Vecernik
> Sent: 21 June 2006 08:59 PM
> To: backuppc-users@lists.sourceforge.net
> Subject: Re: [BackupPC-users] archive error
>
> Craig Barratt schrieb:
> > Please increase
> No, the archives created by the archivehost mechanism are
> tar images with optional compression. You don't need a
> working backuppc to restore them - just feed them to tar.
>
> --
> Les Mikesell
>[EMAIL PROTECTED]
>
Hi All
Ok I seem to have some other issue with the tape archive, whe
> What are the permissions on /dev/st0?
Hi Les
That was the problem, thank you. For interest's sake, it's a Debian system
and there is a tape group that owns /dev/st0. The archive is running
currently and hopefully won't have any errors. Do you know if it's possible
to do a restore from an archived tape
Hi Ppl
I have the same issue with a simple archive host. I tried adding the
backuppc user to the disk group but I am still getting the following error
in the log. How can I debug backuppc so that I can see where the problem lies?
Error
Archive failed (Error: /usr/share/backuppc/bin/BackupPC_tarCrea
Hi Ppl
I am having a little trouble getting this running; I have read the docs etc.
Here is my config.pl for the archive host
# Set this client's XferMethod to archive to make it an archive host:
$Conf{XferMethod} = 'archive';
# The path on the local file system where archives will be written:
#
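The config fragment above is cut off; for reference, a typical set of archive-host options in config.pl looks like this (the values are illustrative assumptions, not the poster's settings):

```perl
# Illustrative archive-host settings (check the defaults in your own
# config.pl before using these values):
$Conf{XferMethod}   = 'archive';
$Conf{ArchiveDest}  = '/var/lib/backuppc/archive'; # where tar images land
$Conf{ArchiveComp}  = 'gzip';  # 'none', 'gzip' or 'bzip2'
$Conf{ArchiveSplit} = 0;       # split size in MB, 0 = no splitting
$Conf{ArchivePar}   = 0;       # par2 parity percentage, 0 = off
```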