Hi all,
first, I want to thank you for a great backup tool and a very, very helpful
list!
Second, there are still some mysteries for me. ;-)
I don't understand the summaries on my host status pages.
Below, I copied a File Size/Count Reuse Summary taken from a host I
back up with rsync/ssh.
Could it be that the problem is with my NFS? Because when I mount the NFS, it
also takes around 30 seconds until it is mounted. I'm mounting my NFS with
mount -t nfs 192.168.0.5:/home/backuppc /nas.
-Original Message-
From: Dan Pritts [mailto:[EMAIL PROTECTED]
Sent: Thursday,
hi -- i have a couple of hosts at remote sites that i've started
doing (partial) backups on. since i use tar everywhere locally,
that's how i configured them at first. i realized later that that
wasn't what i wanted, so i switched them to rsync, and immediately
did a full backup on each. no
Simon Köstlin wrote:
Could it be that the problem is with my NFS? Because when I mount the NFS, it
also takes around 30 seconds until it is mounted. I'm mounting my NFS with
mount -t nfs 192.168.0.5:/home/backuppc /nas.
It might (or might not) help to specify options for rsize and wsize in
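Concretely, rsize and wsize can be passed as mount options; the buffer sizes below are examples to experiment with, not recommendations (what works best depends on your network and server):

```shell
# Request explicit NFS read/write buffer sizes (example values):
mount -t nfs -o rsize=32768,wsize=32768 192.168.0.5:/home/backuppc /nas
```

The same options can go in the fourth field of the corresponding /etc/fstab entry so they apply on every mount.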
Timothy J. Massey wrote:
We now want to add a new host, which happens to be running the exact
same operating system. They're not mirror images of each other, but
they are naturally going to share a large number of common files. Let's
assume that the new server contains 1GB worth of files,
Les Mikesell [EMAIL PROTECTED] wrote on 01/26/2007 12:00:18 PM:
Timothy J. Massey wrote:
We now want to add a new host, which happens to be running the exact
same operating system. They're not mirror images of each other, but
they are naturally going to share a large number of common
Please help me solve a problem with "Tar exited with error 65288".
I have a backup server COMP1 (Linux) and a client COMP2 (also Linux). Backups
should be made through SSH, using the tar method as root. The config.pl is as follows:
$Conf{XferMethod} = 'tar';
$Conf{TarClientPath} = '/bin/tar';
$Conf{TarShareName} =
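Incidentally, the 65288 in "Tar exited with error 65288" is a raw 16-bit wait() status, not tar's own exit code; by the usual Unix convention the high byte carries the exit code and the low 7 bits the signal. A quick way to decode it:

```shell
# Decode a raw wait() status: high byte = exit code, low 7 bits = signal.
status=65288
echo "exit code: $((status >> 8))"    # 255 often means ssh itself failed
echo "signal:    $((status & 127))"
```

An exit code of 255 usually points at the transport (ssh could not connect or authenticate) rather than at tar on the client.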
Hi all,
I would just like to know if it is possible to make an incremental
backup of a host every hour.
I don't know how to set the value for $Conf{IncrPeriod}, since it only
takes a value counted in days.
Thanks a lot
Phong Nguyen
Axone S.A.
Geneva / Switzerland
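For what it's worth, $Conf{IncrPeriod} accepts fractional days, so hourly incrementals can be approximated; the exact fraction and wakeup schedule below are assumptions to tune, not authoritative values:

```perl
# Slightly less than one hour, expressed in days, so a backup
# is never skipped because the period has not quite elapsed:
$Conf{IncrPeriod} = 0.97 / 24;
# BackupPC only starts backups at wakeup times, so wake up every hour:
$Conf{WakeupSchedule} = [0 .. 23];
```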
Hi
I have BackupPC 3.0.0.0 Beta3. I have a Windows 2000 server with a few
shares. I have set up BackupPC to back up a share called root$. BackupPC
will do a full dump of Windows 2000 but refuses to do incremental
backups. I have installed printer monitoring software on some of our
servers
Hello list,
I tried to make a backup with BackupPC on a Debian sarge install; the
backup client is a Debian etch machine, kernel 2.6.8-2-386.
When I start the full BackupPC run for this machine, I get an error with
the following message:
[EMAIL PROTECTED]:~$ /usr/bin/perl
Timothy J. Massey wrote:
I think there is a quick-fix here by doing a 'cp -a' of an existing
similar host directory to the new one before the first backup run.
That is an interesting solution. That would work for rsync (assuming
my speculation is correct).
Yes, it isn't necessary for
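As a concrete sketch of that quick-fix (paths and host names are hypothetical, and BackupPC should be stopped or the hosts paused while copying): on the same filesystem, `cp -al` clones the tree with hard links instead of copying data, so the new host directory shares the existing files at almost no disk cost. The hard-link behavior is easy to verify:

```shell
# Demonstrate that cp -al clones a tree via hard links (same inode, no data copied).
src=$(mktemp -d); dst="$src.clone"
echo data > "$src/file"
cp -al "$src" "$dst"
# Both names now point at the same inode:
[ "$(stat -c %i "$src/file")" = "$(stat -c %i "$dst/file")" ] && echo "hardlinked"
rm -rf "$src" "$dst"
```

Applied to BackupPC that would look like `cp -al /var/lib/backuppc/pc/oldhost /var/lib/backuppc/pc/newhost` (pool path assumed) before the new host's first backup run.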
Les Mikesell [EMAIL PROTECTED] wrote on 01/26/2007 03:23:45 PM:
Timothy J. Massey wrote:
I would love to see this abstracted a little more into a copy-host
feature, that could copy a host to a new host, either within the same
pool or to a different pool. After reading about how the
Les Mikesell [EMAIL PROTECTED] wrote on 01/26/2007 04:04:43 PM:
Timothy J. Massey wrote:
It seems to me, then, that the documentation is *wrong*: rsync does not
compare against the pool, *ever*; only against a previous backup (most
likely the next-highest backup level, but I have
Hi,
Timothy J. Massey wrote on 26.01.2007 at 15:44:25 [Re: [BackupPC-users] Long:
How BackupPC handles pooling, and how transfer methods affect bandwidth usage]:
Holger Parplies [EMAIL PROTECTED] wrote on 01/26/2007 02:48:29 PM:
I'm a bit confused by the terms 'host' and 'server'.
[...]
Timothy J. Massey wrote:
As BackupPC_tarExtract extracts the files from smbclient or tar, or as
rsync runs, it checks each file in the backup to see if it is identical
to an existing file from any previous backup of any PC.
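That pooling mechanism is visible from the shell: a pooled file is stored once and hard-linked from every backup that contains it, so all the names share one inode and the link count grows. A toy illustration (temporary directory stands in for the pool):

```shell
# Toy illustration of BackupPC-style pooling: one stored copy, many names.
pool=$(mktemp -d)
echo "same contents" > "$pool/poolfile"
ln "$pool/poolfile" "$pool/backup1_copy"   # "pooled" into a second backup
ln "$pool/poolfile" "$pool/backup2_copy"
# Every name shares one inode; find reports files with multiple links:
find "$pool" -type f -links +2 | wc -l     # prints 3 (three names, one inode)
rm -rf "$pool"
```

On a real server, something like `find /var/lib/backuppc/cpool -type f -links +3` (pool path assumed) lists pool files shared by several backups.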
Timothy J. Massey wrote:
As a start, how about a utility that simply clones one host to another
using only the pc/host directory tree, and assumes that none of the
source files are in the pool, just like it would during a brand-new
rsync backup?
That would be better than nothing, but if
Hi,
Les Mikesell wrote on 26.01.2007 at 20:53:11 [Re: [BackupPC-users] Long: How
BackupPC handles pooling, and how transfer methods affect bandwidth usage]:
Timothy J. Massey wrote:
In reality, I'm still talking about a
custom BackupPC client, but instead of targeting the host, I'm
Holger Parplies wrote:
3.) Reduced bandwidth requirement
Because both ends understand the pooling mechanism, you have to transfer
identical files at most once, and that only if they are not in the offsite
pool yet.
I don't think you'd want to count on the pools being identical,