Toni Van Remortel wrote:
And I have set up BackupPC here 'as-is' in the first place, but we saw
that the full backups, which ran every 7 days, took about 3 to 4 days
to complete, while for the same hosts the incrementals finished in 1
hour.
That's why I started digging into the principles of Back
Les Mikesell wrote:
> How are you measuring the traffic?
ntop
Anyway, I'm preparing a separate test setup now, to be able to run
correct tests (so both BackupPC and an rsync tree are using data from
the same point in time).
Test results will be here tomorrow.
But I do know that BackupPC does use more band
Holm Kapschitzki wrote:
I have 4 older IDE drives of 160 GB each and I want to back up a few
client hosts, so I cannot use one single device for BackupPC. I read the
docs and read something about configuring "topdir" to set the path where
the data is backed up. On the other hand I read on Debian pack
enter the ZFS troll :)
If you run OpenSolaris or a BSD with ZFS, you can use ZFS as a raidz and
you get the benefits of LVM, RAID 5, and filesystem-level compression all
in one. I have noticed ZFS to be very resource-friendly under heavy load,
even with compression and raidz enabled.
Search Google.
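For reference, a raidz pool with compression along those lines could be created something like this (a sketch only; the pool name, dataset, mountpoint, and device names are all made up, and raidz wants at least three disks):

```shell
# Illustrative only: one raidz pool over four disks, with
# filesystem-level compression enabled (device names are hypothetical)
zpool create backups raidz c0d0 c0d1 c1d0 c1d1
zfs create backups/backuppc
zfs set compression=on backups/backuppc
zfs set mountpoint=/var/lib/backuppc backups/backuppc
```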
Look in the config.pl file (if debian, it's probably
/etc/backuppc/config.pl).
If you have four 160GB drives, I would suggest using MD/LVM to create one
large logical volume. The "best" arrangement would probably be something
like a RAID 5 with all four drives, and maybe an LVM volume on top of
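A minimal sketch of that arrangement (device names, volume group name, and mount point are invented; four 160 GB disks in RAID 5 leave roughly 3 x 160 = 480 GB usable, one disk's worth going to parity):

```shell
# Illustrative only: software RAID 5 over four disks, LVM on top,
# mounted where the BackupPC pool lives on Debian-style installs
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sda /dev/sdb /dev/sdc /dev/sdd
pvcreate /dev/md0
vgcreate backupvg /dev/md0
lvcreate -l 100%FREE -n backuppc backupvg
mkfs.ext3 /dev/backupvg/backuppc
mount /dev/backupvg/backuppc /var/lib/backuppc
```

The LVM layer buys you the option of swapping the underlying array for a bigger one later and growing the filesystem in place.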
Les Stott wrote:
> Hi all,
>
> Got BackupPC-3.0.0 installed from source on CentOS 5. Compression Level
> on the pool is 3.
>
> Been running this nicely on a wide number of systems for some time.
>
> I have found some odd errors from rsync while trying to backup large
> .tgz files (2-4gb)
>
> md4
Hello,
I have 4 older IDE drives of 160 GB each and I want to back up a few
client hosts, so I cannot use one single device for BackupPC. I read the
docs and read something about configuring "topdir" to set the path where
the data is backed up. On the other hand I read that in the Debian
package topdir is hardcode
Paddy Sreenivasan wrote:
> I'm a developer in Amanda (http://amanda.zmanda.com) project. Is anyone
> using Amanda and Backuppc together?
I'm using them separately with some of the same hosts as targets.
> I'm interested in integrating Backuppc with Amanda. Amanda will be
> the media manager (su
On Mon, 26 Nov 2007, Paddy Sreenivasan wrote:
> I'm a developer in Amanda (http://amanda.zmanda.com) project. Is anyone
> using Amanda and Backuppc together?
I've always thought that Bacula was a better fit to be integrated (how
tightly or loosely is debatable) with BackupPC.
Looking at this h
I'm a developer in Amanda (http://amanda.zmanda.com) project. Is anyone
using Amanda and Backuppc together?
> I'm interested in integrating Backuppc with Amanda. Amanda will be
> the media manager (support for tapes and other media) and consolidator
> of data from a group of BackupPC clients. Amanda's ap
Toni Van Remortel wrote:
>> Could you give us some numbers? How much traffic are you seeing for
>> a BackupPC backup compared to a 'plain rsync'?
> Full backup, run for the 2nd time today (no changes in files):
> - BackupPC full dump : killed it after 30 mins, as it pulled all data
> again (2.8G
PS: I hacked BackupPC to skip the '--ignore-times' argument addition for
full backups.
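For anyone who would rather experiment via configuration before patching the code: the rsync options BackupPC hands to its built-in rsync implementation live in $Conf{RsyncArgs} in config.pl. The fragment below shows the stock 3.x defaults as I understand them; note that '--ignore-times' for full backups is appended by the transfer code itself, not listed here, which is why removing it takes a hack rather than a config change:

```perl
# config.pl: options passed to BackupPC's internal rsync
# (File::RsyncP); '--ignore-times' is added for full backups
# by the Xfer code, not in this list
$Conf{RsyncArgs} = [
    '--numeric-ids',
    '--perms', '--owner', '--group', '-D',
    '--links', '--hard-links', '--times',
    '--block-size=2048',
    '--recursive',
];
```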
--
Toni Van Remortel
Linux System Engineer @ Precision Operations NV
+32 3 451 92 26 - [EMAIL PROTECTED]
Nils Breunese (Lemonbit) wrote:
> It might be because BackupPC doesn't run the equivalent of rsync
> -auv. See $Conf{RsyncArgs} in your config.pl for the options used
> and remember rsync is talking to BackupPC's rsync interface, not a
> stock rsync.
T
Toni Van Remortel wrote:
>>> How can I reduce bandwidth usage for full backups?
>>>
>>> Even when using rsync, BackupPC does transfer all data on a full backup,
>>> and not only the modified files since the last incremental or full.
>> That's not true. Only modifications are transferred over the ne
Alexander Lenz wrote:
Hi there, BackupPC Community,
what is the easiest way to let our users (we are about 30 here)
browse and restore the backups that were made from their machines?
- Without granting admin access to them. -
We'd need some restricted accounts to access BackupPC via http,
Toni Van Remortel wrote:
> Nils Breunese (Lemonbit) wrote:
>> Toni Van Remortel wrote:
>>> How can I reduce bandwidth usage for full backups?
>>> Even when using rsync, BackupPC does transfer all data on a full
>>> backup, and not only the modified files since the last incremental
>>> or full.
>> That's not true
Hi there, BackupPC Community,
what is the easiest way to let our users (we are about 30 here)
browse and restore the backups that were made from their machines?
- Without granting admin access to them. -
We'd need some restricted accounts to access backuppc via http, which
should allow exclusiv
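For what it's worth, BackupPC already supports this through per-host users: anyone listed in the user or moreUsers columns of the hosts file can log in to the CGI and browse/restore only that host's backups, without admin rights. A sketch (host and user names are invented; the logins themselves are authenticated by the web server, e.g. via htpasswd):

```
# /etc/backuppc/hosts (illustrative entries)
# host        dhcp  user    moreUsers
alice-pc      0     alice
bob-laptop    0     bob     helpdesk,itstaff
```

Only accounts named in $Conf{CgiAdminUsers} (or matching $Conf{CgiAdminUserGroup}) in config.pl get admin access; everyone else sees just the host(s) they are listed against.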
Nils Breunese (Lemonbit) wrote:
> Toni Van Remortel wrote:
>> How can I reduce bandwidth usage for full backups?
>>
>> Even when using rsync, BackupPC does transfer all data on a full backup,
>> and not only the modified files since the last incremental or full.
> That's not true. Only modification
Toni Van Remortel wrote:
> How can I reduce bandwidth usage for full backups?
> Even when using rsync, BackupPC does transfer all data on a full
> backup, and not only the modified files since the last incremental
> or full.
That's not true. Only modifications are transferred over the network
whe
How can I reduce bandwidth usage for full backups?
Even when using rsync, BackupPC does transfer all data on a full backup,
and not only the modified files since the last incremental or full.
I would love to see BackupPC performing this simple task:
- cp -al $last new
- rsync -au --delete host:/s
John Pettitt wrote:
I'm getting an out of memory error on large archive jobs - this is on a
box with 2GB of RAM, which makes me think there is a memory leak someplace ...
Writing tar archive for host jpp-desktop-data, backup #150 to output
file /dumpdir/jpp-desktop-data.150.tar.gz
Out of memory durin
Nelson Serafica wrote:
> Can Backuppc disable the incremental backup? I only want full backup
> since I created a script on the client that will archive all backups
> and the Backuppc server will get the file via rsync.
BackupPC_dump does a simple check on the FullPeriod and IncrPeriod (read
the so
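One way to get the effect Nelson asks about, assuming I read the scheduler the same way: make $Conf{IncrPeriod} longer than $Conf{FullPeriod}, so a full always comes due before any incremental does (the values below are illustrative):

```perl
# config.pl: weekly fulls only; incrementals never come due
# because a full is always scheduled first
$Conf{FullPeriod} = 6.97;   # a full roughly every 7 days
$Conf{IncrPeriod} = 30;     # longer than FullPeriod, so never reached
```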
Can Backuppc disable the incremental backup? I only want full backup since I
created a script on the client that will archive all backups and the Backuppc
server will get the file via rsync. I want this to be done once a week. Is this
possible?
Can I schedule certain full backup to each differe