3. Yes, there is certainly some confusion in client/host or host/server
naming schemes :-) Actually, I could imagine that rsync compression
could be a reason for writing the custom Perl version that BackupPC
uses: you just don't uncompress, and store the already-compressed file...
But I doubt t
3. Sorry, I think of the machines being backed up as clients, but BackupPC
does call them hosts. rsync supports compressed transfers but that's not
the scheme used for storage by BackupPC.
4. You may be thinking of the tasks that check for unreferenced files and
recalculate the total pool size, wh
Hi Robert,
1-2) This is what I would expect. I am curious whether there is a way to
gradually compress the files, not all at once.
3) By the host, I meant the host being backed up. And I am sure it is not
used for compression, unless the compress option of rsync is used. But I
guess this is uncompresse
Hi Jan,
I think this is correct, but there are other experts who might chime in to
correct me.
1. Migration will not result in compression of existing backups. It just
allows V4 to consume the V3 pool.
2. After compression is turned on, newly backed up files will be
compressed. Existing backups
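For what it's worth, compression is a single knob in config.pl; a sketch (level 3 is the tradeoff commonly recommended in the BackupPC docs, 0 disables it, and 9 is what some posters in this thread use):

```perl
# $Conf{CompressLevel}: 0 disables compression; 1..9 are zlib levels.
# 3 is the commonly recommended speed/size tradeoff; 9 is smallest but slowest.
$Conf{CompressLevel} = 3;
```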
Hi,
I have a few questions related to compression.
Currently, I have BackupPC 3 installed on an Intel NUC with a 4-core Pentium,
and since compression significantly decreased backup speeds, I have
turned it off. I am about to switch to v4, so it might be worth
reconsidering, since the increments are
On Wed, Feb 1, 2017 at 2:53 AM, Jan Stransky
wrote:
>
> 3) Full backup of each dataset as separate host, then second with
> already filled pool. Preferably from SSD to SSD to not be IO limited.
>
In practice if you use the --checksum-seed option with rsync the
timing of the 3rd full is the one th
Hi,
since installing BackupPC, I have been pleasantly surprised by its
compression effectiveness (data savings), but unpleasantly surprised
by its CPU demands.
Therefore, I am thinking about preparing some compression CPU
performance benchmark for BackupPC. Potential new users or HW buyers
wo
On Sun, Oct 12, 2014 at 11:24 PM, Christian Völker wrote:
> Hi all,
>
> I remember having read about restoring single files from command line
> needs some BackupPC specific script or tricks to uncompress the files
> when using compression for BackupPC.
>
> For a new instance I'm thinking of storin
On Mon, Oct 13, 2014 at 06:24:10AM +0200, Christian Völker wrote:
> I remember having read about restoring single files from command line
> needs some BackupPC specific script or tricks to uncompress the files
> when using compression for BackupPC.
I assume you mean using BackupPC_zcat.
> For a
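For the archives, a command-line sketch of a single-file restore with BackupPC_zcat. The install prefix, TopDir, host name, and backup number are all hypothetical, and v3 mangles names inside a backup tree (an "f" prefix plus URI escaping, so share "/" becomes "f%2f" and /etc/hosts becomes fetc/fhosts):

```shell
# Hypothetical paths -- adjust the install prefix and TopDir for your system.
ZCAT=/usr/share/backuppc/bin/BackupPC_zcat
# /etc/hosts from backup #42 of host "myhost", share "/":
SRC='/var/lib/backuppc/pc/myhost/42/f%2f/fetc/fhosts'

if [ -x "$ZCAT" ]; then
  "$ZCAT" "$SRC" > hosts.restored
else
  echo "BackupPC_zcat not found; paths above are illustrative"
fi
```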
Hi all,
I remember having read about restoring single files from command line
needs some BackupPC specific script or tricks to uncompress the files
when using compression for BackupPC.
For a new instance I'm thinking of storing the files without compression
to be able to easily restore them direc
On Wed, Jan 13, 2010 at 09:25:47AM +0100, Thomas Scholz wrote:
> we are using BackupPC on a quad-core system. Our backup process uses only
> one core for pool compression. Is there a way to get Compress::Zlib
> working multithreaded?
You might want to run multiple backups in parallel... But AFAIK
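Compress::Zlib is single-threaded per backup, but BackupPC can keep several cores busy by running backups concurrently; a config.pl sketch (the value is just an example for a quad-core box):

```perl
# Run up to 4 backups at once, each with its own compression stream,
# so the zlib work is spread across the cores.
$Conf{MaxBackups} = 4;
```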
Hi,
we are using BackupPC on a quad-core system. Our backup process uses only
one core for pool compression. Is there a way to get Compress::Zlib working
multithreaded?
regards
Thomas Scholz
Tino Schwarze wrote:
> On Tue, Nov 10, 2009 at 03:42:53PM -0800, Heath Yob wrote:
>
>> Excellent, it looks like that fixed it.
>>
>> That's kinda lame you can't just change the TopDir.
>
> Well it's a typical bootstrap problem. Where are you supposed to find
> your configuration file if it's relative t
On Tue, Nov 10, 2009 at 03:42:53PM -0800, Heath Yob wrote:
> Excellent, it looks like that fixed it.
>
> That's kinda lame you can't just change the TopDir.
Well it's a typical bootstrap problem. Where are you supposed to find
your configuration file if it's relative to ${TopDir}? Therefore
${TopDir}
Excellent, it looks like that fixed it.
That's kinda lame you can't just change the TopDir.
Thanks for the help.
Heath
On Nov 10, 2009, at 2:00 PM, Les Mikesell wrote:
> Heath Yob wrote:
>> I've changed the TopDir to /CLIENTBACKUPS.
>>
>> pc and cpool directories are in there now.
>>
>> I'm getting a
Heath Yob wrote:
> I've changed the TopDir to /CLIENTBACKUPS.
>
> pc and cpool directories are in there now.
>
> I'm getting a bunch of errors like this on my PC clients:
> 2009-11-10 13:26:55 BackupPC_link got error -4 when calling MakeFileLink
If you install from the tarball, there is a confi
I've changed the TopDir to /CLIENTBACKUPS.
pc and cpool directories are in there now.
I'm getting a bunch of errors like this on my PC clients:
2009-11-10 13:26:55 BackupPC_link got error -4 when calling MakeFileLink
Thanks,
Heath
On Nov 10, 2009, at 8:55 AM, Les Mikesell wrote:
Heath Yob w
Heath Yob wrote:
> According to my config.pl file : $Conf{CompressLevel} = '9';
>
> So that's correct.
>
> ppo-backup:/CLIENTBACKUPS# du -sh cpool/
> 12K cpool/
> ppo-backup:/CLIENTBACKUPS# du -sm cpool/
> 1 cpool/
>
> There's nothing in my cpool directory.
Does that /CLIENTBACKUPS direct
According to my config.pl file : $Conf{CompressLevel} = '9';
So that's correct.
ppo-backup:/CLIENTBACKUPS# du -sh cpool/
12K cpool/
ppo-backup:/CLIENTBACKUPS# du -sm cpool/
1 cpool/
There's nothing in my cpool directory.
Thanks,
Heath
On Nov 10, 2009, at 1:34 AM, Adam Goryachev wrot
Matthias Meyer wrote:
> Heath Yob wrote:
>
>> It appears that I'm not getting any compression on my backups at least
>> with my Windows clients.
>> I think my mac clients are being compressed since it's actually
>> stating a compression level in the h
Heath Yob wrote:
> It appears that I'm not getting any compression on my backups at least
> with my Windows clients.
> I think my mac clients are being compressed since it's actually
> stating a compression level in the host summary.
>
> I have the compression level set to 9.
>
> I have the Comp
It appears that I'm not getting any compression on my backups, at least
with my Windows clients.
I think my Mac clients are being compressed, since the host summary
actually shows a compression level for them.
I have the compression level set to 9.
I have the Compress::Zlib perl library instal
Sebastien Sans wrote:
> Hello,
>
> The compression system of the pool in BackupPC is great, it saves a lot
> of space, but I didn't find how to compress the transfers in order to
> save my bandwidth.
> I tried to modify the command line in "rsync" and
On 10/13 02:56 , Tomasz Chmielewski wrote:
> > $Conf{RsyncClientCmd} = '$sshPath -C -o CompressionLevel=9 -c blowfish-cbc
> > -q -x -l rsyncbakup $host $rsyncPath $argList+';
>
> Unless you're using the obsolete SSH protocol version 1, setting
> CompressionLevel does not make any sense - SSH
Carl Wilhelm Soderstrom schrieb:
> On 10/10 11:54 , Sebastien Sans wrote:
>> The compression system of the pool in BackupPC is great, it saves a lot
>> of space, but I didn't find how to compress the transfers in order to
>> save my bandwidth.
>
> Use compression in your ssh transport.
> Here's an
On 10/10 11:54 , Sebastien Sans wrote:
> The compression system of the pool in BackupPC is great, it saves a lot
> of space, but I didn't find how to compress the transfers in order to
> save my bandwidth.
Use compression in your ssh transport.
Here's an example I typically use:
$Conf{RsyncClientC
dan wrote:
> rsync does provide compression. rsync -z is compressed. just put this
> either in your main config or in the specific hosts config for the rsync
> command.
This won't work - BackupPC's rsync implementation in Perl (which supports
working against the compressed archive files) doesn't
rsync does provide compression: rsync -z is compressed. Just put this
either in your main config or in the specific host's config for the rsync
command.
ssh can compress the ssh tunnel that you create.
You can also compress VPN tunnels such as OpenVPN or Cisco IPsec.
BackupPC can't recognize it. The
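Putting the compression on the SSH transport is done in the client command in config.pl; a sketch based on the command quoted elsewhere in this thread (the login name is a site-specific example, and with SSH protocol 2 a plain -C is sufficient):

```perl
# -C turns on SSH transport compression; CompressionLevel only applies
# to the obsolete SSH protocol 1.
$Conf{RsyncClientCmd} = '$sshPath -C -l backupuser $host $rsyncPath $argList+';
```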
Sebastien Sans schrieb:
> Hello,
>
> The compression system of the pool in BackupPC is great, it saves a lot
> of space, but I didn't find how to compress the transfers in order to
> save my bandwidth.
> I tried to modify the command line in "rsync" and "tar" modes to
> activate compression (I adde
Hello,
The compression system of the pool in BackupPC is great, it saves a lot
of space, but I didn't find how to compress the transfers in order to
save my bandwidth.
I tried to modify the command line in "rsync" and "tar" modes to
activate compression (I added -z options to use gzip compression), t
Rich Rauenzahn wrote:
> John Pettitt wrote:
>>>
>>>
>> What happens is the newly transferred file is compared against candidates
>> in the pool with the same hash value, and if one exists it's just
>> linked. The new file is not compressed. It seems to me that if you
>> want to change
John Pettitt wrote:
What happens is the newly transferred file is compared against candidates
in the pool with the same hash value, and if one exists it's just
linked. The new file is not compressed. It seems to me that if you
want to change the compression in the pool the way to go i
Rich Rauenzahn wrote:
>
>
> I know BackupPC will sometimes need to re-transfer a file (for instance,
> if it is a 2nd copy in another location). I assume it then
> re-compresses it on the re-transfer, as my understanding is that the
> compression happens as the file is written to disk (?)
>
> Woul
Craig Barratt wrote:
> You're right.
>
> Each file in the pool is only compressed once, at the current
> compression level. Matching pool files is done by comparing
> uncompressed file contents, not compressed files.
>
> It's done this way because compression is typically a lot more
> expensive
Ok, thanks a lot for this information.
It's very interesting.
Regards,
Romain
Craig Barratt wrote:
> Rich writes:
>
>
>> I don't think BackupPC will update the pool with the smaller file even
>> though it knows the source was identical, and some tests I just did
>> backing up /tmp seem to agree. Once compressed and copied into the
>> pool, the file is not updated with fu
Rich writes:
> I don't think BackupPC will up
Rich writes:
> I don't think BackupPC will update the pool with the smaller file even
> though it knows the source was identical, and some tests I just did
> backing up /tmp seem to agree. Once compressed and copied into the
> pool, the file is not updated with future higher compressed copies.
>
[EMAIL PROTECTED] wrote:
>
> Hello,
>
> I would like to have some information about the compression level.
>
> I'm still doing several tests about compression and I would like to
> have your opinion about something :
> I think that there is very little difference between level 1 and
> level 9.
> I to
Hello,
I would like to have some information about the compression level.
I'm still doing several tests with compression and I would like to have
your opinion about something:
I think that there is very little difference between level 1 and level 9.
I thought it would be more.
For example, wit
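That matches zlib's behavior: the top levels usually buy only a few percent over the low ones, at a much higher CPU cost. A rough way to see it from the shell (gzip uses the same deflate algorithm as Compress::Zlib; the sample data is synthetic):

```shell
# Build ~1 MB of moderately repetitive sample data, then compare the
# compressed sizes at the lowest and highest levels.
yes "backup sample line with some repeated content" | head -c 1000000 > sample.dat
gzip -1 -c sample.dat | wc -c   # fast, somewhat larger
gzip -9 -c sample.dat | wc -c   # much slower, usually only a little smaller
rm -f sample.dat
```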
I'm debating the question: to compress or not to compress.
The benefits of compression are obvious:
1) Backups take less space
The drawbacks I've come up with:
1) Requires more cpu
2) One more thing to go wrong (are errors in compression very likely?)
3) Typically, the largest files are already
"David Rees" writes:
> On 5/11/06, Lee A. Connell <[EMAIL PROTECTED]> wrote:
> >
> > I noticed while monitoring backuppc that it doesn't seem to compress
> > on the fly, is this true? I am backing up 40GB's worth of data on a
> > server and as it is backing up I monitor the disk space usage on th
On 5/11/06, Lee A. Connell <[EMAIL PROTECTED]> wrote:
I noticed while monitoring BackupPC that it doesn't seem to compress on the
fly, is this true? I am backing up 40GB's worth of data on a server, and as
it is backing up I monitor the disk space usage on the mount point, and by
looking at tha
I noticed while monitoring backuppc that it doesn’t
seem to compress on the fly, is this true? I am backing up 40GB’s
worth of data on a server and as it is backing up I monitor the disk space
usage on the mount point and by looking at that information it doesn’t
seem like compression is h