Gandalf-
Sounds like you need a bigger backup server.
BackupPC keeps the transfer logs compressed, even the most recent one.
Typical log sizes for my largest host (768GB, 6.7 million files), which
also has a significant amount of churn. You can see that the full (backup
65), even compressed, the
On Fri, 1 Sep 2017 11:50:25 -0500
Les Mikesell wrote:
> Large, changing files can be a problem, but log files tend to be
> highly compressible.
Yup, I can confirm: a large VM file (50G) on one laptop extends the backup
time from a few hours to 2 days when it has been used :/
JY
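If a single huge, frequently changing file like that dominates the run, one option is simply not to back it up. A minimal sketch using BackupPC's exclude setting; the path is hypothetical and the '*' key applies the exclude to every share:

$Conf{BackupFilesExclude} = {
    '*' => ['/home/user/vm-images'],   # hypothetical path holding the 50 GB VM image
};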
-
2017-09-01 18:50 GMT+02:00 Les Mikesell :
> Large, changing files can be a problem, but log files tend to be
> highly compressible.
Rotated log files are already compressed by the client.
On Fri, Sep 1, 2017 at 11:35 AM, Gandalf Corvotempesta
wrote:
> 2017-09-01 18:29 GMT+02:00 Les Mikesell :
>> Unless you have a huge turnover in data, keeping more backups will not
>> take a lot more space on the server. There is only one copy kept of
>> each unique file, no matter how many backups you keep.
2017-09-01 18:29 GMT+02:00 Les Mikesell :
> Unless you have a huge turnover in data, keeping more backups will not
> take a lot more space on the server. There is only one copy kept of
> each unique file, no matter how many backups you keep. And, since it
> is compressed it will take less space t
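The deduplication described above is content-addressed pooling: each file's contents are hashed, the pool stores one (compressed) copy per digest, and every backup that contains the file only references that pool entry. A tiny illustrative Perl sketch of the idea, not BackupPC's actual code:

use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

my %pool;            # digest => stored contents (one copy per unique file)
my $stored_bytes = 0;

sub add_to_pool {
    my ($contents) = @_;
    my $digest = md5_hex($contents);
    unless (exists $pool{$digest}) {   # only the first copy costs space
        $pool{$digest} = $contents;
        $stored_bytes += length $contents;
    }
    return $digest;                    # a backup records only this reference
}

# The same file appearing in 30 nightly backups is stored exactly once:
add_to_pool("identical file contents") for 1 .. 30;
print "pool entries: ", scalar(keys %pool), ", bytes stored: $stored_bytes\n";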
On Fri, Sep 1, 2017 at 11:16 AM, Gandalf Corvotempesta
wrote:
> 2017-09-01 17:10 GMT+02:00 Ray Frush :
>> BackupPC's retention rules are not necessarily the easiest to understand.
>> Your proposed schedule would result in having only 7 days of backups, which
>> is probably not what you want.
>
> Y
2017-09-01 17:10 GMT+02:00 Ray Frush :
> BackupPC's retention rules are not necessarily the easiest to understand.
> Your proposed schedule would result in having only 7 days of backups, which
> is probably not what you want.
Yes, only 7 days of backups is what I want.
I don't have enough space on
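For a strict 7-day window with nightly runs, the relevant BackupPC knobs are the period and keep-count settings. A sketch only, not the poster's actual config, and the exact counts depend on how the fill schedule is arranged:

$Conf{FullPeriod}  = 6.97;   # aim for one full/filled backup per week
$Conf{IncrPeriod}  = 0.97;   # one incremental per day
$Conf{FullKeepCnt} = 1;      # keep a single filled backup
$Conf{IncrKeepCnt} = 6;      # plus six dailies => roughly 7 days of history
$Conf{FillCycle}   = 0;      # v4: fill on the full-backup schedule (as documented)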
Longish answer below...
On Fri, Sep 1, 2017 at 3:22 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> 2017-08-31 16:33 GMT+02:00 Ray Frush :
> > The values you'll want to check:
> > $Conf{IncrKeepCnt} = 26; # This is the number of total 'unfilled'
> > backups kept.
> >
>
2017-08-31 16:33 GMT+02:00 Ray Frush :
> The values you'll want to check:
> $Conf{IncrKeepCnt} = 26; # This is the number of total 'unfilled'
> backups kept.
>
> $Conf{FillCycle} = 7; # This is how often a filled backup is kept (1 per
> week), which strongly influences the next setting
>
> $
>
>
> I don't really know the details of bpc4. I think it always fills the
> most recent run and works backwards to clean up the old copies, so you
> should have whatever files are still there even if some were somehow
> deleted. Still, if your pool filesystem is totally corrupted, all
> bets are off.
On Thu, Aug 31, 2017 at 11:51 AM, Gandalf Corvotempesta
wrote:
> >
> Yes, now it's clear.
> But my issue is not bandwidth but time. A longer backup will increase
> load on the host for more time.
Yes, you need to pick your tradeoff between knowing your backup copy
is exactly correct and the amo
On Thu, Aug 31, 2017 at 10:45 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
>
> Ok but let's simulate a crash in your example:
>
> On day 2, before the incremental backup, the filled one (day0) is lost.
> Is the backup made on day 1 still available with "all" files or only with
>
2017-08-31 18:51 GMT+02:00 Gandalf Corvotempesta
:
> Yes, on the run everything missing is synced. But what about a restore?
*on the NEXT run
2017-08-31 18:44 GMT+02:00 Les Mikesell :
> With rsync xfers, only the changes are going to be transferred. The
> difference between a backuppc full and incremental is that the incremental
> will use the rsync feature of comparing the timestamp and length of
> the files to quickly skip unchanged files,
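In BackupPC 4.x this distinction is typically driven by the extra rsync arguments used only on full runs (worth checking against your own config.pl); a sketch of the relevant setting:

$Conf{RsyncFullArgsExtra} = ['--checksum'];   # full runs verify file contents by checksum;
                                              # incrementals omit it and trust size + mtime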
2017-08-31 18:34 GMT+02:00 Ray Frush :
> I'll extend the example
>
> Day 0 : Full backup, 100GB transferred
> Day 1 : add 5GB, Incremental runs, 5GB transferred
> Day 2 : add 5GB, Incremental, ~5GB transferred
> Day 3 : add 5GB, Full runs. ALL files check-summed. Files with
> identical
On Thu, Aug 31, 2017 at 11:23 AM, Gandalf Corvotempesta
wrote:
>
> So, with a "full" run, the second "full" is still seen as an
> incremental by rsync?
> Let's assume a 100GB host.
> bpc will back up that host for the first time. 100GB are transferred.
> The next day, only 5GB are added on that host
On Thu, Aug 31, 2017 at 10:23 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
>
>
> So, with a "full" run, the second "full" is still seen as an
> incremental by rsync?
> Let's assume a 100GB host.
> bpc will back up that host for the first time. 100GB are transferred.
> The next
2017-08-31 17:54 GMT+02:00 Les Mikesell :
> I guess I'm missing why you would ever want to delete anything
> manually. With bpc the actual files are going to be in the pool
> anyway and you almost certainly don't want to delete anything manually
> from there because you'd lose things that are pooled
2017-08-31 17:32 GMT+02:00 Ray Frush :
> With BackupPC 4.x we only take a 'full' every 90 days, and because we're
> using rsync, subsequent fulls aren't as painful as the first one. We run
> the full to ensure that all checksums match to avoid silent data corruption
> on the storage
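A 90-day full cadence maps directly onto the full-period setting; a sketch of what such a schedule likely looks like (illustrative values, not Ray's actual file):

$Conf{FullPeriod} = 89.5;   # a checksum-verified full roughly every 90 days
$Conf{IncrPeriod} = 0.97;   # daily incrementals in between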
So, with a "f
On Thu, Aug 31, 2017 at 10:24 AM, Gandalf Corvotempesta
wrote:
>
> I would like to use BPC (I've used v3 many years ago with success,
> tried v4 last year and it was a total mess due to a bug now fixed)
> but the ability to delete (brutally, from the command line, not from BPC)
> any backup point is m
On Thu, Aug 31, 2017 at 9:16 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> 2017-08-31 16:33 GMT+02:00 Ray Frush :
>
> Thanks for the reply.
> In this case, you are making some full backups.
> I don't want to run any full backup except for the first one, like
> with rsnapshot.
2017-08-31 17:18 GMT+02:00 Les Mikesell :
> Also, note that backuppc's compression and pooling across host will
> likely at least double the history you can keep online unless your
> data is mostly unique and already compressed.
This is not an issue for me.
My biggest concern is how bpc handles a b
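The compression Les refers to above is pool-wide and controlled by a single setting; for example:

$Conf{CompressLevel} = 3;   # 0 disables compression; 3 is a commonly used balance of CPU vs. space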
On Thu, Aug 31, 2017 at 9:33 AM, Ray Frush wrote:
>
> To answer your second question: BackupPC does a good job of managing the
> 'filled' (think 'full') backups if you decide to delete one. I have found
> that BackupPC is pretty good at self-healing from issues. We had a number
> of backups i
2017-08-31 16:33 GMT+02:00 Ray Frush :
> BackupPC is relatively easy to set up for a schedule like you propose. We
> keep a 30 day backup history with a few extra weeks tacked on to get out to
> ~70 days, so the values below reflect our schedule:
>
> The values you'll want to check:
> $Conf{IncrK
Gandalf-
BackupPC is relatively easy to set up for a schedule like you propose. We
keep a 30 day backup history with a few extra weeks tacked on to get out to
~70 days, so the values below reflect our schedule:
The values you'll want to check:
$Conf{IncrKeepCnt} = 26; # This is the number
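With nightly runs those numbers work out roughly as follows: IncrKeepCnt = 26 keeps about 26 days of unfilled backups, FillCycle = 7 promotes one backup per week to a filled copy, and the FullKeepCnt value (cut off above) decides how many of those weekly filled backups survive, which is what stretches ~30 days of history out to ~70. A sketch in that spirit; the FullKeepCnt shown is only an illustration, since Ray's real value isn't visible here:

$Conf{IncrKeepCnt} = 26;   # ~26 days of unfilled (incremental-style) backups
$Conf{FillCycle}   = 7;    # promote one backup per week to a filled backup
$Conf{FullKeepCnt} = 10;   # hypothetical: ten weekly filled backups => ~70 days total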
Additionally, what happens if I delete/lose/break the full backups?
Will any subsequent incremental backups be broken, or would the
following incremental backup automatically become a "full", like with
rsnapshot?
2017-08-30 21:54 GMT+02:00 Gandalf Corvotempesta
:
> Hi to all
> Currently I use rsnap
Hi to all,
Currently I use rsnapshot with success to back up about 20 hosts.
Our configuration is simple: every night I start 4 concurrent backups,
keeping at least 10 days of old backups.
In this way, due to rsnapshot hardlinks, I'm able to restore any file up
to 10 days ago or to keep the backup
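For comparison, that rsnapshot-style schedule (4 concurrent nightly runs, ~10 days of history) maps onto a handful of BackupPC settings. A hedged sketch, not a drop-in config; the wakeup hours are just an example:

$Conf{MaxBackups}     = 4;      # at most 4 client backups run concurrently
$Conf{IncrPeriod}     = 0.97;   # nightly runs
$Conf{IncrKeepCnt}    = 9;      # nine incrementals ...
$Conf{FullKeepCnt}    = 1;      # ... plus one filled backup => ~10 days of restore points
$Conf{WakeupSchedule} = [1, 2, 3, 4, 5];   # hypothetical: only queue work during the night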