Re: [Bacula-users] Progressive VFull Questions

2018-05-24 Thread Bill Arlofski
Hello Alfred,

A few things jump right out at me about your sample config...

Consider a standard Full job versus an Incremental or Differential: you do not
define three different Jobs, each with a different level. That does not work in
the context of Bacula. With Bacula, you define one Job, and then run it at
different levels at specified times.

It is a similar case with Virtual Full jobs.

With VFulls, you define ONE Job. Then, you set up a schedule that runs
Incrementals on all the days you want, and a Virtual Full level on the day and
time you wish the Virtual Full to be triggered.

I always recommend configuring the Job with "Level = Incremental", so that
whenever you run it manually, it will most likely already be what you want.
If you need a Full or Virtual Full run, the level can be changed before
submitting the job, or on the bconsole command line, for example:

* run job=someJob level=Full



If you have different pools for your normal Fulls and Incrementals, they will
both require a "NextPool = <Pool>" setting, where <Pool> is the Pool you wish
to keep your Virtual Fulls in. Of course, the <Pool> will need to be the same
in the Full and Incremental pools.  Additionally, you can set/override the
NextPool in the Schedule, in which case it would not be required in the Pools
at all.
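
As a sketch (the pool names here are only placeholders, not from your config),
the Full and Incremental pools would both point at one shared Virtual Full
pool like this:

8<
Pool {
  Name = Full-Pool
  Pool Type = Backup
  Next Pool = VFull-Pool     # Virtual Fulls are written here
}

Pool {
  Name = Inc-Pool
  Pool Type = Backup
  Next Pool = VFull-Pool     # must name the same Pool as in the Full pool
}

Pool {
  Name = VFull-Pool
  Pool Type = Backup
}
8<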


Your schedules do not need the Accurate=yes option since it is set in your Job
already. They also do not need to specify the "Level=" except for the
VirtualFull Run line since the Level is already defined in the Job as 
Incremental.


For the schedule, it might look something simple like:
8<
Schedule {
 Name = "PVF_Schedule"
 Run = at 23:00
 Run = Level=VirtualFull Priority=12 sun at 23:30
}
8<

So, basically, what that example Schedule says is:
- Run Incrementals every day at 23:00. The Job has "Level = Incremental"
  defined, so no need to add it in the Schedule
- Run the VFull on Sunday at 23:30 with a different priority than normal
  jobs. This will ensure that the Incrementals finish before the VFull
  starts.

Additionally, you will want to add "AllowDuplicateJobs = yes" to your Job
configuration so that if the Sunday Incremental has not finished, the Sunday
Virtual Full can be queued and will wait. Otherwise it will be automatically
cancelled.
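
In the Job resource that is just one extra line, something like (the Job name
here is a placeholder, and I have left out your other directives):

8<
Job {
  Name = someJob
  Level = Incremental          # default level for manual runs
  Allow Duplicate Jobs = yes   # lets the Sunday VFull queue behind a running Incremental
  ...
}
8<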


Also, keep in mind that since you have set "Backups To Keep = 95", this will
require that you run one Full (this will happen automatically the first time
Bacula is told to run the Job with Level=Incremental when no Full exists), and
then at least 96 Incrementals before you can run a Virtual Full.

In this case, the easiest way to do this is to simply comment out the
"Level=VirtualFull" line in the PVF_Schedule until you have the Full backup
and all the required Incrementals. If you do not do this, then, each Sunday
when the Schedule triggers the VirtualFull backup, the Job will fail with the
error "not enough backups" - until you have the required 96 jobs. :)
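
In other words, until the required jobs exist, the Schedule would temporarily
look like:

8<
Schedule {
 Name = "PVF_Schedule"
 Run = at 23:00
# Run = Level=VirtualFull Priority=12 sun at 23:30   # re-enable once the Full and 96 Incrementals exist
}
8<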


I think this should get you onto the right track. If real sample configs are
required, I think I can probably dig something up from my test environments,
or build some from scratch.  :)

Best regards,

Bill

-- 
Bill Arlofski
http://www.revpol.com/bacula
-- Not responsible for anything below this line --

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Intervention needed for backup

2018-05-24 Thread Donna Hofmeister
Here's an example of the email I'm getting from Bacula (7.4.7):

24-May 06:05 bacula.allegro.com-sd JobId 104: Job
MinervaBackup.2018-05-23_23.05.00_53 is waiting. Cannot find any appendable
volumes.
Please use the "label" command to create a new Volume for:
Storage:  "FileChgr1-Dev1" (/santa/bacula/backup)
Pool: File
Media type:   File1

I went into bconsole and labeled a volume named
MinervaBackup.2018-05-23_23.05.00_53. Bacula seems happy with this name,
but I was immediately flooded with the following messages (this is just an
example):

bacula.allegro.com-dir JobId 104: Purging oldest volume
"MinervaBackup.2018-05-22_23.05.00_14"
bacula.allegro.com-dir JobId 104: 0 File on Volume
"MinervaBackup.2018-05-22_23.05.00_14" purged from catalog.
bacula.allegro.com-dir JobId 104: Purging oldest volume
"MinervaBackup.2018-05-22_23.05.00_14"
bacula.allegro.com-dir JobId 104: 0 File on Volume
"MinervaBackup.2018-05-22_23.05.00_14" purged from catalog.
bacula.allegro.com-dir JobId 104: Purging oldest volume
"MinervaBackup.2018-05-22_23.05.00_14"
bacula.allegro.com-dir JobId 104: 0 File on Volume
"MinervaBackup.2018-05-22_23.05.00_14" purged from catalog.
bacula.allegro.com-dir JobId 104: Purging oldest volume
"MinervaBackup.2018-05-22_23.05.00_14"
bacula.allegro.com-dir JobId 104: 0 File on Volume
"MinervaBackup.2018-05-22_23.05.00_14" purged from catalog.

I guess "purged from catalog" doesn't actually include purging the "_14" file,
because it's still there:

 ll /santa/bacula/backup/M*
-rw-r- 1 root root 244 May 23 08:03
/santa/bacula/backup/MinervaBackup.2018-05-22_23.05.00_14
-rw-r- 1 root root 244 May 24 08:06
/santa/bacula/backup/MinervaBackup.2018-05-23_23.05.00_53

And fwiw...clearly the "_53" file is there but it's not being written to.
For giggles, I tried to make/label a "_54" file but that errored.

So I have this backup (which is the first of six) just sitting. If I cancel
this job, the next job in line will continue to sit too...

So what's wrong?

Here's my (only real) Pool definition:

Pool
{
  Name = File
  Pool Type = Backup
  Recycle = yes               # Bacula can automatically recycle Volumes
  AutoPrune = yes             # Prune expired volumes
  Volume Retention = 30 days
  Maximum Volume Bytes = 1G   # Limit Volume size to something reasonable
  Maximum Volumes = 30        # Limit number of Volumes in Pool
  Maximum Volume Jobs = 1     # Force a Volume switch after 1 job
  Volume Use Duration = 14h   # Force volume switch
  Label Format = "Vol-${Client}-${Pool}-${Year}${Month:p/2/0/r}${Day:p/2/0/r}-${Hour:p/2/0/r}${Minute:p/2/0/r}-${JobId}-${NumVols}"
  Purge Oldest Volume = yes
}

I do, of course, have a scratch pool:

Pool
{
  Name  = Scratch
  Pool Type = Backup
}

Second interesting question -- given my "Label Format" statement, I would
expect my files to be named something like
this: Vol-minerva.allegro.com-fd-File-20180517-2305-73-19. So why are my
files named: MinervaBackup.2018-05-23_23.05.00_53?

sigh- donna

-- 
Donna Hofmeister
Allegro Consultants, Inc.
408-252-2330
Visit us on Linkedin

Like us on Facebook 


Re: [Bacula-users] Bacula-users Digest, Vol 145, Issue 29

2018-05-24 Thread Charles Nadeau
>
> Message: 4
> Date: Wed, 23 May 2018 09:48:39 +0300
> From: Panayiotis Gotsis 
> To: bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] Can I run jobs on 2 storage daemons at the
> same time from one director?
> Message-ID: <20180523064839.rss6byxvlywci...@noc.grnet.gr>
> Content-Type: text/plain; charset=us-ascii; format=flowed
>
> Hello
>
> We have a setup with two storage daemons but, up till now, I have not
> really checked whether there is concurrency. We have not yet installed
> v9 to see whether there is any major change, but generally speaking,
> the queue handling from bacula leaves a lot to be desired.
>
> The point of my response, however, is that I should warn you of some
> potential issues you may face with two storage daemons. For example,
> you cannot have two different Full/Incr/Diff schedules, one for each
> storage daemon, for the same FD without some conflicts.
>
> Just my (extra) 2c
>

Panayiotis,

Yes, I realised this as I thought about concurrency. I have now moved to a
scenario where I back up to disk and then migrate to tape.
Now I wonder how to create a Verify job that would verify the copy job.
Thanks!

Charles


Re: [Bacula-users] Bacula-users Digest, Vol 145, Issue 29

2018-05-24 Thread Charles Nadeau
>
> Message: 2
> Date: Tue, 22 May 2018 18:37:54 +0200
> From: Tilman Schmidt 
> To: bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] Can I run jobs on 2 storage daemons at the
> same time from one director?
> Message-ID:
> <1527007074.2132756.1380943488.16822AD8@webmail.
> messagingengine.com>
> Content-Type: text/plain; charset="utf-8"
>
> On Tue, May 22, 2018, at 18:22, Charles Nadeau wrote:
> > I configured my director with 2 storage daemons (one to disks, one to
> > tapes) on 2 different machines. They both back up without problems,
> > except that they can't back up simultaneously. I can have many jobs
> > running on my disks daemon but never have jobs running on both storage
> > daemons at the same time. I set "Maximum Concurrent Jobs" to 20 for my
> > Director but still I can't have both storage daemons run jobs
> > simultaneously. [...]
> > Is it a limitation of bacula or problem(s) with my
> > configuration files?
> Should work.
> What is the reason displayed by bconsole "status dir" for the second job
> not starting?
>

Tilman,

Thanks for your comment, it led me to the problem. I had:

Scheduled Jobs:
Level            Type    Pri  Scheduled        Job Name                 Volume
===============================================================================
Incremental      Backup    6  22-May-18 20:05  Bigzilla-fd-File         *unknown*
Incremental      Backup    6  22-May-18 23:05  Beijing-fd-File          *unknown*
Incremental      Backup    6  23-May-18 04:05  hpdl380g6-fd-File        *unknown*
Incremental      Backup    7  23-May-18 04:05  hpdl380g6-fd-Tape        *unknown*
Incremental      Backup    6  23-May-18 12:00  Backup-fd-File           *unknown*
Incremental      Backup    6  23-May-18 12:00  Superbackup-fd-File      *unknown*
Incremental      Backup    7  23-May-18 12:00  Backup-fd-Tape           *unknown*
Incremental      Backup    7  23-May-18 12:00  Bigzilla-fd-Tape         *unknown*
Incremental      Backup    7  23-May-18 12:00  Beijing-fd-Tape          *unknown*
Incremental      Backup    7  23-May-18 12:00  Superbackup-fd-Tape      *unknown*
Full             Backup    6  23-May-18 18:05  BackupCatalog4           *unknown*
VolumeToCatalog  Verify   14  23-May-18 18:05  VerifyBackup-fd-file
VolumeToCatalog  Verify   14  23-May-18 18:05  VerifyBigzilla-fd-file
VolumeToCatalog  Verify   14  23-May-18 18:05  VerifyBeijing-fd-file
VolumeToCatalog  Verify   14  23-May-18 18:05  Verifyhpdl380g6-fd-file


Running Jobs:
Console connected at 22-May-18 18:20
 JobId  Type  Level  Files  Bytes    Name                     Status
======================================================================
 32326  Back  Full   4,189  327.8 G  hpdl380g6-fd-Tape        is running
 32327  Back  Full       0        0  BackupCatalog4           is waiting for higher priority jobs to finish
 32328  Veri  Volu       0        0  VerifyBackup-fd-file     is waiting execution
 32329  Veri  Volu       0        0  VerifyBigzilla-fd-file   is waiting execution
 32330  Veri  Volu       0        0  VerifyBeijing-fd-file    is waiting execution
 32331  Veri  Volu       0        0  Verifyhpdl380g6-fd-file  is waiting execution
 32332  Back  Incr       0        0  Bigzilla-fd-File         is waiting execution

Setting all priorities to the same number got rid of the problem. I can now
use both storage daemons at the same time!
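
Concretely, that meant giving every Job the same Priority value, e.g. (just a
sketch, not my full Job definitions):

Job {
  Name = hpdl380g6-fd-Tape
  Priority = 6     # was 7; now matches the *-File jobs
  ...
}

(The "Allow Mixed Priority = yes" Job directive looks like the alternative if
one wants to keep different priorities, though I haven't tried it.)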
Thanks again!

Charles


Re: [Bacula-users] Progressive VFull Questions

2018-05-24 Thread Alfred Weintoegl

Hello Bill, hello Lloyd,

may I present a VFull backup configuration in a test environment which runs 
only on Sundays, while every other day (Mon-Sat) an Incremental is running.
This is a test configuration with the idea that if a VFull doesn't 
succeed, it should be possible to repeat it within a week.
So there should be one VFull and about 3 months of Incrementals (Backups To 
Keep = 95).


My question concerning VFull: is the total storage requirement always 
double the quantity of a Full (Full + VFull the first time, VFull + VFull 
the following times), plus the backup quantity for 95 Incrementals?


And is a "Volume Retention" or "Job Retention" time still necessary 
when "Delete Consolidated Jobs = Yes" is set?




2 Schedules:

Schedule {
  Name = "WeeklyCycle"
  Run = Incremental Accurate=yes mon-sat at 22:23
}

Schedule {
  Name = "WeeklyVFullCycle"
  Run = VirtualFull Accurate=yes 1st, 2nd, 3rd, 4th, 5th sun at 03:23
}

A Job for the VFull:

Job {
  Name = "VFullHomeAndEtc"
  Type = Backup
  Level = VirtualFull
  Client = "ClientCent50-fd"
  FileSet = "Home and Etc"
  Accurate = Yes
  Backups To Keep = 95
  Messages = Standard
  Pool = VFull-Pool4Cent50-01
  Next Pool = VFull-Pool4Cent50-02
  Schedule = "WeeklyVFullCycle"
  Delete Consolidated Jobs = Yes
}

And a Job for the Incremental (and First Normal-Full) Backup:

Job {
  Name = "BackupClientCent50"
  JobDefs = "DefaultJob"
  Client = ClientCent50-fd
  Full Backup Pool = Full-Pool4Cent50
  Incremental Backup Pool = Inc-Pool4Cent50
  FileSet = "Home and Etc"
  Accurate = Yes
  Backups To Keep = 95
  Delete Consolidated Jobs = Yes
}


2 Extra-Pools for VFull-Backup (The Incrementals have their own Pool):

Pool {
  Name = VFull-Pool4Cent50-01
  Pool Type = Backup
  Recycle = yes   # automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Action On Purge = Truncate
  Volume Retention = 100 days
  Maximum Volume Bytes = 1G  # Test Volume size
  Label Format = VFullcent50_01-
  Maximum Volumes = 20
  Next Pool = VFull-Pool4Cent50-02
  Storage = File1
}

Pool {
  Name = VFull-Pool4Cent50-02
  Pool Type = Backup
  Recycle = yes   # automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Action On Purge = Truncate
  Volume Retention = 100 days
  Maximum Volume Bytes = 1G  # Test Volume size
  Label Format = VFullcent50_02-
  Maximum Volumes = 20
  Next Pool = VFull-Pool4Cent50-01
  Storage = File2
}


And then I have 3 scripts which run before/after the catalog backup:

# Backup the catalog database (after the nightly save)
Job {
  Name = "BackupCatalog"
...snip (catalog-backup)...


  RunScript {
    RunsWhen = Before
    RunsOnClient = No
    Console = "prune expired volume yes"
  }
  RunScript {
    RunsWhen = After
    RunsOnClient = No
    Console = "purge volume action=truncate allpools storage=File1"
  }
  RunScript {
    RunsWhen = After
    RunsOnClient = No
    Console = "purge volume action=truncate allpools storage=File2"
  }
}

With Kind Regards
Alfred





Am 23.05.2018 um 17:09 schrieb Lloyd Brown:
I can't speak for Alfred, but I would love to see example configs of how 
to set up a progressive virtual-full, the underlying jobs, pools, etc.  
I'm sure I could muddle my way through it using the documentation, but I 
find working from examples to be a good bit easier.  Also more likely to 
not only work, but work well.


Then again, I've never even gotten traditional virtual-full working.  
It's been on my todo list for several years now.  Go figure.



--
Lloyd Brown
Systems Administrator
Fulton Supercomputing Lab
Brigham Young University
http://marylou.byu.edu


On 05/22/2018 11:08 AM, Bill Arlofski wrote:
>> Does anyone have experience with Progressive VFull-Backups?
>
> Yes, I do.   Do you have more questions?






Re: [Bacula-users] Progressive VFull Questions

2018-05-24 Thread Kern Sibbald

Hello Alfred,

Thanks for your kind feedback.  You might be interested in the history 
of this feature.

It was first proposed by one of Bacula Systems' support engineers, then 
another support engineer (a scripting guru) wrote a script to make it 
work with older Baculas.  They presented the script at a Bacula Systems 
bi-annual company meeting, where the concept was well received and 
considered for implementation in Bacula.  Bareos then implemented it in 
their code (i.e. no script needed).  I thought the Bareos implementation 
was excessively complicated, so I then implemented my own design in 
Bacula, which I consider much more intuitive. The concept is a bit tricky, 
but once one understands it, it turns out to be very cool :-)


Best regards,

Kern


On 05/24/2018 08:36 AM, Alfred Weintoegl wrote:

Thank you Kern,

the PVF whitepaper is a great help, because every aspect is explained 
exactly in this paper.

I didn't know it before.

And thank you Bill also for your prompt answer.

I've only just seen the following PVF demonstration:
https://www.baculasystems.com/ml/pvf3.svg#PVF-title
and this looks unbelievable but is exactly what we want to use for our 
future backups...
(...as the Progressive Virtual Full Backup is now available also for 
the free bacula version V9.0.x).



Thanks for this great work of free SW
Alfred


Am 23.05.2018 um 11:14 schrieb Kern Sibbald:
> Hello,
>
> Perhaps the PVF whitepaper would help you if you have not already seen
> it:  www.bacula.org -> Documentation -> White Papers -> Progressive ...
>
>
> On 05/22/2018 02:08 PM, Alfred Weintoegl wrote:
>> In "New Features in 9.0.0 - Progressive Virtual Full" documentation it
>> says:
>>
>> "The new directive Delete Consolidated Jobs expects a yes or no value
>> that if set to yes will cause any old Job that is consolidated during
>> a Virtual Full to be deleted.".
>>
>> Here are some questions of a bacula-newbie:
>>
>> What directive decides when jobs are deleted when creating a Virtual
>> Full Backup:
>> a) The Retention time of the Incremental Backup Pool for incremental
>> jobs?
>> b) The Job Retention time of the incremental jobs?
>> c) Or if this job is deleted because of "Delete Consolidated Jobs =
>> yes"?
>>
>> in case of (c): Would jobs be deleted immediately or only after the
>> Virtual Full is finished?
>>
>> Does anyone have experience with Progressive VFull-Backups?
>>
>> thx
>> Alfred

...snip

---
Am 22.05.2018 um 19:08 schrieb Bill Arlofski:
...snip
>
> Hello Alfred,
>
> Option (c) is the correct choice.
>
>> in case of (c): Would jobs be deleted immediately or only after the
>> Virtual Full is finished?
>
> Jobs that are consolidated into a Virtual Full are only deleted once the
> Virtual Full job has successfully completed with a JobStatus of "T". If
> the Virtual Full fails for any reason, no Incrementals will be deleted.
> And of course, this is only true if "DeleteConsolidatedJobs = yes" is set.
>
>> Does anyone have experience with Progressive VFull-Backups?
>
> Yes, I do.   Do you have more questions?
>
> Best regards,
>
> Bill





Re: [Bacula-users] Progressive VFull Questions

2018-05-24 Thread Alfred Weintoegl

Thank you Kern,

the PVF whitepaper is a great help, because every aspect is explained 
exactly in this paper.

I didn't know it before.

And thank you Bill also for your prompt answer.

I've only just seen the following PVF demonstration:
https://www.baculasystems.com/ml/pvf3.svg#PVF-title
and this looks unbelievable but is exactly what we want to use for our 
future backups...
(...as the Progressive Virtual Full Backup is now available also for the 
free bacula version V9.0.x).



Thanks for this great work of free SW
Alfred


Am 23.05.2018 um 11:14 schrieb Kern Sibbald:
> Hello,
>
> Perhaps the PVF whitepaper would help you if you have not already seen
> it:  www.bacula.org -> Documentation -> White Papers -> Progressive ...
>
>
> On 05/22/2018 02:08 PM, Alfred Weintoegl wrote:
>> In "New Features in 9.0.0 - Progressive Virtual Full" documentation it
>> says:
>>
>> "The new directive Delete Consolidated Jobs expects a yes or no value
>> that if set to yes will cause any old Job that is consolidated during
>> a Virtual Full to be deleted.".
>>
>> Here are some questions of a bacula-newbie:
>>
>> What directive decides when jobs are deleted when creating a Virtual
>> Full Backup:
>> a) The Retention time of the Incremental Backup Pool for incremental
>> jobs?
>> b) The Job Retention time of the incremental jobs?
>> c) Or if this job is deleted because of "Delete Consolidated Jobs =
>> yes"?
>>
>> in case of (c): Would jobs be deleted immediately or only after the
>> Virtual Full is finished?
>>
>> Does anyone have experience with Progressive VFull-Backups?
>>
>> thx
>> Alfred

...snip

---
Am 22.05.2018 um 19:08 schrieb Bill Arlofski:
...snip
>
> Hello Alfred,
>
> Option (c) is the correct choice.
>
>> in case of (c): Would jobs be deleted immediately or only after the
>> Virtual Full is finished?
>
> Jobs that are consolidated into a Virtual Full are only deleted once the
> Virtual Full job has successfully completed with a JobStatus of "T". If
> the Virtual Full fails for any reason, no Incrementals will be deleted.
> And of course, this is only true if "DeleteConsolidatedJobs = yes" is set.
>
>> Does anyone have experience with Progressive VFull-Backups?
>
> Yes, I do.   Do you have more questions?
>
> Best regards,
>
> Bill
> --
> Bill Arlofski
> http://www.revpol.com/bacula
>
> -- Not responsible for anything below this line --





