Re: [Bacula-users] (My) Job pruning seems not to work as intended

2024-10-23 Thread Bill Arlofski via Bacula-users

On 10/23/24 11:51 AM, Justin Case wrote:
>

Thanks Bill, I got it working with your help.


Excellent!   That was quick too! :)

Glad I could help!


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com


signature.asc
Description: OpenPGP digital signature
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] (My) Job pruning seems not to work as intended

2024-10-23 Thread Bill Arlofski via Bacula-users

On 10/23/24 10:59 AM, Justin Case wrote:

I used this in an admin job, but no verify/admin jobs older than 6mo got 
deleted:

   Runscript {
 RunsWhen = "Before"
 RunsOnClient = no
  Console = "delete from job where (type in ('V', 'D') or (type = 'B' and jobbytes = 0 and jobfiles = 0)) and starttime < now()-interval '6 months';"
   }



Hello Justin,

That will not work as it currently stands.

The reason is that the `Console =` option is meant to send only bconsole-specific commands to the Director.

Also, I always recommend not using the Console option at all, and instead using the `Command =` option pointed at a small
script, for a few reasons:


- Any output from the Console command will be logged as `JobId: 0` and will not be logged to the catalog, and it is confusing 
to see random JobId: 0 log entries intermixed with other job log entries in the bacula log file.


- Using a script, all of its stdout will be logged in the job that called 
it, so you can report progress of a script simply 
by using the `echo` command in a shell script.


- You have full control of what happens in a script and the order of events. `Console =` commands cannot be guaranteed to
run in the order specified in a Job (if you have more than one).


So, I would do something in a small script like:
8<
#!/bin/bash

# First, SELECT using the same SQL command so you have in your job log what jobs were deleted
echo -e "sql\nSELECT jobid, name from job \n\nquit\n" | bconsole

# Next, DELETE using your SQL command but it needs to be passed to bconsole
echo -e "sql\nDELETE from job where \n\nquit\n" | bconsole
8<

Notice that `echo -e` is used. This allows us to send multiple commands using the `\n` (line feed). We need this because we 
have to first put bconsole into its `sql` mode so we can send the SELECT and DELETE SQL commands.


Also notice there are two `\n` after the SQL commands and before the quit command. This is so we "Terminate query mode
with a blank line," as the bconsole sql command tells us when we enter that mode. :)
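Putting the two pieces together with the SQL from the original post, a minimal sketch of such a script might look like the following (the column list in the SELECT is illustrative; the WHERE clause is the one from the earlier message). It only builds and prints the bconsole input here; pipe the built strings into bconsole to actually run them:

```shell
#!/bin/bash
# Sketch: build the two bconsole command sequences (SELECT first so the job
# log records what will be deleted, then DELETE).
# Run each with, e.g.:  printf '%s\n' "$select_cmds" | bconsole
where="(type IN ('V', 'D') OR (type = 'B' AND jobbytes = 0 AND jobfiles = 0)) AND starttime < NOW() - INTERVAL '6 months'"

# The empty line before 'quit' terminates bconsole's sql query mode.
select_cmds=$(printf "sql\nSELECT jobid, name, starttime FROM job WHERE %s;\n\nquit" "$where")
delete_cmds=$(printf "sql\nDELETE FROM job WHERE %s;\n\nquit" "$where")

printf '%s\n' "$select_cmds"
```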



Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com




Re: [Bacula-users] setting up an autochange in the cloud

2024-10-22 Thread Bill Arlofski via Bacula-users

On 10/22/24 1:05 AM, Stefan G. Weichinger wrote:


So I assume you say, it's enough to enable it in SD, right?
And let the FD encryption away to keep it simple(r).

?


Yes. I think so, BUT make sure to pay attention to dealing with managing the key files for each volume. They get stored in 
the keys directory configured by the `KEY_DIR` variable in the `/path/to/key-manager.py` script.


If a key file is lost it is (obviously) impossible to retrieve the data from that volume, rendering your jobs on that volume 
useless. Probably a good idea to set up a master key file and store that somewhere safe as insurance for such a case.
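As a rough illustration of that insurance step, something like the following could archive the key directory for off-host safekeeping (the paths and key file names here are stand-ins; check the `KEY_DIR` variable in your key-manager.py for the real location):

```shell
# Sketch: archive the SD's volume encryption key files so a lost disk does
# not mean unrecoverable volumes. A temporary directory stands in for the
# real KEY_DIR configured in key-manager.py.
KEY_DIR=$(mktemp -d)
touch "$KEY_DIR/Vol-0001.key" "$KEY_DIR/Vol-0002.key"   # example key files

tar -czf /tmp/bacula-volume-keys.tar.gz -C "$KEY_DIR" .
# Now copy /tmp/bacula-volume-keys.tar.gz somewhere safe (off-host, encrypted).
tar -tzf /tmp/bacula-volume-keys.tar.gz
```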




The actual path will depend on the Bacula community maintainer for your distro. 
:)


Yes, that was what I was missing (and the installed package, sure). I
added/check both things according to the mentioned video.

Now it works.

thanks to both of you!


Excellent!

Glad to partner again with Marcin to help you out. :)


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com




Re: [Bacula-users] setting up an autochange in the cloud

2024-10-21 Thread Bill Arlofski via Bacula-users

On 10/21/24 1:45 AM, Stefan G. Weichinger wrote:
>

So that means for best protection I would need "storage daemon data
volume encryption"? Or even better: enable both?

I assume enabling both would add overhead in terms of CPU usage etc


Hello Stefan,

Not sure I would call FD encryption plus SD encryption "better", only because you have the added task of managing the 
keys/certs on the client(s) in addition to the SD re-encrypting the already FD-encrypted data and you needing to make sure 
the encryption key files for each SD-encrypted cloud volume are safely maintained. :)


So, more CPU use on the client(s) and on the SD, and more admin work, but yes, data
would be encrypted twice in such a setup.



Is there a working example somewhere?

Just setting "Volume Encryption = yes" leads to issues labelling the
volumes here, I assume that a keypair is needed somewhere.

thanks


In addition to setting "Volume Encryption = yes" in each of your SD's cloud devices, you also need the following in your
SD's top-level configuration:

8<
EncryptionCommand = "/path/to/key-manager.py getkey"
8<

The actual path will depend on the Bacula community maintainer for your distro. 
:)
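For context, the two settings fit together roughly like this (a sketch only: the resource and device names are made up, and the key-manager.py path varies by distro as noted above - verify the directives against your own bacula-sd.conf):

8<
# Storage Daemon top level (bacula-sd.conf):
Storage {
  Name = mysd-sd
  Encryption Command = "/path/to/key-manager.py getkey"
}

# And in each cloud device:
Device {
  Name = CloudDev1
  Device Type = Cloud
  Volume Encryption = yes
  ...
}
8<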


Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] baculabackupreport script error

2024-10-20 Thread Bill Arlofski via Bacula-users
On Saturday, October 19th, 2024 at 15:23, Chris Wilkinson 
 wrote:
>
> That's perfectly reasonable. I just got confused by the help screen thinking 
> that since the -C option is in [ ] I took that to mean optional. Now I know 
> to call with the -C option.
> 

> Thanks

Hello Chris,

Excellent. Glad it's all working for you now. :)

Make sure you are using the latest version available on Github (it's probably 
best to just clone the git repo). I find bugs every now and again and somehow 
am still adding features.

Every time I add a feature or fix a bug, my buddy and I joke that "the script 
is now 'feature complete'", and then, of course, another feature idea or bug 
comes along. lol


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



Re: [Bacula-users] (My) Job pruning seems not to work as intended

2024-10-19 Thread Bill Arlofski via Bacula-users
On Wednesday, October 16th, 2024 at 06:15, Justin Case  
wrote:

> I am wondering why I am seeing in my catalog lots of jobs older than the 
> JobRetention defined in the pools, and also older than the default 
> JobRetention assumed for the clients.
> The volume recycling seems to work fine adhering to the VolumeRetention in 
> the pools.
> 

> To me it is a mystery, probably be cause I overlook some dependencies I am 
> not aware of.
> Can someone please help me understanding this.

Hello Justin,

From the rest of this thread, it appears that your pruning is working as
configured/expected.

Where are you seeing the old jobs? bconsole? BWeb? Bacularis?

They could be coming from the `jobhisto` table.

And you can prove this by querying the `job` and `jobhisto` tables and comparing
the results.
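For example, one way to compare the two tables from bconsole's sql mode (a sketch; it only builds the command sequence here - pipe it into bconsole to run it):

```shell
# Sketch: build a bconsole 'sql' session that counts rows in both tables.
# Run it with:  printf '%s\n' "$cmds" | bconsole
# The blank line before 'quit' terminates bconsole's sql query mode.
cmds=$(printf 'sql\nSELECT COUNT(*) FROM job;\nSELECT COUNT(*) FROM jobhisto;\n\nquit')
printf '%s\n' "$cmds"
```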

Let us know. 



Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



Re: [Bacula-users] baculabackupreport script error

2024-10-19 Thread Bill Arlofski via Bacula-users
On Thursday, October 17th, 2024 at 04:11, Chris Wilkinson 
 wrote:

> I got your Python script working but only if I include the -C 
>  in the command line. Otherwise it complains that 
> email is missing, cannot connect to dB and a few other things. It looks like 
> it's not finding the configuration file that I put in my home directory. With 
> -C it does find it.
> 

> Is there no default for this parameter?
> 

> -Chris Wilkinson


Hello Chris,

Since there are defaults for practically everything (eg: the db name, db user, 
db password, number of hours, days, etc), the only variable required which 
cannot have a default is the email address. This must be set on the command 
line, or as an environment variable, or in a [section] of a config file.

Since the config file is not required, there is no default for it, and it can 
be located anywhere you like. I could think of no "correct" location for a 
default config file since there is Bacula Enterprise and several different 
package maintainers for Bacula Community, each choosing different places for 
"bacula stuff" to reside. :)

Also, you will notice that in the example config file, the [DEFAULT] section is 
basically blank - because everything already has some default setting. If you 
have a different name for your Bacula catalog DB, or you have set a DB user 
other than the default 'bacula', or you have set a password other than the 
default "", then you can set them in the environment, on the 
command line, or in a specified [section] of a specified config file.

So, if you specify just the -C /path/to/config/file and -S , everything else can be set/overridden in the config file, 
keeping your cron line nice and short/clean. :)
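As an illustration, such a config file might look something like this (a sketch only - the section and option names here are illustrative, so check the example config file shipped with the script for the real ones):

8<
[DEFAULT]
# Everything has a built-in default, so this section can stay empty.

[mysite]
email = admin@example.com
dbname = bacula
dbuser = bacula
8<

The cron line would then just name the config file with -C and the [section] to use with -S.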


Hope this helps,
Bill

-- 
Bill Arlofski
w...@protonmail.com



Re: [Bacula-users] Aggressively prune (and truncate) file volumes on Progressive backups...

2024-10-02 Thread Bill Arlofski via Bacula-users

On 10/2/24 10:00 AM, Marco Gaiarin wrote:

Mandi! Gary R. Schmidt
   In chel di` si favelave...


RTFM again, it took me about three goes to understand it - and I've been
doing backups since 9-track tape drives were vertical!


Mmmm... i've found references to scratch pool but really i've never
understood; if i look in docs i found a rather laconic:


https://www.bacula.org/9.4.x-manuals/en/main/Configuring_Director.html#SECTION0020161000

seems there's no whitepaper on scratch pools...


So, RTFM where?! ;-)


Hello Marco,

Typically Scratch pools are not too useful for disk volumes.

The idea behind a Scratch pool is the following:

Let's say you have two pools: "Full", and "Inc"

If you do not have a "ScratchPool" to pull from, and a RecyclePool to send recycled volumes back into, then tapes created in 
the Inc pool (for example) will never be available to be used in the Full pool - even if they are all past their
retention 
and have all been pruned and recycled. They are doomed to this Inc pool forever. :)


So, you could have job(s) writing to the Full pool all waiting on media which is technically available (pruned and living in 
the Inc pool), but not available for use by jobs in the Full pool.


So, what you do is set "ScratchPool = ScratchPoolName" and "RecyclePool = ScratchPoolName" in your Full and Inc Pools. Then, 
make sure to enable Recycling (Recycle = yes) in the pools.


Then, when tapes are labeled (using the 'label barcodes' command in bconsole), you specify the ScratchPoolName as the initial 
pool where they will be created and then Bacula will pull from the Scratch pool when a tape is needed, and put it back there 
when it is recycled so that the tape is then free to move between the Full and Inc pool as it becomes available, and as 
necessary by Bacula's needs.
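In configuration terms, the above boils down to something like this sketch (pool names and the retention value are examples; the Inc pool would get the same ScratchPool/RecyclePool settings):

8<
Pool {
  Name = Full
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 2 months
  Scratch Pool = ScratchPoolName   # pull needed volumes from here
  Recycle Pool = ScratchPoolName   # send recycled volumes back here
}
8<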



I do have a use case for ScratchPool and RecyclePool settings in Disk volume 
pools... In my case I use
Josh Fisher's 
'vchanger' so each of my xTB removable drives can have 10 file volumes in either my Inc or Full offsite pools. Disk volumes 
on each physical/removable xTB disk can freely move as described above so I never run into a situation where the whole 
physical disk is full of 10GB volumes in the full pool when Inc pool volumes are needed for a job and vice versa.



Note: A pool named "Scratch" is treated a bit special by Bacula. If no `ScratchPool` is specified in a pool, Bacula will look 
for volumes with the correct MediaType in this pool.   But, you can name your scratch pools anything you like and never use 
this specific one.


For example, if an SD manages more than one tape library, you can have a set of
pools for Lib1:
 - Lib1_Scratch, Lib1_Full, Lib1_Inc

And then a set of pools for Lib2:
 - Lib2_Scratch, Lib2_Full, Lib2_Inc


Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Debug and trace

2024-10-02 Thread Bill Arlofski via Bacula-users

On 10/2/24 8:34 AM, Mehrdad R. wrote:

Hi guys
So I am trying to trace the backup to verify that files are correctly
scanned and backed up, especially in diff. mode. I am testing backing up a
directory on a Windows server (local Windows FD and SD backing up to a
local disk on the server) which contains around 200k files in around
5k directories, and I am not sure if the diff. backup is actually scanning
and catching all the changed files which are copied there; the usual
logs and file lists don't show anything.

I came across the SET DEBUG LEVEL command and ran that in bconsole
thinking I would get some info on what is happening in the FD and SD trace
files, but there is nothing in those files.

Does this command still work (it was some old documentation describing
it)? bconsole seemed to accept it, but no results.
Has anyone tried this? Or maybe knows of any other way to see which
files are actually scanned, selected, etc.? Or any other insight into
what is actually happening in the backup process?

Hello Mehrdad,

There is really no need for debugging. You can see what the end results are 
with just a couple bconsole commands.

Run the Full, then make the changes and/or file additions, then run the Diff...

Then, list the files from the Full and Diff:

# echo "list files jobid=" | bconsole | tee /tmp/full.txt

# echo "list files jobid=" | bconsole | tee /tmp/diff.txt

The full step is not really necessary I'd say.

But the diff.txt file list should align with any new or modified directories 
and files that happened after the Full.
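To see just the entries that are new in the Diff, the two captured lists can be compared; a small sketch (inline sample data stands in for the real /tmp/full.txt and /tmp/diff.txt produced by the bconsole commands above):

```shell
# Sketch: lines unique to the Diff listing are the files Bacula saw as new
# or changed since the Full. Sample data stands in for the bconsole output.
printf '/data/a.txt\n/data/b.txt\n' > /tmp/full.txt
printf '/data/b.txt\n/data/c.txt\n' > /tmp/diff.txt

# comm requires sorted input.
sort -o /tmp/full.txt /tmp/full.txt
sort -o /tmp/diff.txt /tmp/diff.txt

# comm -13: suppress lines unique to full.txt and lines common to both,
# leaving only the lines unique to diff.txt.
comm -13 /tmp/full.txt /tmp/diff.txt    # → /data/c.txt
```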


Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Remote Client Security Alerts

2024-09-17 Thread Bill Arlofski via Bacula-users

On 9/17/24 4:41 AM, Chris Wilkinson wrote:
I keep getting security alerts from a remote client backup. The backups always run to success. The IPs that are listed in the 
job log are different every time and in various locations including some in Russia but also in London and European data 
centres. There are no entries at all in the remote client bacula log. This only happens with remote client backups, never 
with local client backups.


It's not clear to me whether these alerts are coming from the DIR or being sent 
to the Director by the client.

I'm not sure whether to just ignore these or take some steps to block them. Is there an FD directive that would reject these 
perhaps?


Any advice welcomed.

Thanks

-Chris Wilkinson


Hello Chris,

Since this FD "nuc2" is (obviously) exposed to the Internet, I would enable the firewall on it, and only allow connections in 
from the Director on port 9102/TCP (default).


Best/safest way IMHO.
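For instance, with ufw on the client it could look roughly like this (a sketch: DIRECTOR_IP is a placeholder for your Director's address, and your distro's firewall tooling may differ):

8<
# Allow only the Director to reach the FD on 9102/TCP; drop everything else.
ufw default deny incoming
ufw allow from DIRECTOR_IP to any port 9102 proto tcp
ufw enable
8<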



Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Automatically cancel backup jobs --> do not mark as fatal

2024-09-11 Thread Bill Arlofski via Bacula-users

On 9/11/24 1:50 AM, Bruno Bartels (Intero Technologies) wrote:
>

Hi Bill,
thank you very much for your great answer!
I am going to implement this in the next time and then get back to you.
Thank you again for that valuable hint!
Bruno


Hello Bruno,

You are welcome.

I worked with this new feature when it was first introduced, but it has been a while since I touched it, so I look
forward to your results. :)



Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] bacula setup hangs, need help debugging

2024-09-11 Thread Bill Arlofski via Bacula-users

On 9/11/24 1:55 AM, Simon Flutura wrote:

Hi,


I inherited a bacula setup, backing several machines up on tape.

Sadly the setup hangs, no network/cpu/io activity while jobs are running
forever.

We are running Bacula   11.0.6.

Do you have any clue where to start in debugging the setup?


Best,


Simon


Hello Simon,

Most likely Bacula is waiting for something (media is my first guess)

In bconsole, a "status director" will be the first place to start.


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Automatically cancel backup jobs --> do not mark as fatal

2024-09-11 Thread Bill Arlofski via Bacula-users

On 9/10/24 5:21 AM, Bruno Bartels \(Intero Technologies\) via Bacula-users 
wrote:

Hi all,
I have adjusted Bacula to cancel jobs that are duplicates of one job (disable "AllowDuplicateJobs", or
"DisableDuplicateJobs" AND "CancelQueuedDuplicates", which should be the same in my understanding).
The reason for this is that there are some really big full backups running that take a few days to finish, and meanwhile
there is the possibility that the same job can start at a lower level (incremental/differential).
This works fine, except one problem:
When the new job gets canceled, this error is thrown in the logs:

Fatal error: JobId XXX already running. Duplicate job not allowed.

And Bacula is sending out a mail.

Question: Is there a possibility to not log this as a FATAL ERROR?

I want to receive mails concerning fatal errors, so setting the Messages
resource isn't an option.

Can you please help?

Thank you in advance




Bruno


Hello Bruno,

You cannot do this the way you are currently trying because as you have seen, Bacula will cancel the job, and it will show up 
as a "non good" (canceled) job in the catalog, and you will get the failed job email.


Fortunately, there is a new feature which was added recently that should solve 
this issue for you.

It is called 'Run Queue Advanced Control' which adds a new "RunsWhen" setting for your job Runscripts. The new setting is 
"RunsWhen = queued"


The idea is that instead of using the AllowDuplicateJobs, DisableDuplicateJobs, and CancelQueuedDuplicates job options to
control whether a job is allowed to be queued/started, you add a RunScript{} stanza to your job, set the RunScript's
"RunsWhen = queued", and have the RunScript's "Command =" setting point to a custom script (we have examples).

The script's returncode/errorlevel will determine whether the job enters the queue, or is just dropped and forgotten
about - producing no canceled job, and no job error email.

Using this new advanced "RunsWhen = queued" level, you should be able to 
accomplish what you are looking to do:

- Prevent same jobs from being queued when the same job is already running
- Prevent duplicate jobs from being canceled and error emails from being sent.
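In config terms, the idea sketches out roughly like this (the script path and name are hypothetical examples; the script itself would decide, e.g. by checking the Director's running jobs, whether to exit 0 to let the job queue or non-zero to drop it silently):

8<
Job {
  Name = BigFullJob
  ...
  RunScript {
    RunsWhen = queued
    RunsOnClient = no
    # %n is substituted with the Job name.
    Command = "/opt/bacula/scripts/check_duplicate.sh %n"
  }
}
8<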


This new feature is available since Bacula Community version 15.0.x, which 
closely tracks Bacula Enterprise v16.0.x.

It is documented here in the Enterprise manual:

https://docs.baculasystems.com/BETechnicalReference/Director/DirectorResourceTypes/JobResource/index.html#job-resource

The section you are looking for is named: "Notes about the Run Queue Advanced 
Control with RunsWhen=Queued"

Please make sure you are running Community version v15.0.2 first, then give 
this a try and let me know if this helps.


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Backing up to USB disks - Archival of data?

2024-09-05 Thread Bill Arlofski via Bacula-users

On 9/5/24 6:48 AM, Anders Gustafsson wrote:

Hi!

What is the best or recommended process here? Ie if we back up to externa USB 
disks and want to replace the
disk every now and then and keep the old as an archival copy?

Assuming that we then want to restore a file from an old disk, that was 
disconnected three months ago. What
do we need to do?


Hello Anders,

When using multiple removable disks, I would recommend Josh Fisher's excellent 
"vchanger"

You can find it here: https://sourceforge.net/projects/vchanger/


Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Virtual backup

2024-09-04 Thread Bill Arlofski via Bacula-users

On 9/4/24 11:27 AM, David Waller via Bacula-users wrote:

Sorry forgot to add:
I am running version 15.02 on Debian and the two pool definitions are as 
follows:

Pool {
  Name = "Air-Full"
  PoolType = "Backup"
  LabelFormat = "Air-"
  ActionOnPurge = "Truncate"
  MaximumVolumeJobs = 1
  VolumeRetention = 31536000
  NextPool = "Virtual-Air"
  Storage = "FreeNAS1"
  AutoPrune = yes
  Recycle = yes
}

And

Pool {
  Name = "Virtual-Air"
  PoolType = "Backup"
  LabelFormat = "Virtual-"
  VolumeRetention = 31536000
  Storage = "FreeNAS1"
}


My understanding is that the job creates the full and incremental backup volumes in the Pool, Air-Full, by running the job 
with a level of full. I can then run the job with a level of VirtualFull and bacula will copy the full and the various 
incremental volumes to the pool Virtual-Air and consolidate into one volume. The various volumes are created in the pool 
Air-Full as expected, the failure happens when I run with a level of VirtualFull.


Both pools are on the same storage, I have not tested yet if that is the issue. If it is, and I have to have the two pools on 
separate storage, what happens with the media type as my understanding is to use separate media type if you have different 
storage devices in which case would bacula get confused on a restore?


Hello David,

8<
Status “is waiting on max Storage jobs.”
8<

This means what it says. Somewhere in the "pipeline" (in this case on the Storage) you have reached the limit on the number 
of concurrent jobs that can run on the defined storage.


When you run a VirtualFull using the same storage (perfectly fine to do so), it needs a device to read and a device to write, 
which counts as two jobs to the Storage.



What is your `MaximumConcurrentJobs` settings for the following:

- Director Storage resource: `FreeNAS1`
- The Storage Daemon itself (this is the top level limit of the number of jobs 
the SD can run concurrently)
- The Device(s) in the SD. If you only have one device, the MaximumConcurrentJobs will not matter because a device can only 
read or write during a job, not both.


Can you post your configurations for:

- The Director storage resource `FreeNAS1`
- The SD's Autochanger and devices

If your Director's storage resource `FreeNAS1` points to a single device on the SD, then you need to instead create an 
Autochanger on the SD with some number of devices and point the Director's Storage resource `FreeNAS1` at that Autochanger.


I usually start with 10 devices plus a couple more "ReadOnly" devices so there should always be a device available for 
reading during critical restores.
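A sketch of that layout (a rough example only - names, paths, and directive choices here are illustrative, so adapt them to your own bacula-sd.conf):

8<
Autochanger {
  Name = FileChgr1
  Device = FileDev1, FileDev2, FileDevRead1   # ... typically 10+ devices
  Changer Command = ""
  Changer Device = /dev/null
}

Device {
  Name = FileDev1
  Media Type = FileMedia
  Archive Device = /mnt/bacula-volumes
  Device Type = File
  Autochanger = yes
  Random Access = yes
  Automatic Mount = yes
}

Device {
  Name = FileDevRead1          # reserved for reads/restores
  Media Type = FileMedia
  Archive Device = /mnt/bacula-volumes
  Device Type = File
  Autochanger = yes
  Read Only = yes
}
8<

The Director's Storage resource would then point at FileChgr1 rather than at a single device.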



Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] full and diff backups in 2 different jobs

2024-09-03 Thread Bill Arlofski via Bacula-users

On 9/3/24 4:24 AM, Mehrdad R. wrote:

Hi
Mostly to see if it is possible, i am fully aware that it can be done in one 
job,


Hello Mehrdad, this is in fact the only way Bacula works. :)

Bacula considers one Job name, one Client, and one Fileset a unit. Change any one of these and Bacula sees an entirely new 
unit requiring a Full backup before an incremental or a differential can be run.




was just wondering if there was a way to have them in
separate jobs, maybe save them on separate disks eventualy
The jobs that i have set up fail in that respect as i mentioned and
the diff job does not see the full job


What you would/could do in this case is one of a couple different things:

- Use the "FullBackupPool", "IncrementalBackupPool", and the "DifferentialBackupPool" settings in a Job so that each of these 
levels of Backups may be directed to different storage locations.


OR

- Set the "Level", "Storage", and "Pool" in each of your schedule's "Run" lines.


Using the first method, you will need to set the "Storage" in each of these pools so that the correct disk location is always 
used regardless of the Pool selected.



Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Library and clean cartridge...

2024-08-27 Thread Bill Arlofski via Bacula-users

Oh! Just one more note...

I wrote:
8<
*disable storage=mhvtl-waa_a-Autochanger drive=0
8<

I meant to mention that the '0' in the 'drive=0' command line option is the SD's Drive Device's "DriveIndex" number, so of 
course it might be 1, 2 or whatever. :)


Drives are zero indexed, while slots are one indexed.

Except in all Tape Library web GUIs I have seen, where drives are also one 
indexed.

This makes life really fun for a backup admin. 



Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Library and clean cartridge...

2024-08-27 Thread Bill Arlofski via Bacula-users

On 8/27/24 8:29 AM, Marco Gaiarin wrote:

Mandi! Bill Arlofski via Bacula-users
   In chel di` si favelave...


You cannot tell Bacula to load a cleaning tape without experiencing
these kinds of errors, because when the SD is told to
load a tape from a slot into a drive, the 'mtx-changer' script calls the 'mtx' 
utility to load, then, once that returns OK,
the script calls `mt -f /tape/nodeid status` over and over (with time/iteration 
limits and some sleep time between each call)
until it sees a "ONLINE" in the `mt status` output (in the case of a Linux 
distribution).

In the case of a cleaning tape, this "ONLINE" will never appear, and the 
mtx-changer script will always time out and fail,
then return with errorlevel 1, and the SD will complain exactly as you have 
demonstrated above.


I supposed that. Super clear! And thanks to all!


Welcome!  :)



You have two choices for cleaning tape drives with Bacula:
- Manual: Issue a disable command to the drive in bconsole, then manually 
load/unload a cleaning tape, then re-enable the drive.


Only a note: you meant 'disable job(s)', right? 'disable [command to] the
drive' seems not possible in bconsole, or i'm missing something...

Nope... You want to disable the drive in bconsole.

Then, the drive will not be used for any jobs that start on the SD - this is nice when you have library with several drives 
because other jobs can continue to be started and run on the other drives while this one is disabled:

8<
*disable storage=mhvtl-waa_a-Autochanger drive=0
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
3002 Device ""mhvtl-waa_a-Autochanger_Dev0" (/dev/tape/by-id/scsi-350223344ab001700-nst)" disabled.
8<


And a status storage=mhvtl-waa_a-Autochanger now shows the device is currently 
disabled:
8<
Device Tape: "mhvtl-waa_a-Autochanger_Dev0" 
(/dev/tape/by-id/scsi-350223344ab001700-nst) is not open.
Device is disabled. User command.
Drive 0 is not loaded.
8<


Hope this helps!
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Library and clean cartridge...

2024-08-27 Thread Bill Arlofski via Bacula-users

On 8/27/24 1:44 AM, Arno Lehmann wrote:

Hi all,

just one tiny addition -- you may want to change the "Maximum Changer
Wait" time if you want to use tape cleaning with Bill's script, because
the additional activity naturally needs some time. And having jobs that
already used a lot of time and tape capacity fail just because the tape
drive needed cleaning in between is rather inconvenient.


Thanks for the additional comment, Arno.

I figured my email was too long-winded already, but this reminds me that I probably should add this detail to the notes at 
the top of the script and the documentation. :)



Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Library and clean cartridge...

2024-08-26 Thread Bill Arlofski via Bacula-users

On 8/26/24 8:20 AM, Marco Gaiarin wrote:


I've set up the 'Cleaning Prefix' in my pool(s) and added the cleaning tape
to the library; the situation now is:

  *update slots storage=CNPVE3Autochanger
  Connecting to Storage daemon CNPVE3Autochanger at cnpve3.cn.lnf.it:9103 ...
  3306 Issuing autochanger "slots" command.
  Device "Autochanger" has 8 slots.
  Connecting to Storage daemon CNPVE3Autochanger at cnpve3.cn.lnf.it:9103 ...
  3306 Issuing autochanger "list" command.
  Catalog record for Volume "AAJ666L9" is up to date.
  Catalog record for Volume "AAJ667L9" is up to date.
  Volume "CLN001L9" not found in catalog. Slot=8 InChanger set to zero.

I've tried to mount the cleaning cartridge but:

  *mount storage=CNPVE3Autochanger slot=8
  3304 Issuing autochanger "load Volume , Slot 8, Drive 0" command.
  3305 Autochanger "load Volume , Slot 8, Drive 0", status is OK.
  3901 Unable to open device ""LTO9Storage0" (/dev/nst0)": ERR=t

ape_dev.c:170 Unable to open device "LTO9Storage0" (/dev/nst0): 
ERR=Input/output error


This also seems pretty logical: Bacula mounts the cartridge and (tries to) read
it, but this is a cleaning one...
Anyway, i've waited the command to end, then i've tried also:

  *unmount storage=CNPVE3Autochanger
  3307 Issuing autochanger "unload Volume *Unknown*, Slot 8, Drive 0" command.
  3901 Device ""LTO9Storage0" (/dev/nst0)" is already unmounted.

after that, anyway, the cleaning cartridge was back in slot 8 (probably the
previous mount failed early and anyway the cleaning tape was just on the route
to slot 8).


I suspect I'm doing something wrong, e.g. that I shouldn't use the '(u)mount storage' command from the Bacula console to
mount the cleaning cartridge, but should instead use a direct library command via the mtx-changer script, for example.


Does someone have a clue? Thanks.


Hello Marco,

You cannot tell Bacula to load a cleaning tape without experiencing these kinds of errors. When the SD is told to load a tape
from a slot into a drive, the 'mtx-changer' script calls the 'mtx' utility to load the tape, then, once that returns OK, the
script calls `mt -f /tape/nodeid status` over and over (with time/iteration limits and some sleep time between each call)
until it sees "ONLINE" in the `mt status` output (in the case of a Linux distribution).


In the case of a cleaning tape, this "ONLINE" will never appear, and the mtx-changer script will always time out and fail, 
then return with errorlevel 1, and the SD will complain exactly as you have demonstrated above.



You have two choices for cleaning tape drives with Bacula:

- Manual: Issue a disable command to the drive in bconsole, then manually 
load/unload a cleaning tape, then re-enable the drive.

- Try my mtx-changer drop-in replacement script: 
`https://github.com/waa/mtx-changer-python`

This script does much nicer logging of all activities (if logging is enabled), and it can detect when a drive needs to be
cleaned (if enabled), and it can automatically load a cleaning tape, wait, then unload it, then return control to the SD
(also if enabled).
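For reference, the first (manual) option might look roughly like this; the storage name and slot are taken from your output, the drive index and changer node are assumptions, and the exact bconsole syntax varies by Bacula version:

```
# In bconsole (syntax varies by version):
* disable storage=CNPVE3Autochanger drive=0

# From a shell, load the cleaning tape from slot 8, wait for the
# drive's cleaning cycle to finish, then unload it:
# mtx -f /dev/tape/by-id/<changer> load 8 0
# mtx -f /dev/tape/by-id/<changer> unload 8 0

# Back in bconsole:
* enable storage=CNPVE3Autochanger drive=0
```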


I know that a few people are using this script in production environments (some quite large), and I also use it in our lab and
production environments, but I have not gotten much (or any) feedback about this script yet, so one more person's eyes on
it in production would be helpful and welcomed.


In conjunction with this mtx-changer-python.py script, you may also want to check out my `tapealert` script drop-in 
replacement here: https://github.com/waa/bacula-tapealert


This script reports drive and/or tape issues using the tapealert utility back to the SD, which will log the
TapeAlert(s) reported by the drive, and can disable a drive, a tape, or both depending on the errors reported.



Hope this helps!
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] no space left on Bacula server

2024-08-23 Thread Bill Arlofski via Bacula-users

On 8/23/24 12:22 PM, Adam Weremczuk wrote:

I've got the answer now: Backup failed -- Incomplete

Scheduled time: 23-Aug-2024 12:42:26
Start time: 23-Aug-2024 12:49:46
End time:   23-Aug-2024 18:54:02
Elapsed time:   6 hours 4 mins 16 secs
Priority:   10
FD Files Written:   3,459
SD Files Written:   3,459
FD Bytes Written:   726,920,886,781 (726.9 GB)
SD Bytes Written:   726,921,923,888 (726.9 GB)
Rate:   33259.6 KB/s
Software Compression:   None
Comm Line Compression:  42.8% 1.7:1
Snapshot/VSS:   yes
Accurate:   yes
Volume Session Id:  115
Volume Session Time:1723051858
Last Volume Bytes:  1,812,510,508,032 (1.812 TB)
Non-fatal FD errors:0
SD Errors:  0
FD termination status:  OK
SD termination status:  OK
Termination:Backup failed -- Incomplete

What on earth has happened here?


How can we know? You have not shown us any logs, nor any listing of files in 
the dataspool directory. 🤷

In bconsole:
* ll joblog jobid=

In bash:
# ls -la /path/to/data/spool/dir


BUT, your job terminated "Incomplete" which means that once you fix whatever is wrong, you can resume this job and not have 
to start from the beginning:


* resume incomplete   (and choose Job, then the jobid)


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] no space left on Bacula server

2024-08-23 Thread Bill Arlofski via Bacula-users

On 8/23/24 11:33 AM, Adam Weremczuk wrote:

Hi all,

Bacula 9.6.7 on Debian 11.

I'm backing up a 777 GB folder of an external xxx server. The backup job
has been running for a number of hours.

To my surprise, Bacula backup server has just COMPLETELY run out of disk
space with 1.2 TB out of 1.7 TB being used for Bacula .spool temporary
files.

I've never seen this happening before, i.e. so much space being used.

Relevant jobs look as below:

    4139  Back Full  3,458    726.9 G xxx_backup   SD despooling Data
    4140  Back Full  8    586.2 G xxx_backup   is running

Tape space (LTO-8) looks like below:

Before: Device: Remaining Native Capacity in Partition (MiB) (10,620,751)
Now: Device: Remaining Native Capacity in Partition (MiB) (10,176,636)

Is this backup going to fail or succeed? Since it's Fri evening here I
would kind of prefer to know now...

Regards,
Adam


Hello Adam,

Bacula will begin de-spooling a data spool file when one of the following 
conditions are met:

- The spool file reaches the SpoolSize set in a Job
- The spool file reaches the MaximumSpoolSize set in a device
- The spool size reaches the MaximumJobSpoolSize set in a device
- The SpoolDirectory fills to 100% capacity (a condition that should be 
avoided, but Bacula gracefully handles this)

So, if this spool directory reaches 100% (obviously something to try to avoid), jobs will stop spooling, and start despooling 
to the tape drive(s) - one at a time per tape drive of course.
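As a hedged illustration of where those limits live (resource names and values here are examples only, not your config):

```
# bacula-sd.conf - Device resource
Device {
  Name = LTO8Drive0
  ...
  Spool Directory = /var/spool/bacula
  Maximum Spool Size = 500G       # total spool space this device may use
  Maximum Job Spool Size = 100G   # per-job cap on this device
}

# bacula-dir.conf - Job resource
Job {
  Name = BigBackup
  ...
  Spool Data = yes
  Spool Size = 50G                # per-job override
}
```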


You will have failed jobs if the attribute spool (always in SD's "WorkingDirectory") fills, but Bacula should recover 
gracefully if the data spool directory fills.


However, you are using a very old version of Bacula, and this "feature" may not be in version 9.6.7 - I honestly do not know 
when this was added/fixed.


What is in the spool directory?   Maybe there are some *data*.spool files left over from failed jobs just eating up
space?


Attribute and data spool files are named very clearly, so you will know which
ones to keep and which ones may be deleted.


Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com




Re: [Bacula-users] Bacula suddenly running very slow

2024-08-09 Thread Bill Arlofski via Bacula-users

On 8/9/24 4:51 AM, Chris Wilkinson wrote:
> Just an aside - I realised whilst editing the jobs that the storage="sd used for backup jobs" should be specified in the
> Job resource; it's not necessary (or desirable) to specify the storage in the Pool as well, since the job overrides the pool.



Just a correction here.

If you specify a `Storage = ` in a Pool, it cannot be overridden anywhere - not in a JobDefs, not in a Job, not in a 
Schedule, not on the command line, and not even when modifying it just before the final submission of a job.


This, in my opinion, is a bug, as I believe that when an admin overrides something, they should be trusted to know what
they are doing, and that should be the final word. :)



This doesn't seem to be the case for Copy/Migrate Jobs; the storage="sd used for copy jobs" has to be specified in every Pool
used for copy jobs. Am I right that there is no equivalent override mechanism for Copy/Migrate jobs?


The Storage for Copy/Migration control jobs needs to be in the source Pool, or in the Copy/Migration control job itself. I 
don't have time to test, but it may be possible to override the Pool and Storage for these in a Schedule or on the command 
line, but that would make no sense to do. :)



Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Getting Cloud Storage / Amazon S3 working with Bacula 9.6.7 on Debian 12

2024-08-08 Thread Bill Arlofski via Bacula-users

On 8/8/24 2:03 PM, Robert Heller wrote:

Hello Robert,

In this case the error you are getting - as generic as it is - is correct on 
count #2.  :)

"or no matching Media Type"



Your MediaType in the Director's Storage resource called `File1` is:
8<
Media Type = File1
8<


And the MediaType on the SD's Device `CloudStorage` is:
8<
Media Type = CloudType
8<


These two need to match.
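In other words, something like this (trimmed sketch; either value works as long as both sides agree):

```
# bacula-dir.conf
Storage {
  Name = File1
  ...
  Media Type = CloudType   # must match the SD Device below
}

# bacula-sd.conf
Device {
  Name = CloudStorage
  ...
  Media Type = CloudType
}
```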


Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Bacula suddenly running very slow

2024-08-07 Thread Bill Arlofski via Bacula-users

On 8/7/24 1:11 PM, Chris Wilkinson wrote:

And then import the saved sql dump which drops all the tables again and 
creates/fills them?

-Chris


Hello Chris!

My bad!

I have been using a custom script I wrote years ago to do my catalog backups. It uses what PostgreSQL calls a custom (binary)
format. It's typically faster and smaller, so I switched to this format more than 10 years ago. I had not looked at an ASCII
dump version in years; I just looked now, and it does indeed DROP and CREATE everything.


So, the only thing you needed to do was create the database with the
create_bacula_database script, then import the saved SQL dump.

Sorry for the static. :)


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Bacula suddenly running very slow

2024-08-07 Thread Bill Arlofski via Bacula-users

On Wed, Aug 7, 2024, 10:27 AM Chris Wilkinson  wrote:


Would it fail if no tables exist? If so, I could use the bacula create tables 
script first.


Hello Chris,

The Director would probably not even start. :)

If you DROP the bacula database, you need to run three scripts:

- create_bacula_database
- make_bacula_tables
- grant_bacula_privileges

Now you have an empty, but fully functional Bacula catalog for the Director to 
work with.


Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Bacula suddenly running very slow

2024-08-06 Thread Bill Arlofski via Bacula-users

On 8/6/24 9:01 AM, Chris Wilkinson wrote:
I've had v11/postgresql13 running well for a long time but just recently it has started to run very slow. The Dir/Fd is on a 
Raspberry PiB with 8GB memory, Sd on a NAS mounted via CIFS over a Gbe network. I was getting a rate of ~30MB/s on the backup 
but this has dropped to ~1-2MB/s. I can see similar values on the network throughput page of Webmin. Backups that used to 
take 10h are now stretching out 10x and running into next scheduled backups. Jobs do eventually complete OK but are much too 
slow.


It remains the same after a couple of reboots of both the Pi and NAS.

I've tried my usual suite of tools e.g. htop, iotop, glances, iostat, iperf3 but none of these are raising any flags. Iowait 
is < 2%, cpu < 10%, swap is 0 used, free mem is > 80%. Iperf3 network speed testing Dir<=>Fd is close to 1Gb/s, rsync 
transfers Pi>NAS @ 22MB/s, so I don't suspect a network issue.


On the NAS, I have more limited tools but ifstat shows a similarly low incoming network rate. No apparent issues on cpu load, 
swap, memory, disk either. fsck ran with no errors.


I thought maybe there was a database problem so I've also had a try at adjusting PostgreSQL conf per the suggestions from 
Pgtune but to no effect. Postgresqltuner doesn't reveal any problems with the database performance. Postgres restarted of course.


Backup to S3 cloud is also slow by about 3x. It runs 25MB/s (22Mb/s previously) into local disk cache and then 2MB/s to cloud 
storage v. 6MB/s previously. My fibre upload limits at 50Mbs. I would have expected that a database issue would impact the 
caching equally but that doesn't seem to be the case.


So the conclusions so far are that it's not network and not database 🤔.

I'm running out of ideas now and am hoping you might have some.

-Chris Wilkinson


Hello Chris,

This is a long shot, but is there *any* chance you have disabled attribute 
spooling in your jobs? (SpoolAttributes = no)

If this is disabled, then the SD and the Director are in constant communication and for each file backed up the SD sends the 
attributes to the Director and the Director has to insert the record into the DB as each file is backed up.


With attribute spooling enabled (the default), the SD spools them locally to a file, then sends this one file at the end of
the job, and the Director batch-inserts all of the attributes at once (well, in one batch operation).
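If you want to double-check, the directive lives in the Job (or JobDefs) resource; the job name below is hypothetical:

```
Job {
  Name = SomeBackup
  ...
  Spool Attributes = yes   # the default; 'no' forces per-file catalog inserts
}
```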


Crossing my fingers on this one.🤞 :)


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Problems with Mount tapes

2024-08-05 Thread Bill Arlofski via Bacula-users

On 8/5/24 9:55 AM, Dr. Thorsten Brandau wrote:

Hi
I get this message:

05-Aug 16:09 -sd JobId 908: Please mount append Volume "23L9" or label a 
new one for:

and I am confused.

Webin shows me the volume available in the pool:


23L9LTO-9   2024-08-05 16:14:36 2024-08-05 16:39:38 
485996102656Append

So, what specifically can I do now? Bacula should load the volume by itself,
shouldn't it?

Regards

Thorsten



Hello Thorsten,

You have given us basically nothing to go on here.

Can you tell us anything about this environment?

# mtx -f /dev/tape/by-id/  status  (where  is the library's node id)


The full Bacula job log:

* ll joblog jobid=908


Media list:

* @tall /tmp/medialist.txt   (open text log file)

* list media

* @tall  (close file)

Then, attach `/tmp/medialist.txt` so it does not wrap horribly in an email.


How about Bacula configuration files?

Director's Storage, SD's Autochanger, and Device(s)?



Thank you,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] bacula rescue cdrom source code

2024-08-01 Thread Bill Arlofski via Bacula-users

On 8/1/24 7:31 AM, William Rice wrote:
>

Hello I'm trying to locate the bacula-rescue cdrom source code so I can make a 
Bare Metal Recovery cdrom for bacula-9.0.8

Any help would be greatly appreciated!



Hello William,

Just a quick FYI: Bacula's Bare Metal recovery for Windows and Linux are 
Enterprise products, not available to the community.


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Strategies for backup of millions of files...

2024-07-15 Thread Bill Arlofski via Bacula-users

On 7/15/24 9:26 AM, Marco Gaiarin wrote:


We have found that a dir (containing mostly home directories) with roughly
one and a half million files took too much time to be backed up; it is not
a problem of the backup media, as even with spooling it took hours to prepare a
spool.

Is there some strategy I can apply to reduce backup time (Bacula
side; clearly we also have to work on the filesystem side)?

For example, currently i have:

 Options {
   Signature = MD5
   accurate = sm
 }

If I remove the signature and check only the size, can I gain some performance?


Thanks.


Hello Marco,

The typical way to help with this type of situation is to create several Fileset/Job pairs and then run them all 
concurrently. Each Job would be reading a different set of directories.


Doing something like backing user home directories that begin with [a-g], [h-m], [n-s], [t-z] in four or more different 
concurrent jobs.



A couple of FileSet examples that should work as I described:
8<
FileSet {
  Name = Homes_A-G
  Include {
Options {
  signature = sha1
  compression = zstd
  regexdir = "/home/[a-g]"
}
Options {
  exclude = yes
  regexdir = "/home/.*"
}
  File = /home
  }
}

FileSet {
  Name = Homes_H-M
  Include {
Options {
  signature = sha1
  compression = zstd
  regexdir = "/home/[h-m]"
}
Options {
  exclude = yes
  regexdir = "/home/.*"
}
File = /home
  }
}

...and so on...
8<
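As an illustrative (non-Bacula) sketch in Python, this is how the two Options blocks in each FileSet above partition /home: the first Options block whose pattern matches a directory wins, so anything outside the include range falls through to the exclude. Directory names are hypothetical.

```python
import re

# Patterns from the Homes_A-G FileSet above
include_a_g = re.compile(r"/home/[a-g]")  # first Options block (include)
exclude_all = re.compile(r"/home/.*")     # second Options block (exclude = yes)

def selected(path: str) -> bool:
    # Mimic Bacula's first-match Options semantics for Homes_A-G:
    # a directory matched by the include pattern is backed up; anything
    # else under /home is caught by the exclude pattern.
    if include_a_g.match(path):
        return True
    if exclude_all.match(path):
        return False
    return True  # paths with no matching Options block (e.g. /home itself)

homes = ["/home/alice", "/home/bob", "/home/harry", "/home/zoe"]
print([h for h in homes if selected(h)])  # ['/home/alice', '/home/bob']
```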


Hope this helps!
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] migrate job

2024-07-03 Thread Bill Arlofski via Bacula-users

On 7/3/24 7:39 AM, Stefan G. Weichinger wrote:


I can't get that migrate job running.


[...snip...]

> I have diskbased volumes in Storage "File" and want to migrate them to
> physical tapes in Storage "HP-Autoloader", Pool "Daily"

Hello Stefan,

Something is quite wrong here... :)

And a lot of extra information is missing.


Your status storage shows that it is reading from the "daily" pool, using the tape drive "HP-Ultrium", and it is wanting to 
write to the "Daily" pool and also use the same tape drive - Of course, it is an impossibility to read and write from one 
device at the same time. :)


This is clearly not what you described as what you want.


8<
> Running Jobs:
> Writing: Full Backup job VM-vCenter JobId=3498 Volume="CMR933L6"
>   pool="Daily" device="HP-Ultrium" (/dev/nst0)
>   spooling=0 despooling=0 despool_wait=0
>   Files=0 Bytes=0 AveBytes/sec=0 LastBytes/sec=0
>   FDSocket closed
> Reading: Incremental Migrate job migrate-to-tape JobId=3497 Volume=""
>   pool="Daily" device="HP-Ultrium" (/dev/nst0) newbsr=0
>   Files=0 Bytes=0 AveBytes/sec=0 LastBytes/sec=0
>   FDSocket closed
8<



Job {
Name = "migrate-to-tape"
Type = "Migrate"
Pool = "File"
NextPool = "Daily"
JobDefs = "DefaultJob"
PurgeMigrationJob = yes
Enabled = yes
MaximumSpawnedJobs = 20
SelectionPattern = "."
SelectionType = "OldestVolume"
}


The `SelectionPattern` setting means nothing here since you have specified `SelectionType 
= "OldestVolume"`.  From the docs:
8<
The Selection Pattern, if specified, is not used.
8<



Pool {
Name = "Daily"
Description = "daily backups"
PoolType = "Backup"
MaximumVolumes = 30
VolumeRetention = 864000
VolumeUseDuration = 432000
Storage = "HP-Autoloader"
}



OK, this is the destination pool.

We don't see the source pool.


Typically, I set the NextPool in the source pool, but setting it in a Schedule or the Copy/Migration control job is OK too. 
We will need to see more...
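For example, setting the NextPool in the source pool would look roughly like this (a sketch based on the resources shown, not your full config):

```
Pool {
  Name = "File"
  PoolType = "Backup"
  Storage = "File"     # where the disk volumes to be migrated live
  NextPool = "Daily"   # destination pool for the migrated data
}
```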



Can you show:

- The 'File" Pool

- The "DefaultJob" JobDefs


In bconsole:

* ll joblog jobid=3497
* ll joblog jobid=3498


It seems to me from what I see so far that you may have not restarted the SD, or not reloaded the Director after making
changes to the settings of the Migration control job and Pool, and we are somewhere mid-stream between changes.



Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Use of Multiple Tape Drives at Once

2024-07-01 Thread Bill Arlofski via Bacula-users

On 7/1/24 2:27 PM, Kody Rubio wrote:

Hi Bill,

The following is the output of 'status director' when the jobs are waiting.

  597  Back Full    146,973    667.8 G    fileserver-active         is running
  599  Back Incr          0         0               obsdata4                   is waiting for higher priority jobs to finish



It seems that "obsdata4" it is waiting for a higher priority job.
Although the fileserver-active job does not have a priority set in the config.
Also, obsdata4 has a Priority of 1 (highest), so I am currently unsure on why 
this is happening.


Hello Kody,

The default Priority is 10 if not set.

So, you have a priority 10 job running `fileserver-active`, and a job with a 
different priority (1) waiting `obsdata4`

The message `is waiting for higher priority jobs to finish` is a bit of a red herring. It does not matter if the other jobs 
have a higher or lower priority set, this will be the (misleading) message logged.


If you want more than one job to run at the same time, they will need to be set to the same priority. Sure, you can also play
with different priorities, and then enable "AllowMixedPriority" in several places (your Job resources), but this will just
lead to more confusing results, I am afraid. It is generally recommended to stick to the same priority for your normal backup
and Restore jobs, and then set the small number of "special" jobs (Admin, CopyControl, MigrationControl, Verify, etc.) to
something else so that they do not run at the same time as "normal" jobs.


Check the default "Catalog" job to see that while the default Backup jobs are `Priority = 10`, that job is 11 or 12 to make
sure it runs only when all the other normal nightly backup jobs have completed, so that the catalog has all of the
information for all of the night's jobs - except, of course, itself. :)


You will need to check the Director's Storage resource simply called `Autochanger` and make sure that it has
`MaximumConcurrentJobs` > 1, and that the jobs you expect to run at the same time have the same priority; then you should
be able to make some progress.
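As a sketch (resource names taken from your output; values are examples):

```
# bacula-dir.conf
Storage {
  Name = Autochanger
  ...
  Maximum Concurrent Jobs = 2   # let both drives be used at once
}

Job {
  Name = obsdata4
  ...
  Priority = 10                 # match fileserver-active's default priority
}
```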



Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Use of Multiple Tape Drives at Once

2024-06-27 Thread Bill Arlofski via Bacula-users

On 6/27/24 1:33 PM, Kody Rubio wrote:

I am searching for advice and/or expertise in allowing Bacula to use multiple 
tape drives at once.

I currently have two separate jobs that use separate Pools, that run around the same time. Although the second job is always 
waiting for the other job to finish. While the second tape drive is open, I would like for it to use the second drive and not 
have to wait for the other drive to be finished.

I also read that setting "MaximumConcurrentJobs = 1" for each will allow this 
but I get the same result.

Below is my configuration for the devices:


Hello Kody, what does `status director` show when these jobs are "waiting"?

What does your Director Storage resource look like?


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Backing Up a Remote Client

2024-06-27 Thread Bill Arlofski via Bacula-users

On 6/27/24 8:50 AM, Chris Wilkinson wrote:

Oops - typo in my message. It should be

*Set storage resource 'FDStorageAddress='

Which is what actually have but wrote it here wrong.☹️


No worries. I saw that and knew what you meant. :)


I'm not aware of NAT reflection being set in the router; no such option that I can find. I can't see sending local backup 
data out and back is an issue since the data is encrypted but anyway it's set up now as you suggested so moot.


Well, what I mean is that your local FD -> SD traffic would hit the firewall, pass through it to its external IP, then be 
NAT'ted back into your local network. So, nothing going to the Internet from local, but still using up the firewall's network 
bandwidth and CPU cycles unnecessarily.


I think you seem to be in good shape now. :)


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Backing Up a Remote Client

2024-06-27 Thread Bill Arlofski via Bacula-users

On 6/27/24 2:54 AM, Chris Wilkinson wrote:

I made the additional changes you suggested.

*Removed FD port open on remote router
*Set storage resource 'Address='
*Set storage resource 'FDStorageAddress='

I re-tested local and remote backups and these seem to be working fine.


Excellent! \o/

Next up Client Initiated Backups!   heh


These changes were not absolutely required, as local backups continued to work when I had the storage resource
'Address=<FQDN of remote site>' and without the 'FDStorageAddress=' directive. I presume this was because I had opened ports
9101-9103 to the DIR/SD host on the local router as part of my previous attempts, and I haven't undone any of them.


I didn't say it yesterday, but I was suspecting that if local and remote FD -> SD connections for backups were all still
working after you set the Storage's Address to the external IP of the firewall, then it must be that NAT reflection was
set/enabled on your firewall.


Yes sure, that will work, but do you really want all of your backup data 
traversing your firewall? :)



Thanks for your help

-Chris-


You're welcome!

I am glad that my curiosity to set this exact configuration up last week (for 
fun) was well timed.  :)


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Backing Up a Remote Client

2024-06-26 Thread Bill Arlofski via Bacula-users

On 6/26/24 3:31 PM, Chris Wilkinson wrote:
>

Your tips were bang on. I implemented this and it is working.


Excellent!  \o/


The other steps required were to forward ports on the routers at each end 


This should not be necessary at the remote site(s). The remote clients will be making outbound calls to the Internet, unless 
you have NAT inside NAT or something "interesting" going on at the remote site(s).  :)



> and change the DIR storage resource Address= from a local lan address to the 
public FQDN.

Uff yes, sorry, I missed a step!  :)


The part I missed (which you solved differently, but not in the best way if this Storage needs to be used by other internal
Clients) is to instead leave the SD's `Address = ` alone and set the `FDStorageAddress = <external IP or FQDN of the
firewall>` in the Director's Client resource for this (and any other external) Client.

This way, all your normal internal backups/clients that use this same SD can connect to it using its normal
`Address = <internal IP or FQDN>`, and only these external clients will have the FDStorageAddress set to connect to the main
site's external firewall IP or FQDN.
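Concretely, something like this in the Director's configuration (the client name and addresses are placeholders):

```
Client {
  Name = remote-client-fd
  Address = <remote client FQDN or IP>
  FD Storage Address = <external IP or FQDN of the main-site firewall>
  ...
}
```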




Thank you.

-Chris-


You're welcome!

Hope these additional tips help to clean things up even more.

Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Backing Up a Remote Client

2024-06-26 Thread Bill Arlofski via Bacula-users

On 6/26/24 10:44 AM, Chris Wilkinson wrote:

I'm seeking some advice on configuring the backup of a remote client.

Up till now all clients were located on the same local lan that hosts the Director, File and Storage Daemons. The whole lan 
is behind a nat'd router. One of these clients has now moved to a remote site, also behind a nat'd router so my existing FD 
for this client doesn't work.


As I understand Bacula, the sequence of operations is:
DIR > FD : command to begin
FD > SD : send data from fd to sd
and there will be messages to the DIR also.


Hello Chris,


It is more like:

1: DIR --> SD
2: DIR --> FD   (unless FD is configured to connect to the DIR), then it is FD 
--> DIR
3: FD  --> SD   (unless the Director's Client resource has "SDCallsClient = yes"), 
then it is SD --> FD


For this to work for a remote client, all Daemons must be addressable by FQDNs and therefore the use of local addresses is 
not possible.


One thought that occurs to me is that router ports 9101-9103 can be opened to address the Daemons as <public address>:port.
This won't work for the SD, which is a mounted CIFS share, due to the storage being a locked-down NAS with no possibility of
installing an SD.


Appreciate any thoughts or suggestions you might have.


The "best" way to do this is configure your remote FD(s) to call into the Director. They can be configured to make the 
connection on a schedule, or to try to make the connection and stay connected - reconnecting at startup, and when disconnected.


You will need to configure the firewall on the SD side to allow and forward the 
connection into the DIR and the SD.

There is a section in the manual about Clients behind NAT, and also Client-initiated
backups. If you get stuck, just ask...

For fun, I recently just configured and tested this exact type of "FD calls 
Director" configuration here.

I know, who does this for fun, right? lol 😆🤷🤦


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Again LTO9 and performances...

2024-06-20 Thread Bill Arlofski via Bacula-users

On 6/20/24 8:58 AM, Marco Gaiarin wrote:


But, now, a question: this mean that in spool data get interleaved too? How
they are interleaved? File by file? Block by block? What block size?


No. When you have jobs running, take a look into the SpoolDirectory. You will see a 'data' *.spool file and an 'attr' *.spool
file for each running job.




Once that is hit, the spool files are written to tape, during which active
jobs have to wait because the spool is full.


There's no way to 'violate' this behaviour, right?! A single SD process
cannot spool and despool at the same time?


An SD can be spooling multiple jobs while *one* and only one job spool file is
despooling to one drive.

Add another drive and the same is still true, but the SD can now be despooling two jobs at the same time while other jobs
are spooling, and so on as you add drives.



Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] LTO-8 tape report error

2024-06-20 Thread Bill Arlofski via Bacula-users

On 6/20/24 4:54 AM, Adam Weremczuk wrote:


OMG...

The only reason the script started failing was a different device
assignment caused by a reboot:

ls -al /dev | grep sg.
crw-rw  1 root tape 21,   0 Jun 19 12:25 sg0
crw-rw  1 root disk 21,   1 Jun 12 17:42 sg1

Once I've changed sg1 to sg0 it started working like a charm again :)


Hello Adam,

I am glad you found it. It should have been the first thing I recommended to 
check. 🤷🤦

You might be interested in my `mtx-changer-python.py` drop-in replacement for the `mtx-changer` script and/or my 
`bacula-tapealert.py` drop-in replacement for the `tapealert` script that Bacula ships with.


Both determine the /dev/sg# node automatically, on-the-fly, preventing this issue with Bacula when a /dev/sg# node changes 
after a reboot.



The `bacula-tapealert.py` currently requires the Python `docopt` module, but I will swap that out and replace it with
`argparse` shortly. I have already done this for `mtx-changer-python.py` and a couple of other scripts.



Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] LTO-8 tape report error

2024-06-17 Thread Bill Arlofski via Bacula-users

On 6/17/24 11:38 AM, Adam Weremczuk wrote:

Hi Gary,

I know what you mean by "hijacking a thread" now.
I was reading that message when I decided to post my question from the
same window.

Today I cleared the tape drive with an official cleaning cartridge but
it made no difference to the error:

-

sg_raw -o - -r 1024 -t 60 -v /dev/sg1 8c 00 00 00 00 00 00 00  00 00 00
00 04 00 00 00
  cdb to send: [8c 00 00 00 00 00 00 00 00 00 00 00 04 00 00 00]
SCSI Status: Check Condition

Sense Information:
Fixed format, current; Sense key: Illegal Request
Additional sense: Invalid command operation code
   Raw sense data (in hex), sb_len=21, embedded_len=21
  70 00 05 00 00 00 00 0d  00 00 00 00 20 00 00 00
  00 00 00 00 00

Error 9 occurred, no data received
Illegal request, Invalid opcode



-


If it was some kind of a hardware fault I should be seeing other
problems, shouldn't I?

All backups and Bacula scripts keep completing without errors, it's just
this one command that fails.
I've even run a big restore as a test and it all looks perfectly fine.

I don't have any spare hardware try different configuration.
Is there anything else to try to determine the root cause?
Why would a reboot alone trigger it?

Regards,
Adam


Hello Adam,

Not sure if this will be any help at all, but I just checked and see that in my `mtx-changer-python.py` drop-in replacement 
script, to check for tape cleaning required messages, I am using:

8<
 sg_logs --page=0xc /dev/sg##
8<

Then I look for:
8<
Cleaning action not required (or completed)

or

Cleaning action required
8<

To determine if I need to automatically find and load a cleaning tape before returning control to the SD...

I had been using `tapealert` instead of sg_logs, but I found that tapeinfo clears all alert messages on the drive, so they
could never be caught by the SD when it calls tapeinfo in the tapealert script. sg_logs just reports what I need without
clearing the other flags that the SD needs to know about when a drive or tape is bad/damaged, etc.
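
The check described above reduces to a small grep; here is a sketch (the two phrases are the ones quoted above, and on a real system `sg_logs --page=0xc /dev/sg#` would supply the input):

```shell
#!/bin/bash
# Print "clean" if the drive reports a cleaning action is required,
# "ok" otherwise. Reads sg_logs page 0xc output on stdin.
cleaning_needed() {
  if grep -q "Cleaning action required"; then
    echo clean
  else
    echo ok
  fi
}

# On a real system: sg_logs --page=0xc /dev/sg1 | cleaning_needed
echo "Cleaning action not required (or completed)" | cleaning_needed   # prints "ok"
```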



Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Again LTO9 and performances...

2024-06-11 Thread Bill Arlofski via Bacula-users

On 6/11/24 10:45 AM, Marco Gaiarin wrote:


Sorry, i really don't understand and i need feedback...

I've read many time that tapes are handled better as they are, sequential
media; so they need on storage:

Maximum Concurrent Jobs = 1


Hello Marco,

If you are using DataSpooling for all of your jobs, this setting is somewhat redundant because Bacula will de-spool exactly 
one Job's Data Spool file at a time.


With DataSpooling enabled in all jobs, the only "interleaving" that you will have on your tapes is one big block of Job 1's 
de-spooled data, then maybe another Job 1 block, or a Job 2 block, or a Job 3 block, and so on, depending on which Job's 
DataSpool file reached the defined maximum Job Spool size at particular times throughout the backup window, or when one hits 
the total MaximumSpool size and begins de-spooling.


If, on the other hand, you have many clients and enough network bandwidth, you can disable Data Spooling and increase the
Tape Drive's MaximumConcurrentJobs setting, and Bacula will stream and interleave the data from all the concurrently running jobs.

But, you can probably never really guarantee that all jobs will be streaming enough data concurrently to saturate the link to 
the Tape Drive, so using DataSpooling to *fast*, local, SSD, flash, or NVMe etc drives is probably a better and more 
consistent solution.



Hope this helps!
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Backup Error on Full Backups

2024-06-05 Thread Bill Arlofski via Bacula-users



And I see on jobid=777, the Bacula SD reports that the volume is 'read-only', so maybe that tape's lock slider is in the lock 
position?


The jobid=777 logs do not include what tape volume was being used. (which is 
odd, unless you snipped those lines)
8<

04-Jun 01:41 -sd JobId 777: Writing spooled data to Volume. Despooling 
1,000,000,242,693 bytes ...

> 04-Jun 01:41 -sd JobId  777: Fatal error: block.c:163 [SF0205] Attempt to write on 
read-only Volume. dev="LTO9-1" (/dev/nst0)
8<


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Backup Error on Full Backups

2024-06-04 Thread Bill Arlofski via Bacula-users

On 6/5/24 12:08 AM, Dr. Thorsten Brandau wrote:

Hi,
I get from time to time errors when backup up via my LTO-9 autochanger. This happens mostly when I do a full backup (ca. 30 
TB data), sometimes when using differential backup.
I am using the updated mtx-changer-script by Bill Arlofski, as the one out of the box was always running in timeouts when 
tape changing was needed.


This error popped up recently. And persisted now for several full backups, now 
additionally for a differential one.

Anyone any idea where the problem is and how to solve it?

Thank you.
Cheers


Hello Dr. Thorsten Brandau,

What does my script log at this time?

Most likely the mtx unload command is failing due to an issue with the changer/library. Just a guess of course, but a guess
based on experience. :)


Also, thanks for letting me know you are using this script. I have no idea who 
has even tested it in the wild.

Keep in mind, you can increase the logging level in the config file if necessary, but I am still going with an mtx changer 
error, and I think this might get logged, regardless.


And, who knows, it seems like you might also be reaching the Bacula SD's Device 'MaximumChangerWait' timeout, and my script 
is being killed by the SD. Perhaps sometimes the time it takes your library to change a tape is right on the edge of the 
default for this Bacula SD timeout threshold and all that needs to be done is to increase that timeout.


But as I read, I see that the default for this timeout is 5 mins, and your log paste seems to show that your SD kills my 
script at about the 8 minute mark. Close enough for horseshoes and hand grenades? Maybe but seems a bit off for computers. :)🤷


And, FYI keep an eye on the Gitlab repository because I have been making 
changes/improvements to it. ;)


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Disk Volumes Showing 1B Read-Only

2024-06-03 Thread Bill Arlofski via Bacula-users

On 6/3/24 12:03 PM, Ken Mandelberg wrote:


My Volumes are all Disk files. Several now show up as 1B Read-Only. In fact, as files "ls -l" shows them at their correct 
size, with modification dates that go back correctly to when they were filled.


This is likely due to my transition from Ubuntu 23.10 to 24.04. There was a period during the transition where the file 
system containing these backup volumes was either not mounted or had ownerships set incorrectly.


I'm guessing that bacula noticed that and marked those backup files 1B Read-Only. These files are the oldest of the backup 
files, the slightly more recent ones are fine.


Is there any way to convince bacula that they are good?



Hello Ken,

What does this bconsole command show?:

* list volume=xxx

If they have a volstatus of `Error`, and they really are good volumes on disk you can just try changing their volstatus back 
to Append with:


* update volstatus=Append volume=


BUT, keep in mind that if they are old, then they will probably be past their retention periods and Bacula will probably
immediately recycle and re-use them. If this is OK, then you are all set. Otherwise, if the data on them is important to you
then you should disable these volumes until you are sure there is no data that you might need/want to restore:


* update enabled=no volume=

or

* update volstatus=Read-Only volume=


Then, Bacula will not touch these volumes except to read for restores, copies, 
migrations, or verifies.


Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] binaries for Ubuntu 24.04

2024-05-31 Thread Bill Arlofski via Bacula-users

On 5/31/24 9:59 AM, d...@bornfree.org wrote:


Currently there are no "Ubuntu 24.04 LTS (Noble Numbat)" repositories
for Bacula CE versions 15.0.2 and 13.0.4.  Will there be?



Yes. The builds for Bacula Enterprise packages for this very new platform are currently going through testing. Community 
packages should follow soon. I cannot give an ETA, of course. :)



Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] VirtualFull and correct volume management...

2024-05-31 Thread Bill Arlofski via Bacula-users

On 5/31/24 8:56 AM, Marco Gaiarin wrote:



If you *really* want to automatically delete failed jobs (I personally don't 
think this is a good idea), you can use a
RunScript in an Admin type Job like:


Why in and 'Admin' job? I've tried to add something like:

Run After Job = /etc/bacula/scripts/deleteFailedJobs "%c" "%l"

to the backup job; script effectively get run, but seems no parameters get
passed to them.


First, you are passing them incorrectly.  Just quote the whole line like:
8<
Run After Job = "/etc/bacula/scripts/deleteFailedJobs %c %l"
8<

Second, this will most likely *not* work - and it is why I offered an Admin job 
as a solution.

If you do this, I am not sure exactly what will happen because (behind the scenes) the job is really still running when the
RunAfterJob is triggered. So you would be trying to delete a job from the catalog while it is still running, and most likely
the Director would re-insert/update the job after your script deleted it, and I can only imagine what trouble this
might cause.


Stick with the Admin job and the script is my advice here.



The script (little modification of yours) simply filter by client name (eg,
delete jobs of that client, not overral failed jobs) and run only for
VirtualFull level jobs.

Rationale: if a correct VirtualFull job happen, i can safely delete also the 
failed jobs


VirtualFull jobs will never pull in a failed job. They only collect and consolidate Backup jobs
that have terminated "Backup OK" (jobstatus='T' in the catalog), so you can
delete them any time you like.



time this Admin job is run will be in the Admin Job's joblog. Alternately, you 
can trigger the script from cron, and the
bconsole output will be in the email that cron sends.


The script run by hand works as expected, but clearly i prefere to run from 
bacula.


Yes, as I recommended, but I always try to offer optional solutions when I can. :)


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] VirtualFull and correct volume management...

2024-05-29 Thread Bill Arlofski via Bacula-users

On 5/24/24 2:39 AM, Marco Gaiarin wrote:



I suspect that 'job counter' get resetted if and only if all jobs in a
volume get purged; this lead me to think that my configuration simpy does
not work in a real situation, because sooner or later jobs get 'scattered'
between volumes and virtual job of consolidation stop to work, so jobs and
volume purging.


Sorry, i need feedback on that. I restate this.


Seems to me that if i use 'job based retention' on volumes, eg:

Maximum Volume Jobs = 6

on the pool, simply does not work. Because the 'job counter' on the volume
get resetted if and only if *ALL* job on that volume get purged.

If i have a volume in state 'Used' because got 6 job within, and i
purge/delete jobs but not all, media state does not switch to 'Append', and
even if i put manually in 'Append' mode, bacula reject the volume and put on
'Used' state because have reached 'Maximum Volume Jobs'.
If I delete *ALL* jobs in that volume, it gets correctly recycled.



It is right?


There's some 'knob' i can tackle with to make volume management more
'aggressive'?


Why not set `MaximumVolumeJobs = 1` in your Pool(s)?

Then control your JobRetention and FileRetention periods as needed in your Pool(s). Typically you will set JobRetention > 
FileRetention if you want to aggressively manage the amount of storage your catalog uses, but it is usually best if possible 
to set JobRetention = FileRetention.


This way, each file volume will have one job on it, and when that job is pruned from the catalog, the volume will be 
pruned/purged/truncated/recycled.   (I am going on zero hours sleep this morning, so some details may be sketchy  :)


Don't try to force Bacula to use some arbitrary (small) number of file volumes. Let Bacula manage your volumes based on your 
chosen Job/File retention times, disk space available, etc.


You will want to limit your volume sizes (MaximumVolumeBytes = xxGB, for example), and you will want to limit the number of
volumes allowed in a Pool (MaximumVolumes = xxx).  This way, with a little bit of calculation you can make sure that Bacula
never fills your partition to capacity.  You can monitor this as time goes on, and you can make adjustments as needed.
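
That worst-case calculation is simple enough to sanity-check in a couple of lines of shell (the numbers here are only example values):

```shell
#!/bin/bash
# Worst-case disk usage of a pool = MaximumVolumeBytes * MaximumVolumes.
# Example: 50 GB volumes, at most 100 volumes allowed in the pool.
max_volume_gb=50
max_volumes=100
echo "Pool can grow to at most $(( max_volume_gb * max_volumes )) GB"
```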


If you have multiple Pools and you want Bacula to be able to freely move volumes from different pools when they are 
available, and so they don't get "stuck" in one pool forever, you can use the Pool's `ScratchPool` and `RecyclePool` pool 
settings and then create a Scratch Pool that all Pools would point both of those settings to.


If you prefer to have volumes stay in a pool they were initially created in 
forever, ignore that previous paragraph. :)

Not sure if anyone has answered, but to delete a job, the bconsole `delete jobid=xxx` is what you want. This will delete the
Job and Files records from the catalog, and free up any volume(s) used in the job being pruned for re-use as described above.

If you *really* want to automatically delete failed jobs (I personally don't think this is a good idea), you can use a 
RunScript in an Admin type Job like:

8<
Job {
  Name = Admin_Delete_Failed_Jobs
  Type = Admin
  ...other settings...

  RunScript {
    RunsWhen = before
    RunsOnClient = no
    Command = /opt/bacula/scripts/deleteFailedJobs.sh
  }
}
8<

Then, in that `/opt/bacula/scripts/deleteFailedJobs.sh` script, something like:
8<
#!/bin/bash

bcbin="/opt/bacula/bin/bconsole"
bccfg="/opt/bacula/etc/bconsole.conf"

# The "gui on" removes commas in jobids so you don't have to use tr or sed to do it
failed_jobids=$(echo -e "gui on\nlist jobs jobstatus=f\nquit\n" | $bcbin -c $bccfg | grep "^| [0-9]" | awk '{print $2}')

for jobid in $failed_jobids; do
  echo -e "delete yes jobid=$jobid\nquit\n" | $bcbin -c $bccfg
done
8<

If you do it this way (using the `Command =` in the Admin Job's RunScript), the bconsole output of the jobs being deleted each
time this Admin job is run will be in the Admin Job's joblog. Alternately, you can trigger the script from cron, and the
bconsole output will be in the email that cron sends.



Hope this helps!
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] missing repositories

2024-05-11 Thread Bill Arlofski via Bacula-users

On 5/11/24 7:46 AM, d...@bornfree.org wrote:


There currently are no "Ubuntu 24.04 LTS (Noble Numbat)" repositories
for Bacula CE versions 15.0.2 and 13.0.4.  Please promptly build the
repositories.  Much appreciated.

---


On 5/9/2024 12:25 PM, d...@bornfree.org wrote:


There are no "Ubuntu 24.04 LTS (Noble Numbat)" repositories for 15.02
and 13.04.  Please promptly build the repositories.  Thank you.

---


Hello,

Repeating the same request every two days is not how this list works.

Please be patient while the people responsible for creating the repositories find time to get this completed. These tasks are
typically done in peoples' free time. It is possible that they have not even seen your first message yet - ie maybe on vacation?


In the mean time, you can easily build and install from source. This is the way I have been using Bacula Community for about 
20 years now. It is not difficult.



Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Dell TL2000 Tape Library with HHLTO7 drive

2024-05-04 Thread Bill Arlofski via Bacula-users

On 5/4/24 6:44 PM, Neil Balchin wrote:

Everything is working but on every tape job I get this error:

bacularis-sd JobId 5: Warning: Alert: Volume="LR7782L7" alert=25: ERR=A 
redundant interface port on the tape drive has failed. Failure of one interface port in a 
dual-port configuration, e.g. Fibrechannel.
Any Ideas on how to get rid of it ?  Backups appear to be working just fine 
I’ve tested restore jobs several times


How is your tape drive connected?  ie: Is it actually a dual connected fibre 
channel? Is one link actually down?

The Bacula tapealert script, which calls the `tapeinfo` utility is reporting that there is a TapeAlert[25] error. This is not 
a Bacula issue specifically, just an external script/utility which the SD calls that is reporting back to the SD what the 
drive itself is telling tapeinfo when queried.


Who manages the hardware ie: Tape library, drive(s), and Linux server that this 
SD runs on? Maybe they can assist?

I'd have a look in that direction.

Let us know what you find. It is always nice to see an issue's primary cause 
reported after a solution is found.

Good luck! :)


Thank you,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Multiple interfaces for storage daemon

2024-05-01 Thread Bill Arlofski via Bacula-users

On 5/1/24 9:51 AM, Andrea Venturoli wrote:

On 4/29/24 19:17, Bill Arlofski via Bacula-users wrote:

Hello and thanks a lot for your time and attention.


Hello Andrea,

You are welcome. :)



My first guess (without seeing any logs or configurations) is that there
is a `MaximumConcurrentJobs` setting set to low causing the bottleneck.


I don't think so, otherwise it would never work (as opposed to sometimes
working, sometimes not).


Thank you for the status director output.

It helps to confirm that my first guess of a MaximumConcurrentJobs setting was 
the correct one:
8<
 259  Back Incr  0  0  bbb   is waiting on max Storage jobs
8<

Now we need to track it down. :)

Things that will help us when things are currently blocked:

- The Director's Storage resource referred to in the running Jobs

- status director (full header and running jobs section, obfuscated where 
necessary)

- status storage (the whole thing, o
bfuscated where necessary)


P.S. I also already have some recommendations to change up the way the SD is currently configured which should be helpful 
too.  We will get to those once we identify the issue at hand.



Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Multiple interfaces for storage daemon

2024-04-29 Thread Bill Arlofski via Bacula-users

On 4/28/24 8:23 AM, Andrea Venturoli wrote:



However, my most important question was: given I configured a single SD
with three different HDD-based storage, is it normal that one of them
can get stuck (with clients on the affected VLAN all "waiting on Storage
"), while the others are working?

Is this a bug? I remeber I had something like this in the past (but with
13.x and only two storages) and it worked.
Or is this NOT supposed to work?

   bye & Thanks
av.


This is not normal. More than likely though, it is another configuration issue.

A Bacula SD can read/write jobs from/to any of the Storages as it supports - 
simultaneously.

My first guess (without seeing any logs or configurations) is that there is a `MaximumConcurrentJobs` setting set too low
causing the bottleneck.
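
For reference, `MaximumConcurrentJobs` can throttle in several places, and all of them have to allow the desired concurrency. The directive names are real, but the values below are only example numbers:

8<
# bacula-dir.conf
Director { ... MaximumConcurrentJobs = 20 }   # global limit in the Director
Storage  { ... MaximumConcurrentJobs = 10 }   # per Director Storage resource
Client   { ... MaximumConcurrentJobs = 5  }   # per Client resource

# bacula-sd.conf
Storage  { ... MaximumConcurrentJobs = 20 }   # global limit in the SD
Device   { ... MaximumConcurrentJobs = 10 }   # per Device resource
8<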


Can you show a `status director` output, your configurations (sanitized), and some job logs of jobs waiting on something in
the `status director` "Running Jobs" output?


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] bacula-fd appearing to use wrong storage server

2024-04-25 Thread Bill Arlofski via Bacula-users

On 4/25/24 3:55 PM, gaston.gloesener--- via Bacula-users wrote:

Thanks for the replies,

I include here some of the requested information.

First, about the "bconsole reload": yes, I did not do that, but I did restart the director, the storage and file daemons
several times after the config change, also because I ran them in the foreground with debug.


Hello Gaston,


I am re-sending this since the last one I sent I had included a screenshot which will not translate well for search engines, 
etc...


The problem I had pointed out in the screenshot was that it looks like you are using the `FDStorageAddress` setting in this
client. I believe this is causing your issue:

8<
* show job=Backup-james1

Job: name=Backup-james1 JobType=66 level=Incremental Priority=10 Enabled=1
  MaxJobs=1 NumJobs=0 Resched=0 Times=0 Interval=1,800 Spool=0 WritePartAfterJob=1
  Accurate=1
   --> Client: Name=james1-fd Enabled=1 Address=james1.home FDport=9102 MaxJobs=1 NumJobs=0

JobRetention=6 months  FileRetention=2 months  AutoPrune=1



  FDStorageAddress=bacula.home   <--- HERE



   --> Catalog: name=MyCatalog address=*None* DBport=0 db_name=bacula
   db_driver=PostgreSQL db_user=bacula MutliDBConn=0
   --> FileSet: name=Full Set IgnoreFileSetChanges=0

8<

If this is not it, we will look a little deeper. :)


Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] bacula-fd appearing to use wrong storage server

2024-04-25 Thread Bill Arlofski via Bacula-users

On 4/25/24 3:17 AM, gaston.gloesener--- via Bacula-users wrote:
Until now I ran Bacula in a virtual machine running the director and storage daemon. The storage daemon was storing data
to files on a shared directory, as the storage is on a NAS.


Now I have built bacula-sd for the NAS to avoid this duplicate transfer. I have configured one client to use the new storage,
but while it uses it, it claims to still contact the "old" storage daemon on the bacula node.


Hello Gaston,

A bconsole `show job=` will show what the Director knows about this job.

It is quite possible, and my guess that one of two possible things has happened:

#1: You forgot to issue the bconsole reload command
#2: The Director has reached its default configuration reload limit of 32 and 
is no longer reloading


You can check with:

* status director

...and look at the header line:

`Daemon started 24-Apr-24 18:55, conf reloaded 24-Apr-2024 18:55:27`

If the `conf reloaded` time is not recent, then you have hit #2 above and 
simply need to restart the Director.


Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Wildfile

2024-04-10 Thread Bill Arlofski via Bacula-users

On 4/10/24 11:38 PM, Stefan G. Weichinger wrote:


First coffee here right now ;-)
Thanks for your example.


Hello Stefan,

You are welcome!

Well, it is 23:41 here now, so I have switched from coffee to beer. :)



I didn't add "signature" but already came up with this yesterday:

Fileset {
Name = "VM_XYZ"
Include {
  File = "/mnt/backup/vmbackup/Backup XYZ"
  Options {
WildFile = "*.vbk"
  }
  Options {
Exclude = "Yes"
RegexFile = ".*"
  }
}
}

seems to work!


OH! Look at you, jumping from WildFile to RegexFile!  :)

I was trying to keep things simple, but OK. :)

Yes, this is fine. Remember my first post about there always being several ways 
to do something in Bacula? :)



Do I have to add that signature-line?


To confirm why I made that comment, try running a restore of one of these jobs 
that you have now backed up. ;)

What do you see?


Glad you got this working!

And, if you like we can talk about Client side scripts to point your `File = ` at too. Just for more fun and practice, of 
course. BUT... client side scripts are a bit less scalable as you might imagine. :)



Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Wildfile

2024-04-10 Thread Bill Arlofski via Bacula-users

On 4/10/24 12:38 AM, Stefan G. Weichinger wrote:


Is this Fileset correct?

Fileset {
Name = "VM_xxx-y"
Include {
  Options {
WildFile = "\"/mnt/backup/vmbackup/Backup xxx-y/*.vbk\""
  }
}
}


I don't get files with this ... seems not to match.


Hello Stefan,

No, this will not backup anything because you have not specified anything to backup with a `File = /path/to/somewhere` inside 
of your `Include{}` block. So far, you have only set something (WildFile) inside of an `Option{}` block.


There are several ways to do what you want (there always is with Bacula :)

The first, based on your current Fileset template will require a `File =` line as mentioned above and also another 
`Options{}` block to exclude everything else:

8<
Fileset {
  Name = "VM_xxx-y"
  Include {
    # This options block matches your *.vbk files and creates a signature in the catalog for backed up files
    Options {
      signature = sha1                        # You need a signature, so I added this
      WildFile = "/mnt/backup/vmbackup/*.vbk" # Note, I fixed your extra quotes and a backslash and simplified here
    }

    # This options block says to exclude everything else, not matched above
    Options {
      WildFile = "*.*"
      Exclude = yes
    }

    File = "/mnt/backup/vmbackup"  # And finally, what 'top level' directory are we considering for backups
  }
}
8<

This "Should work"™  I have not tested it, and wrote it quickly in my email client before finishing my first cup of coffee 
this morning. :)


Remember, when you have re-configured this Fileset and have issued a `reload` command in bconsole, instead or running the 
job, you can quickly see if your settings do what you want with an `estimate` command like:


* estimate listing job=

This way you don't have a bunch of failed jobs in your catalog. :)

P.S. I see your files (or directories, I am not 100% sure which) have spaces in their names.  I would recommend doing 
yourself a favor and not using spaces if you can help it. It will make your life easier and save you from needing to "escape" 
spaces with backslashes, etc.



Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Backup in disk AND tape

2024-04-09 Thread Bill Arlofski via Bacula-users

On 4/9/24 6:53 AM, Chris Wilkinson wrote:
Regarding the suggestion to put a Runafter block in the job to run the copy job at the end, that doesn't seem to be allowed. 
Run job=xx commands are not permitted in a Runscript as I just found out. It gives a not allowed command error.


Perhaps there is another way to accomplish this?



Hello Chris,

Yes, convert that run command to a simple script like:

/opt/bacula/script/run_catalog-copy.sh:
8<
#!/bin/bash

# Pipe the run command to bconsole
echo "run yes job=catalog-copy" | bconsole
8<

Now, that is the most basic it needs to be, but you can add other things to it. ie: error checking, command line options, 
etc. Although in your use case it does not seem necessary to complicate things. :)



Then, just replace the
8<
Console = "run job=catalog-copy yes"
8<

...line in your RunScript with:

8<
Command = /opt/bacula/script/run_catalog-copy.sh
8<

And you should be OK.

Make sure your `catalog-copy` job has the same Priority (11) as your Catalog job, otherwise you will end up in a dead-lock
where the Copy job waits for the Catalog job to finish, and the Catalog job is waiting for the catalog-copy job (which will
never start) to finish.



Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Copy Job Configuration Question

2024-04-06 Thread Bill Arlofski via Bacula-users

On 4/6/24 10:53 AM, Chris Wilkinson wrote:
I am attempting to write a copy job to copy uncopied jobs from one SD to another. It seems that the client and fileset 
directives are required or the syntax check will fail. The documentation (v9) is not explicit on this point.


Since the client is not involved in a copy job, it seems that these clauses are redundant. If they really are required,  does 
it matter what value they have,  even a non-existent one?


This is what I have for the copy job:

Job {
   Name = "catalog-copy"
   Description = "copy of catalog"
   Type = "Copy"
   Level = "Full"
   Messages = "Standard"
   Storage = "dns-325-sd" #source storage
   Pool = "catalog"
   NextPool = "catalog-copy"  #destination storage
   Client = "catalog-fd"  #why is this needed, what value?
   Fileset = "Catalog"  #why is this needed, what value?
   Schedule = "sched_none"
   SelectionPattern = "catalog"  #copy only job names matching "catalog"
   SelectionType = "PoolUncopiedJobs"
}

Many Thanks
Chris Wilkinson


Hello Chris,

The parser sees that it is parsing a Job resource, and then requires all the settings for a Job resource, and does not 
distinguish a Backup type job from an Admin one, or Copy, or Verify etc. This had annoyed me also for some time, but I 
suspect the developers will never want to spend time on making this distinction when parsing resources. :)


What I have been doing in my Bacula environments for many years is I create some "dummy/fake" resources and use them in 
places where the parser requires them but they are clearly not needed/used.


The nice (OCD?) thing here is that in my Copy, Migration, Admin, Restore, etc job logs and summaries, it is clear that no 
Fileset, or Client, or Storage, etc was really used.


The same is true when viewing Job listings in BWeb, Baculum, Bacula-Web, Bacularis, or in my 
https://github.com/waa/baculabackupreport script. ie: It is explicitly clear that a Copy/Migration Control job (for example) 
in the list contacted no Client.


In each of my fake resources, I have just the bare minimum required to satisfy the parser for that type of resource. I name 
them all "None" - there is a funny story about a bug in my reporting script because of this; Python programmers will know 
straight away :) - and I use them in special jobs as mentioned above.



Fake Client for copy jobs, etc:
8<
Client {
  Name = None
  Address = localhost
  Password = N/A
  @/opt/comm-bacula/include/Clients-Defaults  # Some required things for all Clients, like FDPort and Catalog, are in here
}
8<

Fake Fileset for copy jobs, etc:
8<
Fileset {
  Name = None
Include {
  Options {
  Signature = md5
}
  }
}
8<

Fake Storage for admin jobs, etc
8<
Autochanger {
  Name = None
  Address = localhost
  Enabled = no
  Device = N/A
  Password = N/A
  Media Type = None
}
8<

Fake Pool for copy jobs, etc:
8<
Pool {
  Name = None
  PoolType = Backup
}
8<

Fake Schedule with no run times. This way I can immediately see which jobs are my "Manually run" jobs:
8<
Schedule {
  Name = Manual
}
8<



Hope this helps!
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Bacula support LTO9?

2024-04-02 Thread Bill Arlofski via Bacula-users

On 4/2/24 3:52 PM, Jose Alberto wrote:

Hi.

I work with Bacula (11 and 13) with LTO8, all fine.

Does LTO9 work with Bacula 13 or 15?


Yes.  ;)

You may want to run some tests using btape to find the right `MaximumFileSize` and `MaximumBlockSize` for your tape drive(s), 
but I can assure you (personally working in Quantum's lab testing Bacula with their latest Scalar i6000 library and LTO9 
drives), Bacula absolutely works fine with them.


The settings I found that work quite well with LTO9 drives (with 10+ concurrent 
backup streams) are:
8<
MaximumFileSize = 32GB
MaximumBlockSize = 2097152
8<
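The testing loop in the attached script can be sketched as a small plan generator: one btape `speed` run per candidate file-size/block-size combination. Everything below (the candidate sizes, the config path, and the device node) is an assumption to adapt; note that `MaximumBlockSize` is read by btape from the SD's Device resource, so it must be edited there before each corresponding run, and the SD must be stopped so btape has exclusive access to the drive.

```shell
#!/bin/sh
# Sketch: print a test plan of btape 'speed' runs, one per candidate
# MaximumFileSize (GB) / MaximumBlockSize (bytes) combination.
# The sizes, config path, and device node are assumptions -- adjust them.
gen_btape_tests() {
    for fs_gb in 16 32 64; do
        for bs in 1048576 2097152; do
            echo "MaximumBlockSize=${bs}: btape -c /opt/bacula/etc/bacula-sd.conf /dev/nst0 -> speed file_size=${fs_gb} nb_file=1"
        done
    done
}
gen_btape_tests
```

Each printed line is one manual iteration: set the block size in the Device resource, stop the SD, run btape with the shown `speed` parameters, and record the throughput it reports.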

I have also attached a script I wrote to automate the testing of several file 
and block size combinations using btape.

Please have a look and be sure to read the btape documentation to understand what is going on before running the script - 
also there are variable settings at the top you will need to edit to fit your environment.



Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com


tape_speed_tests.sh
Description: application/shellscript




Re: [Bacula-users] Backup in disk AND tape

2024-04-02 Thread Bill Arlofski via Bacula-users

On 4/2/24 12:01 PM, Roberto Greiner wrote:

Hi,

I've installed Bacula recently in a server with a 7TB RAID5 storage, and
a LTO-6 tape unit.

I have configured 9 remote servers (most Linux, one Windows) to have the
backup made in this server in the disk storage, and I'm finish to
understand how to do the tape backup. Now, I have a question about
making the backup into both destinations.

I have the following setup for JobsDef:

JobDefs {
    Name = "DefaultJob"
    Type = Backup
    Level = Incremental
    Client = bacula2-fd
    FileSet = "Full Set"
    Schedule = "WeeklyCycle"
    Storage = FileAligned
    Messages = Standard
    Pool = File
    SpoolAttributes = yes
    Priority = 10
    Write Bootstrap = "/opt/bacula/working/%c.bsr"
}

Then I added a server to have the backup, let's say (it's a linux,
despite the name):

Job {
    Name = "AD"
    JobDefs = "DefaultJob"
    Client = ad-fd
    FileSet = "etc"
}

This will, obviously go to the dedup-disk storage. The question is, how
should I add the tape setup? Is there a way to add a couple of lines to
the job definition above so that the backup goes to both systems? Should
I create a separate job definition for the tape backup? Some other way I
didn't consider?

Thanks,

Roberto


PS: The storage definitions for the disk and tape destinations:

Storage {
    Name = FileAligned
    Address = bacula2
    SDPort = 9103
    Password = ""
    Device = Aligned-Disk
    Media Type = File1
}

Storage {
    Name = Fita
    Address = bacula2
    SDPort = 9103
    Password = ""
    Device = Ultrium
    Media Type = LTO
}


Hello Roberto,

With Bacula, there are almost always 10+ different ways to accomplish things, 
and/or to even think about them.

For example, you can override the Pool, Level, and Storage in a Schedule...

So, with this in mind, you might set your job to run Incs each weekday to disk, and then set the Fulls to run to tape on the 
weekend. (just one idea)


Another option is to use Copy jobs. With Copy jobs, you can run your Incs and Fulls to disk, then you can run a Copy job to 
copy your Incs, Fulls, or both to tape during normal working hours because Copy jobs do not make use of any Clients, so 
business productivity will not be affected on your server(s).


In your case, I would probably go with a Copy job. This way, you have your backups on disk for fast restores when needed, and 
you have the same data copied to new jobids onto tape - maybe with longer retention periods, for example.


Also have a look at the `SelectionType = PoolUncopiedJobs` feature for Copy jobs. This is a nice, handy "shortcut" to make 
sure that each of your jobs in some Pool is copied once, and only once to tape.


In this case, you can have two Copy jobs configured, one looking at your Full disk pool and one looking at your Inc disk pool 
and copying jobs that have not been copied.


OR, you can have one copy job running on a schedule where the Pool is overridden at two different times of the day to copy 
from the Full disk pool, and then also from the Inc disk pool.


OR... (lol I said 10, so I am working towards that number, and I am getting close :) ... You can have your normal backup jobs 
include a `RunScript {RunsWhen = after}` section which triggers an immediate copy of the job to tape as soon as it is completed.
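The `RunsWhen = after` idea from that last option can be sketched like this in the backup Job itself (the helper script path is hypothetical; it would essentially pipe a `run job=<your-copy-job> yes` command into bconsole):

```conf
Job {
  Name = "AD"
  JobDefs = "DefaultJob"
  Client = ad-fd
  FileSet = "etc"
  RunScript {
    RunsWhen = after
    RunsOnClient = no
    # Hypothetical helper script; it would essentially do:
    #   echo "run job=<your-copy-job> yes" | bconsole
    Command = "/opt/bacula/scripts/trigger-copy.sh"
  }
}
```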


So, I would start with a look at Copy jobs and see where that goes. :)

Feel free to ask more questions once you have taken a look at Copy jobs.


Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Bacula 15.02 upgrade on Centos 7 -- Storage Daemon not starting

2024-04-02 Thread Bill Arlofski via Bacula-users

Hello,

Glad we got this working. :)

Regarding any possible user:group changes, I guess you would have to contact the person who maintains the package for the 
CentOS 7 Linux distribution - And quite frankly, I am surprised that 15.0.2 is already available as it was *just* released 
some days ago. Kudos to the maintainer for being "Johnny on the Spot"™ :)


To fix everything, just figure out what user:group the Bacula SD is currently running as, make sure this combination has 
read/write access to all of the file volumes, and any directories/files (eg: /opt/bacula/working) and you should be all set.



P.S. I think it is time to upgrade/migrate from CentOS 7  ;)


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Bacula 15.02 upgrade on Centos 7 -- Storage Daemon not starting

2024-04-02 Thread Bill Arlofski via Bacula-users

On 4/2/24 9:54 AM, sruckh--- via Bacula-users wrote:
I have upgraded from 13.x to 15.02 on a CentOS 7 server by changing yum repository and running yum update.  After RPMs were 
upgraded, I ran the scripts to upgrade the MySQL database.  When I try to start bacula-sd using systemctl no errors are 
returned, but the storage daemon is not starting (as seen by running 'ps -ef | grep bacula' ).  Running journalctl for 
bacula-sd does not show that bacula-sd is failing.


The systemctl status for bacula-sd is returning the following:

 hostname removed to protect the innocent

● bacula-sd.service - Bacula Storage Daemon service
Loaded: loaded (/usr/lib/systemd/system/bacula-sd.service; enabled; vendor 
preset: disabled)
Active: inactive (dead) since Tue 2024-04-02 08:18:13 MST; 18min ago
Process: 4066 ExecStart=/opt/bacula/bin/bacula-sd -dt -c 
/opt/bacula/etc/bacula-sd.conf (code=exited, status=0/SUCCESS)
Main PID: 19946 (code=exited, status=1/FAILURE)

Apr 02 08:18:13 xxx.xxx.xxx systemd[1]: Starting Bacula Storage Daemon 
service...
Apr 02 08:18:13 xxx.xxx.xxx systemd[1]: Started Bacula Storage Daemon service.



There is nothing in the system logs that would help narrow down the problem.  There is also nothing logged in 
/opt/bacula/log/bacula.log that mentions problems with the storage daemon.


If the storage daemon is instead started manually from the command line (as 
root user) using the following command the storage daemon starts and does not 
terminate:

sudo /opt/bacula/bin/bacula-sd -d 200 -c /opt/bacula/etc/bacula-sd.conf



Hello,

More than likely, the above command is/was the initial cause of your problem.

The Bacula SD typically runs as the user 'bacula'.

Starting the SD as root (as shown above with the sudo command), will cause the SD to open its PID and state files (and any 
file volumes) as the root user.


Later, when you try to start it with systemd - which will run it as the bacula user - it will not have access to these files 
and will just silently fail to start.


Try testing the config file syntax first with this:

# sudo -u bacula /opt/bacula/bin/bacula-sd -t

That might/should fail with some read and/or write permission errors on one or more files. If it does not fail, then start 
the SD in foreground mode like:


# sudo -u bacula /opt/bacula/bin/bacula-sd -d100 -f

Then, `chown bacula:bacula` any files it complains about, and try again until 
it starts up and remains running.

Next, find any file volumes the SD may have written to when run from the 
command line previously and chown them too.

Then, ctrl-c the running bacula-sd, and try to start with systemd, it should 
work now.
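The "chown anything it complains about" loop can also be done proactively. A minimal sketch that lists everything not owned by the expected user - the user name and both directories below are assumptions to adjust to your environment:

```shell
#!/bin/sh
# Sketch: list everything under the SD's working and volume directories
# that is NOT owned by the user the SD runs as, so it can be chown'ed.
audit_ownership() {
    owner="$1"; shift
    for dir in "$@"; do
        # Only descend into directories that actually exist on this host.
        [ -d "$dir" ] && find "$dir" ! -user "$owner" -print
    done
    return 0
}
# Assumed user and directories -- adjust to your environment.
audit_ownership bacula /opt/bacula/working /bacula/volumes
```

Anything it prints can then be fixed with `chown bacula:bacula <path>`.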


Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] configure CentOS/RedHat repository - first time usage

2024-03-27 Thread Bill Arlofski via Bacula-users

On 3/27/24 10:33 AM, Eduardo Rothe via Bacula-users wrote:

Hi

I am installing Bacula Community for the first time in a Fedora environment. According to the white paper Bacula Community 
Installation Guide, section "6.2 yum Package Manager Configuration", I should configure the repository using the, and I 
quote, "/path component sent in the registration email/". Which registration is this ? Where can I register ? Searching the 
bacula.org website, I can't find anything to register for.


Best to all,
Eduardo


Hello Eduardo,

On the bacula.org site, under "DOWNLOADS", the bottom option `Deb, Rpm and OSX 
Packages`, there is a form to fill out.

https://www.bacula.org/bacula-binary-package-download/

You will be emailed your personal repository link to use in your repository 
files for your package manager.


Hope this helps!
Bill
--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Again vchanger and volumes in error...

2024-03-19 Thread Bill Arlofski via Bacula-users

On 3/19/24 11:11, Josh Fisher wrote:


Do you remember if you checked for an ACTION="change" event on media
change? That would be sufficient to trigger a launch of vchanger REFRESH
to perform the update slots. It would be a feature of the device driver
and may or may not exist. If not, then there's definitely no way to
automate it and the update slots must be run manually from bconsole any
time a cartridge is inserted (or removed).


Hello Josh...

Wow... I have no idea as it was a very long time ago that I was messing with 
these things. :)

It would be interesting if someone currently working with the RDX stuff could 
fill in the blanks here. :)


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Again vchanger and volumes in error...

2024-03-18 Thread Bill Arlofski via Bacula-users

On 3/18/24 12:53, Heitor Faria wrote:

Hello Bill,

RDX has two working modes: disks and VTL.
Many years ago I configured it as a VTL with Bacula, and it worked very well.
For Disk mode, maybe configuring a Bacula Storage for each disk could be manageable. Or a crazier idea would be to create a 
CEPH cluster on the top of all disks.


Rgds.


Hello Heitor!  :)

The use-case for the RDX is that you will (typically) have one bay, always connected to the host computer, and the magazines 
are removable. So there is typically never a scenario where all the RDX magazines are plugged in at the same time.


This, of course, is where the Bacula + eSATA + cryptoLUKS + autofs + vchanger 
combination really shines.

But with RDX instead of eSATA, due to the scenario I described previously, there will need to be some manual interventions 
when changing RDX magazines.



Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Again vchanger and volumes in error...

2024-03-18 Thread Bill Arlofski via Bacula-users

This is in response to Josh...

In my experience, with RDX, the docking bay itself shows up as a device... 
(/dev/usbX or /dev/sdX, I forget)

But plugging/unplugging an RDX cartridge does not notify the kernel in any way, so udev rules are not possible to do anything 
automatically with RDX.


This was my experience about 8 or more years ago which is why I abandoned any attempts to use RDX with my own customers, and 
went with plain old removable eSATA drives, fully encrypted with LUKs, and auto-mounted with autofs.


I'd love to know if something has changed in this regard in the past 8 years or 
so. :)

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Permission Issues

2024-03-06 Thread Bill Arlofski via Bacula-users

Hello Ken,

In addition to what I previously wrote, i just noticed that your *.bsr files should not be in `/opt/bacula/working`, but 
rather in `/opt/bacula/bsr`.


Check your Jobs and JobDefs resources and make sure that the `WriteBootstrap` 
settings in these point to the bsr directory

This is not critical, just a "best practices" kind of thing...


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Permission Issues

2024-03-06 Thread Bill Arlofski via Bacula-users

On 3/6/24 10:19, Ken Mandelberg wrote:

I notice these two permission errors in my logs. I'm on Ubuntu. What do I need to do to correct them?
The backups succeed but I guess they are missing info.

05-Mar 23:19 orac-dir JobId 7661: shell command: run BeforeJob 
"/opt/bacula/scripts/make_catalog_backup.pl MyCatalog"
05-Mar 23:19 orac-dir JobId 7661: BeforeJob: mysqldump: Error: 'Access denied; you need (at least one of) the PROCESS 
privilege(s) for this operation' when trying to dump tablespaces

05-Mar 23:19 orac-dir JobId 7661: Start Backup JobId 7661, 
Job=BackupCatalog.2024-03-05_23.10.00_05
05-Mar 23:19 orac-dir JobId 7661: Connected to Storage "File" at localhost:9103 
with TLS
05-Mar 23:19 orac-dir JobId 7661: Using Device "FileStorage" to write.
05-Mar 23:19 orac-dir JobId 7661: Connected to Client "orac-fd" at orac:9102 
with TLS
05-Mar 23:19 orac-fd JobId 7661: Connected to Storage at localhost:9103 with TLS
05-Mar 23:19 orac-sd JobId 7661: Volume "Vol0010" previously written, moving to end of data.

05-Mar 23:19 orac-sd JobId 7661: Ready to append to end of Volume "Vol0010" 
size=52,690,472,345
05-Mar 23:19 orac-sd JobId 7661: Elapsed time=00:00:02, Transfer rate=325.8 M 
Bytes/second
05-Mar 23:19 orac-sd JobId 7661: Sending spooled attrs to the Director. 
Despooling 233 bytes ...
05-Mar 23:19 orac-dir JobId 7661: Error: Could not open WriteBootstrap file:
/opt/bacula/working/BackupCatalog.bsr: ERR=Permission denied
05-Mar 23:19 orac-dir JobId 7661: Error: Bacula Enterprise orac-dir 13.0.3 
(02May23):
   Build OS:   x86_64-pc-linux-gnu-bacula-enterprise ubuntu 22.04



Hello Ken,

There are two unrelated issues here.



First, somehow, the bsr file "Permission denied" error:
8<
05-Mar 23:19 orac-dir JobId 7661: Error: Could not open WriteBootstrap file: /opt/bacula/working/BackupCatalog.bsr: 
ERR=Permission denied

8<

This problem is that somehow (my guess is someone started the Director as the root user at some point) the BackupCatalog.bsr 
file permissions do not allow the Director (normally running as the 'bacula' user) to write to this file:

8<
-rw-r--r--  1 root   root190 Jul 11  2023 BackupCatalog.bsr
8<

Simple fix, as root, do:
8<
# chown bacula:bacula /opt/bacula/working/BackupCatalog.bsr
8<


Second problem is not a Bacula problem, but a MySQL user/permissions issue:
8<
05-Mar 23:19 orac-dir JobId 7661: shell command: run BeforeJob 
"/opt/bacula/scripts/make_catalog_backup.pl MyCatalog"
05-Mar 23:19 orac-dir JobId 7661: BeforeJob: mysqldump: Error: 'Access denied; you need (at least one of) the PROCESS 
privilege(s) for this operation' when trying to dump tablespaces

8<

With the Director running as the bacula user, this means the Before Script (/opt/bacula/scripts/make_catalog_backup.pl) in 
the Catalog job `BackupCatalog` will be called as the bacula user, then the script calls the mysqldump program to dump the 
'bacula' (by default) database.


This one is not a filesystem permissions issue/error.

The error is the MySQL database server telling you that the MySQL database user you have configured in the Catalog{} resource 
in your Director's configuration (usually bacula) does not have the "...PROCESS privilege(s) for this operation' when trying 
to dump tablespaces"



Find out how your Catalog resource (named "MyCatalog") in the Director is set up (ie: user, password, DB address, DB port, 
etc) is configured.



Then, make any DB privilege modifications necessary, and make sure you can (outside of Bacula, and as the 'bacula' user, 
*not* the root user) successfully run the command:


bacula@hostname] $ /opt/bacula/scripts/make_catalog_backup.pl MyCatalog

This will (by default) create a file `/opt/bacula/working/bacula.sql`

I see there is one there already from 23:19 yesterday evening, which means the mysqldump command called in the Before script 
by your BackupCatalog job was able to dump something, but clearly the MySQL DB server is not completely happy. :)
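For the tablespaces error specifically, the usual fix is to grant the global PROCESS privilege to the catalog DB user. A sketch, assuming the Catalog resource uses the default 'bacula'@'localhost' user - verify that against your Catalog{} resource before running this as an administrative MySQL user:

```sql
-- PROCESS is a global privilege, so it must be granted ON *.*
-- ('bacula'@'localhost' is an assumption; match your Catalog{} resource).
GRANT PROCESS ON *.* TO 'bacula'@'localhost';
FLUSH PRIVILEGES;
```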




Hope this helps!
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] CRC ERROR on volume what possible to do

2024-03-04 Thread Bill Arlofski via Bacula-users

On 3/4/24 09:00, Lionel PLASSE wrote:

Hello,

I have this error while reading a volume for virtualfull consolidating job

Error: block_util.c:521 Volume data error at 0:0!
Block checksum mismatch in block=11755301 len=64512: calc=dfb5486f
blk=9eafdf1a

It seems the volume file is definitively corrupted,  nothing is possible for
this block I think . ( The error occurs twice for the same job at the same
block)

But ,
is this possible to continue reading the volume by bypassing or "marking" the
error and proceed with consolidating the remaining data?
Can we blacklist the block in error (along with the corresponding files) to
complete the consolidation job, even if the result will be an incomplete
fileset?
Or Bacula definitively kill the job in error. I don't recall seeing an option
to bypass I/O errors .

What to do in such kind of hardware I/O problems.


Hello Lionel,

I have no idea if this would work, but it may be possible to start the SD with the `-p` (Proceed despite I/O errors), then 
try the restore. I have never tried this, and would typically revert to using the low-level `bextract` tool, which also has 
the `-p` command line option.


If starting the SD with `-p`:

# sudo -u bacula /path/to/bacula-sd -p -f    (just start the SD in foreground, ignoring errors, etc.)

... And performing a VFull does not get you a good* virtual full, you may have to use `bextract` against the volumes used in 
the last Full/VirtualFull to restore the data from the volumes.


*In this sentence, the word "good" is relative. I mean, with hardware I/O errors, the data recovered during the VFull will 
surely be missing data... The same thing goes for using bextract set to ignore I/O errors.


Personally, unless this were a critical restore situation, and I were just trying to "get what I can" back, I would abandon 
the last Full/VirtualFull and perform a new, good real Full immediately since you have surely lost some data in your backup 
chain due to this hardware issue.


I'd also recommend implementing Verify jobs in some manner (ie: Automatically restore and/or Verify (level=data) critical 
jobs when they finish with a RunsWhen = after script, or implement some Admin job that pseudo-randomly picks a recent, Good 
backup job and performs a restore and/or Verify job against it)


Here's an all-in-one™, overkill™ example script that I wrote as a proof-of-concept a while ago which performs a restore, then 
all three Verify levels against a backup job when it completes.  You can pick and choose parts of this that you need and 
abandon the rest. :)


https://github.com/waa/AutoRestoreAndVerify


Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] auto labelling no longer working after transferring to a new backup disk

2024-03-03 Thread Bill Arlofski via Bacula-users

On 3/3/24 19:00, Thing wrote:

Hi,

Ahhh, I didnt think that was an issue as I have 74 volumes on disk.


Right, but the number of volumes on disk does not necessarily match the number of volumes in the catalog. This is especially 
true when dealing with removable media, and vchanger, for example. :)




  *list pools
Using Catalog "MyCatalog"
+--------+------------+---------+---------+-----------------+--------------+---------+----------+-------------+
| PoolId | Name       | NumVols | MaxVols | MaxVolBytes     | VolRetention | Enabled | PoolType | LabelFormat |
+--------+------------+---------+---------+-----------------+--------------+---------+----------+-------------+
|      1 | Default    |       4 |     100 |  53,687,091,200 |   31,536,000 |       1 | Backup   | *           |
|      2 | File       |   1,009 |   1,008 | 536,870,912,000 |   31,536,000 |       1 | Backup   | Vol-        |
|      3 | Scratch    |       0 |       0 |               0 |   31,536,000 |       1 | Backup   | *           |
|      4 | RemoteFile |   2,015 |   2,005 |  53,687,091,200 |   95,040,000 |       1 | Backup   | Remote-     |
+--------+------------+---------+---------+-----------------+--------------+---------+----------+-------------+
You have messages.

NumVols is 2005 and I have labeled 5 volumes so far I think, like doh.


It is the opposite actually. Your Pool's MaximumVolumes is 2,005 and your number of volumes in the catalog is 2,015, so, 
somewhere along the line, 10 additional volumes have been manually created. :)




*list media was interesting,
8><-

A huge quantity of Error'd volumes, is there a safe way to purge the error'd 
volumes (I assume inside mysql)?


You can tell the Director to delete these volumes with a volstatus of 'Error':

* delete volume=


Of course, this can be scripted if you have hundreds or thousands of them.

This will present a list of only Volumes in this pool with a volstatus of 
'Error':
8<
# for vol in $(echo "list media pool=RemoteFile" | bconsole | grep "^| \+[0-9].* Error " | awk '{print $4}'); \
    do echo "list volume=$vol" | bconsole; done

8<

After verifying that list looks to be OK (ie: only volumes with 
volstatus=Error), they may be deleted from the catalog.

To automatically delete them from the Bacula catalog:
8<
# for vol in $(echo "list media pool=RemoteFile" | bconsole | grep "^| \+[0-9].* Error " | awk '{print $4}'); \
    do echo "delete XyesX volume=$vol" | bconsole; done
8<

*NOTE*, you will need to remove the "X"s around the word 'yes' above. ;)

*NOTE 2* I wrote the two above one liners in an email client. They may, or may 
not be syntactically correct. YMMV. :)
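To sanity-check what this kind of extraction would feed to bconsole before piping anything destructive into it, here is a self-contained sketch run against a mocked-up `list media` table. The column layout below is an assumption for illustration; real input would come from `echo "list media pool=RemoteFile" | bconsole`:

```shell
#!/bin/sh
# Mocked-up 'list media' output; real input would come from bconsole.
sample='+---------+-------------+-----------+
| MediaId | VolumeName  | VolStatus |
+---------+-------------+-----------+
|       1 | Remote-0001 | Full      |
|       2 | Remote-0002 | Error     |
|       3 | Remote-0003 | Error     |
+---------+-------------+-----------+'

# Keep only rows whose VolStatus column is Error, and print the bconsole
# delete command for each -- for review, before actually piping to bconsole.
echo "$sample" | awk -F'|' '$4 ~ /Error/ {gsub(/ /, "", $3); print "delete yes volume=" $3}'
```

Once the printed `delete yes volume=...` lines look right, they can be piped into bconsole.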



I have maybe 1Tb to go on this job.  I suppose I could stop it?   If so how do 
i do that pls?


Why stop it?  Everything I have shown above can be done while jobs are running.

Newer versions of Bacula (I forget when the feature was added), allow you to 
`stop jobid=`, then later,
you can resume the job with `resume incomplete jobid=`



This setup is years old I just keep doing a Debian in place upgrade, is 50gb 
too small for a file size these days?


Yeah, well, version 9 of Bacula is pretty old. :)

Debian is notorious for having old packages in the official repositories.


You can get current packages (15.0.x) just by signing up at bacula.org.


Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] auto labelling no longer working after transferring to a new backup disk

2024-03-03 Thread Bill Arlofski via Bacula-users

On 3/3/24 10:01, steven wrote:

I moved to a new bigger disk and now I have to label every volume.  I
have restarted both the bacula director + the storage node.

How do I get auto-labeling to work again please?

bacula-director  9.6.7-3    amd64  on Debian12

root@bacula:/etc/bacula/conf.d# cat pools.conf
Pool {
      Name = RemoteFile
      Pool Type = Backup
      Label Format = "Remote-"
      Recycle = yes
      AutoPrune = yes
      Volume Retention = 1100 days
      Maximum Volume Bytes = 50G
      Maximum Volumes = 2000
      }

Device {
    Name = FileStorage
    Media Type = File
    Archive Device = /bacula/backup
    LabelMedia = yes;   # lets Bacula label unlabeled media
    Random Access = Yes;
    AutomaticMount = yes;   # when device opened, read it
    RemovableMedia = no;
    AlwaysOpen = no;
    Maximum Concurrent Jobs = 5
}



Hello Steven,

With 1,100-day retention, 50GB volumes, and a limit of 2,000 volumes in this pool, my guess is that your Pool "RemoteFile" 
has reached its "MaximumVolumes" limit.


What does the following bconsole command show for the numvols and maxvols for 
this pool?

* list pools

I am also guessing that since you have been manually adding/labeling new volumes, there are more than 2,000 volumes 
currently in this pool.


Newer versions of Bacula will tell you in the job logs that the maximum number of volumes in a pool has been reached before 
asking for you to mount an appendable volume or to label a new one.


If all my guesses are correct, just edit your pool and increase the 
"MaximumVolumes" setting, then, in bconsole:

* reload
* update pool=RemoteFile

Then, run a job that uses this pool and it should be OK now.

If it is not OK, then please show a full joblog of the job that is asking for 
media, and also a list pools, and list media:

* ll joblog jobid=

* list pools

* list media


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Client backup, VirtualFull and failed job 'retention'...

2024-02-22 Thread Bill Arlofski via Bacula-users

Hello Marco,

Just to clarify one thing I noticed in your post:


Marco wrote:
> Note that there's no 'Volume Retention = ', so no volume retention. ;-)


From the documentation:
8<
The default Volume retention period is 365 days
8<


Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] MagStor M1000-L9SAS tape library

2024-02-22 Thread Bill Arlofski via Bacula-users

On 2/22/24 06:49, Pollard, Jim wrote:
>
By chance is anyone out there able to verify that the MagStor M1000 tape library works with Bacula?  I’m already using the 
LTO-9 HH SAS Drive and that works fine but I have the chance to possibly upgrade the system to a library and I can’t pass 
that up.  : )


Thanks for any nudges in the right direction!

Jim

**

*Jim Pollard*, Senior Sysadmin GCUX, GSEC, LPIC, Linux +


Hello Jim,

If the tape library shows up as a `/dev/sg#` node, and can be controlled with the standard Linux `mtx` utility, then it will 
work fine with Bacula.


Bacula uses the `mtx` utility (wrapped in the `mtx-changer` bash/perl script) to get the library's status and to load/unload 
tapes.



Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] [Research] Tape-library udevadm info Drives Order identification

2024-02-13 Thread Bill Arlofski via Bacula-users

On 2/13/24 11:35, Heitor Faria wrote:

Hello Bill,

I will explain the reasons for that research.

RHEL 9 introduced new udev rules for tape devices creation (ID_SCSI_SERIAL):


Hello Heitor!

And what if there are two or more Libraries attached to the system? 😉

How do you know which drive(s) are in which library? 🤷


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] [Research] Tape-library udevadm info Drives Order identification

2024-02-13 Thread Bill Arlofski via Bacula-users

On 2/13/24 11:11, Heitor Faria wrote:

Hello All,

I'm doing a research on more automated ways of determining the Tape Library 
Drives' order identification from Linux standpoint.

If you have a physical tape, would you please send me in private the following?

 1. The output of the command: "for i in $(ls /dev/tape/by-id/*); do udevadm info 
--name $i |tee; done"
 2. The operating system version.

Rgds.



Hello Heitor,

Have you looked at the script that I just released two days ago yet?

https://github.com/waa/bacula-resource-auto-creator


FYI: There is no `udev` on FreeBSD, so the udevadm route will not work on that 
platform.

The above script literally loads a tape into a drive, then properly identifies which library it is in and the correct Bacula 
DriveIndex setting.


Also, udevadm fails to provide usable information when `lin_tape` is in use* - as we have seen from our WhatsApp chat last 
evening. :)



*Note: I have not added lin_tape support yet, but plan on it very shortly. :)



Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Backup report script...

2024-02-06 Thread Bill Arlofski via Bacula-users
Just re-sending this (without the screenshot) to the list because I got a "Your message to Bacula-users awaits moderator 
approval"email reply when I sent it on the 3rd.



On 2/2/24 09:03, Marco Gaiarin wrote:

Mandi! Bill Arlofski via Bacula-users
   In chel di` si favelave...


I might be able to add this as a feature to my `baculabackupreport.py` Python 
reporting script - possibly quite easily.
The script already has a feature to show jobs that are "always failing" for x 
number of days. This feature you are asking for
is interesting.


I've tested it, cool!


Excellent!  Glad it is working/useful for you. :)


OK, so here's the thing. What I have just completed is a feature that (if it finds anything to report) generates another 
small auxiliary table at the bottom of the jobs report, like the Summary, Pools, Success rates tables.


It currently works like this:

- It internally generates a list of the last time every job name in the catalog has run successfully.
- Then it checks to see which of these last successful runs was more than 31 days ago (default setting).
- Then it generates an HTML table listing each Job Id, Job Name, End Time, and the number of days ago that End Time was.
- It can also be told to skip any Job Names you add to the `last_good_run_skip_lst` variable.
- I plan to add one more thing before pushing the changes: another banner at the top when these jobs are 
found and the table is created. This should just take a few minutes of cut-n-paste-n-edit and should be pushed tonight. :)


See the screenshot of the new auxiliary table.


The new variables at the top of the script (which should not be edited, but 
added/modified in the config file) are these:
8<
# Warn about jobs that have been seen in the catalog, but
# have not had a successful run in `last_good_run_days` days
# --
warn_on_last_good_run = True              # Do we warn about jobs that have not run successfully in last_good_run_days days?
last_good_run_days = 31                   # Maximum number of days a job may go without running successfully
last_good_run_skip_lst = ['Job1', 'Job2'] # Jobs to ignore when processing this `warn_on_last_good_run` feature
8<
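For the curious, the "last good run" check described above boils down to catalog SQL along these lines. This is a hedged sketch against the standard Bacula PostgreSQL schema (job.name, job.endtime, job.jobstatus), not the script's actual query:

```sql
-- Sketch: last successful ('T') backup per job name, flagging any job
-- whose most recent success is older than 31 days.
SELECT name, MAX(endtime) AS last_good_run
FROM job
WHERE jobstatus = 'T' AND type = 'B'
GROUP BY name
HAVING MAX(endtime) < NOW() - INTERVAL '31 days'
ORDER BY last_good_run;
```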


 > Effectively the 'always failing' could suffices, but knowing that failed
 > because the host was not reachable will be better!

Now, there is really no way to represent the reason a job failed into the table as you ask without a few additional pieces in 
place.


- First, there would need to be some specific, clear, always identifiable text 
in the catalog's `Log` table
- Next it would require yet another "costly" text query, and I had tried for the longest time to avoid these as much as 
possible. :)

- Third, the text to report a reason why a job failed just wouldn't fit in the 
HTML table cell and would make a giant mess.

I think the new banner warning and this table will let you know quickly which jobs have not completed in some (configurable) 
number of days. And, when configured to create an HTML link (gui = True) to the Job log in Baculum (or BWeb in the 
Enterprise version), it will get you to the reason for the continued failures very quickly.




I've been looking for something to work on for a while now. This might be 
perfect. :)


Don't worry: stay warm, drink water and code slowly! ;-)


hehe  Yes, it took a FULL week to be able to function properly since I 
returned, and to get something done on this. :)

Hope this is useful!


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] BAT Installation

2024-02-02 Thread Bill Arlofski via Bacula-users

On 2/2/24 09:35, Phil Stracchino wrote:


BAT still does everything I want a Bacula console to do.  The things
that it does NOT do, like configuration, I don't WANT it to; I want to
do those myself, by hand, using a proper editor.


Hi Phil!

You sound like one of the customers of mine I mentioned. :)



To be truthful, I detest the "Everything is a web page/application"
model.


I never liked this either.



I will note that there are a few known bugs in BAT, notably that some
purge operations can produce *multiple* simultaneous pop-up confirmation
alerts that can be confusing.


My experience with BAT over the many years I have been using Bacula has been... let's just say "poor", with BAT randomly just 
hanging for no apparent reason, requiring me to kill and restart it so often I just gave up.


Of course this is my experience, on several platforms, over several versions in 
several of my customers' environments.

I do everything from the command line and avoid Web GUIs at all costs - which does not say anything about the quality nor 
capabilities of them, just more about my abhorrence of Web GUIs in general.   :)



Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] BAT Installation

2024-02-02 Thread Bill Arlofski via Bacula-users

On 2/2/24 08:53, Heitor Faria wrote:

"*Having said that, years ago, I had customers that LOVE LOVE LOVED BAT, and 
would not move away from it even at my urging, so
there is one reason, I guess. :)"

Frankly, I never understood this community's urge to regularly create "one more Bacula GUI", instead of just fixing and 
improving the state of the art.
IMHO bacula.org should adopt and sponsor BAT and Bacularis/Baculum GUIs development, get people together, 
integrate bacula-web features and organize this mess.


Rgds.



Hello Heitor!


I can't say I disagree with this idea.

Imagine a fully-functional BAT? Able to do configuration too? All the bugs 
fixed?


So, any volunteers?  :)


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] BAT Installation

2024-02-02 Thread Bill Arlofski via Bacula-users

On 2/2/24 05:48, Rob Gerber wrote:
>

As far as I know, BAT is available only with enterprise bacula.


Hello Rob, (and Howard Viccars),

BAT (Bacula Administration Tool) is a long-unmaintained QT graphical interface 
to 'manage' Bacula and is available in Community.

I say 'manage' because BAT is more like BMT (Bacula Management Tool) since you cannot configure anything using BAT, you can 
only start, stop, monitor jobs and view information about your system (Clients, Jobs, Pools, Media, etc) just like with 
bconsole. :)




I believe your choices with regard to bacula community are baculum, bacularis, 
and bacula-web.

Bacularis is a friendly fork of baculum, and is the better maintained package. 
Developer is on this list.

Bacula-web - I have not tested it, but I hear it provides good reporting, but 
no active control. Developer is on this list.


All good information.

And I would strongly urge to move away from BAT in favor of Bacularis (for 
example).


Bacularis, unlike BAT allows you to configure every aspect of your Bacula environment, and as Rob mentioned, Marcin Haba, a 
Bacula Systems employee maintains it as an open-source and free tool, and he is quite active and helpful on this list. :)


Bottom line: Abandon BAT. It has not received any love in a very long time, and quite frankly I am sure it will not. It did 
what it was designed to do, but now that we have some very powerful web gui administration tools, there is no reason not to 
abandon it.* (Just my two cents)


*Having said that, years ago, I had customers that LOVE LOVE LOVED BAT, and would not move away from it even at my urging, so 
there is one reason, I guess. :)



Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Backup report script...

2024-01-29 Thread Bill Arlofski via Bacula-users

On 1/29/24 08:44, Marco Gaiarin wrote:


In my Bacula installation i've many backup types; for some, more ''solid'',
the bacula messages resource is perfect.


But i use bacula also for some less solid backups: client backups (mostly
windows, mostly medical devices) that are not ''always on''.
So bacula message resources get 'sanitized' and clients send little or no
feedback on backup.


Still i need some report, at least know that 'client X is out of network by
a month' or something like this.


I've found that a command like:

  *list jobs client=fvg-sv-eeg-fd type=B
  Using Catalog "BaculaLNF"
  
  +--------+------------+---------------------+------+-------+----------+----------------+-----------+
  | jobid  | name       | starttime           | type | level | jobfiles | jobbytes       | jobstatus |
  +--------+------------+---------------------+------+-------+----------+----------------+-----------+
  | 15,486 | FVG-SV-EEG | 2024-01-05 14:01:12 | B    | I     |        0 |              0 | A         |
  | 15,635 | FVG-SV-EEG | 2024-01-09 16:00:51 | B    | I     |        0 |              0 | A         |
  | 15,677 | FVG-SV-EEG | 2024-01-10 15:31:10 | B    | I     |        0 |              0 | A         |
  | 15,719 | FVG-SV-EEG | 2024-01-11 12:31:06 | B    | I     |       34 |  2,349,575,229 | T         |
  | 15,761 | FVG-SV-EEG | 2024-01-12 11:00:32 | B    | I     |       43 |  1,813,375,650 | T         |
  | 15,868 | FVG-SV-EEG | 2024-01-15 12:30:58 | B    | F     |   14,030 | 64,277,646,705 | T         |
  | 15,913 | FVG-SV-EEG | 2024-01-16 08:30:03 | B    | I     |        0 |              0 | A         |
  | 15,961 | FVG-SV-EEG | 2024-01-17 08:30:03 | B    | I     |        0 |              0 | A         |
  | 16,010 | FVG-SV-EEG | 2024-01-18 08:30:02 | B    | I     |        0 |              0 | A         |
  | 16,056 | FVG-SV-EEG | 2024-01-19 12:30:47 | B    | I     |       58 |  5,740,439,597 | T         |
  | 16,172 | FVG-SV-EEG | 2024-01-22 12:31:04 | B    | F     |   14,079 | 72,084,456,031 | T         |
  | 16,218 | FVG-SV-EEG | 2024-01-23 16:01:05 | B    | I     |        0 |              0 | A         |
  | 16,264 | FVG-SV-EEG | 2024-01-24 10:30:28 | B    | I     |        0 |              0 | A         |
  | 16,310 | FVG-SV-EEG | 2024-01-25 08:30:03 | B    | I     |        0 |              0 | A         |
  | 16,357 | FVG-SV-EEG | 2024-01-26 13:02:49 | B    | I     |       62 |  6,935,959,574 | T         |
  | 16,473 | FVG-SV-EEG | 2024-01-29 12:31:25 | B    | F     |        0 |              0 | A         |
  +--------+------------+---------------------+------+-------+----------+----------------+-----------+

i can get a report of jobs and do some statistics with a script.


Before starting to code something... there's just a 'report script'
somewhere?! ;-)


Thanks.



Hello Marco,

I might be able to add this as a feature to my `baculabackupreport.py` Python reporting 
script - possibly quite easily.

The script already has a feature to show jobs that are "always failing" for x number of days. This feature you are asking for 
is interesting.


I'd need a little time to think about it since I just returned from a Company trip in Morocco and am still fighting whatever 
cold/flu I got while traveling. :)


I've been looking for something to work on for a while now. This might be 
perfect. :)


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Autochangers and unload timeout...

2024-01-25 Thread Bill Arlofski via Bacula-users

On 1/24/24 10:13, Marco Gaiarin wrote:


I've reached my first tape change on my autochangers, yeah!


But...

  24-Jan 17:22 cnpve3-sd JobId 16234: [SI0202] End of Volume "AAJ661L9" at 333:49131 on 
device "LTO9Storage0" (/dev/nst0). Write of 524288 bytes got -1.
  24-Jan 17:22 cnpve3-sd JobId 16234: Re-read of last block succeeded.
  24-Jan 17:22 cnpve3-sd JobId 16234: End of medium on Volume "AAJ661L9" 
Bytes=17,846,022,566,912 Blocks=34,038,588 at 24-Jan-2024 17:22.
  24-Jan 17:22 cnpve3-sd JobId 16234: 3307 Issuing autochanger "unload Volume 
AAJ661L9, Slot 2, Drive 0" command.
  24-Jan 17:28 cnpve3-sd JobId 16234: 3995 Bad autochanger "unload Volume AAJ661L9, 
Slot 2, Drive 0": ERR=Child died from signal 15: Termination
Results=Program killed by Bacula (timeout)
  24-Jan 17:28 cnpve3-sd JobId 16234: 3304 Issuing autochanger "load Volume 
AAJ660L9, Slot 1, Drive 0" command.
  24-Jan 17:29 cnpve3-sd JobId 16234: 3305 Autochanger "load Volume AAJ660L9, Slot 
1, Drive 0", status is OK.

So, unload timeout, but subsequent load command works as expected (and
backup are continuing...).

I can safely ignore this? Better tackle with tiemout parameters on
/etc/bacula/scripts/mtx-changer.conf script?


Thanks.


Hello Marco,

It looks like the mtx command (called by the mtx-changer script) is taking more than 6 minutes to return, so the process is 
being killed.


But, it then looks like it *must* have succeeded since the load command loads a 
new tape into the now empty drive.

You can try a few things to debug this.

First, I would stop the SD, and then manually load/unload tapes into your drive 
with the mtx command:

# mtx -f /dev/tape/by-id/ status


If this shows a tape loaded in, for example, drive 0, unload it:

# mtx -f /dev/tape/by-id/ unload X 0  (where X is the slot 
reported loaded in the drive)


Then, try loading a different tape:

# mtx -f /dev/tape/by-id/ load Y 0(where Y is a slot that 
has a tape in it, of course :)


By doing these manual steps, at least you can find out how long your tape library takes for these processes, and then you can 
adjust mtx-changer.conf as Pierre explained.



Additionally, if you are feeling brave and like playing the part of guinea pig, you can try replacing the default mtx-changer 
bash/perl script in your Autochanger's "ChangerCommand" with my `mtx-changer-python.py` script. It is a drop-in replacement 
with better logging and some additional features (with more planned). It is very configurable, and logs everything very 
clearly - including mtx changer errors, etc (log level is configurable, of course).


It needs a few Python modules installed, and as far as I know very few people have even tried it (maybe no one, lol) - But I 
have been running it in the Bacula Systems lab with our tape library since this past Summer and it "just works"™


If you are even interested, you can find it in my Github account where I have shared it and a few other scripts here: 
https://github.com/waa



Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] How to check a written job by re-read them

2024-01-23 Thread Bill Arlofski via Bacula-users

On 1/23/24 02:59, Pierre Bernhardt wrote:

Hello,

after a raid disaster which needs a full restore based on the
last full backup on tape which has unreadable blocks and
blocks the whole tape drive I want to check the written jobs.

A good idea is to create a copy job so I have a copy of the
written data which also checks the tape by reading them.

By the way I want to test the tape job only so I mean it
should be possible to write the copy data simply to /dev/null.

Is it possible to use the fifo device? Is there another
possibility to read the tape regularly?

Pierre



Hello Pierre,

Yes, a Copy job will need to read the backup data and by doing so, Bacula will verify the signature (checksum) of each file 
read. You would be notified in the job of a failure to read back a file with the correct checksum.


But, as the name implies, you will be copying the data to another storage location, and hence using some additional space - 
even if it is only a temporary scratch space for your copies to be written to.

Alternately, you can run a Verify (level=data) job which reads the data from the backup media, also verifying the checksum of 
every file read - without actually writing the data to a second storage location.
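A minimal Verify (level=data) job might be defined like this hedged sketch - the Client, FileSet, Storage, and Pool names are placeholders:

```
# Hedged sketch -- resource names are placeholders, not from Pierre's setup.
Job {
  Name = "VerifyJobData"
  Type = Verify
  Level = Data
  Client = myclient-fd
  FileSet = "MyFileSet"
  Storage = TapeStorage
  Pool = Default
  Messages = Standard
}
```

Running it as `run job=VerifyJobData jobid=1234` (hypothetical JobId) should re-read that job's data from the volume and re-check each file's checksum without writing a copy anywhere.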


I have written a script which (just for testing purposes), when called from a Backup Job's RunScript (RunsWhen = After), 
automatically restores the entire job and also runs all three Verify levels against it. You can pick the parts of the 
script you need (maybe just the Verify level=data) and remove/comment out the rest.


I am attaching the script `AutoRestoreAndVerify.sh` which I use in my environment. Please edit the variables at the top and 
read through the instructions at the top of the script to understand how to implement this script.


I hope this helps.


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com


AutoRestoreAndVerify.sh
Description: application/shellscript


signature.asc
Description: OpenPGP digital signature
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Configuring Bacula Community 13.0.3 Email / Job Summaries

2024-01-21 Thread Bill Arlofski via Bacula-users

On 1/19/24 14:39, Jamison Johnson wrote:
>

Hello Bill,

Thank you for this information, I'll give this a try. I looked through this 
last night and it appears to be great for what I need. I've noticed that in 
/opt/bacula/scripts/ from the 13.0.3 binaries there is a baculabackupreport  
.sh script. I'm assuming that the Python script via GitHub is maintained and 
the included .sh script is not?


Hello Jamison,

Yeah, please do not even look at that one that is currently included with the 
community source. It is embarrassing. :)

The script started out as a small bash script which sent a basic text email. Then as people requested more features, I added 
some awk, which then became a LOT of awk - to the point of the now bash/awk script finally being a completely unmaintainable 
mess.  I always knew it would have to be rewritten in something else like Python, and a couple years ago I finally started a 
complete re-write.


Please only use the Python version available on github. This is the one that will be maintained.

In newer versions of Bacula community (and possibly Enterprise), this file will be replaced with a text file having a short 
note and a URL link to the github version. :)




Also, if I have any questions or recommendations in the future, can I reach out 
to you here or should I contact you elsewhere?


You can always write to the mailing list if it is something everyone on the list can benefit from. If it is something that 
should be added/tracked (ie: bug or feature request) I may ask you to create an issue in github and we can continue the 
issue-specific conversation there.


Welcome to the Bacula Community mailing list!


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Configuring Bacula Community 13.0.3 Email / Job Summaries

2024-01-18 Thread Bill Arlofski via Bacula-users

On 1/18/24 12:24, Jamison Johnson wrote:
I hope this email finds you all well. I am currently using Bacula Community version 13.0.3 on RHEL9 and I receive email 
notifications for every job that runs as designed. Soon I will be backing up close to 50+ machines so to save my inbox, I 
would like an alternative method. I am seeking guidance on how to configure the Message section in bacula-dir. My goal is to 
receive a single email notification once all backups have completed for the night or, alternatively, receive a consolidated 
notification on a weekly basis.


I have reviewed the documentation, but I would appreciate your insights and recommendations on the specific configuration 
settings needed to achieve this. If there are any examples or best practices that you can share, it would be helpful.


Thank you in advance for your assistance.

Thank You,

Jamison Johnson

Principal Systems Administrator



Hello Jamison,

There is no way built into Bacula to do what you are asking about.


Please check out http://github.com/waa/baculabackupreport

It is an open-source, free Python script which I maintain.

When run daily from a cron job, I am pretty sure it does exactly what you want (and much, much more). There are some 
screenshots there too.


You can even run the script multiple times, pointing to different [sections] in the configuration file to send different 
types of reports to different people or groups of people.



Any and all comments and ideas are welcome.


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Where is the path to a Volume stored in the database?

2024-01-10 Thread Bill Arlofski via Bacula-users

Hello Chris,

I am replying back to the list to keep thread continuity. :)



On 1/10/24 08:31, Chris Wilkinson wrote:

Thanks Bill

What i was trying to do was write a script to determine the disk space used by each pool = sum of all volumes. Looks like i 
would have to trace out the path from pool->storage->device from the conf files. More complicated than i imagined.


Perhaps a better option would be to use bconsole 'query 11' to get the volume 
sizes.



The easiest way to do this is to echo commands to bconsole and grep/sed/awk 
what you need:

It's not pretty, and you could do so much more (and easier) with Python, but:
8<
#!/bin/bash

# Get the pools to iterate through
pools=$(echo -e "list pools\nquit\n" | bconsole | grep "^| \+[0-9]" | awk '{print $4}')

# For each pool, we can just add the bytes.
for pool in ${pools}; do
  echo -n "Pool: ${pool} - Bytes: "
  echo -e "gui on\nlist media pool=${pool}\nquit\n" | bconsole | grep "^| \+[0-9]" | awk '{sum+=$10} END {print sum}'
done
8<

If you have more specific needs, or would like to see more info, formatted better, let me know, and maybe I can do something 
better in Python. :)
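If bconsole scraping gets unwieldy, the same per-pool totals can be pulled straight from the catalog. A hedged SQL sketch using the standard Bacula pool/media tables:

```sql
-- Sketch: total bytes stored per pool, summed over that pool's volumes.
SELECT p.name AS pool, COALESCE(SUM(m.volbytes), 0) AS bytes
FROM pool p
LEFT JOIN media m ON m.poolid = p.poolid
GROUP BY p.name
ORDER BY bytes DESC;
```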



Best regards,
Bill


--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Where is the path to a Volume stored in the database?

2024-01-10 Thread Bill Arlofski via Bacula-users

On 1/10/24 04:19, Chris Wilkinson wrote:

I have been attempting to extract the physical full path for a Volume with an 
SQL query.

'SELECT * FROM location;'

The docs indicate that the location table would give me that information but when I look in the content of that table it 
seems to be empty. The query returns no data.


Is that table not used anymore and where is it now?

[V11/Postgres]

-Chris Wilkinson


Re-replying to the list this time... :)



Hello Chris,


The disk path to a volume is not stored in the catalog.

If there is a `location` column in the catalog I think that is a spill over from Enterprise where you set locations of 
Volumes that go offsite, like "Iron Mountain" or "Las Vegas media Vault" or where ever you send your tapes.


To find out where your file volumes are, look to your bacula-sd.conf and find 
the device(s).

The Device{} resource(s) will have an `ArchiveDevice = /path/to/volumes` 
setting.


Hope this helps!
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Restoring a file from an unknown backup

2023-12-13 Thread Bill Arlofski via Bacula-users

On 12/13/23 08:58, Borut Rozman wrote:

Hi,
I inherited a bacula backup solution, and now I got a request to
restore a file or set of files from backups. I only know a folder name,
and nothing else. Is there a way inside bconsole to search for a
specific string.

Query/option 20 does not give me any results.

Using bacula 11 with pgsql 14, any help appreciated.
B.


If you know what Job/Fileset/Client is used to backup that folder you can just 
do:

* restore

- Choose option `5: Select the most recent backup for a client`

- Then select that client from the list presented. If there is only one, it 
will be auto-selected for you.

- Then select the Fileset from the list presented. If there is only one, it 
will be auto-selected for you.

- Next, drill down in to the virtual directory tree presented and mark the 
directory to be restored.


That is the simplest way to go about this. You just need to know a couple 
basic pieces of information up front.

If you cannot follow that path for some reason, we can come up with a SQL query 
to find jobids of jobs that backed up this
file. Then you would just be able to do `restore jobid=`, and drill down to 
the directory to be restored.
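Such a query could look like this hedged sketch (Bacula 11 PostgreSQL schema; the folder path is a placeholder):

```sql
-- Sketch: JobIds whose backups include files under a given folder.
-- '/some/folder/' is a placeholder -- substitute the folder being searched.
SELECT DISTINCT f.jobid
FROM file f
JOIN path p ON p.pathid = f.pathid
WHERE p.path LIKE '/some/folder/%'
ORDER BY f.jobid;
```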

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Testing tape performace with btape...

2023-12-05 Thread Bill Arlofski via Bacula-users

On 12/5/23 10:24, Marco Gaiarin wrote:


I'm playing with my new LTO9 tape drive (Tandberg/IBM one, SAS).


Some year ago, trying to find optimal parameters for an LTO-5 tape drive,
i've found:

   options st buffer_kbs=16384

   Minimum block size = 0
   Maximum blocksize = 256K
   Maximum Network Buffer Size = 262144
   Maximum File Size = 25G


this lead to the LTO-9 unit:

  *speed file_size=16 nb_file=4

  btape: btape.c:1062-0 Test with zero data, should give the maximum throughput.
  btape: btape.c:911-0 Begin writing 4 files of 17.17 GB with raw blocks of 
262144 bytes.
  [...]
btape: btape.c:385-0 Total Volume bytes=68.71 GB. Total Write rate = 592.4 MB/s

  btape: btape.c:1074-0 Test with random data, should give the minimum 
throughput.
  btape: btape.c:911-0 Begin writing 4 files of 17.17 GB with raw blocks of 
262144 bytes.
  [...]
  btape: btape.c:385-0 Total Volume bytes=68.71 GB. Total Write rate = 279.3 MB/s


  btape: btape.c:1088-0 Test with zero data and bacula block structure.
  btape: btape.c:966-0 Begin writing 4 files of 17.17 GB with blocks of 262144 
bytes.
  [...]
  btape: btape.c:385-0 Total Volume bytes=68.71 GB. Total Write rate = 234.5 
MB/s

  btape: btape.c:1100-0 Test with random data, should give the minimum 
throughput.
  btape: btape.c:966-0 Begin writing 4 files of 17.17 GB with blocks of 262144 
bytes.
  [...]
  btape: btape.c:385-0 Total Volume bytes=68.71 GB. Total Write rate = 223.8 
MB/s


Trying to double all the value, as a starting point:

   options st buffer_kbs=32768

   Minimum block size = 0
   Maximum blocksize = 512K
   Maximum Network Buffer Size = 524288
   Maximum File Size = 50G

lead to:

  *speed file_size=16 nb_file=4
  btape: btape.c:1062-0 Test with zero data, should give the maximum throughput.
  btape: btape.c:911-0 Begin writing 4 files of 17.17 GB with raw blocks of 524288 bytes.

  [...]
  btape: btape.c:385-0 Total Volume bytes=68.71 GB. Total Write rate = 660.7 
MB/s

  btape: btape.c:1074-0 Test with random data, should give the minimum 
throughput.
  btape: btape.c:911-0 Begin writing 4 files of 17.17 GB with raw blocks of 
524288 bytes.
  [...]
  btape: btape.c:385-0 Total Volume bytes=68.71 GB. Total Write rate = 279.3 
MB/s

  btape: btape.c:1088-0 Test with zero data and bacula block structure.
  btape: btape.c:966-0 Begin writing 4 files of 17.17 GB with blocks of 524288 
bytes.
  [...]
  btape: btape.c:385-0 Total Volume bytes=68.71 GB. Total Write rate = 247.1 
MB/s

  btape: btape.c:1100-0 Test with random data, should give the minimum 
throughput.
  btape: btape.c:966-0 Begin writing 4 files of 17.17 GB with blocks of 524288 
bytes.
  [...]
  btape: btape.c:385-0 Total Volume bytes=68.71 GB. Total Write rate = 237.7 
MB/s

So, a little better but not doubled the throughput, and this clearly was expected.


So probably the optimal buffer size sit between 256K and 512K; but because
server have plenty of RAM, i think my search will stop here. ;-)


A question arises: why does 'bacula block structure' have such a great impact
on hardware compression?! EG, why if i write zeroes in raw mode i get 660.7 MB/s
while if i write zeroes in 'bacula block structure' i get 247.1 MB/s?!

Compression seems correctly enabled:

root@svpve3:/etc/bacula# tapeinfo -f /dev/nst1
Product Type: Tape Drive
Vendor ID: 'IBM '
Product ID: 'ULTRIUM-HH9 '
Revision: 'Q3F5'
Attached Changer API: No
MinBlock: 1
MaxBlock: 8388608
SCSI ID: 2
SCSI LUN: 0
Ready: yes
BufferedMode: yes
Medium Type: 0x98
Density Code: 0x60
BlockSize: 0
DataCompEnabled: yes
DataCompCapable: yes
DataDeCompEnabled: yes
CompType: 0xff
DeCompType: 0xff
BOP: yes
Block Position: 0
Partition 0 Remaining Kbytes: -1
Partition 0 Size in Kbytes: -1
ActivePartition: 0
EarlyWarningSize: 0
NumPartitions: 0
MaxPartitions: 3


Thanks.


Hello Marco,


The following settings have been found to work quite well with the LTO9 drives 
I am testing in Quantum's lab:
8<
MaximumFileSize = 32GB
MaximumBlockSize = 2097152
8<

Also, do *not* set `Minimum block size`, and the `Maximum Network Buffer Size` 
is probably also not necessary.


Hope this helps!
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Error closing volume

2023-12-01 Thread Bill Arlofski via Bacula-users

On 12/1/23 12:51, Senor Pascual wrote:

Hello,

Thanks for the reply.

The disk holding these volumes has enough space. The volumes are filled and marked as Full (because I have set the limit with 
Maximum Volume Bytes).


But I have set it to automatically create a new volume within the same pool. Bacula should not return an error because one 
volume is full; this is the normal operation of my system and it has never given an error before.
It is worth noting that the jobs where this happens (it does not always happen, some days yes and some days no) are marked as OK 
-- with warnings. I have tried to restore that type of job and the restore fails due to the problem with that volume.


The error itself seems logical and intuitive but with my current system, I do 
not understand it.

Thanks, best regards,


"No space left on device" means exactly that. Bacula cannot continue when the 
storage location is filled to 100%

Please show us proof that there is space available on the partition where 
`/mnt/test` lives.


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Error closing volume

2023-12-01 Thread Bill Arlofski via Bacula-users

On 12/1/23 12:32, Senor Pascual wrote:

Hello everyone,



Hello,

The message is clear:
8<
ERR=No space left on device.
8<

In a shell prompt, do:

# df -h

And you will see that the partition that `/mnt/test` is on is filled to 100%.


Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Having difficulty mounting curlftpfs on bacula : "Device is BLOCKED waiting for mount of volume"

2023-11-30 Thread Bill Arlofski via Bacula-users

On 11/30/23 03:29, Chris Wilkinson wrote:

I use Backblaze B2. It is S3 compliant but about the lowest cost I could find, 
certainly a fraction of the price of AWS.

It works mostly OK except for errors of the kind in the log below. The "no tomes" error is from B2 when there are no upload 
slots available temporarily. B2 says that the user application is expected to retry the transfer if this occurs. AFAIK the S3 
driver doesn’t retry, just gives up. There is a bconsole 'cloud' command to upload volumes to cloud manually. A listing of 
the cloud volumes shows that the errored volume in the log wasn't uploaded but I could do it manually.


Because of this I do not delete the cache after backup, the downside being I 
have to provision the disc space.

Chris


Hello Chris,

Bacula S3 by default retries 10 times to upload cloud volume parts. There is currently an internal feature 
request here at Bacula Systems to allow this to be configured, along with a feature request to configure the time between 
attempts. :)


Additionally, it is recommended to have a Bacula Admin job which runs `cloud upload` commands to make sure that any parts 
that had failed to upload (ie: due to temporary networking issues during the backup job, etc.) are eventually uploaded.
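A minimal sketch of such an Admin job (the resource names, schedule, and script path below are placeholders, not from this thread; as usual, a small script via `Command =` is preferred over `Console =`):

```
# bacula-dir.conf (sketch only):
Job {
  Name = "CloudUploadCatchup"
  Type = Admin
  JobDefs = "DefaultJob"     # Admin jobs still need the mandatory directives
  Schedule = "AfterNightlyBackups"
  RunScript {
    RunsWhen = Before
    RunsOnClient = no
    Command = "/opt/bacula/scripts/cloud_upload.sh"
  }
}

# /opt/bacula/scripts/cloud_upload.sh would then be something like:
#   #!/bin/bash
#   echo "cloud upload storage=S3_Cloud allpools" | bconsole
```

Scheduled after the nightly backups, this sweeps up any parts still sitting in the local cache.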



Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Having difficulty mounting curlftpfs on bacula : "Device is BLOCKED waiting for mount of volume"

2023-11-28 Thread Bill Arlofski via Bacula-users

On 11/28/23 13:27, MylesDearBusiness via Bacula-users wrote:
>

I'm not quite understanding where Vol-0014 and other similarly
named volumes are coming from.


I just noticed this statement

You have `LabelFormat = Vol-` in your Pool called `File`

The Director uses this as a template when it needs to create a new volume in 
the catalog and then tells the SD to create the
same named file volume on disk. And, once you add that `LabelMedia = yes` into 
the SD's devices (remember to create several
of them), then the SD is allowed to create the volumes on disk.
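As a sketch, the two directives live in different resources; something like this (the device and pool names here are illustrative, not taken from the posted configuration):

```
# bacula-sd.conf -- the SD may label (create) new file volumes:
Device {
  Name = FileChgr1-Dev1
  Media Type = File1
  Archive Device = /mnt/my_backup
  LabelMedia = yes           # <-- allows the SD to create/label new volumes
  Random Access = yes
  Automatic Mount = yes
  Removable Media = no
  Always Open = no
}

# bacula-dir.conf -- the Director names new volumes from this template:
Pool {
  Name = File
  Pool Type = Backup
  Label Format = "Vol-"      # new volumes become Vol-0001, Vol-0002, ...
  Maximum Volume Bytes = 10G
  Maximum Volumes = 100
}
```

With both in place, the Director invents the next volume name from `Label Format` and the SD is permitted to create the file on disk.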


Hope this helps to clarify things,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Having difficulty mounting curlftpfs on bacula : "Device is BLOCKED waiting for mount of volume"

2023-11-28 Thread Bill Arlofski via Bacula-users

On 11/28/23 13:27, MylesDearBusiness via Bacula-users wrote:

Thanks for the reply, Bill.

First of all, Bacula seems to be looking for a different volume name
every time, Vol-0014 doesn't match any folder I created (and the storage
space, minus a few boilerplate files and directories, was empty
initially as I would expect a newly configured cloud storage service to
be).  


Hello Miles,

To be clear, you are not trying to backup to an S3 "cloud" bucket, right? When I see the word cloud, I need to be sure 
because when using Bacula cloud storage plugins, instead of creating a single file volume on disk for each volume in the 
catalog, Bacula will create a directory for each volume, and then write parts of the cloud volume under that volume's 
directory (ie: part.1, part.2...part.n)


Having said this, I see no "Cloud {}" resources in your pasted configuration so 
I think we are OK. :)



I'm not quite understanding where Vol-0014 and other similarly
named volumes are coming from.  All I know is that I have a curlftpfs
based userspace mounting daemon running that presents remote storage
using a familiar file-system-based integration and that I'm trying to
point Bacula to back up into it.


We'll get there. :)



I originally created the local directory /mnt/my_backup as a directory
owned by backupuser:backupuser and then under user backupuser I mounted
my curlftpfs remote storage space under that directory.

Only backupuser has the rights to read from and write to this directory.

As shown in my gist copied again below, I'm only running the Director
under the bacula username.


Yes. I see this and this is all fine. Most people don't bother, but it is perfectly fine to edit the systemd unit files and 
change the user the Bacula daemons run as. :)




I'm running the SD and FD processes under the backupuser username
because that's the username I gave sole permission to access the storage
mount.


Yep. Still fine.



I also added more detail to the gist link to try to address some of your
questions :

https://gist.github.com/mdear/99ed7d56fd5611216ce08ecff6244c8b

More help is needed and help already given is much appreciated,

Thanks,





OK, I think I spotted the issue.

I see what is more than likely the culprit:

In your SD Device called `FileChgr1-Dev1` you are missing an important parameter which allows the SD to create new file 
volumes when needed.

8<
LabelMedia = yes
8<

The other piece of this puzzle which allows Bacula to create new file volumes 
as necessary is to have:
8<
LabelFormat = 
8<

...in the Pool (The pool is `File` in this case). This is OK in your 
configuration, as I see:
8<
Label Format = "Vol-"
8<


I also see that you already have two other important settings in the `File` 
Pool:

- MaximumVolumeBytes
- MaximumVolumes

But keep in mind: with 90G file volumes, a maximum of 100 of them in the pool, and a volume retention of 1 year, you will 
probably want/need to play with these numbers a bit depending on your amount of available storage.  I personally prefer 10GB 
file volumes, but people use any number of different sizes - this will depend on your environment.
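The disk-space implication is simple arithmetic (a back-of-the-envelope sketch using the numbers above):

```shell
#!/bin/sh
# Worst case, a pool can consume MaximumVolumeBytes * MaximumVolumes.
vol_gb=90      # MaximumVolumeBytes from the posted config (~90G)
max_vols=100   # MaximumVolumes from the posted config
echo "Pool may grow to $((vol_gb * max_vols)) GB (~9 TB)"
```

Dropping to 10GB volumes with the same 100-volume cap bounds the pool at ~1 TB instead, which is why the volume size is worth tuning to the storage you actually have.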


I think once you make these changes things should start looking up.

I would then cancel any jobs you have running/stuck:

In bconsole:

* cancel (then follow the prompts)


Then list the volumes:

* list media pool=File


Then delete delete them from the catalog:

* delete yes volume=<volume-name>   (where <volume-name> is the volume's name)


Then, reload the Director:

* reload


Then restart the SD, and try another test.

If it still fails please show:

* list joblog jobid=<jobid>  (where <jobid> is the job id of a recent job sitting 
waiting to mount a volume)

* list pools

* list media


Then, please show from a shell (as root):

# find /mnt -ls  (The `-ls` is important.)



Please just post everything in the mailing list, it is easier for me to follow 
this way. :)

Additionally, once you get this working, you will want to have more than one of these Devices in the SD's Autochanger. This 
way you can run multiple concurrent jobs to different devices, and you will always have a device to restore with when backups 
are running - especially if you add a few and set `ReadOnly = yes`  :)
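A sketch of that layout (names are illustrative; each extra device is a copy of the first with a unique `Name`):

```
# bacula-sd.conf (sketch only):
Autochanger {
  Name = FileChgr1
  Device = FileChgr1-Dev1, FileChgr1-Dev2, FileChgr1-Dev3, FileChgr1-Dev4
  Changer Device = /dev/null
  Changer Command = ""
}

# FileChgr1-Dev1..3 defined as before (LabelMedia = yes, etc.), plus a
# device reserved for restores while backups are running:
Device {
  Name = FileChgr1-Dev4
  Media Type = File1
  Archive Device = /mnt/my_backup
  Read Only = yes            # never selected for writing, so always free to read
  Random Access = yes
  Automatic Mount = yes
  Removable Media = no
  Always Open = no
}
```

All devices point at the same directory; the Autochanger just gives the SD several independent drive slots into it.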



Good luck,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Having difficulty mounting curlftpfs on bacula : "Device is BLOCKED waiting for mount of volume"

2023-11-27 Thread Bill Arlofski via Bacula-users

Hello Myles,

BTW, after I sent my first reply I saw in your Github posting that you had 
modified the Bacula SD systemd unit files to have
the SD run as the other user, so I suspect this volume is either somewhere else or it has 
been "mounted over" as I mentioned
previously. :)


Hope this helps!
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Having difficulty mounting curlftpfs on bacula : "Device is BLOCKED waiting for mount of volume"

2023-11-27 Thread Bill Arlofski via Bacula-users

On 11/27/23 15:26, MylesDearBusiness via Bacula-users wrote:

Hello, Bacula experts.

My cloud provider offers only curlftpfs based storage, which I have enabled and secured.  I created a username “backupuser” 
and a system service that launches the service, effectively mounting the remote storage to /mnt/my_backup and running the 
userspace curlftpfs command as the user “backupuser”.  This user has demonstrable read/write privileges in the file system as 
expected.  So far so good.


I installed bacula on my Ubuntu 22.04.3 LTS server and got bacula-dir, 
bacula-fd, and bacula-sd all running.

In summary, when I try to run my backup job I get an error:

Device is BLOCKED waiting for mount of volume

ChatGPT4 doesn’t know nearly as much as this august body, I’m hoping a kindly 
member may be able to give me a hand up.


More details:https://gist.github.com/mdear/99ed7d56fd5611216ce08ecff6244c8b

  Thanks,





What does this show?:

# ls -la /mnt/khapbackup/backup/bacula/archive


Is there a Bacula File volume named `Vol-0014` in there?
Is it rw for the `bacula` user which the SD (normally) runs as?

If it is not there, but you know where it is, then you must move it there, and set the ownership to `bacula:disk` and the 
permissions to allow the bacula user to, well... Read and write to it. :)


It is just a guess, but is it possible that this Bacula file volume lives under a directory which you mounted 
`/mnt/khapbackup/backup/bacula/archive` on top of, so it exists, but is not visible currently?  This is just a guess, but it 
is also a common mistake. :)


If you truly do not know where this Bacula file volume is, then you need to delete it from the catalog so the director no 
longer thinks it is accessible:


* delete yes volume=Vol-0014

Might want to run the bconsole `query` command and select option 14 to be sure 
there are no jobs on it that you might need.

Once deleted from the catalog, the Director should select a new volume (or create a new one if a `LabelFormat` is set in 
the Pool `File` and `LabelMedia = yes` is set in the SD's Drive devices writing to this directory).


Alternately, you can just disable the volume in the catalog:

* update volume=Vol-0014 volstatus=Disabled


Also, consider that the ownership and permissions that you set on the mount point for the user `backupuser` will not allow 
the SD running as the user `bacula` to read/write there - unless you did something like making the bacula user a member of the 
backupuser's group, and giving the group read/write permissions into that directory tree.



Hope this helps!
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Bacula copy from tape to tape

2023-11-07 Thread Bill Arlofski via Bacula-users

On 11/7/23 15:09, Rob Gerber wrote:
>

Well, now I know that whole thing is impossible. Simpler, that way.

Thank you for letting me know!


:) Welcome.



could we have disk based backups managed by bacula on this larger
NAS, with copies made to tape for onsite AND offsite storage?


With the large NAS you have described, that is exactly what I would do. :)

Just make sure it is mounted to the SD server via iSCSI, FC, or NFS and *not* 
CIFS. ;)



Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Bacula copy from tape to tape

2023-11-07 Thread Bill Arlofski via Bacula-users

On 11/7/23 13:04, Rob Gerber wrote:
>

How difficult will it be to run copy jobs with only 1 LTO8 tape drive?


Hello Rob,

Sorry to be the bringer of bad news, but "difficult" is not the correct word.

The word you are looking for is "impossible"

Bacula needs one read device and one write device for copy or migration jobs.


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] fd lose connection

2023-11-03 Thread Bill Arlofski via Bacula-users

On 11/3/23 13:17, Rob Gerber wrote:

Oh, reading the error message again, did you set "heartbeat interval = 300" 
between Dir and SD? Your error message is from
the SD.

I don't know if the FD will communicate directly with the SD, or if data from the FD goes 
through the Director to the SD.

Robert Gerber


Hello Robert,

Just an FYI, a HeartbeatInterval = 300 seconds has been hard-coded as the 
default - even when not explicitly set - for
several years now. :)

He may want to set a lower one on each Daemon (DIR, SD, FD), but I know from 
testing that anything less than 60 will be
ignored and 60 seconds will be used. ;)
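The clamping described above can be illustrated like this (this models Bill's description, not actual Bacula source code; the function name is mine):

```shell
#!/bin/sh
# effective_heartbeat: model of the described behavior -- an unset value
# defaults to 300 seconds, and configured values below 60 are raised to 60.
effective_heartbeat() {
    h="${1:-300}"
    if [ "$h" -lt 60 ]; then h=60; fi
    echo "$h"
}
effective_heartbeat      # unset   -> 300
effective_heartbeat 30   # too low -> 60
effective_heartbeat 120  # kept as-is -> 120
```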

But it really sounds to me that Lionel's problem is possibly an overly 
aggressive firewall closing sockets, or it could also
have something to do with the IPsec VPN - since that seems to be the only real 
networking change - so that is where I would
look first.


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Restoring Incremental Promotes to Full

2023-10-27 Thread Bill Arlofski via Bacula-users

On 10/27/23 16:17, Chris Wilkinson wrote:

Having looked at the job log from a Baculum restore I can see that it is going 
back to the last full and restoring full,
diffs, incrs in order.

Is it not possible to restore from a particular job alone in Baculum as is 
possible with bconsole?


Knowing Bacula, and knowing Marcin, I am sure there must be a way.

Let's wait for Marcin to chime in. :)


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Restoring Incremental Promotes to Full

2023-10-27 Thread Bill Arlofski via Bacula-users

On 10/27/23 16:09, Chris Wilkinson wrote:
>

Yes Bill that is very helpful.


Glad I could help!


It is exactly as you say when I use bconsole restore. That works. I must have made a mistake when I tried with bconsole 
previously.


I'm not able to reproduce that behaviour using Baculum, it always restores from a full irrespective of the level of the job 
being restored. There doesn't seem to be an equivalent of selecting a specific job ID.


This is a question for Marcin. I will ping him to make him aware of this 
discussion. :)



I take your point about schedule being unwanted but it shouldn't do any harm. 
I'll take it out for tidiness :).


Well, if your specified Schedule has no `Run` lines, then "No harm, no foul"™, otherwise you will have a failed restore job 
every time it runs with this in the joblog:

8<
Fatal error: Cannot restore without a bootstrap file.
You probably ran a restore job directly. All restore jobs must be run using the 
restore command.
8<

:)


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Restoring Incremental Promotes to Full

2023-10-27 Thread Bill Arlofski via Bacula-users

Resending to list. Accidentally directly replied to Chris...



On 10/27/23 14:46, Chris Wilkinson wrote:
Till now I have only ever done restores of full backups using the restore wizard of Baculum. When I did a restore of an 
Incremental backup, I found that it reverted to restoring the last full backup. I get exactly the same behaviour using  bconsole restore.


When you do a restore in bconsole, by simply typing `restore`, you are 
presented with a menu of options.

Most people will want option 5 or 6, sometimes 12.

When you choose 5 or 6, Bacula will build the virtual restore directory tree from the last Full + all Incs up to the latest 
Inc (or from the last Full and the last Diff and then any Incs that followed)


Bacula will even tell you what jobids, their levels, and the volumes it has 
selected to build the restore tree from:
8<
* restore

...choose option 5
...select Client
...select Fileset
+--------+-------+-----------+-----------------+---------------------+--------------+
| jobid  | level | jobfiles  | jobbytes        | starttime           | volumename   |
+--------+-------+-----------+-----------------+---------------------+--------------+
| 57,311 | F     | 1,126,917 | 246,459,324,137 | 2023-09-30 23:00:00 | c0_0008_0035 |
| 57,311 | F     | 1,126,917 | 246,459,324,137 | 2023-09-30 23:00:00 | c0_0008_0040 |
| 57,311 | F     | 1,126,917 | 246,459,324,137 | 2023-09-30 23:00:00 | c0_0008_0060 |
| 57,311 | F     | 1,126,917 | 246,459,324,137 | 2023-09-30 23:00:00 | c0_0008_0056 |
| 57,311 | F     | 1,126,917 | 246,459,324,137 | 2023-09-30 23:00:00 | c0_0008_0068 |
| 57,311 | F     | 1,126,917 | 246,459,324,137 | 2023-09-30 23:00:00 | c0_0008_0064 |
[...snip ...]
| 57,336 | I     | 2,408     | 6,799,759,717   | 2023-10-01 23:00:01 | c0_0008_0042 |
| 57,360 | I     | 4,419     | 12,524,588,799  | 2023-10-02 23:00:00 | c0_0008_0062 |
| 57,360 | I     | 4,419     | 12,524,588,799  | 2023-10-02 23:00:00 | c0_0008_0042 |
| 57,389 | I     | 5,654     | 22,617,335,979  | 2023-10-03 23:00:00 | c0_0008_0049 |
| 57,389 | I     | 5,654     | 22,617,335,979  | 2023-10-03 23:00:00 | c0_0008_0012 |
| 57,389 | I     | 5,654     | 22,617,335,979  | 2023-10-03 23:00:00 | c0_0008_0037 |
| 57,416 | I     | 7,835     | 11,500,849,933  | 2023-10-04 23:00:01 | c0_0005_0035 |
| 57,416 | I     | 7,835     | 11,500,849,933  | 2023-10-04 23:00:01 | c0_0005_0031 |
| 57,440 | I     | 343,892   | 29,334,262,930  | 2023-10-24 10:16:33 | c0_0005_0061 |
| 57,440 | I     | 343,892   | 29,334,262,930  | 2023-10-24 10:16:33 | c0_0005_0035 |
| 57,440 | I     | 343,892   | 29,334,262,930  | 2023-10-24 10:16:33 | c0_0005_0039 |
| 57,440 | I     | 343,892   | 29,334,262,930  | 2023-10-24 10:16:33 | c0_0005_0069 |
| 57,446 | I     | 5,753     | 11,490,954,183  | 2023-10-24 23:00:00 | c0_0005_0023 |
| 57,446 | I     | 5,753     | 11,490,954,183  | 2023-10-24 23:00:00 | c0_0005_0024 |
| 57,477 | I     | 6,270     | 22,715,024,368  | 2023-10-25 23:00:01 | c0_0005_0051 |
| 57,477 | I     | 6,270     | 22,715,024,368  | 2023-10-25 23:00:01 | c0_0005_0015 |
| 57,477 | I     | 6,270     | 22,715,024,368  | 2023-10-25 23:00:01 | c0_0005_0033 |
+--------+-------+-----------+-----------------+---------------------+--------------+
You have selected the following JobIds: 57311,57336,57360,57389,57416,57440,57446,57477

Building directory tree for JobId(s) 57311,57336,57360,57389,57416,57440,57446,57477 ...
8<

IF, on the other hand, you just want to restore some files that were backed up in a specific inc or Diff, simply specify the 
jobid on the restore command line:

8<
* restore jobid=57477
You have selected the following JobId: 57477

Building directory tree for JobId(s) 57477 ...  
+++
4,452 files inserted into the tree.
8<



This is the restore job resource.

Job {
   Name = "Restore"
   Description = "Restore template"
   Type = "Restore"
   Level = "Full"
   Messages = "Standard"
   Storage = "dns-325-sd"
   Pool = "usb16tb-full"
   Client = "usb16tb-fd"
   Fileset = "usb16tb"
   Schedule = "sched_none"
}

I had thought that most of these directives are not actually used as is but would be overridden by the wizard, i.e. the 
values here are required but unimportant.


I have no idea what Baculum does, Marcin can answer you definitively, but I bet there is a way to "restore files only from 
selected jobid" or similar option.



I also thought that only one 'dummy' restore job is needed that would be populated by the appropriate level, pool etc. from 
the job/level being restored, but that isn't what actually happens. It looks like the restore is taking its level from the 
restore job above.


There is no such thing as a 'level' when doing a restore. The Bacula config parser requires certain things to be in each Job 
resource, and a `Type = Restore` Job resource is a Job just like any other.


The pool, client, fileset, and storage are ignored as the
