Re: [Bacula-users] Aggressively prune (and truncate) file volumes on Progressive backups...

2024-10-02 Thread Bill Arlofski via Bacula-users

On 10/2/24 10:00 AM, Marco Gaiarin wrote:

Mandi! Gary R. Schmidt
   In chel di` si favelave...


RTFM again, it took me about three goes to understand it - and I've been
doing backups since 9-track tape drives were vertical!


Mmmm... I've found references to scratch pools, but I've never really
understood them; if I look in the docs I find something rather laconic:


https://www.bacula.org/9.4.x-manuals/en/main/Configuring_Director.html#SECTION0020161000

seems there's no whitepaper on scratch pools...


So, RTFM where?! ;-)


Hello Marco,

Typically Scratch pools are not too useful for disk volumes.

The idea behind a Scratch pool is the following:

Let's say you have two pools: "Full", and "Inc"

If you do not have a "ScratchPool" to pull from, and a RecyclePool to send recycled volumes back into, then tapes created in 
the Inc pool (for example) will never be available to be used in the Full pool - even if they are all past their retention 
and have all been pruned and recycled. They are doomed to this Inc pool forever. :)


So, you could have job(s) writing to the Full pool all waiting on media which is technically available (pruned and living in 
the Inc pool), but not available for use by jobs in the Full pool.


So, what you do is set "ScratchPool = ScratchPoolName" and "RecyclePool = ScratchPoolName" in your Full and Inc Pools. Then, 
make sure to enable Recycling (Recycle = yes) in the pools.


Then, when tapes are labeled (using the 'label barcodes' command in bconsole), you specify the ScratchPoolName as the initial 
pool where they will be created and then Bacula will pull from the Scratch pool when a tape is needed, and put it back there 
when it is recycled so that the tape is then free to move between the Full and Inc pool as it becomes available, and as 
necessary by Bacula's needs.
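
For reference, a minimal sketch of such a pool setup (the pool names here are just examples, not from this thread):

Pool {
  Name = ScratchPoolName
  Pool Type = Backup          # the pool only needs to exist; volumes are pulled from and returned to it
}

Pool {
  Name = Full
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  ScratchPool = ScratchPoolName
  RecyclePool = ScratchPoolName
}

# ...and the same ScratchPool/RecyclePool lines in the "Inc" pool.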



I do have a use case for ScratchPool and RecyclePool settings in Disk volume pools... In my case I use Josh Fisher's 
'vchanger', so each of my xTB removable drives can have 10 file volumes in either my Inc or Full offsite pools. Disk volumes 
on each physical/removable xTB disk can freely move as described above, so I never run into a situation where the whole 
physical disk is full of 10GB volumes in the Full pool when Inc pool volumes are needed for a job, and vice versa.



Note: A pool named "Scratch" is treated somewhat specially by Bacula. If no `ScratchPool` is specified in a pool, Bacula will look 
for volumes with the correct MediaType in this pool. But you can name your scratch pools anything you like and never use 
this specific one.


For example, if an SD manages more than one tape library, you can have a set of 
pools for Lib1:
 - Lib1_Scratch, Lib1_Full, Lib1_Inc

And then a set of pools for Lib2:
 - Lib2_Scratch, Lib2_Full, Lib2_Inc


Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com



_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Debug and trace

2024-10-02 Thread Bill Arlofski via Bacula-users

On 10/2/24 8:34 AM, Mehrdad R. wrote:

Hi guys,
So I am trying to trace the backup to verify that files are correctly
scanned and backed up, especially in diff mode. I am testing backing up a
directory on a Windows server (local Windows FD and SD backing up to a
local disk on the server) which contains around 200k files in around
5k directories, and I am not sure if the diff backup is actually scanning
and catching all the changed files which are copied there; the usual
logs and file lists don't show anything.

I came across the SET DEBUG LEVEL command and ran that in bconsole
thinking I would get some info on what is happening in the FD and SD trace
files, but there is nothing in those files.

Does this command still work (it was some old documentation describing
it)? bconsole seemed to accept it, but no results.
Has anyone tried this? Or maybe knows of any other way to see which
files are actually scanned, selected, etc.? Or any other insight into
what is actually happening in the backup process?

Hello Mehrdad,

There is really no need for debugging. You can see what the end results are 
with just a couple bconsole commands.

Run the Full, then make the changes and/or file additions, then run the Diff...

Then, list the files from the Full and Diff:

# echo "list files jobid=" | bconsole | tee /tmp/full.txt

# echo "list files jobid=" | bconsole | tee /tmp/diff.txt

The full step is not really necessary I'd say.

But the diff.txt file list should align with any new or modified directories 
and files that happened after the Full.
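
For example, a quick way to compare the two lists from the shell (the jobids 100 and 101 here are hypothetical placeholders, not from this thread):

# echo "list files jobid=100" | bconsole | sort > /tmp/full.txt
# echo "list files jobid=101" | bconsole | sort > /tmp/diff.txt
# comm -13 /tmp/full.txt /tmp/diff.txt     # entries present only in the Diff listing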


Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com



_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Strategies for backup of millions of files...

2024-10-01 Thread Josip Deanovic via Bacula-users

On 2024-10-01 03:16, Dan Langille wrote:

On Mon, Sep 30, 2024, at 10:48 AM, Marco Gaiarin wrote:

Mandi! Dan Langille
  In chel di` si favelave...

From 
https://dan.langille.org/2023/12/24/backing-up-freebsd-with-bacula-via-zfs-snapshot/ 
:


I'm still doing some experimentation on this, indeed.


It doesn't all run as incrementals. If the list of DATASETS (see 
above URL) does not change, the fileset does not change.


OK, but if the mountpoint changes, then the root path changes, so... how can
Bacula do incrementals, if all the paths are different?!

So.
I can use a script for the list of paths to back up; I can use 'Ignore
FileSet Changes = yes' (but if I use a script I think it is not needed...)
but how can Bacula do incrementals?


Bacula will just do it.  Nothing special required.


This statement might be misleading in this particular case Marco
described.

Bacula will be able to run an incremental backup, but if the mountpoint
changes every time a backup runs, the content of the mountpoint
directory will be fully copied over and over again, thus taking more
space compared to a true incremental backup of the same directory.

As I said in another post, this can be avoided by using the bind-mount
option of mount(8).
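
For example, a minimal sketch (the snapshot path is hypothetical); the FileSet then always points at the same stable path:

# mount --bind /some/dirs-20240926 /backup/current
# ... run the backup of /backup/current ...
# umount /backup/current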


Regards!

--
Josip Deanovic


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Very long retention time

2024-09-28 Thread Josip Deanovic via Bacula-users

On 2024-08-15 09:14, Mehrdad Ravanbod wrote:

Thank you Josip

Appreciate your response. So if I understand correctly, the whole ten
years of backup metadata need not be kept in the DB? Not sure about the
bootstrap file and the rest though; I'm new to Bacula and still trying to
figure it out.



Hello Mehrdad,

I apologize for missing your post and thus responding only now.

You need to keep the metadata in your database only for the most
recent, successful full backup job and all the related backup jobs
(differential and incremental).

For example, if you run a full backup every three months, you don't
need to keep metadata that belongs to older backup jobs.
I would still keep it for at least a year.

In case you remove all the metadata that belongs to a certain very
old job but you still have access to the volumes that hold the actual
data, you can read the data from the volumes and populate the
database with the metadata using the bscan tool.
That would allow you to browse the data the usual way.
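
For example, a minimal bscan invocation might look like this (the volume name, device name and config path are hypothetical; please check the bscan manual for your version before running it against a live catalog):

# bscan -v -s -m -c /opt/bacula/etc/bacula-sd.conf -V Vol-0001 FileStorage

where -s creates the file records in the catalog and -m updates the media records.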

Bacula is an exceptionally powerful backup system and I appreciate
the possibility of recovering the data even in the worst possible
scenarios (without the database and Bacula daemons).

For the archival purposes you mentioned, you could form a special
pool of volumes with different volume defaults and then
copy or migrate the jobs to that pool.
You will have to ensure that volumes in that pool don't get recycled.

So, yes, you don't need to keep 10 years of metadata in your
database but you need to keep in mind that you have to protect
your volumes from getting recycled and you need to keep at least
the last successful full backup job in your catalog (database) in
order for incremental backup to work.

--
Josip Deanovic


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Remote Client Security Alerts

2024-09-27 Thread Josip Deanovic via Bacula-users

On 2024-09-17 13:57, Dragan Milivojević wrote:

AFAIK there is no such feature on the FD side but I might be wrong.


If the option "--with-tcp-wrappers" is used when building the Bacula
daemons, they will make use of the libwrap library.

I recommend using a local firewall rather than the tcp wrappers mechanism,
which is best described as a host-based networking ACL system.

Some major Linux distributions dropped support for tcp wrappers completely
a few years ago.

A lot of people never learned about tcp wrappers and those who did,
often used them incorrectly.
In that light, I would say: good riddance.


On Tue, 17 Sept 2024 at 13:52, Chris Wilkinson  
wrote:


Is that something that can be done in the FD or is it a job for 
iptables?


-Chris Wilkinson

On Tue, 17 Sep 2024, 12:48 Dragan Milivojević,  
wrote:


These are just automated scans. I would not run a FD open to the 
world.

Block anything but the DIR and SD from contacting the FD?

On Tue, 17 Sept 2024 at 12:43, Chris Wilkinson 
 wrote:

>
> I keep getting security alerts from a remote client backup. The backups 
always run to success. The IPs that are listed in the job log are different every 
time and in various locations including some in Russia but also in London and 
European data centres. There are no entries at all in the remote client bacula 
log. This only happens with remote client backups, never with local client backups.
>



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


--
Josip Deanovic


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] binaries for Ubuntu 24.04

2024-09-27 Thread Eric Bollengier via Bacula-users

Hello Gilles,

On 9/19/24 11:38, Gilles Van Vlasselaer wrote:

Any updates on this?


We have uploaded binaries to the web server, tests and feedback
would be welcome.

Thanks,
Best Regards,
Eric


Thanks in advance,
Gilles

On 5/31/24 19:38, Bill Arlofski via Bacula-users wrote:

On 5/31/24 9:59 AM, d...@bornfree.org wrote:


Currently there are no "Ubuntu 24.04 LTS (Noble Numbat)" repositories
for Bacula CE versions 15.0.2 and 13.0.4.  Will there be?



Yes. The builds for Bacula Enterprise packages for this very new platform are 
currently going through testing. Community packages should follow soon. I 
cannot give an ETA, of course. :)



Best regards,
Bill



_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users



_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users




_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Strategies for backup of millions of files...

2024-09-27 Thread Josip Deanovic via Bacula-users

On 2024-09-26 17:01, Dragan Milivojević wrote:

For example, while browsing/restoring a backup of a specific date, you
would get the state of a backed-up directory as it was during the execution
of a selected backup job.
I would recommend the "accurate" option in this case.


Be careful with the accurate option: if you use the recommended pino5
option, it breaks backups.


Hello Dragan,

Could you please elaborate on that?
Are you referring to some bug, or to the 'o' option that appeared in Bacula
version 13 and would result in saving only the metadata in case the content
of a file hasn't been changed? I don't see the problem with that, provided
that it works as documented.


Regards!

--
Josip Deanovic


_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Strategies for backup of millions of files...

2024-09-26 Thread Josip Deanovic via Bacula-users

On 2024-09-26 13:02, Marco Gaiarin wrote:

Hello Marco!

Or is it still the 'fileset change' that is the trigger, so scripts can do
all the dumbest things, but Bacula keeps going with incrementals?
Also, if my script mounts on '/some/dirs-20240926' and passes this as a backup
dir for a full, but tomorrow the script mounts '/some/dirs-20240927' and calls
for an incremental on that, I think it will still make a new full...


Unless the "Ignore FileSet Change" option is used, Bacula will compare 
the MD5
checksum of the Include/Exclude contents of the FileSet and decide 
whether a

Full backup is needed or not.
I don't recommend using this option and the Bacula documentation 
strongly
recommends against it as well because it bears a risk of having 
incomplete

backup.
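
For reference, a minimal sketch of where that directive sits in a FileSet (the name and path are hypothetical):

FileSet {
  Name = "SnapshotSet"              # hypothetical
  Ignore FileSet Changes = yes      # the directive discussed above; not recommended
  Include {
    File = /some/dirs               # example path
  }
}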

Seems to me there's no (easy) escape, and I need to mount snapshots on the
same mountpoint to have working incrementals...


Consistency is always desired. :-)

In your case, you are trying to devise versioning by creating new directories
named by the date, but it might be unnecessary as Bacula already keeps that
information for you.

For example, while browsing/restoring a backup of a specific date, you would
get the state of a backed-up directory as it was during the execution of a
selected backup job.
I would recommend the "accurate" option in this case.

If you cannot avoid creating the directories based on the date, you might
help yourself by using the bind mount option of mount(8).


Regards

--
Josip Deanovic


_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula 15.0.2 - Windows Clients name-fd.trace File Grows Extremely Fast

2024-09-25 Thread Eric Bollengier via Bacula-users

Hello,

Thanks for the report. I have uploaded a new version of the
binary to bacula.org without the trace problem. It's called
15.0.2-3. The previous one is no longer on the main page.

Best Regards,
Eric

On 9/21/24 00:15, Arno Lehmann wrote:

Hi Derek,

Am 20.09.2024 um 20:37 schrieb Derek Bable:

Hi All,

I am having a strange issue regarding Bacula 15.0.2 on Windows Clients. 
Clients were installed using the Bacula 15.0.2 Binaries from bacula.org.


We noticed that the C:\Program Files\Bacula\working\host-fd.trace file grows 
extremely fast every time a backup occurs. Here is an example of what events 
we see. These events seem like they are occurring with every directory 
specified to include in our fileset for each client:

...> I’ve already tried disabling debug logging / tracing on the QB client
above (opened bconsole on server and ran ‘setdebug level=0 trace=0 
client=QB’), but I still continue to see thousands of these entries display in 
the trace log when a backup is run.


If I recall correctly, we had something very similar with the enterprise edition 
recently. The fix needed a rebuild. So for now, I would propose downgrading to 
an older bacula-fd on Windows (unless you need new functions / fixes, of course) 
and report this via the bug tracker.


With a bit of luck, I'll recall this on Monday and see if I can reach out to 
Eric :-)


Cheers,

Arno





_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Remote Client Security Alerts

2024-09-17 Thread Bill Arlofski via Bacula-users

On 9/17/24 4:41 AM, Chris Wilkinson wrote:
I keep getting security alerts from a remote client backup. The backups always run to success. The IPs that are listed in the 
job log are different every time and in various locations including some in Russia but also in London and European data 
centres. There are no entries at all in the remote client bacula log. This only happens with remote client backups, never 
with local client backups.


It's not clear to me whether these alerts are coming from the DIR or being sent 
to the Director by the client.

I'm not sure whether to just ignore these or take some steps to block them. Is there an FD directive that would reject these 
perhaps?


Any advice welcomed.

Thanks

-Chris Wilkinson


Hello Chris,

Since this FD "nuc2" is (obviously) exposed to the Internet, I would enable the firewall on it, and only allow connections in 
from the Director on port 9102/TCP (default).
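
For example, a minimal iptables sketch (the Director address 203.0.113.10 is a placeholder; your distribution may use nftables, firewalld or ufw instead):

# iptables -A INPUT -p tcp -s 203.0.113.10 --dport 9102 -j ACCEPT
# iptables -A INPUT -p tcp --dport 9102 -j DROP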


Best/safest way IMHO.



Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Automatically cancel backup jobs --> do not mark as fatal

2024-09-11 Thread Bill Arlofski via Bacula-users

On 9/11/24 1:50 AM, Bruno Bartels (Intero Technologies) wrote:
>

Hi Bill,
thank you very much for your great answer!
I am going to implement this soon and then get back to you.
Thank you again for that valuable hint!
Bruno


Hello Bruno,

You are welcome.

I have worked with this new feature when it was first introduced, but it has been a while since I touched it so I look 
forward to your results. :)



Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] bacula setup hangs, need help debugging

2024-09-11 Thread Bill Arlofski via Bacula-users

On 9/11/24 1:55 AM, Simon Flutura wrote:

Hi,


I inherited a bacula setup, backing several machines up on tape.

Sadly the setup hangs, no network/cpu/io activity while jobs are running
forever.

We are running Bacula   11.0.6.

Do you have any clue where to start in debugging the setup?


Best,


Simon


Hello Simon,

Most likely Bacula is waiting for something (media is my first guess).

In bconsole, a "status director" will be the first place to start.


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Automatically cancel backup jobs --> do not mark as fatal

2024-09-11 Thread Bruno Bartels (Intero Technologies) via Bacula-users

Hi Bill, 
  
thank you very much for your great answer!  
  
I am going to implement this soon and then get back to you. 
  
Thank you again for that valuable hint! 
  
Bruno 
 

-Ursprüngliche Nachricht-

Von: Bill 
An: bacula-users 
Datum: Mittwoch, 11. September 2024 09:46 CEST
Betreff: Re: [Bacula-users] Automatically cancel backup jobs --> do not mark as 
fatal

On 9/10/24 5:21 AM, Bruno Bartels \(Intero Technologies\) via Bacula-users 
wrote: 
> Hi alltogether, 
> I have adjusted Bacula to cancel jobs that are duplicates of one job 
> (disable "AllowDuplicateJobs", or 
> "DisableDuplicateJobs" AND "CancelQueuedDuplicates", which should be the same 
> in my understanding). 
> The reason for this is, that there are running some really big full backups, 
> that take few days to finish, and meanwhile 
> there is the possibility, that the same job can start in a lower level 
> (incremental/differential) 
> This works fine, except one problem: 
> When the new job gets canceled there is this error thrown in the logs: 
> 
> Fatal error: JobId XXX already running. Duplicate job not allowed. 
> 
> And Bacula is sending out a mail. 
> 
> Question: Is there possibility not log this as FATAL ERROR? 
> 
> I want to receive mails concerning fatal errors, so setting the message 
> ressources isn't a option. 
> 
> Can you please help? 
> 
> Thank you in advance 

> 
> Bruno 

Hello Bruno, 

You cannot do this the way you are currently trying because as you have seen, 
Bacula will cancel the job, and it will show up 
as a "non good" (canceled) job in the catalog, and you will get the failed job 
email. 

Fortunately, there is a new feature which was added recently that should solve 
this issue for you. 

It is called 'Run Queue Advanced Control' which adds a new "RunsWhen" setting 
for your job Runscripts. The new setting is 
"RunsWhen = queued" 

The idea is that instead of using the AllowDuplicateJobs, DisableDuplicateJobs, 
and CancelQueedDuplicates job options to 
control if a job is allowed to be queued/started, you add a RunScript{} stanza 
to your job, set the RunScript's "Runswhen = 
queued", and have the RunScript's "Command =" setting point to a custom script 
(we have examples). 

The script's return code / errorlevel will determine whether the job enters the 
queue, or is just dropped and forgotten about - 
producing no canceled job, and no job error email. 

Using this new advanced "RunsWhen = queued" level, you should be able to 
accomplish what you are looking to do: 

- Prevent same jobs from being queued when the same job is already running 
- Prevent duplicate jobs from being canceled and error emails from being sent. 


This new feature is available since Bacula Community version 15.0.x, which 
closely tracks Bacula Enterprise v16.0.x. 

It is documented here in the Enterprise manual: 

https://docs.baculasystems.com/BETechnicalReference/Director/DirectorResourceTypes/JobResource/index.html#job-resource
 

The section you are looking for is named: "Notes about the Run Queue Advanced 
Control with RunsWhen=Queued" 

Please make sure you are running Community version v15.0.2 first, then give 
this a try and let me know if this helps. 


Best regards, 
Bill 

-- 
Bill Arlofski 
w...@protonmail.com 

_______ 
Bacula-users mailing list 
Bacula-users@lists.sourceforge.net 
https://lists.sourceforge.net/lists/listinfo/bacula-users   
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Automatically cancel backup jobs --> do not mark as fatal

2024-09-11 Thread Bill Arlofski via Bacula-users

On 9/10/24 5:21 AM, Bruno Bartels \(Intero Technologies\) via Bacula-users 
wrote:

Hi all,
I have adjusted Bacula to cancel jobs that are duplicates of one job (disable "AllowDuplicateJobs", or
"DisableDuplicateJobs" AND "CancelQueuedDuplicates", which should be the same
in my understanding).
The reason for this is that there are some really big full backups running that take a few days to finish, and meanwhile
there is the possibility that the same job can start at a lower level
(incremental/differential).
This works fine, except one problem:
When the new job gets canceled there is this error thrown in the logs:

Fatal error: JobId XXX already running. Duplicate job not allowed.

And Bacula is sending out a mail.

Question: Is there a possibility to not log this as a FATAL ERROR?

I want to receive mails concerning fatal errors, so adjusting the Messages
resources isn't an option.

Can you please help?

Thank you in advance




Bruno


Hello Bruno,

You cannot do this the way you are currently trying because as you have seen, Bacula will cancel the job, and it will show up 
as a "non good" (canceled) job in the catalog, and you will get the failed job email.


Fortunately, there is a new feature which was added recently that should solve 
this issue for you.

It is called 'Run Queue Advanced Control' which adds a new "RunsWhen" setting for your job Runscripts. The new setting is 
"RunsWhen = queued"


The idea is that instead of using the AllowDuplicateJobs, DisableDuplicateJobs, and CancelQueuedDuplicates job options to 
control if a job is allowed to be queued/started, you add a RunScript{} stanza to your job, set the RunScript's "RunsWhen = 
queued", and have the RunScript's "Command =" setting point to a custom script (we have examples).


The script's return code / errorlevel will determine whether the job enters the queue, or is just dropped and forgotten about - 
producing no canceled job, and no job error email.

Using this new advanced "RunsWhen = queued" level, you should be able to 
accomplish what you are looking to do:

- Prevent same jobs from being queued when the same job is already running
- Prevent duplicate jobs from being canceled and error emails from being sent.


This new feature is available since Bacula Community version 15.0.x, which 
closely tracks Bacula Enterprise v16.0.x.

It is documented here in the Enterprise manual:

https://docs.baculasystems.com/BETechnicalReference/Director/DirectorResourceTypes/JobResource/index.html#job-resource

The section you are looking for is named: "Notes about the Run Queue Advanced 
Control with RunsWhen=Queued"

Please make sure you are running Community version v15.0.2 first, then give 
this a try and let me know if this helps.
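
A minimal sketch of what such a job stanza could look like (the script path is hypothetical; "%n" passes the job name to it, and a non-zero exit code drops the job before it is queued):

Job {
  Name = "BigFullJob"               # hypothetical
  ...
  RunScript {
    RunsWhen = Queued
    Command = "/opt/bacula/scripts/skip_if_running.sh %n"
  }
}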


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Automatically cancel backup jobs --> do not mark as fatal

2024-09-10 Thread Bruno Bartels (Intero Technologies) via Bacula-users
 
Hi all,

I have adjusted Bacula to cancel jobs that are duplicates of one job (disable
"AllowDuplicateJobs", or "DisableDuplicateJobs" AND "CancelQueuedDuplicates",
which should be the same in my understanding).
The reason for this is that there are some really big full backups running
that take a few days to finish, and meanwhile there is the possibility that the
same job can start at a lower level (incremental/differential).

This works fine, except for one problem:

When the new job gets canceled, this error is thrown in the logs: Fatal
error: JobId XXX already running. Duplicate job not allowed.

And Bacula is sending out a mail.

Question: Is there a possibility to not log this as a FATAL ERROR?

I want to receive mails concerning fatal errors, so adjusting the Messages
resources isn't an option.

Can you please help? 

Thank you in advance 

Bruno 


 
   
_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Backing up to USB disks - Archival of data?

2024-09-05 Thread Bill Arlofski via Bacula-users

On 9/5/24 6:48 AM, Anders Gustafsson wrote:

Hi!

What is the best or recommended process here? I.e. if we back up to external USB
disks and want to replace the disk every now and then and keep the old one as an
archival copy?

Assuming that we then want to restore a file from an old disk that was
disconnected three months ago, what do we need to do?


Hello Anders,

When using multiple removable disks, I would recommend Josh Fisher's excellent 
"vchanger"

You can find it here: https://sourceforge.net/projects/vchanger/


Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Virtual backup

2024-09-04 Thread David Waller via Bacula-users
Thanks for your help. I had not configured an autochanger, hence my problem. I
will try this out now. Again, thanks for pointing me in the right direction.

David Waller
david.wall...@blueyonder.co.uk



> On 4 Sep 2024, at 19:01, Bill Arlofski via Bacula-users 
>  wrote:
> 
> On 9/4/24 11:27 AM, David Waller via Bacula-users wrote:
>> Sorry forgot to add:
>> I am running version 15.02 on Debian and the two pool definitions are as 
>> follows:
>> Pool {
>>   Name = "Air-Full"
>>   PoolType = "Backup"
>>   LabelFormat = "Air-"
>>   ActionOnPurge = "Truncate"
>>   MaximumVolumeJobs = 1
>>   VolumeRetention = 31536000
>>   NextPool = "Virtual-Air"
>>   Storage = "FreeNAS1"
>>   AutoPrune = yes
>>   Recycle = yes
>> }
>> And
>> Pool {
>>   Name = "Virtual-Air"
>>   PoolType = "Backup"
>>   LabelFormat = "Virtual-"
>>   VolumeRetention = 31536000
>>   Storage = "FreeNAS1"
>> }
>> My understanding is that the job creates the full and incremental backup 
>> volumes in the Pool, Air-Full, by running the job with a level of full. I 
>> can then run the job with a level of VirtualFull and bacula will copy the 
>> full and the various incremental volumes to the pool Virtual-Air and 
>> consolidate into one volume. The various volumes are created in the pool 
>> Air-Full as expected, the failure happens when I run with a level of 
>> VirtualFull.
>> Both pools are on the same storage, I have not tested yet if that is the 
>> issue. If it is, and I have to have the two pools on separate storage, what 
>> happens with the media type as my understanding is to use separate media 
>> type if you have different storage devices in which case would bacula get 
>> confused on a restore?
> 
> Hello David,
> 
> 8<
> Status “is waiting on max Storage jobs.”
> 8<
> 
> This means what it says. Somewhere in the "pipeline" (in this case on the 
> Storage) you have reached the limit on the number of concurrent jobs that can 
> run on the defined storage.
> 
> When you run a VirtualFull using the same storage (perfectly fine to do so), 
> it needs a device to read and a device to write, which counts as two jobs to 
> the Storage.
> 
> 
> What is your `MaximumConcurrentJobs` settings for the following:
> 
> - Director Storage resource: `FreeNAS1`
> - The Storage Daemon itself (this is the top level limit of the number of 
> jobs the SD can run concurrently)
> - The Device(s) in the SD. If you only have one device, the 
> MaximumConcurrentJobs will not matter because a device can only read or write 
> during a job, not both.
> 
> Can you post your configurations for:
> 
> - The Director storage resource `FreeNAS1`
> - The SD's Autochanger and devices
> 
> If your Director's storage resource `FreeNAS1` points to a single device on 
> the SD, then you need to instead create an Autochanger on the SD with some 
> number of devices and point the Director's Storage resource `FreeNAS1` at 
> that Autochanger.
> 
> I usually start with 10 devices plus a couple more "ReadOnly" devices so 
> there should always be a device available for reading during critical 
> restores.
> 
> 
> Best regards,
> Bill
> 
> --
> Bill Arlofski
> w...@protonmail.com <mailto:w...@protonmail.com>
> 
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net <mailto:Bacula-users@lists.sourceforge.net>
> https://lists.sourceforge.net/lists/listinfo/bacula-users

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Virtual backup

2024-09-04 Thread Bill Arlofski via Bacula-users

On 9/4/24 11:27 AM, David Waller via Bacula-users wrote:

Sorry forgot to add:
I am running version 15.02 on Debian and the two pool definitions are as 
follows:

Pool {
   Name = "Air-Full"
   PoolType = "Backup"
   LabelFormat = "Air-"
   ActionOnPurge = "Truncate"
   MaximumVolumeJobs = 1
   VolumeRetention = 31536000
   NextPool = "Virtual-Air"
   Storage = "FreeNAS1"
   AutoPrune = yes
   Recycle = yes
}

And

Pool {
   Name = "Virtual-Air"
   PoolType = "Backup"
   LabelFormat = "Virtual-"
   VolumeRetention = 31536000
   Storage = "FreeNAS1"
}


My understanding is that the job creates the full and incremental backup volumes in the Pool, Air-Full, by running the job 
with a level of full. I can then run the job with a level of VirtualFull and bacula will copy the full and the various 
incremental volumes to the pool Virtual-Air and consolidate into one volume. The various volumes are created in the pool 
Air-Full as expected, the failure happens when I run with a level of VirtualFull.


Both pools are on the same storage, I have not tested yet if that is the issue. If it is, and I have to have the two pools on 
separate storage, what happens with the media type as my understanding is to use separate media type if you have different 
storage devices in which case would bacula get confused on a restore?


Hello David,

8<
Status “is waiting on max Storage jobs.”
8<

This means what it says. Somewhere in the "pipeline" (in this case on the Storage) you have reached the limit on the number 
of concurrent jobs that can run on the defined storage.


When you run a VirtualFull using the same storage (perfectly fine to do so), it needs a device to read and a device to write, 
which counts as two jobs to the Storage.



What is your `MaximumConcurrentJobs` settings for the following:

- Director Storage resource: `FreeNAS1`
- The Storage Daemon itself (this is the top level limit of the number of jobs 
the SD can run concurrently)
- The Device(s) in the SD. If you only have one device, the MaximumConcurrentJobs will not matter because a device can only 
read or write during a job, not both.


Can you post your configurations for:

- The Director storage resource `FreeNAS1`
- The SD's Autochanger and devices

If your Director's storage resource `FreeNAS1` points to a single device on the SD, then you need to instead create an 
Autochanger on the SD with some number of devices and point the Director's Storage resource `FreeNAS1` at that Autochanger.


I usually start with 10 devices plus a couple more "ReadOnly" devices so there should always be a device available for 
reading during critical restores.
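
For illustration, a minimal sketch of a disk "Autochanger" in the SD, similar in spirit to the default sample configuration (the names and archive path are hypothetical):

Autochanger {
  Name = "FreeNAS1-Changer"
  Device = FileDev1, FileDev2       # add as many devices as you want concurrent jobs
  Changer Command = ""
  Changer Device = /dev/null
}

Device {
  Name = FileDev1
  Media Type = File1
  Device Type = File
  Archive Device = /mnt/freenas1/bacula
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  Maximum Concurrent Jobs = 1
}

# ...FileDev2 defined the same way, and the Director's Storage resource
# "FreeNAS1" pointing at "FreeNAS1-Changer" with Autochanger = yes.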



Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Virtual backup

2024-09-04 Thread David Waller via Bacula-users
Sorry forgot to add:
I am running version 15.02 on Debian and the two pool definitions are as 
follows:

Pool {
  Name = "Air-Full"
  PoolType = "Backup"
  LabelFormat = "Air-"
  ActionOnPurge = "Truncate"
  MaximumVolumeJobs = 1
  VolumeRetention = 31536000
  NextPool = "Virtual-Air"
  Storage = "FreeNAS1"
  AutoPrune = yes
  Recycle = yes
}

And 

Pool {
  Name = "Virtual-Air"
  PoolType = "Backup"
  LabelFormat = "Virtual-"
  VolumeRetention = 31536000
  Storage = "FreeNAS1"
}

My understanding is that the job creates the full and incremental backup 
volumes in the Pool, Air-Full, by running the job with a level of full. I can 
then run the job with a level of VirtualFull and bacula will copy the full and 
the various incremental volumes to the pool Virtual-Air and consolidate into 
one volume. The various volumes are created in the pool Air-Full as expected, 
the failure happens when I run with a level of VirtualFull.

Both pools are on the same storage, I have not tested yet if that is the issue. 
If it is, and I have to have the two pools on separate storage, what happens 
with the media type as my understanding is to use separate media type if you 
have different storage devices in which case would bacula get confused on a 
restore?
David Waller
david.wall...@blueyonder.co.uk



> On 4 Sep 2024, at 16:41, David Waller via Bacula-users 
>  wrote:
> 
> Hello,
> 
> I am trying to get the Progressive virtual backup to work. However I get the 
> following messages when I run status:
> 
> Status “is waiting on max Storage jobs.”
> 
> I created the job via the wizard on bacularis and it is shown below:
> 
> Job {
>   Name = "Virtual"
>   Type = "Backup"
>   Level = "VirtualFull"
>   Messages = "Standard"
>   Storage = "FreeNAS1"
>   Pool = "Air-Full"
>   Client = "MacAir-fd"
>   Fileset = "Mac Air"
>   WriteBootstrap = "/opt/bacula/working/client.bsr"
>   Priority = 10
>   BackupsToKeep = 3
>   DeleteConsolidatedJobs = yes
> }
> 
> And List Volumes on Air-Full shows 9 volumes
> +-++---+-++--+--+-+--+---+---+-+--+-++
> | mediaid | volumename | volstatus | enabled | volbytes   | volfiles | 
> volretention | recycle | slot | inchanger | mediatype | voltype | volparts | 
> lastwritten | expiresin  |
> +-++---+-++--+--+-+--+---+---+-+--+-++
> |   9 | Air-0009   | Used  |   1 | 79,063,130,495 |   18 |   
> 31,536,000 |   1 |0 | 0 | File1 |   1 |0 | 
> 2024-08-29 10:37:54 | 30,995,885 |
> |  14 | Air-0014   | Used  |   1 |106,514,046 |0 |   
> 31,536,000 |   1 |0 | 0 | File1 |   1 |1 | 
> 2024-08-29 19:33:25 | 31,028,016 |
> |  16 | Air-0016   | Used  |   1 | 10,858,674 |0 |   
> 31,536,000 |   1 |0 | 0 | File1 |   1 |0 | 
> 2024-08-30 00:06:34 | 31,044,405 |
> |  17 | Air-0017   | Used  |   1 |120,521,774 |0 |   
> 31,536,000 |   1 |0 | 0 | File1 |   1 |0 | 
> 2024-08-30 15:01:08 | 31,098,079 |
> |  18 | Air-0018   | Used  |   1 |486,985 |0 |   
> 31,536,000 |   1 |0 | 0 | File1 |   1 |0 | 
> 2024-08-30 15:13:37 | 31,098,828 |
> |  20 | Air-0020   | Used  |   1 |  1,653,816 |0 |   
> 31,536,000 |   1 |0 | 0 | File1 |   1 |0 | 
> 2024-08-31 19:17:53 | 31,199,884 |
> |  22 | Air-0022   | Used  |   1 |  3,196,647 |0 |   
> 31,536,000 |   1 |0 | 0 | File1 |   1 |0 | 
> 2024-09-01 17:46:21 | 31,280,792 |
> |  26 | Air-0026   | Used  |   1 |  2,535,187 |0 |   
> 31,536,000 |   1 |0 | 0 | File1 |   1 |0 | 
> 2024-09-02 22:47:12 | 31,385,243 |
> |  28 | Air-0028   | Used  |   1 |  5,134,330 |0 |   
> 31,536,000 |   1 |0 | 0 | File1 |   1 |0 | 
> 2024-09-03 22:24:22 | 31,470,273 |
> +-----++---+-++--+------+-----+--+---+---+-+------+-----+----+
> 
> 
> Any ideas where I am going wrong. Thanks
> 
> David Waller
> david.wall...@blueyonder.co.uk
> 
> 
> 
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Virtual backup

2024-09-04 Thread David Waller via Bacula-users
Hello,

I am trying to get the Progressive virtual backup to work. However I get the 
following messages when I run status:

Status “is waiting on max Storage jobs.”

I created the job via the wizard on bacularis and it is shown below:

Job {
  Name = "Virtual"
  Type = "Backup"
  Level = "VirtualFull"
  Messages = "Standard"
  Storage = "FreeNAS1"
  Pool = "Air-Full"
  Client = "MacAir-fd"
  Fileset = "Mac Air"
  WriteBootstrap = "/opt/bacula/working/client.bsr"
  Priority = 10
  BackupsToKeep = 3
  DeleteConsolidatedJobs = yes
}

And List Volumes on Air-Full shows 9 volumes
+---------+------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------+----------+---------------------+------------+
| mediaid | volumename | volstatus | enabled | volbytes       | volfiles | volretention | recycle | slot | inchanger | mediatype | voltype | volparts | lastwritten         | expiresin  |
+---------+------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------+----------+---------------------+------------+
|       9 | Air-0009   | Used      |       1 | 79,063,130,495 |       18 |   31,536,000 |       1 |    0 |         0 | File1     |       1 |        0 | 2024-08-29 10:37:54 | 30,995,885 |
|      14 | Air-0014   | Used      |       1 |    106,514,046 |        0 |   31,536,000 |       1 |    0 |         0 | File1     |       1 |        1 | 2024-08-29 19:33:25 | 31,028,016 |
|      16 | Air-0016   | Used      |       1 |     10,858,674 |        0 |   31,536,000 |       1 |    0 |         0 | File1     |       1 |        0 | 2024-08-30 00:06:34 | 31,044,405 |
|      17 | Air-0017   | Used      |       1 |    120,521,774 |        0 |   31,536,000 |       1 |    0 |         0 | File1     |       1 |        0 | 2024-08-30 15:01:08 | 31,098,079 |
|      18 | Air-0018   | Used      |       1 |        486,985 |        0 |   31,536,000 |       1 |    0 |         0 | File1     |       1 |        0 | 2024-08-30 15:13:37 | 31,098,828 |
|      20 | Air-0020   | Used      |       1 |      1,653,816 |        0 |   31,536,000 |       1 |    0 |         0 | File1     |       1 |        0 | 2024-08-31 19:17:53 | 31,199,884 |
|      22 | Air-0022   | Used      |       1 |      3,196,647 |        0 |   31,536,000 |       1 |    0 |         0 | File1     |       1 |        0 | 2024-09-01 17:46:21 | 31,280,792 |
|      26 | Air-0026   | Used      |       1 |      2,535,187 |        0 |   31,536,000 |       1 |    0 |         0 | File1     |       1 |        0 | 2024-09-02 22:47:12 | 31,385,243 |
|      28 | Air-0028   | Used      |       1 |      5,134,330 |        0 |   31,536,000 |       1 |    0 |         0 | File1     |       1 |        0 | 2024-09-03 22:24:22 | 31,470,273 |
+---------+------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------+----------+---------------------+------------+


Any ideas where I am going wrong? Thanks.

David Waller
david.wall...@blueyonder.co.uk





___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] full and diff backups in 2 different jobs

2024-09-03 Thread Bill Arlofski via Bacula-users

On 9/3/24 4:24 AM, Mehrdad R. wrote:

Hi,
Mostly to see if it is possible; I am fully aware that it can be done in one
job,


Hello Mehrdad, this is in fact the only way Bacula works. :)

Bacula considers one Job name, one Client, and one Fileset a unit. Change any one of these and Bacula sees an entirely new 
unit requiring a Full backup before an incremental or a differential can be run.




was just wondering if there was a way to have them in
separate jobs, maybe save them on separate disks eventually.
The jobs that I have set up fail in that respect as I mentioned, and
the diff job does not see the full job.


What you would/could do in this case is one of a couple different things:

- Use the "FullBackupPool", "IncrementalBackupPool", and the "DifferentialBackupPool" settings in a Job so that each of these 
levels of Backups may be directed to different storage locations.


OR

- Set the "Level", "Storage", and "Pool" in each of your your schedule's "Run" l
ines.


Using the first method, you will need to set the "Storage" in each of these pools so that the correct disk location is always 
used regardless of the Pool selected.
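
A minimal sketch of the first method (all names here are hypothetical):

Job {
  Name = "MyBackup"
  ...
  Pool = "Default"
  FullBackupPool = "Full-Pool"
  DifferentialBackupPool = "Diff-Pool"
  IncrementalBackupPool = "Inc-Pool"
}

# Each of Full-Pool, Diff-Pool and Inc-Pool then sets its own "Storage = ..." directive.

And a sketch of the second method, overriding per Run line in the Schedule:

Schedule {
  Name = "WeeklyCycle"
  Run = Level=Full Pool=Full-Pool Storage=Disk1 1st sun at 23:05
  Run = Level=Differential Pool=Diff-Pool Storage=Disk2 2nd-5th sun at 23:05
  Run = Level=Incremental Pool=Inc-Pool Storage=Disk2 mon-sat at 23:05
}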



Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Backing up on a nework share

2024-09-03 Thread Josh Fisher via Bacula-users
Make sure that the user that bacula-sd runs as has read/write permission 
for the mounted share.
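
For what it's worth, a minimal SD Device sketch for such a share (the server/share names are hypothetical; note that drive letters mapped in a user session are usually not visible to a service, so a UNC path plus a service account that has access to the share is the more common approach):

Device {
  Name = NetShareDev
  Media Type = File
  Device Type = File
  Archive Device = "\\\\fileserver\\backups\\bacula"   # backslashes doubled, since backslash acts as an escape character in quoted strings
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}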


On 9/3/24 06:51, Mehrdad Ravanbod wrote:

Hi guys

OK, so I am testing Bacula for backing up Windows machines, using a
local SD (meaning that the SD daemon is running on the Windows machine
to be backed up). Doing the backups to a local disk is not a problem,
but so far I have been unable to do backups to a network share, and I
am not sure what I am doing wrong.


What I have is a network share which can be opened and written to from
the Windows machine. I have even tried mounting it as a drive (Z:\) in
Windows, and I have tried the following different variants in the SD conf,
but the job fails:


Archive Device = "Z:\"

Archive Device = "Z:\\"

Archive Device = "Z:/"

If I set the Archive Device =  the job executes, otherwise
not. How do I do this??




Mehrdad Ravanbod System administrator



_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Library and clean cartdrige...

2024-08-27 Thread Bill Arlofski via Bacula-users

Oh! Just one more note...

I wrote:
8<
*disable storage=mhvtl-waa_a-Autochanger drive=0
8<

I meant to mention that the '0' in the 'drive=0' command line option is the SD's Drive Device's "DriveIndex" number, so of 
course it might be 1, 2 or whatever. :)


Drives are zero indexed, while slots are one indexed.

Except in all Tape Library web GUIs I have seen, where drives are also one 
indexed.

This makes life really fun for a backup admin. 



Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Library and clean cartdrige...

2024-08-27 Thread Bill Arlofski via Bacula-users

On 8/27/24 8:29 AM, Marco Gaiarin wrote:

Mandi! Bill Arlofski via Bacula-users
   In chel di` si favelave...


You cannot tell Bacula to load a cleaning tape without experiencing
these kinds of errors, because when the SD is told to
load a tape from a slot into a drive, the 'mtx-changer' script calls the 'mtx' 
utility to load, then, once that returns OK,
the script calls `mt -f /tape/nodeid status` over and over (with time/iteration 
limits and some sleep time between each call)
until it sees a "ONLINE" in the `mt status` output (in the case of a Linux 
distribution).

In the case of a cleaning tape, this "ONLINE" will never appear, and the 
mtx-changer script will always time out and fail,
then return with errorlevel 1, and the SD will complain exactly as you have 
demonstrated above.


I supposed that. Super clear! And thanks to all!


Welcome!  :)



You have two choices for cleaning tape drives with Bacula:
- Manual: Issue a disable command to the drive in bconsole, then manually 
load/unload a cleaning tape, then re-enable the drive.


Only a note: you meant 'disable job(s)', right? 'disable [command to] the
drive' seems not possible in bconsole, or i'm missing something...

Nope... You want to disable the drive in bconsole.

Then, the drive will not be used for any jobs that start on the SD - this is nice when you have library with several drives 
because other jobs can continue to be started and run on the other drives while this one is disabled:

8<
*disable storage=mhvtl-waa_a-Autochanger drive=0
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
3002 Device ""mhvtl-waa_a-Autochanger_Dev0" (/dev/tape/by-id/scsi-350223344ab001700-nst)" disabled.
8<


And a status storage=mhvtl-waa_a-Autochanger now shows the device is currently 
disabled:
8<
Device Tape: "mhvtl-waa_a-Autochanger_Dev0" 
(/dev/tape/by-id/scsi-350223344ab001700-nst) is not open.
Device is disabled. User command.
Drive 0 is not loaded.
8<


Hope this helps!
Bill

--
Bill Arlofski
w...@protonmail.com



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Library and clean cartdrige...

2024-08-27 Thread Bill Arlofski via Bacula-users

On 8/27/24 1:44 AM, Arno Lehmann wrote:

Hi all,

just one tiny addition -- you may want to change the "Maximum Changer
Wait" time if you want to use tape cleaning with Bill's script, because
the additional activity naturally needs some time. And having jobs that
already used a lot of time and tape capacity fail just because the tape
drive needed cleaning in between is rather inconvenient.


Thanks for the additional comment, Arno.

I figured my email was too long-winded already, but this reminds me that I probably should add this detail to the notes at 
the top of the script and the documentation. :)



Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Library and clean cartdrige...

2024-08-26 Thread Bill Arlofski via Bacula-users

On 8/26/24 8:20 AM, Marco Gaiarin wrote:


I've set up the 'Cleaning Prefix' in my pool(s) and added the cleaning tape to
the library; the situation now is:

  *update slots storage=CNPVE3Autochanger
  Connecting to Storage daemon CNPVE3Autochanger at cnpve3.cn.lnf.it:9103 ...
  3306 Issuing autochanger "slots" command.
  Device "Autochanger" has 8 slots.
  Connecting to Storage daemon CNPVE3Autochanger at cnpve3.cn.lnf.it:9103 ...
  3306 Issuing autochanger "list" command.
  Catalog record for Volume "AAJ666L9" is up to date.
  Catalog record for Volume "AAJ667L9" is up to date.
  Volume "CLN001L9" not found in catalog. Slot=8 InChanger set to zero.

I've tried to mount cleaning cartdrige but:

  *mount storage=CNPVE3Autochanger slot=8
  3304 Issuing autochanger "load Volume , Slot 8, Drive 0" command.
  3305 Autochanger "load Volume , Slot 8, Drive 0", status is OK.
  3901 Unable to open device ""LTO9Storage0" (/dev/nst0)": ERR=tape_dev.c:170 Unable to open device "LTO9Storage0" (/dev/nst0): ERR=Input/output error


This also seems pretty logical: Bacula mounts the cartridge and (tries to) read
it, but this is a cleaning one...
Anyway, I waited for the command to end, then I also tried:

  *unmount storage=CNPVE3Autochanger
  3307 Issuing autochanger "unload Volume *Unknown*, Slot 8, Drive 0" command.
  3901 Device ""LTO9Storage0" (/dev/nst0)" is already unmounted.

After that, anyway, the cleaning cartridge was back in slot 8 (probably the
previous mount failed early and the cleaning tape was just on its way back to
slot 8).


I have the suspicion that I'm doing something wrong, e.g. that I should not use the '(u)mount
storage' command from the Bacula console to mount the cleaning cartridge, but instead use
a direct library command via the mtx-changer script, for example.


Someone have some clue? Thanks.


Hello Marco,

You cannot tell Bacula to load a cleaning tape without experiencing
these kinds of errors, because when the SD is told to 
load a tape from a slot into a drive, the 'mtx-changer' script calls the 'mtx' utility to load, then, once that returns OK, 
the script calls `mt -f /tape/nodeid status` over and over (with time/iteration limits and some sleep time between each call) 
until it sees a "ONLINE" in the `mt status` output (in the case of a Linux distribution).


In the case of a cleaning tape, this "ONLINE" will never appear, and the mtx-changer script will always time out and fail, 
then return with errorlevel 1, and the SD will complain exactly as you have demonstrated above.



You have two choices for cleaning tape drives with Bacula:

- Manual: Issue a disable command to the drive in bconsole, then manually 
load/unload a cleaning tape, then re-enable the drive.

- Try my mtx-changer drop-in replacement script: 
`https://github.com/waa/mtx-changer-python`

This script does much nicer logging of all activities (if logging is enabled), and it can detect when a drive needs to be 
cleaned (if enabled), and it can automatically load a cleaning tape, wait, then unload it, then return control to the SD 
(also if enabled).


I know that a few people are using this script in production environments (some quite large), and I also use in our lab and 
production environments, but I have not gotten too much (or any) feedback about this script yet, so one more person's eyes on 
it in production would be helpful and welcomed.


In conjunction with this mtx-changer-python.py script, you may also want to check out my `tapealert` script drop-in 
replacement here: https://github.com/waa/bacula-tapealert


This script reports drive and/or tape issues using the tapealert utility and reports them back to the SD, which will log the 
TapeAlert(s) reported by the drive and can disable a drive, or a tape, or both, depending on the errors reported.



Hope this helps!
Bill

--
Bill Arlofski
w...@protonmail.com



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] bacula rescue cdrom source code

2024-08-24 Thread Josip Deanovic via Bacula-users

On 2024-08-24 01:32, Jose Alberto wrote:

rear with bacula?  work??? Test

https://packages.debian.org/bookworm/rear


Hello,

Yes, it works.
I have been using it for several years now, with CentOS 6, CentOS 7, CentOS 8,
Debian 10, Debian 11 and Debian 12.
In my case, the bootable ReaR image is between 180 and 550 MB, depending
on the OS.

I haven't had to use it for real, but all the tests I have performed were
successful.

It can restore the correct partition table with file systems and will boot
with the same IP address as the server that was used to create the image.

I managed to configure it in a way that it can add the Bacula client and all
the needed libraries to the image.
The ReaR configuration to achieve that is quite complicated, and there were
some limitations that I had to work around (without modifying the source).

I got the impression that it is not an easy task for the ReaR developers to
maintain the code. It looks like bugs are easily introduced and hard to trace.

I have a Bacula job that builds a fresh ReaR image every month for all the
servers, and a job called baremetal-restore which is meant to be used in
conjunction with a ReaR ISO image.

The procedure for restoration using a ReaR image in my setup goes like this:

1. get the newest bootable ReaR ISO image from Bacula
2. write the image to a bootable medium, or use it as-is in the case of a virtual machine
3. boot the system using the bootable ReaR ISO image
4. use the Bacula director to initiate a bare metal restore
5. reboot the system

The idea was to make the bare metal restore procedure quick and easy, with
hardware which is the same as or similar to the original server.


Regards

--
Josip Deanovic


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] no space left on Bacula server

2024-08-23 Thread Bill Arlofski via Bacula-users

On 8/23/24 12:22 PM, Adam Weremczuk wrote:

I've got the answer now: Backup failed -- Incomplete

Scheduled time: 23-Aug-2024 12:42:26
Start time: 23-Aug-2024 12:49:46
End time:   23-Aug-2024 18:54:02
Elapsed time:   6 hours 4 mins 16 secs
Priority:   10
FD Files Written:   3,459
SD Files Written:   3,459
FD Bytes Written:   726,920,886,781 (726.9 GB)
SD Bytes Written:   726,921,923,888 (726.9 GB)
Rate:   33259.6 KB/s
Software Compression:   None
Comm Line Compression:  42.8% 1.7:1
Snapshot/VSS:   yes
Accurate:   yes
Volume Session Id:  115
Volume Session Time:1723051858
Last Volume Bytes:  1,812,510,508,032 (1.812 TB)
Non-fatal FD errors:0
SD Errors:  0
FD termination status:  OK
SD termination status:  OK
Termination:Backup failed -- Incomplete

What on earth has happened here?


How can we know? You have not shown us any logs, nor any listing of files in 
the dataspool directory. 🤷

In bconsole:
* ll joblog jobid=

In bash:
# ls -la /path/to/data/spool/dir


BUT, your job terminated "Incomplete" which means that once you fix whatever is wrong, you can resume this job and not have 
to start from the beginning:


* resume incomplete(and choose Job, then the jobid)


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] no space left on Bacula server

2024-08-23 Thread Bill Arlofski via Bacula-users

On 8/23/24 11:33 AM, Adam Weremczuk wrote:

Hi all,

Bacula 9.6.7 on Debian 11.

I'm backing up a 777 GB folder of an external xxx server. The backup job
has been running for a number of hours.

To my surprise, Bacula backup server has just COMPLETELY run out of disk
space with 1.2 TB out of 1.7 TB being used for Bacula .spool temporary
files.

I've never seen this happening before, i.e. so much space being used.

Relevant jobs look as below:

    4139  Back Full  3,458    726.9 G xxx_backup   SD despooling Data
    4140  Back Full  8    586.2 G xxx_backup   is running

Tape space (LTO-8) looks like below:

Before: Device: Remaining Native Capacity in Partition (MiB) (10,620,751)
Now: Device: Remaining Native Capacity in Partition (MiB) (10,176,636)

Is this backup going to fail or succeed? Since it's Fri evening here I
would kind of prefer to know now...

Regards,
Adam


Hello Adam,

Bacula will begin de-spooling a data spool file when one of the following 
conditions are met:

- The spool file reaches the SpoolSize set in a Job
- The spool file reaches the MaximumSpoolSize set in a device
- The spool size reaches the MaximumJobSpoolSize set in a device
- The SpoolDirectory fills to 100% capacity (a condition that should be 
avoided, but Bacula gracefully handles this)

So, if this spool directory reaches 100% (obviously something to try to avoid), jobs will stop spooling, and start despooling 
to the tape drive(s) - one at a time per tape drive of course.


You will have failed jobs if the attribute spool (always in SD's "WorkingDirectory") fills, but Bacula should recover 
gracefully if the data spool directory fills.


However, you are using a very old version of Bacula, and this "feature" may not be in version 9.6.7 - I honestly do not know 
when this was added/fixed.


What is in the spool directory?   Maybe there are some *data*.spool files left over from failed jobs just eating up 
space?


Attribute and Data Spool files are named very clearly, so you will know which 
ones to keep and which ones may be deleted.
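
A hedged shell sketch for checking this (the path is just the placeholder used above):

# list everything currently in the data spool directory
ls -la /path/to/data/spool/dir

# show spool files untouched for a couple of days (likely leftovers)
find /path/to/data/spool/dir -name '*.spool' -mtime +2 -ls

# only once you are sure no jobs are running and the files are stale:
# find /path/to/data/spool/dir -name '*data*.spool' -mtime +2 -delete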


Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Very long retention time

2024-08-15 Thread Eric Bollengier via Bacula-users

Hello Guys,

On 8/14/24 20:35, Martin Simmons wrote:

I have archival backups going back 10 years without any problems.

If you want to be able to restore any single file from the backup, then you
need to explicitly configure File Retention and Job Retention in the Client
resource, because they default to 60 and 180 days respectively.  I've set them
both to 50 years.

You also need to set the Volume Retention in the pool (which defaults to 1
year).

After changing any of these, update the db and volumes using the bconsole
update command.


Thanks Martin, these points are essential. When a Volume has no Job records
associated with it, it can be purged, and that's not what you are looking for.

In general, I recommend managing retention in only one place. Bacula
is very flexible, and there are probably 10 different directives to control
retention in different scenarios.

For example, set Client / File Retention and Job Retention to 50 years
and set the Volume Retention to what you want.

I would avoid pruning Job records if you can. Keeping File records is nice in
general, but you might end up with a very large catalog.

It is also possible to disable AutoPruning in different places;
doing so, you control exactly when Bacula will prune records.
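
A minimal sketch of those directives (resource names and the 50-year values are only illustrative):

Client {
  Name = archive-fd
  # ...
  File Retention = 50 years
  Job Retention = 50 years
  AutoPrune = no           # optional: never prune automatically, only on demand
}

Pool {
  Name = Archive
  Pool Type = Backup
  Volume Retention = 50 years
  AutoPrune = no
}

# after changing these, run 'update' in bconsole and choose "Pool from resource"
# and then "Volume parameters" so existing volumes pick up the new retention.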

Keep us informed, Hope it helps!
Eric


__Martin



On Wed, 14 Aug 2024 14:21:48 +0200, Mehrdad Ravanbod said:


Thanx for the response, appreciate it

As to media, plan is to put it on disk, with some sort of raid(1, 5, 6)
to ensure safety/integrity, that is where the files are right now

as to the amount of data, it is not that huge; weekly full backups are
around 50 GB, and it lends itself well to compression, and the data does
not change that much so incremental backups should be small. My concern is
mainly the database (using PostgreSQL) and whether Bacula can handle such
retention times of, say, 3560+ days

I have seen retention times of one year but that is about it; if anyone has
experience handling longer retention times and cares to share their
experiences or config files for the jobs, I would be grateful


Regards /Mehrdad

On 2024-08-14 13:36, Gary R. Schmidt wrote:

On 14/08/2024 16:19, Mehrdad Ravanbod wrote:


hello everyone

I need to set up backups for a set of files that need to be saved and
be possible to access for a very long time(approx. 10 years, for
compliance reasons), is this possible in Bacula or even advisable??
Or do we need to solve this some other way


The first problem you have is finding media that is guaranteed to last
for 10+ years.

Tape is generally considered the best for this, but...  YMMV if you
don't store it correctly.

Another option is long-term optical disk, which is not the same as DVD
or Blu-Ray.

Talk to the suppliers of archival services in your country for
information on what is available and sensible.

The second problem is storage for the database.  If you have millions
of files that change frequently you will need a lot of space for the
database.

Non-Bacula solution: Back when I was at SGI we offered HFS -
Hierarchical File System - for people who had this sort of problem.
And very, very deep pockets.
I don't know if Rackable kept it alive when they purchased SGI, maybe
talk to them?

 Cheers,
     Gary    B-)


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users

--

Mehrdad Ravanbod    System administrator



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Very long retention time

2024-08-14 Thread Josip Deanovic via Bacula-users

On 2024-08-14 14:21, Mehrdad Ravanbod wrote:

Thanx for the response, appreciate it

As to media, plan is to put it on disk, with some sort of raid(1, 5, 6) 
to ensure safety/integrity, that is where the files are right now


as to the amount of data, it is not that huge; weekly full backups are 
around 50 GB, and it lends itself well to compression, and the data does 
not change that much so incremental backups should be small. My concern is 
mainly the database (using PostgreSQL) and whether Bacula can handle 
such retention times of, say, 3560+ days


I have seen retention times of one year but that is about it; if anyone 
has experience handling longer retention times and cares to share 
their experiences or config files for the jobs, I would be grateful



Hello Mehrdad,

The documentation says nothing about limit regarding retention.
At least I didn't notice such limit.
Retention is specified as a time data type.
For the time data type, the documentation on the page 217 states:

time
  A time or duration specified in seconds. The time is stored internally as a
  64 bit integer value, but it is specified in two parts: a number part and a
  modifier part. The number can be an integer or a floating point number. If it
  is entered in floating point notation, it will be rounded to the nearest
  integer. The modifier is mandatory and follows the number part, either with
  or without intervening spaces.

This is the relevant document for the version 15.0.x:
https://www.bacula.org/15.0.x-manuals/en/main/main.pdf


However, the longer the retention, the more data will be kept in the Bacula 
catalog.

If that poses a problem, a solution would be to prune the data from the
catalog, store the volumes (preferably together with the bootstrap
files), and use the bscan tool to load the data from a volume back into the
catalog if needed.
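
A hedged sketch of such a bscan run (device, volume, and database names are illustrative):

# run on the SD host against the device that can read the volume;
# -s stores the records in the catalog, -m updates the media record
bscan -v -s -m -c /opt/bacula/etc/bacula-sd.conf \
      -n bacula -u bacula -P 'dbpassword' \
      -V Archive-0001 FileStorage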

It would be a good idea to read this as well:
https://www.bacula.org/15.0.x-manuals/en/main/main.pdf#subsection.22.2.4


Regards

--
Josip Deanovic


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula suddenly running very slow

2024-08-09 Thread Bill Arlofski via Bacula-users

On 8/9/24 4:51 AM, Chris Wilkinson wrote:
> Just an aside - I realised whilst editing the jobs that the storage = “sd used for
> backup jobs” should be specified in the Job resource; it’s not necessary (or
> desirable) to specify the storage in the Pool as well, since the job overrides the pool.



Just a correction here.

If you specify a `Storage = ` in a Pool, it cannot be overridden anywhere - not in a JobDefs, not in a Job, not in a 
Schedule, not on the command line, and not even when modifying it just before the final submission of a job.


This, in my opinion, is a bug, as I believe that when an admin overrides something, they should be trusted to know what 
they are doing, and that should be the final word. :)



This doesn’t seem to be the case for Copy/Migrate Jobs; the storage = “sd used for copy jobs” has to be specified in every Pool 
used for copy jobs. Am I right that there is no equivalent override mechanism for Copy/Migrate jobs?


The Storage for Copy/Migration control jobs needs to be in the source Pool, or in the Copy/Migration control job itself. I 
don't have time to test, but it may be possible to override the Pool and Storage for these in a Schedule or on the command 
line, but that would make no sense to do. :)
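
A small sketch of the first option (pool and storage names are illustrative):

Pool {
  Name = Full-Disk          # source pool the Copy/Migration control job selects from
  Pool Type = Backup
  Storage = File1           # read-side SD/device used by the control job
  Next Pool = Full-Tape     # destination pool; its own Storage is the write side
}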



Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Getting Cloud Storage / Amazon S3 working with Bacula 9.6.7 on Debian 12

2024-08-08 Thread Bill Arlofski via Bacula-users

On 8/8/24 2:03 PM, Robert Heller wrote:

Hello Robert,

In this case the error you are getting - as generic as it is - is correct on 
count #2.  :)

"or no matching Media Type"



Your MediaType in the Director's Storage resource called `File1` is:
8<
Media Type = File1
8<


And the MediaType on the SD's Device `CloudStorage` is:
8<
Media Type = CloudType
8<


These two need to match.
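
A sketch of the corrected pair (picking "CloudType" as the common value; it could equally be made "File1" on both sides):

# bacula-dir.conf
Storage {
  Name = File1
  # ...
  Media Type = CloudType    # must match the SD Device below
}

# bacula-sd.conf
Device {
  Name = CloudStorage
  # ...
  Media Type = CloudType
}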


Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula suddenly running very slow

2024-08-07 Thread Bill Arlofski via Bacula-users

On 8/7/24 1:11 PM, Chris Wilkinson wrote:

And then import the saved sql dump which drops all the tables again and 
creates/fills them?

-Chris


Hello Chris!

My bad!

I have been using a custom script I wrote years ago to do my catalog backups. It uses what postgresql calls a custom (binary) 
format. It's typically faster and smaller, so I switched to this format more than 10 years ago. I had not looked at an ASCII 
dump version in years and I just looked now, and it does indeed DROP and CREATE everything.


So, the only thing you needed to do was create the database with the 
create_bacula_database script, then

Sorry for the static. :)


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula suddenly running very slow

2024-08-07 Thread Bill Arlofski via Bacula-users

On Wed, Aug 7, 2024, 10:27 AM Chris Wilkinson  wrote:


Would it fail if no tables exist? If so, I could use the bacula create tables 
script first.


Hello Chris,

The Director would probably not even start. :)

If you DROP the bacula database, you need to run three scripts:

- create_bacula_database
- make_bacula_tables
- grant_bacula_privileges

Now you have an empty, but fully functional Bacula catalog for the Director to 
work with.
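
A hedged shell sketch of that sequence, plus restoring an existing ASCII dump (paths and the dump file name are illustrative):

# run as a user allowed to create PostgreSQL databases (e.g. postgres)
/opt/bacula/scripts/create_bacula_database
/opt/bacula/scripts/make_bacula_tables
/opt/bacula/scripts/grant_bacula_privileges

# if you have an ASCII pg_dump of the old catalog, load it back in:
psql -U bacula -d bacula -f bacula-catalog.sql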


Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula suddenly running very slow

2024-08-06 Thread Bill Arlofski via Bacula-users

On 8/6/24 9:01 AM, Chris Wilkinson wrote:
I've had v11/postgresql13 running well for a long time but just recently it has started to run very slow. The Dir/Fd is on a 
Raspberry PiB with 8GB memory, Sd on a NAS mounted via CIFS over a Gbe network. I was getting a rate of ~30MB/s on the backup 
but this has dropped to ~1-2MB/s. I can see similar values on the network throughput page of Webmin. Backups that used to 
take 10h are now stretching out 10x and running into next scheduled backups. Jobs do eventually complete OK but are much too 
slow.


It remains the same after a couple of reboots of both the Pi and NAS.

I've tried my usual suite of tools e.g. htop, iotop, glances, iostat, iperf3 but none of these are raising any flags. Iowait 
is < 2%, cpu < 10%, swap is 0 used, free mem is > 80%. Iperf3 network speed testing Dir<=>Fd is close to 1Gb/s, rsync 
transfers Pi>NAS @ 22MB/s, so I don't suspect a network issue.


On the NAS, I have more limited tools but ifstat shows a similarly low incoming network rate. No apparent issues on cpu load, 
swap, memory, disk either. fsck ran with no errors.


I thought maybe there was a database problem so I've also had a try at adjusting PostgreSQL conf per the suggestions from 
Pgtune but to no effect. Postgresqltuner doesn't reveal any problems with the database performance. Postgres restarted of course.


Backup to S3 cloud is also slow by about 3x. It runs 25MB/s (22Mb/s previously) into local disk cache and then 2MB/s to cloud 
storage v. 6MB/s previously. My fibre upload limits at 50Mbs. I would have expected that a database issue would impact the 
caching equally but that doesn't seem to be the case.


So the conclusions so far are that it's not network and not database 🤔.

I'm running out of ideas now and am hoping you might have some.

-Chris Wilkinson


Hello Chris,

This is a long shot, but is there *any* chance you have disabled attribute 
spooling in your jobs? (SpoolAttributes = no)

If this is disabled, then the SD and the Director are in constant communication: for each file backed up, the SD sends the 
attributes to the Director, and the Director has to insert the record into the DB as each file is backed up.


With attribute spooling enabled (the default), the SD spools the attributes locally to a file, then sends this one file at the end of 
the job and the Director batch inserts all of the attributes at once (well, in one batch operation).
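
A tiny sketch of stating it explicitly in a Job (or JobDefs), just to rule this out (names illustrative):

Job {
  Name = "BackupClient1"
  # ...
  Spool Attributes = yes   # the default, but harmless to set explicitly
}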


Crossing my fingers on this one.🤞 :)


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Raspberry Pi Bookworm Arch64

2024-08-05 Thread Jim Richardson via Bacula-users
Good morning everyone,

Thank you for your response, Carsten.  I think this is exactly what I hoped 
for: a set of scripts would be available that takes the binaries from a successful 
Bacula source code build and packs them into a Raspbian Bookworm 
distribution's deb files.  It looks as if everything I am going to need is in 
the bacula/debian folder.  I'll have more free time over the weekend.  I'll be 
in touch if I have any questions.


Jim Richardson

-Original Message-
From: Carsten Leonhardt 
Sent: Monday, August 5, 2024 2:53 AM
To: Jim Richardson 
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Raspberry Pi Bookworm Arch64

[EXTERNAL]

Jim Richardson via Bacula-users 
writes:

> Good Saturday morning, Bacula-users,
>
> I have recently started a weather/solar station project with a series of 
> Raspberry Pi devices.  When the time came to have backups, I turned to 
> Bacula.  The Bacula community doesn't provide an Arch64 version.  I am 
> successfully building it from source (15.0.2).  When it comes to scripts 
> associated with building binary packages, does anyone here know if the 
> community version package maintainers publish them?  Any assistance is 
> appreciated.  I do plan to share this build with the community.
>
> I have downloaded the Amd64 packages and can reverse engineer the debs to get 
> where I want to be.  Still, it would save me a lot of time and 
> troubleshooting if the package maintainers have a "make deb-rasp-arch64"-like 
> script already completed.
>
> Thank you for any assistance you can provide.

If that "bookworm" you mention refers to Debian bookworm, you can simply "apt 
install bacula-fd", the director can backup older FDs.

Otherwise, the sources for Debian packaging of 15.0.2 can be found here in the 
experimental branch:

https://salsa.debian.org/bacula-team/bacula/-/tree/experimental?ref_type=heads

It's stalled on testing the packages at the moment, so feedback is welcome.

Regards

Carsten
CONFIDENTIALITY: This email (including any attachments) may contain 
confidential, proprietary and privileged information, and unauthorized 
disclosure or use is prohibited. If you received this email in error, please 
notify the sender and delete this email from your system. Thank you.


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Problems with Mount tapes

2024-08-05 Thread Bill Arlofski via Bacula-users

On 8/5/24 9:55 AM, Dr. Thorsten Brandau wrote:

Hi
I get this message:

05-Aug 16:09 -sd JobId 908: Please mount append Volume "23L9" or label a 
new one for:

and I am confused.

Webin shows me the volume available in the pool:


23L9    LTO-9    2024-08-05 16:14:36    2024-08-05 16:39:38    485996102656    Append

So, what specifically can I do now? Bacula should load the volume by itself, 
doesn't it?

Regards

Thorsten



Hello Thorsten,

You have given us basically nothing to go on here.

Can you tell us anything about this environment?

# mtx -f /dev/tape/by-id/<library-id> status   (where <library-id> is the library's node id)


The full Bacula job log:

* ll joblog jobid=908


Media list:

* @tall /tmp/medialist.txt   (open text log file)

* list media

* @tall  (close file)

Then, attach `/tmp/medialist.txt` so it does not wrap horribly in an email.


How about Bacula configuration files?

Director's Storage, SD's Autochanger, and Device(s)?



Thank you,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Raspberry Pi Bookworm Arch64

2024-08-03 Thread David Waller via Bacula-users
Hello Jim,

A while ago I built the bacula code several times from source and installed it 
all on various raspberry pi's, which then ran for quite a while as the backup 
server for several machines. I also had to build the various client software 
again for several machines, including a raspberry pi and a Mac. I used the 
first chapter of the Bacula Miscellaneous Guide to help with the compiling of 
the code and from memory it went very easily. There is a "make install" to 
correctly put all the software in the right place and various scripts as well 
are also provided. The scripts include the various database configuration 
scripts as well as the sample configuration files. From my experience it was 
easier to always install the source tree, build and install that rather than 
try and make a package, but then I have little experience in making packages.
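
A rough sketch of that kind of source build (the configure options are only illustrative; pick the ones matching your catalog and paths):

tar xzf bacula-15.0.2.tar.gz && cd bacula-15.0.2
./configure --prefix=/opt/bacula --with-postgresql
make
sudo make install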

David Waller
david.wall...@blueyonder.co.uk



> On 3 Aug 2024, at 16:06, Jim Richardson via Bacula-users 
>  wrote:
> 
> Good Saturday morning, Bacula-users,
>  
> I have recently started a weather/solar station project with a series of 
> Raspberry Pi devices.  When the time came to have backups, I turned to 
> Bacula.  The Bacula community doesn’t provide an Arch64 version.  I am 
> successfully building it from source (15.0.2).  When it comes to scripts 
> associated with building binary packages, does anyone here know if the 
> community version package maintainers publish them?  Any assistance is 
> appreciated.  I do plan to share this build with the community.
>  
> I have downloaded the Amd64 packages and can reverse engineer the debs to get 
> where I want to be.  Still, it would save me a lot of time and 
> troubleshooting if the package maintainers have a “make deb-rasp-arch64”-like 
> script already completed.
>  
> Thank you for any assistance you can provide.
>  
> ~Jim Richardson
>  
> CONFIDENTIALITY: This email (including any attachments) may contain 
> confidential, proprietary and privileged information, and unauthorized 
> disclosure or use is prohibited. If you received this email in error, please 
> notify the sender and delete this email from your system. Thank you. 
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net <mailto:Bacula-users@lists.sourceforge.net>
> https://lists.sourceforge.net/lists/listinfo/bacula-users



smime.p7s
Description: S/MIME cryptographic signature
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Raspberry Pi Bookworm Arch64

2024-08-03 Thread Jim Richardson via Bacula-users
Good Saturday morning, Bacula-users,

I have recently started a weather/solar station project with a series of 
Raspberry Pi devices.  When the time came to have backups, I turned to Bacula.  
The Bacula community doesn't provide an Arch64 version.  I am successfully 
building it from source (15.0.2).  When it comes to scripts associated with 
building binary packages, does anyone here know if the community version 
package maintainers publish them?  Any assistance is appreciated.  I do plan to 
share this build with the community.

I have downloaded the Amd64 packages and can reverse engineer the debs to get 
where I want to be.  Still, it would save me a lot of time and troubleshooting 
if the package maintainers have a "make deb-rasp-arch64"-like script already 
completed.

Thank you for any assistance you can provide.

~Jim Richardson

CONFIDENTIALITY: This email (including any attachments) may contain 
confidential, proprietary and privileged information, and unauthorized 
disclosure or use is prohibited. If you received this email in error, please 
notify the sender and delete this email from your system. Thank you.
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] bacula rescue cdrom source code

2024-08-01 Thread Bill Arlofski via Bacula-users

On 8/1/24 7:31 AM, William Rice wrote:
>

Hello I'm trying to locate the bacula-rescue cdrom source code so I can make a 
Bare Metal Recovery cdrom for bacula-9.0.8

Any help would be greatly appreciated!



Hello William,

Just a quick FYI: Bacula's Bare Metal recovery for Windows and Linux are 
Enterprise products, not available to the community.


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Is there a bconsole command to list the file size of a job?

2024-08-01 Thread Kees Bakker via Bacula-users

On 01-08-2024 15:00, Radosław Korzeniewski wrote:

Hi,

czw., 1 sie 2024 o 13:43 Kees Bakker  napisał(a):

Thanks
BTW What GUI is that?


IBAdmin from Inteos. There is even an (unmaintained) open-source version 
of it: https://github.com/inteos/IBAdmin




Thanks
Django, Python. Now we're talking :-)
--
Kees___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Is there a bconsole command to list the file size of a job?

2024-08-01 Thread Kees Bakker via Bacula-users

Hi,

We have the "list files jobid=" command, but that only lists the 
file names.


How can I list the files in a job with the names, the size and other 
details (perhaps "ls -l" style output)?

--
Kees


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacularis wants to connect to hardcoded port 9097

2024-07-31 Thread Kees Bakker via Bacula-users

On 31-07-2024 14:54, Kees Bakker via Bacula-users wrote:

On 31-07-2024 14:31, Marcin Haba wrote:

On Wed, 31 Jul 2024 at 14:08, Kees Bakker  wrote:


I have now enabled Config. Two problems:

1. On the Director details page, it does not show the "Configure
director" tab.


Hello Kees,

Hi Marcin,


To have the configuration function enabled, this config capability 
needs to be enabled in the API. It is this section in the initial 
wizard where originally you answered 'no'. To enable it you don't 
need to go through all the wizard once again, but you can go to the 
API panel


[Main menu] => [Page: API Panel]

then log in to the API panel and then please go to:

[Main menu] => [Page: Settings] => [Tab: Config]

Alternatively it is also available at this address:

https://YOUR_HOST/panel/settings/#settings_config 
<https://YOUR_HOST/panel/settings/#settings_config>


There you can configure which Bacula components you would like to use 
with this API instance. In this case it will be Director, so you need 
to configure the bdirjson tool settings in that form.


That was already done. "Test configuration" marks all sections with OK.

Something is working, because I can see the Storage configuration. 
Client configurations are visible too, except they all show the data 
from the one FD Client.

However, the "Configure director" tab is missing.


Solved. I had to logout and login.
--
Kees_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacularis wants to connect to hardcoded port 9097

2024-07-31 Thread Kees Bakker via Bacula-users

On 31-07-2024 14:31, Marcin Haba wrote:

On Wed, 31 Jul 2024 at 14:08, Kees Bakker  wrote:


I have now enabled Config. Two problems:

1. On the Director details page, it does not show the "Configure
director" tab.


Hello Kees,

Hi Marcin,


To have the configuration function enabled, this config capability 
needs to be enabled in the API. It is this section in the initial 
wizard where originally you answered 'no'. To enable it you don't need 
to go through all the wizard once again, but you can go to the API panel


[Main menu] => [Page: API Panel]

then log in to the API panel and then please go to:

[Main menu] => [Page: Settings] => [Tab: Config]

Alternatively it is also available at this address:

https://YOUR_HOST/panel/settings/#settings_config 
<https://YOUR_HOST/panel/settings/#settings_config>


There you can configure which Bacula components you would like to use 
with this API instance. In this case it will be Director, so you need 
to configure the bdirjson tool settings in that form.


That was already done. "Test configuration" marks all sections with OK.

Something is working, because I can see the Storage configuration. 
Client configurations are visible too, except they all show the data 
from the one FD Client.

However, the "Configure director" tab is missing.



2. If I select one of the other clients (not the client where
Apache is running) and go to the details of the client, next click
"Configure file daemon" tab, I am getting the details of the
client where Apache is running (in other words the FD client where
the Director is). There is no error message, but this is clearly
wrong. Is this a known issue?

On the Client details page you can see the main FD settings from the 
Director host if the Bacularis API is not running on the FD side. This is 
the default behavior. It could be better, I know.

I think it is a bug. Not "it could be better" :-)


Another case when it can happen is if the Bacularis API is installed 
and configured on the FD host but the FD address used in Bacula and 
that Bacularis FD API host address are different. In this case you can 
switch the FD API host to the right one on the Client details page on 
the green menu bar on the right side. There is a combobox with the FD 
API hosts to select.


Until now I have a hard time understanding where the API component 
lives. You make it sound as if I can have a separate API component 
installed  somewhere. That's not the case. There is only a Bacularis 
package to be installed, which is a web interface to Bacula.___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacularis wants to connect to hardcoded port 9097

2024-07-31 Thread Kees Bakker via Bacula-users

On 31-07-2024 12:21, Marcin Haba wrote:

Hello Kees,

[...]
For the configuration part, Bacularis has an option to use Bacula 
configs in read-only mode for all or for selected Bacula resources and 
Bacula components. If you are interested in, you can see it on this 
video guide:


https://www.youtube.com/watch?v=ZuTsuGMEms8


I have now enabled Config. Two problems:

1. On the Director details page, it does not show the "Configure 
director" tab.


2. If I select one of the other clients (not the client where Apache is 
running) and go to the details of the client, next click "Configure file 
daemon" tab, I am getting the details of the client where Apache is 
running (in other words the FD client where the Director is). There is 
no error message, but this is clearly wrong. Is this a known issue?

--
Kees___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacularis wants to connect to hardcoded port 9097

2024-07-31 Thread Kees Bakker via Bacula-users

On 31-07-2024 12:21, Marcin Haba wrote:
On Wed, 31 Jul 2024 at 11:36, Kees Bakker via Bacula-users 
 wrote:


Hi,

It seems that Bacularis has port 9097 hardcoded somewhere.

I've setup Apache to do Bacularis. But not with the supplied
example of using port 9097. I'm just using port 433 with a proper
SSL certificate.
Bacula Dir/SD/FD were already running on this system, installed
with Ubuntu apt.

The configuration went well, I think. (I don't want Bacularis to
mess with editing the Dir/SD/FD configuration. Only that part was: no)

As soon as the initial setup finished the Dashboard came up.
However, it keeps popping up

Error code: 100
Message: Problem with connection to remote host. cURL error 7:
Failed to connect to localhost port 9097 after 0 ms:
Connection refused.

Huh? Where is that 9097 coming from? Nowhere in the settings there
is an item for this.


Hello Kees,

This port setting was in advanced options of the initial wizard. Once 
the wizard is finished, this port is used on the web interface side in 
API host config.


[Main menu] => [Page: Security] => [Tab: API hosts] => [Edit: 'Main' 
host].


You can update it there.

For the configuration part, Bacularis has an option to use Bacula 
configs in read-only mode for all or for selected Bacula resources and 
Bacula components. If you are interested in, you can see it on this 
video guide:


https://www.youtube.com/watch?v=ZuTsuGMEms8


Hi Marcin,

Thank you for the reply.
Indeed that seems to work. I had already found 
/etc/bacularis/Web/hosts.conf which I manually changed. In my case the 
host had to be the FQDN, not localhost.

So, that's working now.
--
Kees
_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Bacularis wants to connect to hardcoded port 9097

2024-07-31 Thread Kees Bakker via Bacula-users

Hi,

It seems that Bacularis has port 9097 hardcoded somewhere.

I've setup Apache to do Bacularis. But not with the supplied example of 
using port 9097. I'm just using port 433 with a proper SSL certificate.
Bacula Dir/SD/FD were already running on this system, installed with 
Ubuntu apt.


The configuration went well, I think. (I don't want Bacularis to mess 
with editing the Dir/SD/FD configuration. Only that part was: no)


As soon as the initial setup finished the Dashboard came up. However, it 
keeps popping up


   Error code: 100
   Message: Problem with connection to remote host. cURL error 7:
   Failed to connect to localhost port 9097 after 0 ms: Connection refused.

Huh? Where is that 9097 coming from? Nowhere in the settings there is an 
item for this.

--
Kees___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Considereing replacing Synology Active Backup for Business

2024-07-29 Thread Thorsten Schöning via Bacula-users

Am 30.07.2024 um 08:11 schrieb Radosław Korzeniewski:

It is an Enterprise Class solution.


Agreed: That's why the chat keeps popping up every few minutes asking me
questions, and why I'm unable to get prices without providing a lot
of private data, while I am a private person instead of a company and I
simply don't want them to e.g. call me. And what happens when asking the
chat? It simply asks for name, company and business mail as well, though
gladly not for a telephone number anymore, instead of being able to
provide answers to simple questions like prices.

That's the "Enterprise" feeling everyone loves! :-) It would be too
easy to just provide at least some prices on the homepage, so one knows if
it's even worth considering for my pretty small use-case or if it's
better to consider RSYNC.


_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Considereing replacing Synology Active Backup for Business

2024-07-29 Thread Thorsten Schöning via Bacula-users

Am 29.07.2024 um 17:23 schrieb Andrea Venturoli:

AFAIK it works on files and transfers them as a whole.


Seems correct to me, deduplication is an enterprise product and I can't
even find any pricing:

https://www.baculasystems.com/enterprise-endpoint-backup-deduplication/

Thanks for your answers, but this looks like a showstopper for me. Need
to think about it. I need deltas to backup clients in a timely manner
using somewhat slow DSL uploads, like is possible with ABB pretty well.


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Considereing replacing Synology Active Backup for Business

2024-07-29 Thread Thorsten Schöning via Bacula-users

Hi everyone,

I'm currently using a Synology DS218+ with Active Backup for Business to
backup some Windows clients, even remote using OpenVPN. Things work
pretty well, ABB is fast because of using Changed Block Tracking based
on Windows VSS and aspects like when to run at all can be configured
remotely.

Though, I'm considering replacing the NAS with a more powerful home
server based on Linux to have more freedom about the used software etc.
One large topic left is how to backup Windows clients, so would be great
if you could answer some questions about how ABB works now and if there
is a reasonable replacement approach with Bacula.


* Does the Windows client of Bacula implement CBT based on VSS or does
  it simply iterate files in the created snapshot? Does it transfer
  changes within files only at all or always the whole changed files?
  Even without CBT it could apply some RSYNC-like approach.

* Can the Windows agent be configured remotely, e.g. to change when to
  backup, which files etc.? Or do I need to do these things manually
  accessing the client "somehow", changing some config files etc.?

* Can the agent be updated automatically when a new version of Bacula is
  deployed? I've only read about some installers yet and stuff about
  compatible versions.

* Does Bacula support some sort of tenants? For ABB I'm using one BTRFS
  subvolume per Windows client to strictly separate data. While it makes
  deduplication less efficient, that wasn't too much of a problem yet.
  Looking at the architecture with centralized database and storage pool,
  it doesn't seem supported/possible to me currently?

* What's the preferred tunneling for Bacula, OpenVPN or SSH? I'm using
  ABB with OpenVPN and it works pretty well. But maybe the Windows agent
  has some hooks or something where SSH-tunnels might be created and
  those used instead?

* Would be great to hear from anyone else who migrated similarly already
  about their experiences and approaches!

Thanks for your time!


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Error Code 4

2024-07-23 Thread Murugavel Rajendiran via Bacula-users
Hi Marcin,

Thanks for the valuable information.
Thanks & Regards,
*Murugavel R*
*Linux Administrator - Infra Support.*
murugavel.rajendi...@chainsys.com
www.chainsys.com


  A Trusted innovator in the API Economy, Data Management space, and ERP
Implementations
*850+ Improvers* | *5+ Locations* | *500+ Customer Partners.*
[image: LinkedIn] <https://www.linkedin.com/company/chain-sys> [image:
Facebook] <https://www.facebook.com/chainsyscorp> [image: Twitter]
<https://twitter.com/chainsys> [image: Youtube]
<https://www.youtube.com/chainsys>
[image: Oracle partner]
[image: Salesforce registere ISV partner]
[image: SAP partner]



On Wed, Jul 24, 2024 at 6:57 AM Marcin Haba  wrote:

> On Tue, 23 Jul 2024 at 20:53, Murugavel Rajendiran <
> murugavel.rajendi...@chainsys.com> wrote:
>
>> Thank you for your response,
>>
>> Deployment has been completed through the remote.
>> I don't know what are details will submit while add client..
>>
>
> Hello Murugavel,
>
> The details in the add client form are required to add the new Client to
> the Director. The most important to complete are the following:
>
> Name = SOME_CLIENT_NAME
> Password = PASSWORD_THE_SAME_AS_IN_FD_CONFIG_bacula-fd.conf
> Address = FD_ADDRESS
> Catalog = CATALOG_RESOURCE_NAME_ON_THE_DIRECTOR_SIDE
>
> For example:
>
> Name = myclient-fd
> Password = strong_secret_password
> Address = 192.168.1.30
> Catalog = MyCatalog
>
> I would propose to read the Bacula Client documentation at Bacula.org.
> Currently the Bacula.org service is unavailable (Error 503) but I am sure
> that it will be fixed quickly.
>
> Best regards,
> Marcin Haba (gani)
>
>
> --
>
> "Greater love hath no man than this, that a man lay down his life for his 
> friends." Jesus Christ
>
> "Większej miłości nikt nie ma nad tę, jak gdy kto życie swoje kładzie za 
> przyjaciół swoich." Jezus Chrystus
>
>

-- 



Disclaimer:
This email and any attachments to it may be confidential 
and are intended solely for the use of the individual to whom it is 
addressed.
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Error Code 4

2024-07-23 Thread Murugavel Rajendiran via Bacula-users
Thank you for your response,

Deployment has been completed through the remote.
I don't know what details to submit while adding a client.


On Tue, 23 Jul, 2024, 9:37 pm Marcin Haba,  wrote:

> On Tue, 23 Jul 2024 at 17:23, Murugavel Rajendiran <
> murugavel.rajendi...@chainsys.com> wrote:
>
>> Hi Marcin,
>>
>> Thanks for the information,
>>
>> Deployment has completed but I am not able to add a client to run a job.
>> Kindly give me instructions from start to finish in the front end.
>> Also I need to take a client backup from the server.
>> Kindly give me the differences between BackupPC and Bacularis.
>>
>
> Hello Murugavel,
>
> You can have a problem with adding any client to Bacula via Bacularis
> because the bconsole is not able to connect to the Director. Without
> solving this problem it can be hard to go further with adding clients.
>
> From your message I understood that you did a deployment and it has been
> completed. Could I ask you about providing more details about it? I mean:
> what you deployed and how? This could help us to understand what is already
> installed and what could be missing.
>
> For the instruction of adding a new Bacula Client there are a couple of
> ways of doing it:
>
> 1) if you have the Bacula Client installed, you can add it to the Director
> using Bacularis:
>
> [Main menu] => [Page: Clients] => [Button: Add client]
>
> 2) if you don't have the Bacula Client installed, you can install it and
> then follow on instruction in point 1)
>
> 3) if you don't have the Bacula Client installed, you can use the
> deployment function in Bacularis that will install remotely Bacularis and
> Bacula components. All of them will be ready to use with Bacularis without
> additional actions.
>
> For the question about differences between BackupPC and Bacula, I don't
> know any this type of comparison.
>
> Best regards,
> Marcin Haba (gani)
>
> --
>
> "Greater love hath no man than this, that a man lay down his life for his 
> friends." Jesus Christ
>
> "Większej miłości nikt nie ma nad tę, jak gdy kto życie swoje kładzie za 
> przyjaciół swoich." Jezus Chrystus
>
>

-- 



Disclaimer:
This email and any attachments to it may be confidential 
and are intended solely for the use of the individual to whom it is 
addressed.
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Error Code 4

2024-07-23 Thread Murugavel Rajendiran via Bacula-users
Hi Marcin,

Thanks for the information,

Deployment has completed but I am not able to add a client to run a job.
Kindly give me instructions from start to finish in the front end.
Also I need to take a client backup from the server.
Kindly give me the differences between BackupPC and Bacularis.

On Tue, 23 Jul, 2024, 7:44 pm Marcin Haba,  wrote:

> Hello Murugavel,
>
> This error means that the bconsole used by Bacularis is not able to
> connect to the Director. It is a problem between communication Bconsole <=>
> Director and bconsole exits with the exit code 1.
>
> I would propose to check it manually in the command line by running
> bconsole. The best is to run it with the same user used by the web server
> running Bacularis. Maybe the Director is not working or it is not
> accessible?
>
> Please let us know about this test result.
>
> Best regards,
> Marcin Haba (gani)
>
> On Tue, 23 Jul 2024 at 15:58, Murugavel Rajendiran via Bacula-users <
> bacula-users@lists.sourceforge.net> wrote:
>
>> Hi,
>>
>> Received alert given below
>> [image: image.png]
>> Also, I am not able to add clients.
>>
>>
>> Thanks & Regards,
>> *Murugavel R*
>> *Linux Administrator - Infra Support.*
>> murugavel.rajendi...@chainsys.com
>> www.chainsys.com
>>
>>
>>   A Trusted innovator in the API Economy, Data Management space, and ERP
>> Implementations
>> *850+ Improvers* | *5+ Locations* | *500+ Customer Partners.*
>> [image: LinkedIn] <https://www.linkedin.com/company/chain-sys> [image:
>> Facebook] <https://www.facebook.com/chainsyscorp> [image: Twitter]
>> <https://twitter.com/chainsys> [image: Youtube]
>> <https://www.youtube.com/chainsys>
>> [image: Oracle partner]
>> [image: Salesforce registere ISV partner]
>> [image: SAP partner]
>>
>>
>>
>> 
>> Disclaimer:
>> This email and any attachments to it may be confidential and are intended
>> solely for the use of the individual to whom it is addressed.
>> ___
>> Bacula-users mailing list
>> Bacula-users@lists.sourceforge.net
>> https://lists.sourceforge.net/lists/listinfo/bacula-users
>>
>
>
> --
>
> "Greater love hath no man than this, that a man lay down his life for his 
> friends." Jesus Christ
>
> "Większej miłości nikt nie ma nad tę, jak gdy kto życie swoje kładzie za 
> przyjaciół swoich." Jezus Chrystus
>
>

-- 



Disclaimer:
This email and any attachments to it may be confidential 
and are intended solely for the use of the individual to whom it is 
addressed.
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Error Code 4

2024-07-23 Thread Murugavel Rajendiran via Bacula-users
Hi,

Received alert given below
[image: image.png]
Also, I am not able to add clients.


Thanks & Regards,
*Murugavel R*
*Linux Administrator - Infra Support.*
murugavel.rajendi...@chainsys.com
www.chainsys.com


  A Trusted innovator in the API Economy, Data Management space, and ERP
Implementations
*850+ Improvers* | *5+ Locations* | *500+ Customer Partners.*
[image: LinkedIn] <https://www.linkedin.com/company/chain-sys> [image:
Facebook] <https://www.facebook.com/chainsyscorp> [image: Twitter]
<https://twitter.com/chainsys> [image: Youtube]
<https://www.youtube.com/chainsys>
[image: Oracle partner]
[image: Salesforce registere ISV partner]
[image: SAP partner]

-- 



Disclaimer:
This email and any attachments to it may be confidential 
and are intended solely for the use of the individual to whom it is 
addressed.
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] What is a API instance for Bacularis

2024-07-23 Thread Kees Bakker via Bacula-users

On 22-07-2024 17:51, Marcin Haba wrote:
On Mon, 22 Jul 2024 at 17:31, Kees Bakker via Bacula-users 
 wrote:


Hi,

The docker bacularis-web [1] asks me to add a Baculum [2] API
instance.
What exactly is that?

I already have a functioning Bacula system with a Dir/SD and several
FDs. I am looking
for a replacement of the good old BAT and was hoping that Bacularis
would be it.

[1] https://hub.docker.com/r/bacularis/bacularis-web
[2] Could this be a left over after renaming Baculum -> Bacularis?


Hello Kees,

Hi Marcin,


Thanks for your feedback about trying to use Bacularis docker images.

Each of these docker image types has in the description a chapter 
named "About this image". There is information about what it contains 
and for what purposes it could be used. For this "bacularis-web" we 
can see the information:


"It contains a pure the Bacularis web interface without API layer. It 
can be used to connect external Bacularis API server(s) in containers 
or outside them. If you would like to connect Bacularis API ran in 
containers, you can use dedicated for this purpose images:

 - for File Daemons bacularis-api-fd
 - for Storage Daemons bacularis-api-sd
 - for Directors bacularis-api-dir"


I didn't want to use those because I already have a working DIR+SD and 
FDs. I just want the API, ha ha.




For the question about what is the API, to understand the concept, I 
would propose to read these two chapters in the Bacularis documentation:


1. Component relationship and characteristic:
https://bacularis.app/doc/brief/introduction.html#component-relationship

2. Before you start:
https://bacularis.app/doc/brief/before-you-start.html

They can answer your questions. If not, please let us know. Thanks in 
advance.


From the before-you-start:

   "... If you would like to use Bacularis instance in container with
   Bacula components located outside the container, you need to prepare
   it yourself. Exception of that is the bacularis-web container image
   that provides a pure web interface without any API part and it can
   be connected to any Bacularis API instance available in the network.
   See chapter: Install using Docker."

I believe that this is not going to work with my existing Bacula 
daemons. So my conclusion is that I must forget the dockers.


I have also setup a test system with Bacula DIR+SD and Bacularis Apache. 
As far as understand its workings, the API is implemented in PHP, it 
wants access to the Postgres database, it wants to run bconsole, it 
wants to be able to read/write the configuration files. (The latter is 
optional and it is definitely something I don't want.)


Before I go that route (installing Apache) I need to spend more time 
trying to understand how this works.




For the question about Baculum API, yes, it is a left over :-) Thanks 
for catching it. We will fix it quickly.


Good luck.

Best regards,
Marcin Haba (gani)
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] What is a API instance for Bacularis

2024-07-22 Thread Kees Bakker via Bacula-users

Hi,

The docker bacularis-web [1] asks me to add a Baculum [2] API instance. 
What exactly is that?


I already have a functioning Bacula system with a Dir/SD and several 
FDs. I am looking
for a replacement of the good old BAT and was hoping that Bacularis 
would be it.


[1] https://hub.docker.com/r/bacularis/bacularis-web
[2] Could this be a left over after renaming Baculum -> Bacularis?
--
Kees


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Default postgresql password readable for everyone

2024-07-18 Thread Kees Bakker via Bacula-users

Hi,

Today I installed bacula-postgresql 15.0.2 on a Ubuntu 22.04 system.
This was done through ansible scripts, so everything that apt install 
did was non-interactive.


Then I was curious what the password for the database was. I saw a new 
random password in

  /etc/dbconfig-common/bacula-postgresql.conf
That file is owned by root and has 600 permission bits. Great.

However, during apt install it also seems to set this same random 
password in

  /opt/bacula/scripts/grant_postgresql_privileges
That file has 755 permissions. That doesn't seem right to me.
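
As an interim workaround (just a guess at what one might do until the packaging is fixed), the permissions can simply be tightened:

chmod 700 /opt/bacula/scripts/grant_postgresql_privileges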
--
Kees


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Strategies for backup of millions of files...

2024-07-17 Thread Josip Deanovic via Bacula-users

On 2024-07-15 17:26, Marco Gaiarin wrote:
We have found that a dir (containing mostly home directories) with roughly
one and a half million files took too much time to be backed up; it is not
a problem of backup media, as even with spooling it took hours to prepare a
spool.

Is there some strategy I can apply to reduce backup time (Bacula
side; clearly we also have to work on the filesystem side...)?

For example, currently i have:

Options {
  Signature = MD5
  accurate = sm
}

if i remove signature and check only size, i can gain some performance?



Hello Marco,

Most probably yes.

If your file system supports it, you could mount the file system with the
noatime and nodiratime options.

Alternatively, Bacula has the noatime option which is not set by 
default.

That option would prevent Bacula from updating inode atime which would
most probably result in some performance gains (although, not dramatic).
Also, check the option keepatime, which could negatively affect the
performance if enabled (it is disabled by default).
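
A sketch of where the Bacula-side option goes, based on the FileSet options quoted above (the rest of the FileSet is illustrative):

FileSet {
  Name = "Homes"
  Include {
    Options {
      Signature = MD5
      Accurate = sm
      noatime = yes      # open files with O_NOATIME where the OS supports it
      # keepatime = no   # leave at the default; restoring atime costs extra syscalls
    }
    File = /home
  }
}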

A long time ago, I had a FreeBSD system with a file system about 30GB in size,
with a small usage percentage but with a huge number of indexed directories.

To archive the whole directory structure using tar, it used to take more
than 26 hours.
I solved it by using the dump tool, which does the backup on the block level and
thus doesn't suffer from the large directory tree issue.
This approach bears the risk of having inconsistent data in the backup in
case the file system is mounted while performing a dump. This could be solved
by utilizing snapshots or some type of file system locking/freeze (depending on
the OS and the file system).
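
A hedged illustration of that approach on FreeBSD/UFS (flags: -0 full dump, -L snapshot the live file system first, -a bypass tape length calculations, -u record the dump in /etc/dumpdates, -f output file):

dump -0 -L -a -u -f /backup/home.dump /home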


Regards

--
Josip Deanovic


_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Strategies for backup of millions of files...

2024-07-15 Thread Bill Arlofski via Bacula-users

On 7/15/24 9:26 AM, Marco Gaiarin wrote:


We have found that a dir (containing mostly home directories) with roughly
one and a half million files, took too much time to be backed up; it is not
a problem of backup media, also with spooling it took hours to prepare a
spool.

Is there some strategy I can apply to reduce backup time (Bacula
side; clearly we also have to work on the filesystem side...)?

For example, currently i have:

 Options {
   Signature = MD5
   accurate = sm
 }

if i remove signature and check only size, i can gain some performance?


Thanks.


Hello Marco,

The typical way to help with this type of situation is to create several Fileset/Job pairs and then run them all 
concurrently. Each Job would be reading a different set of directories.


Doing something like backing user home directories that begin with [a-g], [h-m], [n-s], [t-z] in four or more different 
concurrent jobs.



A couple of FileSet examples that should work the way I described:
8<
FileSet {
  Name = Homes_A-G
  Include {
Options {
  signature = sha1
  compression = zstd
  regexdir = "/home/[a-g]"
}
Options {
  exclude = yes
  regexdir = "/home/.*"
}
  File = /home
  }
}

FileSet {
  Name = Homes_H-M
  Include {
Options {
  signature = sha1
  compression = zstd
  regexdir = "/home/[h-m]"
}
Options {
  exclude = yes
  regexdir = "/home/.*"
}
File = /home
  }
}

...and so on...
8<
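
And a hedged sketch of the matching Jobs (names and the JobDefs are illustrative); they only help if they are actually allowed to run at the same time:

Job {
  Name = "Backup_Homes_A-G"
  JobDefs = "DefaultJob"
  FileSet = "Homes_A-G"
}

Job {
  Name = "Backup_Homes_H-M"
  JobDefs = "DefaultJob"
  FileSet = "Homes_H-M"
}

# ...and raise Maximum Concurrent Jobs in the Director, Storage/Device
# and Client resources so these jobs really overlap.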


Hope this helps!
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Skip hard links in verify job

2024-07-11 Thread Sabroe Yde Carlos Cristóbal via Bacula-users
Hi everyone!
I need some guidance on how to configure a Verify job that doesn’t check 
hard-links of files. 
I’m receiving lots of messages like these: 

10-jul 22:15 srv JobId 88710:SHA1 digest differs on 
"/usr/share/man/man1/erb.ruby2.5.1.gz". File=2jmj7l5rSw0yVb/vlWAYkK/YBwk 
Vol=Q8i9IIJgCt54Z7XEmwpCJY2slug
10-jul 22:15 srv JobId 88710: Cat checksum differs: 
/usr/share/man/man1/erb.ruby2.5.1.gz
10-jul 22:15 srv JobId 88710:SHA1 digest differs on 
"/usr/share/man/man1/newaliases.1.gz". File=2jmj7l5rSw0yVb/vlWAYkK/YBwk 
Vol=H14R1gsvQiU4NhM+ftl2UWkRxtg

Where all digests on ‘File=' are the same. 
After several months of investigation I believe these are caused by hard-linked 
files. I’d like to know how to ignore these links.

This is an example of a Verify Job definition and the corresponding FileSet:

JobDefs {
Name = "DefaultVerify"
Type = Verify
Level = Data
Accurate = yes
Schedule = "v_CicloSemanal"
Messages = "Msj_Verificaciones"
FileSet = "Full Set"
Pool = lto8
Priority = 1000
#   SkipVerify = HardLinks
}

job {
JobDefs = "DefaultVerify"
Name = "v_bd-test"
Verify Job = "bd-test"
Client = "srv"
FileSet = "bd-test-fs"
Schedule = "v_CicloSemanal-UTI"
Priority = 1160
}

FileSet {
Name = "bd-test-fs"
Include {
Options {
signature = SHA1
Verify = pins1
sparse = yes
}
File = /
}
}

 "SkipVerify = HardLinks” was sugested by ChatGPT, but is not a valid statement.

The only thing that worked so far was including "hardlinks = no” as an option 
on the FileSet, but I don’t think it will be useful in case I need to restore a 
full backup.

Thanks a lot!

Cristobal Sabroe Yde
Unidad de Tecnología de la Información
Universidad Nacional de Río Cuarto 



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] bacula client vs server version

2024-07-09 Thread Sergio Gelato via Bacula-users
* Rob Gerber [2024-07-04 14:45:43 -0500]:
> I don't know if that would work since I haven't used the old 9.6x bacula
> binaries. I do see that the client is only a tiny bit newer than the
> server. Personally, I suspect that if the server checks versions, it will
> refuse to work.

The difference is only in the Debian part of the version number; both are
9.6.7 so if the server checked versions they would match.

I once got away with a slight patch-level skew between Director and SD
(I think it was 7.4.3 and 7.4.4). It all depends on the details of the
intervening changes; look at the diffs. The servers did not complain.

> Maybe try to see if a slightly older version of the client
> is available with special apt options?

Looking at the Debian changelog from 9.6.7-3 to 9.6.7-7 I get the impression
that it's all routine packaging changes, mainly driven by differences between
Debian 11 and Debian 12.

> Personally, I would use the latest stable bacula 13.0.4 from the project
> official repo,

I generally prefer Debian-maintained packages as the packaging tends to be
of higher quality. 13.0.4 is now available in trixie (future Debian 13) and
noble (Ubuntu 24.04 LTS) and I backported it to Debian 12 without difficulty;
I haven't put it in production yet but it's been working fine in my test lab.


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] migrate job

2024-07-03 Thread Bill Arlofski via Bacula-users

On 7/3/24 7:39 AM, Stefan G. Weichinger wrote:


I can't get that migrate job running.


[...snip...]

> I have diskbased volumes in Storage "File" and want to migrate them to
> physical tapes in Storage "HP-Autoloader", Pool "Daily"

Hello Stefan,

Something is quite wrong here... :)

And a lot of extra information is missing.


Your status storage shows that it is reading from the "daily" pool, using the tape drive "HP-Ultrium", and it wants to 
write to the "Daily" pool and also use the same tape drive - of course, it is impossible to read from and write to one 
device at the same time. :)


This is clearly not what you described as what you want.


8<
> Running Jobs:
> Writing: Full Backup job VM-vCenter JobId=3498 Volume="CMR933L6"
>   pool="Daily" device="HP-Ultrium" (/dev/nst0)
>   spooling=0 despooling=0 despool_wait=0
>   Files=0 Bytes=0 AveBytes/sec=0 LastBytes/sec=0
>   FDSocket closed
> Reading: Incremental Migrate job migrate-to-tape JobId=3497 Volume=""
>   pool="Daily" device="HP-Ultrium" (/dev/nst0) newbsr=0
>   Files=0 Bytes=0 AveBytes/sec=0 LastBytes/sec=0
>   FDSocket closed
8<



Job {
Name = "migrate-to-tape"
Type = "Migrate"
Pool = "File"
NextPool = "Daily"
JobDefs = "DefaultJob"
PurgeMigrationJob = yes
Enabled = yes
MaximumSpawnedJobs = 20
SelectionPattern = "."
SelectionType = "OldestVolume"
}


The `SelectionPattern` setting means nothing here since you have specified `SelectionType 
= "OldestVolume"`.  From the docs:
8<
The Selection Pattern, if specified, is not used.
8<



Pool {
Name = "Daily"
Description = "daily backups"
PoolType = "Backup"
MaximumVolumes = 30
VolumeRetention = 864000
VolumeUseDuration = 432000
Storage = "HP-Autoloader"
}



OK, this is the destination pool.

We don't see the source pool.


Typically, I set the NextPool in the source pool, but setting it in a Schedule or the Copy/Migration control job is OK too. 
We will need to see more...
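
For reference, here is a minimal sketch of the first variant, with NextPool set in the source pool (only the names "File" and "Daily" come from your description; the rest is illustrative):

8<
Pool {
  Name = "File"
  PoolType = "Backup"
  Storage = "File"
  NextPool = "Daily"   # destination pool for the Migration jobs
  ...
}
8<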



Can you show:

- The 'File" Pool

- The "DefaultJob" JobDefs


In bconsole:

* ll joblog jobid=3497
* ll joblog jobid=3498


It seems to me from what I see so far, that you may have not restarted the SD, or not reloaded the Director after making 
changes to the settings in the Migration Control job and Pool, and we are somewhere mid-stream between changes.



Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Does anybody have a working Linux ISO that will allow me to restore from an offsite Bacula backup?

2024-07-02 Thread MylesDearBusiness via Bacula-users
Thanks so much for the suggestions, Rob. I'll shoot for the simplest possible 
recovery plan.

My NGINX config was utterly blown away by the failed element.io install and I'm 
still not sure how to browse individual files on my offsite Bacula backup, even 
with Bacularis, although I've had some discussions with the Bacularis developer 
on the topic.

I'll check if the bconsole is still working and, if so, will find a way to 
remotely restore the content from / into /restore/ and the content from /boot 
into /restore/boot. Then, I'll have to find a way to switch in the new fileset 
and switch out the old fileset (keeping it around for a while just in case, 
since I haven't maxed out my drive storage yet and have headroom). Kind of like 
changing the tires of a car while still driving.

I'll post back once I have a working system again.

In hindsight, I should have done more dress rehearsals of restore, I'm very 
slowly climbing out of the well here and learning lots on the way.

Best,



On 2024-06-25 10:48 a.m., Rob Gerber wrote:

> Good morning, Myles.
>
> I have a few thoughts that might contribute to recovering from this situation.
>
> 1. Sounds like the element.io installer hosed up your nginx web server files 
> (which bacularis relies upon). Please note that because bacularis isn't 
> integral to the function of bacula itself, the bacula installation could be 
> just fine. If this is the extent of the damage, and the cloud server is 
> otherwise stable, why not get a shell on it and do a restore from the command 
> line bacula bconsole utility? Could be much less invasive than a full system 
> restore using a (comparatively) untested restore plan. I would REALLY try 
> this first. I would restore the files to a different location than the 
> destination, to give yourself the ability to compare with diff or something 
> prior to restoration. You'll probably want to bring the relevant system 
> services down prior to restoration of files. so bring down nginx, bacularis, 
> etc.
>
> 1a. If bacularis is the only thing that is broken, maybe get a temporary 
> bacularis instance running to assist with this process? Probably more 
> complicated and troublesome that just doing a restore using bconsole, though.
>
> 2. If the above isn't a viable option for some reason, I would suggest a 
> minor alternative to a custom recovery ISO. I don't know much about crafting 
> custom recovery ISO images so I'll suggest:
> 2a. Export your bacula configuration files (including any relevant certs) 
> from your existing system. At minimum you'll want the contents of 
> /opt/bacula/etc. I suggest copying the entire /opt/bacula directory and all 
> subdirectories. Probably best to use tar to export it since that way you're 
> guaranteed to be able to save user:group permissions and ownership. Chatgpt 
> can help you with this.
> 2b. Export your latest catalog backup. This would require that you have 
> functional bconsole access (assuming bacularis is unusable). If you have the 
> ability to do this, I don't know why you wouldn't do a full recovery of your 
> system from bconsole instead. Perhaps you have a reason. Either way, restore 
> your most recent full catalog backup. If you don't have a recent catalog 
> backup, take one from within bconsole, then export it. Will be a bacula.sql 
> file that contains all the database commands to fully drop a previous bacula 
> database instance and then restore your entire bacula database.
> 2c. Bacula depends on Fully Qualified Domain Names (hostname.domain) in the 
> bacula configuration files. If you want this ISO to (temporarily) fully take 
> the place of your damaged bacula system you'll probably need to change your 
> booted ISO system's hostname to match the hostname of your damaged system. 
> Please be very careful when doing this (esp during testing) because you don't 
> want a hostname conflict. ALTERNATIVELY, consider just a different name for 
> your recovery system and edit the bacula configuration files to match this 
> name. The inclusion of your custom cert could complicate this if it's tied in 
> some way to the FQDN of the damaged server.
> 2d. Instead of trying to build a fully functional bacula installation into a 
> custom recovery ISO, maybe script first time installation and configuration 
> of bacula. Will need to set up your repo file, install bacula, drop your old 
> configuration files over top of the default bacula configuration, edit 
> configuration files to match any changed details like different system 
> hostname, restore bacula database backup, install bacularis, (maybe install 
> bacularis sooner in the process?), then look into restoring your needed files.
>
> 

Re: [Bacula-users] Use of Multiple Tape Drives at Once

2024-07-01 Thread Bill Arlofski via Bacula-users

On 7/1/24 2:27 PM, Kody Rubio wrote:

Hi Bill,

The following is the output of 'status director' when the jobs are waiting.

  597  Back Full    146,973    667.8 G    fileserver-active         is running
  599  Back Incr          0         0               obsdata4                   
is waiting for higher priority jobs to finish



It seems that "obsdata4" it is waiting for a higher priority job.
Although the fileserver-active job does not have a priority set in the config.
Also, obsdata4 has a Priority of 1 (highest), so I am currently unsure on why 
this is happening.


Hello Kody,

The default Priority is 10 if not set.

So, you have a priority 10 job running `fileserver-active`, and a job with a 
different priority (1) waiting `obsdata4`

The message `is waiting for higher priority jobs to finish` is a bit of a red herring. It does not matter if the other jobs 
have a higher or lower priority set, this will be the (misleading) message logged.


If you want more than one job to run at the same time, they will need to be set to the same priority. Sure, you can also play 
with different priorities, and then enable "AllowMixedPriority" in several places (your Job resources), but this will just 
lead to more confusing results, I am afraid. It is generally recommended to stick to the same priority for your normal Backup 
and Restore Jobs, and then set the small number of "special" jobs (Admin, CopyControl, MigrationControl, Verify, etc) to 
something else so that they do not run at the same time as "normal" jobs.


Check the default "Catalog" job to see that while the default Backup jobs are `priority = 10`, that job is 11 or 12 to make 
sure it is run only when all the other normal, nightly backups jobs have completed so that the catalog has all of the 
information for all of the night's jobs - except, of course itself. :)


You will need to check the Director's Storage resource simply called `Autochanger` and make sure that it has 
`MaximumConcurrentJobs` > 1, and that your jobs expected to run at the same time have the same priority, and you should 
be able to make some progress.
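
As a rough sketch of what that looks like (Priority = 10 is just the default; any value works as long as the two jobs match, and 4 is only an example for the storage):

8<
# Director's Storage resource for the library
Storage {
  Name = "Autochanger"
  ...
  MaximumConcurrentJobs = 4   # allow several jobs to use this storage at once
}

# Give both backup jobs the same priority so they can run together
Job {
  Name = "fileserver-active"
  ...
  Priority = 10
}

Job {
  Name = "obsdata4"
  ...
  Priority = 10
}
8<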



Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Use of Multiple Tape Drives at Once

2024-06-27 Thread Bill Arlofski via Bacula-users

On 6/27/24 1:33 PM, Kody Rubio wrote:

I am searching for advice and/or expertise in allowing Bacula to use multiple 
tape drives at once.

I currently have two separate jobs that use separate Pools, that run around the same time. Although the second job is always 
waiting for the other job to finish. While the second tape drive is open, I would like for it to use the second drive and not 
have to wait for the other drive to be finished.

I also read that setting "MaximumConcurrentJobs = 1" for each will allow this 
but I get the same result.

Below is my configuration for the devices:


Hello Kody, what does `status director` show when these jobs are "waiting"?

What does your Director Storage resource look like?


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Backing Up a Remote Client

2024-06-27 Thread Bill Arlofski via Bacula-users

On 6/27/24 8:50 AM, Chris Wilkinson wrote:

Oops - typo in my message. It should be

*Set storage resource 'FDStorageAddress='

Which is what actually have but wrote it here wrong.☹️


No worries. I saw that and knew what you meant. :)


I'm not aware of NAT reflection being set in the router; no such option that I can find. I can't see sending local backup 
data out and back is an issue since the data is encrypted but anyway it's set up now as you suggested so moot.


Well, what I mean is that your local FD -> SD traffic would hit the firewall, pass through it to its external IP, then be 
NAT'ted back into your local network. So, nothing going to the Internet from local, but still using up the firewall's network 
bandwidth and CPU cycles unnecessarily.


I think you seem to be in good shape now. :)


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Backing Up a Remote Client

2024-06-27 Thread Bill Arlofski via Bacula-users

On 6/27/24 2:54 AM, Chris Wilkinson wrote:

I made the additional changes you suggested.

*Removed FD port open on remote router
*Set storage resource 'Address='
*Set storage resource 'FDStorageAddress='

I re-tested local and remote backups and these seem to be working fine.


Excellent! \o/

Next up Client Initiated Backups!   heh


These changes were not absolutely required as local backups continued to work when I had storage resource 'Address=<FQDN of remote site>' and without the 'FDStorageAddress=' directive. I presume this was because I had opened ports 9101-9103 
to the DIR/SD host on the local router as part of my previous attempts and I haven't undone any of them.


I didn't say it yesterday, but I was suspecting that if local and remote FD -> SD connections for backups were all still 
working after you set the Storage's Address to the external IP of the firewall, then it must be that NAT reflection was 
set/enabled on your firewall.


Yes sure, that will work, but do you really want all of your backup data 
traversing your firewall? :)



Thanks for your help

-Chris-


You're welcome!

I am glad that my curiosity to set this exact configuration up last week (for 
fun) was well timed.  :)


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Backing Up a Remote Client

2024-06-26 Thread Bill Arlofski via Bacula-users

On 6/26/24 3:31 PM, Chris Wilkinson wrote:
>

Your tips were bang on. I implemented this and it is working.


Excellent!  \o/


The other steps required were to forward ports on the routers at each end 


This should not be necessary at the remote site(s). The remote clients will be making outbound calls to the Internet, unless 
you have NAT inside NAT or something "interesting" going on at the remote site(s).  :)



> and change the DIR storage resource Address= from a local lan address to the 
public FQDN.

Uff yes, sorry, I missed a step!  :)


The part I missed (which you solved differently, but not the best way if this Storage is needed to be used by other internal 
Clients) is to instead leave the SD's `Address = <internal IP or FQDN>` alone and set the `FDStorageAddress = <external IP or FQDN of the firewall>` in the Director's Client resource for this (and any other external) Clients.


This way, all your normal internal backups/clients that use this same SD can connect to it using its normal `Address = <internal IP or FQDN>` 
and only these external clients will have the FDStorageAddress set to connect to the Main site's external firewall IP or FQDN.
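
A rough sketch of that layout (the names and addresses below are placeholders, not your real config):

8<
# Director's Storage resource keeps the SD's internal address
Storage {
  Name = "File1"
  Address = "sd.internal.lan"                 # internal IP or FQDN of the SD
  ...
}

# Only the remote Client is pointed at the main site's external address
Client {
  Name = "remote-fd"
  Address = "remote-client.example.org"
  FDStorageAddress = "firewall.example.org"   # external IP or FQDN of the firewall
  ...
}
8<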




Thank you.

-Chris-


You're welcome!

Hope these additional tips help to clean things up even more.

Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Backing Up a Remote Client

2024-06-26 Thread Bill Arlofski via Bacula-users

On 6/26/24 10:44 AM, Chris Wilkinson wrote:

I'm seeking some advice on configuring the backup of a remote client.

Up till now all clients were located on the same local lan that hosts the Director, File and Storage Daemons. The whole lan 
is behind a nat'd router. One of these clients has now moved to a remote site, also behind a nat'd router so my existing FD 
for this client doesn't work.


As I understand Bacula, the sequence of operations is:
DIR > FD : command to begin
FD > SD : send data from fd to sd
and there will be messages to the DIR also.


Hello Chris,


It is more like:

1: DIR --> SD
2: DIR --> FD   (unless FD is configured to connect to the DIR), then it is FD 
--> DIR
3: FD  --> SD   (unless the Director's Client resource has "SDCallsClient = yes"), 
then it is SD --> FD


For this to work for a remote client, all Daemons must be addressable by FQDNs and therefore the use of local addresses is 
not possible.


One thought that occurs to me is that router ports 9101-9103 can be opened to address the Daemons as <public IP or FQDN>:port. This 
won't work for the SD, which uses a mounted cifs share, due to the storage being a locked down NAS with no possibility of installing 
an SD.


Appreciate any thoughts or suggestions you might have.


The "best" way to do this is configure your remote FD(s) to call into the Director. They can be configured to make the 
connection on a schedule, or to try to make the connection and stay connected - reconnecting at startup, and when disconnected.


You will need to configure the firewall on the SD side to allow and forward the 
connection into the DIR and the SD.

There is a section in the manual about Clients behind NAT, and also Client 
initiated backups. If you get stuck, just ask...
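
If it helps, the shape of that "FD calls Director" setup looks roughly like the below. I am writing the directive names from memory, so treat them as assumptions and double-check them against the Client Initiated Connection section of the manual for your version:

8<
# bacula-fd.conf on the remote client
Director {
  Name = "mydir-dir"
  Password = "xxx"
  Address = "dir.example.org"        # public FQDN of the Director
  ConnectToDirector = yes            # the FD makes the outbound connection
}

# bacula-dir.conf on the Director
Client {
  Name = "remote-fd"
  Password = "xxx"
  AllowFDConnections = yes           # accept the inbound FD connection
  ...
}
8<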

For fun, I recently just configured and tested this exact type of "FD calls 
Director" configuration here.

I know, who does this for fun, right? lol 😆🤷🤦


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Does anybody have a working Linux ISO that will allow me to restore from an offsite Bacula backup?

2024-06-25 Thread MylesDearBusiness via Bacula-users
I'd be pleased to provide more detail, I was trying to keep my answer terse, 
obviously I undershot !?

I commissioned our cloud server, installing our business services and then 
securing with community edition Bacula system backup to an offsite Koofr 
storage backend, via rclone. I also installed and configured Bacularis with 
appropriate Nginx reverse-proxying and production SSL cert install.

I then attempted to install element.io (as a potential Slack replacement for 
our company) and all he** broke loose, the microk8s installer completely 
decimated my NGINX configuration and all services stopped working. The 
element.io "support" team declined to help me recover my system from an 
installer that crashed halfway through, leaving the system in an unstable state.

Thus, I wish to restore the entire bare metal cloud server from a complete 
Bacula backup I took just prior to doing this test install.

After trying and failing to use more low-level manual means, I'm now pivoting 
and trying to adapt an existing Live Ubuntu ISO and install appropriate Bacula 
packages and storage backend linkages using CUBIC. I plan to test the ISO on 
local VirtualBox and ultimately bring it up in a virtual CDROM on my cloud 
server's ASMB9-iKVM. I then want the ISO to come up with all Bacula services 
primed and ready for a full system restore.

I'm not sure what the history was that led up to the removal of the Live ISO 
from the community Bacula builds, but it's taking a LOT of my time to figure 
out how to do restores on an unstable system. Is there a reason a basic 
universal Bacula recovery ISO isn't being built with each community release ?

Thanks,



On 2024-06-25 2:55 a.m., Davide F. wrote:

> Hi Myles,
>
> Could you give the context ?
>
> I do t understand what’s the problem you are trying to solve, why do you want 
> to build an ISO ?
>
> Best,
>
> Davide
>
> On Tue, Jun 25, 2024 at 03:25 MylesDearBusiness via Bacula-users 
>  wrote:
>
>> I spent most of today trying to create a custom ISO manually, with no 
>> success. I'm trying to boot locally in my VirtualBox before the main event 
>> in which I'll remotely mount it into my cloud server's IPMI ASMB9-iKVM and 
>> boot it using a virtual CDROM drive.
>>
>> I'm aiming to try CUBIC next to customize a stock live boot ISO.
>>
>> For me, with my level of knowledge, and only ChatGPT as my support resource, 
>> this may take many days or even weeks to achieve.
>>
>> Does anybody have an ISO that I could use as a starting point?
>>
>> I was hoping to have Bacula community edition 13.0.3 or 13.0.4 on this 
>> recovery ISO.
>>
>> Thanks,
>>
>> 
>>
>> _______
>> Bacula-users mailing list
>> Bacula-users@lists.sourceforge.net
>> https://lists.sourceforge.net/lists/listinfo/bacula-users

--___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Again LTO9 and performances...

2024-06-25 Thread Josh Fisher via Bacula-users


On 6/24/24 11:04, Marco Gaiarin wrote:

Mandi! Josh Fisher via Bacula-users
   In chel di` si favelave...


Except when the MaximumSpoolSize for the Device resource is reached or
the Spool Directory becomes full. When there is no more storage space
for data spool files, all jobs writing to that device are paused and the
spool files for each of those jobs are written to the device, one at a
time. I'm not sure if every job's spool file is written to tape at that
time, or if the forced despooling stops once some threshold for
sufficient space has been freed up. So, more accurately, other jobs
continue spooling IFF there is sufficient space in the spool directory.

Ok, doing some more tests in real conditions (and still arguing about why log
rows are not ordered by date...) I've split a job:

...


Sincerely, I hoped there was some 'smarter' way to manage despooling in
bacula; the only way to boost performance seems to be tuning MaximumSpoolSize
so that despooling to tape takes more or less the same time as the other
tasks take to spool, so the interleaved dead time can be minimized.

Seems a hard task. ;-)



When you set MaximumSpoolSize in the Device resource, that sets the 
maximum storage available for all spool files, not for each spool file. 
When that is reached, it means there is no more space for any job's 
spool file, so they all must pause and write their spool files to tape.


Another way that might be better for your case is to leave 
MaximumSpoolSize = 0 (unlimited) and specify a MaximumJobSpoolSize in 
the Device resource instead. The difference is that when one job reaches 
the MaximumJobSpoolSize it will begin writing its spool file to tape, 
but it will not cause the other job(s) to pause until they also reach 
the MaximumJobSpoolSize. Then by starting one job a few minutes after 
the other it should be possible to mostly avoid both jobs pausing 
(de-spooling) at the same time.
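
As a rough sketch, that second arrangement in the SD's Device resource would look something like this (the device name, path and sizes are only illustrative):

Device {
  Name = "LTO9-Drive"
  ...
  SpoolDirectory = /bacula/spool
  # MaximumSpoolSize deliberately left unset (unlimited for the device as a whole)
  MaximumJobSpoolSize = 300G   # each job starts despooling on its own when it reaches this
}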


_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Does anybody have a working Linux ISO that will allow me to restore from an offsite Bacula backup?

2024-06-25 Thread Davide F. via Bacula-users
Hi Myles,

Could you give the context ?

I do t understand what’s the problem you are trying to solve, why do you
want to build an ISO ?

Best,

Davide

On Tue, Jun 25, 2024 at 03:25 MylesDearBusiness via Bacula-users <
bacula-users@lists.sourceforge.net> wrote:

> I spent most of today trying to create a custom ISO manually, with no
> success.  I'm trying to boot locally in my VirtualBox before the main event
> in which I'll remotely mount it into my cloud server's IPMI ASMB9-iKVM and
> boot it using a virtual CDROM drive.
>
> I'm aiming to try CUBIC next to customize a stock live boot ISO.
>
> For me, with my level of knowledge, and only ChatGPT as my support
> resource, this may take many days or even weeks to achieve.
>
> Does anybody have an ISO that I could use as a starting point?
>
> I was hoping to have Bacula community edition 13.0.3 or 13.0.4 on this
> recovery ISO.
>
> Thanks,
>
> 
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Does anybody have a working Linux ISO that will allow me to restore from an offsite Bacula backup?

2024-06-24 Thread MylesDearBusiness via Bacula-users
I spent most of today trying to create a custom ISO manually, with no success. 
I'm trying to boot locally in my VirtualBox before the main event in which I'll 
remotely mount it into my cloud server's IPMI ASMB9-iKVM and boot it using a 
virtual CDROM drive.

I'm aiming to try CUBIC next to customize a stock live boot ISO.

For me, with my level of knowledge, and only ChatGPT as my support resource, 
this may take many days or even weeks to achieve.

Does anybody have an ISO that I could use as a starting point?

I was hoping to have Bacula community edition 13.0.3 or 13.0.4 on this recovery 
ISO.

Thanks,

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Can't fetch bacula-aligned driver version 13.0.3 from community repo with apt on ubuntu 20.04 focal

2024-06-22 Thread Joris Dallaire via Bacula-users
Hello all ! Hope everyone is having a wonderful day.
As per subject, the file bacula-aligned_13.0.3-23050308~focal_amd64.deb is 
there in the repo but it doesn't show up when I do "apt search bacula-aligned" 
and of course I can't install it with "sudo apt install bacula-aligned". The 
installation of the main components went without problems. I have installed it 
without problems on my other machines that run on ubuntu 22.04 jammy.
My configuration in /etc/apt/sources.list.d/Bacula-Community.list:deb 
[arch=amd64] https://www.bacula.org/packages/my-key/debs/13.0.3 focal main

Some command output:joris@rhea:~$ sudo apt update[sudo] Mot de passe de joris 
:Atteint :1 http://ca.archive.ubuntu.com/ubuntu focal InReleaseAtteint :2 
http://ca.archive.ubuntu.com/ubuntu focal-updates InReleaseAtteint :3 
http://ca.archive.ubuntu.com/ubuntu focal-backports InReleaseAtteint :4 
https://dl.google.com/linux/chrome/deb stable InReleaseAtteint :5 
http://security.ubuntu.com/ubuntu focal-security InReleaseRéception de :6 
https://esm.ubuntu.com/apps/ubuntu focal-apps-security InRelease [7 565 
B]Réception de :7 https://esm.ubuntu.com/apps/ubuntu focal-apps-updates 
InRelease [7 456 B]Réception de :8 https://esm.ubuntu.com/infra/ubuntu 
focal-infra-security InRelease [7 450 B]Réception de :9 
https://esm.ubuntu.com/infra/ubuntu focal-infra-updates InRelease [7 449 B]Ign 
:10 https://www.bacula.org/packages/642193db6dca8/debs/13.0.3 focal 
InReleaseAtteint :11 https://www.bacula.org/packages/642193db6dca8/debs/13.0.3 
focal Release29,9 ko réceptionnés en 5s (5 946 o/s)Lecture des listes de 
paquets... FaitConstruction de l'arbre des dépendancesLecture des informations 
d'état... FaitTous les paquets sont à jour.
joris@rhea:~$ sudo apt install bacula-alignedLecture des listes de paquets... 
FaitConstruction de l'arbre des dépendancesLecture des informations d'état... 
FaitE: Impossible de trouver le paquet bacula-aligned
Any help will be greatly appreciated.Thanks,Joris___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Again LTO9 and performances...

2024-06-21 Thread Josh Fisher via Bacula-users




On 6/20/24 18:58, Bill Arlofski via Bacula-users wrote:

On 6/20/24 8:58 AM, Marco Gaiarin wrote:

Once that is hit, the spool files are written to tape, during which active
jobs have to wait because the spool is full.

There's no way to 'violate' this behaviour, right?! A single SD process
cannot spool and despool at the same time?


An SD can be spooling multiple jobs while *one* and only one Job spool 
file is despooling to one drive.


Add another drive and the same is still true, but the SD can now 
be despooling two jobs at the same time while other jobs are spooling, 
and so on as you add drives.




Except when the MaximumSpoolSize for the Device resource is reached or 
the Spool Directory becomes full. When there is no more storage space 
for data spool files, all jobs writing to that device are paused and the 
spool files for each of those jobs are written to the device, one at a 
time. I'm not sure if every job's spool file is written to tape at that 
time, or if the forced despooling stops once some threshold for 
sufficient space has been freed up. So, more accurately, other jobs 
continue spooling IFF there is sufficient space in the spool directory.




_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Again LTO9 and performances...

2024-06-20 Thread Bill Arlofski via Bacula-users

On 6/20/24 8:58 AM, Marco Gaiarin wrote:


But, now, a question: this mean that in spool data get interleaved too? How
they are interleaved? File by file? Block by block? What block size?


No. When you have jobs running, take a look into the SpoolDirectory. You will see a 'data' *.spool file and an 'attr' *.spool 
file for each job running.




Once that is hit, the spool files are written to tape, during which active
jobs have to wait because the spool is full.


There's no way to 'violate' this behaviour, right?! A single SD process
cannot spool and despool at the same time?


An SD can be spooling multiple jobs while *one* and only one Job spool file is 
despooling to one drive.

Add another drive and the same is still true, but the SD can now be despooling two jobs at the same time while other jobs 
are spooling, and so on as you add drives.



Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] LTO-8 tape report error

2024-06-20 Thread Bill Arlofski via Bacula-users

On 6/20/24 4:54 AM, Adam Weremczuk wrote:


OMG...

The only reason the script started failing was a different device
assignment caused by a reboot:

ls -al /dev | grep sg.
crw-rw  1 root tape 21,   0 Jun 19 12:25 sg0
crw-rw  1 root disk 21,   1 Jun 12 17:42 sg1

Once I've changed sg1 to sg0 it started working like a charm again :)


Hello Adam,

I am glad you found it. It should have been the first thing I recommended to 
check. 🤷🤦

You might be interested in my `mtx-changer-python.py` drop-in replacement for the `mtx-changer` script and/or my 
`bacula-tapealert.py` drop-in replacement for the `tapealert` script that Bacula ships with.


Both determine the /dev/sg# node automatically, on-the-fly, preventing this issue with Bacula when a /dev/sg# node changes 
after a reboot.



The `bacula-tapealert.py` currently requires the Python `docopt` module but I will swap that out and replace it with 
`argparse` shortly. I have already done this for the `mtx-changer-python.py` and a couple other scripts.



Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Verify Jobs not pruning cache when using S3 storage

2024-06-19 Thread Eric Bollengier via Bacula-users

Hello Martin,

On 6/19/24 12:57, Martin Reissner wrote:

Hello,

we're running a Bacula 15.0.2 setup which stores everything in S3 storage and 
due to having a lot of data to backup every day

we use

   Truncate Cache = AfterUpload
   Upload = EachPart

in our "Cloud" ressources, to ensure the systems running the SDs do not run out 
of diskspace. This works pretty well but I recently

started to configure Verifyjobs with

   Level = VolumeToCatalog

for some important jobs and it looks like when running the Verifyjob, data gets 
transferred from S3 to the directory specified
as "Archive Device" in the "Device" ressource, sometimes also known as 
"cloudcache" and is then compared to the respective data in the catalog.
So far so good, but it seems that even after a successful verification the 
volume data remains there, probably until the volume is reused and for

large jobs this takes up quite some space which causes diskspace issues on the 
SD.

Am I maybe missing something there and is there some option that can be used to 
prune the cache after a (successful) verifyjob?


Yes, there is a cloud command to prune the cache; I suggest running it at a
regular interval inside an Admin job, for example: (cloud prune allpools
storage=XXX)
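
For example, a minimal Admin job along those lines might look like this (a sketch only: the job and schedule names are placeholders, the usual mandatory Job directives are assumed to come from the JobDefs, and storage=XXX must be replaced by your cloud Storage name):

Job {
  Name = "PruneCloudCache"
  Type = Admin
  JobDefs = "DefaultJob"
  Schedule = "WeeklyCycle"
  RunScript {
    RunsWhen = Before
    RunsOnClient = No
    Console = "cloud prune allpools storage=XXX"
  }
}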

Hope it helps!

Best Regards,
Eric



_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] libmysqlclient.so.21

2024-06-19 Thread Davide F. via Bacula-users
Ok, there was a misunderstanding from my side, my bad.

I thought you wanted to build the rpm yourself.

Now I understand better.

In your specific context, if you get this error, it means the rpm package
has a bug (missing dependency or something else).

I’ll leave people from Bacula Systems answer here then.

Good luck

Davide

On Wed, Jun 19, 2024 at 13:08 Mehrdad Ravanbod 
wrote:

>
> Tbh, i am not too keen on building my own package from source, and none of
> these packages seem to have anything to do with mysql
>
> i did check the version libmysqlclient which is installed under
> /usr/lib64/mysql and it is version 20, i probably could install a later
> version of MySQL hoping for it to install the 21-version that the
> bacula-mysql rpm seems to need but i am pretty sure this will give me other
> problems
>
> So i just give up on this and try the postgres database, not a very good
> choice from my point of view since i am pretty unfamiliar with that DB but
> well, it's  a test installation
>
> Regards /Mehrdad
> On 2024-06-19 12:43, Davide F. wrote:
>
> Hi,
>
> Then try installing below packages (used on Centos 7, some packages name
> may have changed for Rocky Linux).
>
> openssh-clients rpm-sign mariadb-devel \
> gcc gcc-c++ make autoconf glibc-devel \
> ncurses-devel readline-devel libstdc++-devel \
> zlib-devel openssl-devel libacl-devel bzip2-devel \
> openldap-devel libxml2-devel rpmdevtools \
> rpmlint postgresql-devel libcurl-devel
>
> I remember the rpm package build process is documented somewhere in Bacula 
> documentation but I can't find it.
>
> Best,
>
>
> On Wed, Jun 19, 2024 at 12:33 PM Mehrdad Ravanbod <
> mehrdad.ravan...@ampfield.se> wrote:
>
>> Installed the mysql-community-devel.x86** and the i686 versions, fails
>> again with same error
>>
>> But thanks for answering
>>
>> Reagrds /Mehrdad
>> On 2024-06-19 12:16, Davide F. wrote:
>>
>> Hi,
>>
>> Not 100% sure, but as far as I remember, you need MySQL devel package
>> installed
>>
>> Let me know if it helps.
>>
>> Davide
>>
>> On Wed, Jun 19, 2024 at 11:44 Mehrdad Ravanbod <
>> mehrdad.ravan...@ampfield.se> wrote:
>>
>>>
>>> Hi guys
>>>
>>> Trying to install Bacula 13.0.4 community on a RockyLinux9(Rhel9)
>>> computer to test it for our backup needs
>>>
>>> Installation went well, upto and including MySQl5.7, but now I am stuck
>>>
>>> Trying:  yum install bacula-mysq fails with message that
>>>
>>> -nothing provides libmysqlclient.so.21 needed by
>>> bacula-mysql-13.0.4..el9.x86_64 from bacula-community
>>>
>>>  From what i understand this file is a part of MySQL8, trying to install
>>> the file libmysqlclient.so.21 fails too since it conflicts with file
>>> already installed by MySQL5.7
>>>
>>> I had installed Bacula 13.0.1 earlier without problems but this time i
>>> am stuck and the documentation is so outdated it is not even funny
>>>
>>> So anyone that has a solution? Or should i just try to install an older
>>> version??( i know 13.0.4 is already an old version but well )
>>>
>>>
>>> --
>>> 
>>> Ampfield Aktiebolag
>>> Mehrdad Ravanbod System administrator
>>>
>>>
>>>
>>> ___
>>> Bacula-users mailing list
>>> Bacula-users@lists.sourceforge.net
>>> https://lists.sourceforge.net/lists/listinfo/bacula-users
>>>
>> --
>>  Ampfield
>> Aktiebolag Mehrdad Ravanbod System administrator
>>
> --
>  Ampfield
> Aktiebolag Mehrdad Ravanbod System administrator
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] libmysqlclient.so.21

2024-06-19 Thread Davide F. via Bacula-users
Hi,

Not 100% sure, but as far as I remember, you need MySQL devel package
installed

Let me know if it helps.

Davide

On Wed, Jun 19, 2024 at 11:44 Mehrdad Ravanbod 
wrote:

>
> Hi guys
>
> Trying to install Bacula 13.0.4 community on a RockyLinux9(Rhel9)
> computer to test it for our backup needs
>
> Installation went well, upto and including MySQl5.7, but now I am stuck
>
> Trying:  yum install bacula-mysq fails with message that
>
> -nothing provides libmysqlclient.so.21 needed by
> bacula-mysql-13.0.4..el9.x86_64 from bacula-community
>
>  From what i understand this file is a part of MySQL8, trying to install
> the file libmysqlclient.so.21 fails too since it conflicts with file
> already installed by MySQL5.7
>
> I had installed Bacula 13.0.1 earlier without problems but this time i
> am stuck and the documentation is so outdated it is not even funny
>
> So anyone that has a solution? Or should i just try to install an older
> version??( i know 13.0.4 is already an old version but well )
>
>
> --
> 
> Ampfield Aktiebolag
> Mehrdad Ravanbod System administrator
>
>
>
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] libmysqlclient.so.21

2024-06-19 Thread Davide F. via Bacula-users
Hi,

Then try installing below packages (used on Centos 7, some packages name
may have changed for Rocky Linux).

openssh-clients rpm-sign mariadb-devel \
gcc gcc-c++ make autoconf glibc-devel \
ncurses-devel readline-devel libstdc++-devel \
zlib-devel openssl-devel libacl-devel bzip2-devel \
openldap-devel libxml2-devel rpmdevtools \
rpmlint postgresql-devel libcurl-devel

I remember the rpm package build process is documented somewhere in
Bacula documentation but I can't find it.

Best,


On Wed, Jun 19, 2024 at 12:33 PM Mehrdad Ravanbod <
mehrdad.ravan...@ampfield.se> wrote:

> Installed the mysql-community-devel.x86** and the i686 versions, fails
> again with same error
>
> But thanks for answering
>
> Reagrds /Mehrdad
> On 2024-06-19 12:16, Davide F. wrote:
>
> Hi,
>
> Not 100% sure, but as far as I remember, you need MySQL devel package
> installed
>
> Let me know if it helps.
>
> Davide
>
> On Wed, Jun 19, 2024 at 11:44 Mehrdad Ravanbod <
> mehrdad.ravan...@ampfield.se> wrote:
>
>>
>> Hi guys
>>
>> Trying to install Bacula 13.0.4 community on a RockyLinux9(Rhel9)
>> computer to test it for our backup needs
>>
>> Installation went well, upto and including MySQl5.7, but now I am stuck
>>
>> Trying:  yum install bacula-mysq fails with message that
>>
>> -nothing provides libmysqlclient.so.21 needed by
>> bacula-mysql-13.0.4..el9.x86_64 from bacula-community
>>
>>  From what i understand this file is a part of MySQL8, trying to install
>> the file libmysqlclient.so.21 fails too since it conflicts with file
>> already installed by MySQL5.7
>>
>> I had installed Bacula 13.0.1 earlier without problems but this time i
>> am stuck and the documentation is so outdated it is not even funny
>>
>> So anyone that has a solution? Or should i just try to install an older
>> version??( i know 13.0.4 is already an old version but well ....)
>>
>>
>> --
>> 
>> Ampfield Aktiebolag
>> Mehrdad Ravanbod System administrator
>>
>>
>>
>> ___
>> Bacula-users mailing list
>> Bacula-users@lists.sourceforge.net
>> https://lists.sourceforge.net/lists/listinfo/bacula-users
>>
> --
>  Ampfield
> Aktiebolag Mehrdad Ravanbod System administrator
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] LTO-8 tape report error

2024-06-17 Thread Bill Arlofski via Bacula-users

On 6/17/24 11:38 AM, Adam Weremczuk wrote:

Hi Gary,

I know what you mean by "hijacking a thread" now.
I was reading that message when I decided to post my question from the
same window.

Today I cleared the tape drive with an official cleaning cartridge but
it made no difference to the error:

-

sg_raw -o - -r 1024 -t 60 -v /dev/sg1 8c 00 00 00 00 00 00 00 00 00 00 00 04 00 00 00
  cdb to send: [8c 00 00 00 00 00 00 00 00 00 00 00 04 00 00 00]
SCSI Status: Check Condition

Sense Information:
Fixed format, current; Sense key: Illegal Request
Additional sense: Invalid command operation code
   Raw sense data (in hex), sb_len=21, embedded_len=21
  70 00 05 00 00 00 00 0d  00 00 00 00 20 00 00 00
  00 00 00 00 00

Error 9 occurred, no data received
Illegal request, Invalid opcode



-


If it was some kind of a hardware fault I should be seeing other
problems, shouldn't I?

All backups and Bacula scripts keep completing without errors, it's just
this one command that fails.
I've even run a big restore as a test and it all looks perfectly fine.

I don't have any spare hardware to try a different configuration.
Is there anything else to try to determine the root cause?
Why would a reboot alone trigger it?

Regards,
Adam


Hello Adam,

Not sure if this will be any help at all, but I just checked and see that in my `mtx-changer-python.py` drop-in replacement 
script, to check for tape cleaning required messages, I am using:

8<
 sg_logs --page=0xc /dev/sg##
8<

Then I look for:
8<
Cleaning action not required (or completed)

or

Cleaning action required
8<

To determine if I need to automatically find and load a cleaning tape before 
returning control to the SD...

I had been using `tapealert` instead of sg_logs, but I found that tapeinfo clears all alert messages on the drive, so they 
could never be caught by the SD when it calls tapeinfo in the tapealert script. Sg_logs just reports what I need without 
clearing the other flags that the SD needs to know when a drive or tape is bad/damaged, etc.
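
For example, the check itself can be as small as this (a sketch; adjust the sg node for your system):

8<
#!/bin/bash
# Query the drive's cleaning log page and report whether cleaning is needed
SG_NODE=/dev/sg1   # adjust to your drive's sg node
if sg_logs --page=0xc "$SG_NODE" | grep -q 'Cleaning action required'; then
    echo "Drive requests cleaning"
else
    echo "No cleaning required"
fi
8<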



Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Again LTO9 and performances...

2024-06-13 Thread Josh Fisher via Bacula-users



On 6/13/24 08:13, Gary R. Schmidt wrote:

On 13/06/2024 20:12, Stefan G. Weichinger wrote:


interested as well, I need to speedup my weekly/monthly FULL runs 
(with LTO6, though: way slower anyway).


Shouldn't the file daemon do multiple jobs in parallel?

To tape you can only write ONE stream of data.

To the spooling disk there could be more than one stream.


Yes, that seems wrong:
$ grep Concurrent *.conf
bacula-dir.conf:  Maximum Concurrent Jobs = 50
bacula-dir.conf:  Maximum Concurrent Jobs = 50
bacula-fd.conf:  Maximum Concurrent Jobs = 50
bacula-sd.conf:  Maximum Concurrent Jobs = 50


Sorry, I still don't understand what to adjust ;-)

that interleaving to tape sounds dangerous to me.


That's how Bacula works - and has since day one.

We've been using it like that since 2009, starting with an LTO-4 
autoloader, currently using an LTO-6, and I'm about to start agitating 
to upgrade to LTO-9.


Interleaving is not really an issue when data spooling is enabled. Data 
is despooled to tape one job at a time. Only when the spool size is too 
small will there be any interleaving. Even then, the interleaving will 
be a whole bunch of one job's blocks followed by a whole bunch of 
another. It's not a problem, and with sufficient disk space for the 
spool, it doesn't even happen.




What I want to have: the fd(s) should be able to dump backups to the 
spooling directory WHILE in parallel the sd spools previous backup 
jobs from spooling directory to tape (assuming I have only one tape 
drive, which is the case)


Bacula does not work that way.  No doubt if you tried really hard with 
priority and concurrency and pools you could maybe make it work like 
that, but just RTFM and use it as designed.


Why not? According to 
https://www.bacula.org/15.0.x-manuals/en/main/Data_Spooling.html it 
works exactly that way already. Most importantly, concurrent jobs 
continue to spool while one job is despooling to tape. Only one job is 
ever despooling at a given time.


On the other hand, the job that is despooling has exclusive control of 
the tape drive. On the last despool for a job (there may be more than one 
if the job data exceeds the maximum spool size), the job has to also 
despool the job's spooled attributes to the catalog database before it 
releases the tape drive. Thus, even when other concurrent jobs are 
waiting to be despooled, the tape drive will be idle (or at least 
writing from its internal buffer) while the database writes occur. This 
is one of the reasons that database performance is so important in 
Bacula. I believe that the attributes are despooled before releasing the 
tape drive in order to ensure that despooling of both data and 
attributes is an atomic operation at job completion, probably to avoid 
race conditions.






-> parallelizing things more
It all seems quite parallel to me.


Cheers,
    Gary    B-)


_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users




_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Again LTO9 and performances...

2024-06-11 Thread Bill Arlofski via Bacula-users

On 6/11/24 10:45 AM, Marco Gaiarin wrote:


Sorry, i really don't understand and i need feedback...

I've read many times that tapes are handled better as they are, sequential
media; so they need on storage:

Maximum Concurrent Jobs = 1


Hello Marco,

If you are using DataSpooling for all of your jobs, this setting is somewhat redundant because Bacula will de-spool exactly 
one Job's Data Spool file at a time.


With DataSpooling enabled in all jobs, the only "interleaving" that you will have on your tapes is one big block of Job 1's 
de-spooled data, then maybe another Job 1 block, or a Job 2 block, or a Job 3 block, and so on, depending on which Job's 
DataSpool file reached the defined maximum Job Spool size at particular times throughout the backup window, or when one hits 
the total MaximumSpool size and begins de-spooling.


If, on the other hand, you have many clients and enough network bandwidth, you can disable Data Spooling, and increase the 
Tape Drive's MaximumConcurrentJobs setting and Bacula will stream and interleave the data from all the concurrently running jobs.

But, you can probably never really guarantee that all jobs will be streaming enough data concurrently to saturate the link to 
the Tape Drive, so using DataSpooling to *fast*, local, SSD, flash, or NVMe etc drives is probably a better and more 
consistent solution.
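
For completeness, the spooling variant I would keep looks roughly like this (all names and values are illustrative only):

8<
# Director side: enable spooling for the tape jobs
JobDefs {
  Name = "TapeDefaults"
  ...
  SpoolData = yes             # spool to fast local disk, then despool to tape
}

# SD side: one job despooling to the drive at a time
Device {
  Name = "LTO9-Drive-0"
  ...
  MaximumConcurrentJobs = 1
}
8<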



Hope this helps!
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Backup Error on Full Backups

2024-06-05 Thread Bill Arlofski via Bacula-users



And I see on jobid=777, the Bacula SD reports that the volume is 'read-only', so maybe that tape's lock slider is in the lock 
position?


The jobid=777 logs do not include what tape volume was being used. (which is 
odd, unless you snipped those lines)
8<

04-Jun 01:41 -sd JobId 777: Writing spooled data to Volume. Despooling 
1,000,000,242,693 bytes ...

> 04-Jun 01:41 -sd JobId  777: Fatal error: block.c:163 [SF0205] Attempt to write on 
read-only Volume. dev="LTO9-1" (/dev/nst0)
8<


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Backup Error on Full Backups

2024-06-04 Thread Bill Arlofski via Bacula-users

On 6/5/24 12:08 AM, Dr. Thorsten Brandau wrote:

Hi,
I get from time to time errors when backup up via my LTO-9 autochanger. This happens mostly when I do a full backup (ca. 30 
TB data), sometimes when using differential backup.
I am using the updated mtx-changer-script by Bill Arlofski, as the one out of the box was always running in timeouts when 
tape changing was needed.


This error popped up recently. And persisted now for several full backups, now 
additionally for a differential one.

Anyone any idea where the problem is and how to solve it?

Thank you.
Cheers


Hello Dr. Thorsten Brandau,

What does my script log at this time?

Most likely the mtx unload command is failing due to an issue with the changer/library.  Just a guess of course, but a guess 
based on experience. :)


Also, thanks for letting me know you are using this script. I have no idea who 
has even tested it in the wild.

Keep in mind, you can increase the logging level in the config file if necessary, but I am still going with an mtx changer 
error, and I think this might get logged, regardless.


And, who knows, it seems like you might also be reaching the Bacula SD's Device 'MaximumChangerWait' timeout, and my script 
is being killed by the SD. Perhaps sometimes the time it takes your library to change a tape is right on the edge of the 
default for this Bacula SD timeout threshold and all that needs to be done is to increase that timeout.


But as I read, I see that the default for this timeout is 5 mins, and your log paste seems to show that your SD kills my 
script at about the 8 minute mark. Close enough for horseshoes and hand grenades? Maybe but seems a bit off for computers. :)🤷
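
If that turns out to be the case, the knob lives in the SD's Device resource, something along these lines (600 is only an example; the default is the 5 minutes mentioned above):

8<
Device {
  Name = "LTO9-Drive-0"        # illustrative name
  ...
  MaximumChangerWait = 600     # seconds to wait for the changer before giving up
}
8<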


And, FYI keep an eye on the Gitlab repository because I have been making 
changes/improvements to it. ;)


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Installation of bacula 11.0.6 from community repositories

2024-06-04 Thread Borut Rozman via Bacula-users
Hi everyone,

I am having a trouble installing community version of bacula ver
11.0.6, please don't start the debate about the version, I just want to
get it installed on ubuntu 22.04

I get

The repository 'https://www.bacula.org/packages/MYID/debs/11.0.6 jammy
Release' does not have a Release file

when I try to update the packages


my bacula.lists file is :

#Bacula Community
deb [arch=amd64] https://www.bacula.org/packages/MYID
/debs/11.0.6 jammy main


What am I missing?

Any help appreciated.
B.

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Disk Volumes Showing 1B Read-Only

2024-06-03 Thread Bill Arlofski via Bacula-users

On 6/3/24 12:03 PM, Ken Mandelberg wrote:


My Volumes are all Disk files. Several now show up as 1B Read-Only. In fact, as files "ls -l" shows them at their correct 
size, with modification dates that go back correctly to when they were filled.


This is likely due to my transition from Ubuntu 23.10 to 24.04. There was a period during the transition where the file 
system containing these backup volumes was either not mounted or had ownerships set incorrectly.


I'm guessing that bacula noticed that and marked those backup files 1B Read-Only. These files are the oldest of the backup 
files, the slightly more recent ones are fine.


Is there any way to convince bacula that they are good?



Hello Ken,

What does this bconsole command show?:

* list volume=xxx

If they have a volstatus of `Error`, and they really are good volumes on disk you can just try changing their volstatus back 
to Append with:


* update volstatus=Append volume=


BUT, keep in mind that if they are old, then they will probably be past their retention periods and Bacula will probably 
immediately recycle and re-use them. If this is OK, then you are all set. Otherwise, if the data on them is important to you 
then you should disable these volumes until you are sure there is no data that you might need/want to restore:


* update enabled=no volume=

or

* update volstatus=Read-Only volume=


Then, Bacula will not touch these volumes except to read for restores, copies, 
migrations, or verifies.


Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] binaries for Ubuntu 24.04

2024-05-31 Thread Bill Arlofski via Bacula-users

On 5/31/24 9:59 AM, d...@bornfree.org wrote:


Currently there are no "Ubuntu 24.04 LTS (Noble Numbat)" repositories
for Bacula CE versions 15.0.2 and 13.0.4.  Will there be?



Yes. The builds for Bacula Enterprise packages for this very new platform are currently going through testing. Community 
packages should follow soon. I cannot give an ETA, of course. :)



Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
_______
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] VirtualFull and correct volume management...

2024-05-31 Thread Bill Arlofski via Bacula-users

On 5/31/24 8:56 AM, Marco Gaiarin wrote:



If you *really* want to automatically delete failed jobs (I personally don't 
think this is a good idea), you can use a
RunScript in an Admin type Job like:


Why in and 'Admin' job? I've tried to add something like:

Run After Job = /etc/bacula/scripts/deleteFailedJobs "%c" "%l"

to the backup job; the script effectively gets run, but it seems no parameters get
passed to it.


First, you are passing them incorrectly.  Just quote the whole line like:
8<
Run After Job = "/etc/bacula/scripts/deleteFailedJobs %c %l"
8<

Second, this will most likely *not* work - and it is why I offered an Admin job 
as a solution.

If you do this, I am not sure exactly what will happen because (behind the scenes) the job is really still running when the 
RunAfterJob is triggered. So you would be trying to delete a job from the catalog while it is still running, and most likely, 
the Director would re-insert/update the job after your script deleted it, and I can only imagine what trouble this 
might cause.


Stick with the Admin job and the script is my advice here.



The script (little modification of yours) simply filter by client name (eg,
delete jobs of that client, not overral failed jobs) and run only for
VirtualFull level jobs.

Rationale: if a correct VirtualFull job happen, i can safely delete also the 
failed jobs


VirtualFull jobs will never pull in a failed job. They only collect and 
consolidate Backup jobs
that have terminated "Backup OK" (jobstatus='T' in the catalog), so you can 
delete them any time you like.



time this Admin job is run will be in the Admin Job's joblog. Alternately, you 
can trigger the script from cron, and the
bconsole output will be in the email that cron sends.


The script run by hand works as expected, but clearly i prefere to run from 
bacula.


Yes, as I recommended, but I always try to offer optional solutions when I can. :)


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



signature.asc
Description: OpenPGP digital signature
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] VirtualFull and correct volume management...

2024-05-29 Thread Bill Arlofski via Bacula-users

On 5/24/24 2:39 AM, Marco Gaiarin wrote:



I suspect that the 'job counter' gets reset if and only if all jobs in a
volume get purged; this leads me to think that my configuration simply does
not work in a real situation, because sooner or later jobs get 'scattered'
between volumes, the VirtualFull consolidation job stops working, and so does
job and volume purging.


Sorry, i need feedback on that. I restate this.


Seems to me that if i use 'job based retention' on volumes, eg:

Maximum Volume Jobs = 6

on the pool, it simply does not work, because the 'job counter' on the volume
gets reset if and only if *ALL* jobs on that volume get purged.

If I have a volume in state 'Used' because it has 6 jobs within, and I
purge/delete some jobs but not all, the media state does not switch to 'Append', and
even if I manually put it in 'Append' mode, bacula rejects the volume and puts it back in the
'Used' state because it has reached 'Maximum Volume Jobs'.
If I delete *ALL* jobs in that volume, it gets correctly recycled.



It is right?


There's some 'knob' i can tackle with to make volume management more
'aggressive'?


Why not set in your Pool(s) `MaximumVolumeJobs = 1`

Then control your JobRetention and FileRetention periods as needed in your Pool(s). Typically you will set JobRetention > 
FileRetention if you want to aggressively manage the amount of storage your catalog uses, but it is usually best if possible 
to set JobRetention = FileRetention.


This way, each file volume will have one job on it, and when that job is pruned from the catalog, the volume will be 
pruned/purged/truncated/recycled.   (I am going on zero hours sleep this morning, so some details may be sketchy  :)


Don't try to force Bacula to use some arbitrary (small) number of file volumes. Let Bacula manage your volumes based on your 
chosen Job/File retention times, disk space available, etc.


You will want to limit your volume sizes (MaximumVolumeBytes = xxGB for example), and you will want to limit the number of 
volumes allowed in a Pool (MaximumVolumes = xxx).  This way, with a little bit of calculation, you can make sure that Bacula 
never fills your partition to capacity.  You can monitor this as time goes on, and you can make adjustments as needed.


If you have multiple Pools and you want Bacula to be able to freely move volumes from different pools when they are 
available, and so they don't get "stuck" in one pool forever, you can use the Pool's `ScratchPool` and `RecyclePool` pool 
settings and then create a Scratch Pool that all Pools would point both of those settings to.


If you prefer to have volumes stay in a pool they were initially created in 
forever, ignore that previous paragraph. :)
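
Putting those settings together, a minimal Pool sketch might look like the following. All names, sizes and retention values 
here are purely illustrative, not recommendations, and you can drop the Scratch/Recycle lines if you do not want volumes to 
move between pools:

8<
Pool {
  Name = Full
  Pool Type = Backup
  Maximum Volume Jobs = 1
  Maximum Volume Bytes = 50G      # illustrative
  Maximum Volumes = 200           # illustrative
  Volume Retention = 30 days      # illustrative
  AutoPrune = yes
  Recycle = yes
  Label Format = "Full-"
  ScratchPool = Scratch
  RecyclePool = Scratch
}
8<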

Not sure if anyone has answered, but to delete a job, the bconsole `delete jobid=xxx` is what you want. This will delete the 
Job and Files records from the catalog, and free up any volume(s) used in the job being pruned for re-use as described above.

If you *really* want to automatically delete failed jobs (I personally don't think this is a good idea), you can use a 
RunScript in an Admin type Job like:

8<
Job {
  Name = Admin_Delete_Failed_Jobs
  Type = Admin
  ...other settings...

  RunScript {
    RunsWhen = before
    RunsOnClient = no
    Command = /opt/bacula/scripts/deleteFailedJobs.sh
  }
}
8<

Then, in that `/opt/bacula/scripts/deleteFailedJobs.sh` script, something like:
8<
#!/bin/bash

bcbin="/opt/bacula/bin/bconsole"
bccfg="/opt/bacula/etc/bconsole.conf"

# The "gui on" removes commas in jobids so you don't have to use tr, or sed to 
do it
failed_jobids=$(echo -e "gui on\nlist jobs jobstatus=f\nquit\n" | $bcbin -c $bccfg | grep 
"^| [0-9]" | awk '{print $2}')

for jobid in $failed_jobids; do
  echo -e "delete yes jobid=$jobid\nquit\n" | $bcbin -c $bccfg
done
8<

If you do it this way (using the `Command =` in the Admin Job's RunScript), the bconsole output of the jobs being deleted each 
time this Admin job is run will be in the Admin Job's joblog. Alternately, you can trigger the script from cron, and the 
bconsole output will be in the email that cron sends.



Hope this helps!
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Again LTO9 and performances...

2024-05-18 Thread Josh Fisher via Bacula-users




On 5/17/24 06:29, Marco Gaiarin wrote:

I'm still fiddling with LTO9 and backup performance; I've finally managed
to test a shiny new server with an LTO9 tape drive (a library, actually, but...) and I
can reach 300MB/s with 'btape test', which is pretty cool, even if IBM's
specifications say the drive can do 400 MB/s.

Also, following suggestions, I'm using spooling to keep the tape from spinning
up and down; but this clearly 'doubles' the backup time... is there some way to
do the spooling in parallel? E.g., while creating the next spool file, Bacula
writes the current one to tape?


Not for a single job. When the storage daemon is writing a job's spooled 
data to tape, the client must wait. However, if multiple jobs are 
running in parallel, then the other jobs will continue to spool their 
data while one job is despooling to tape.
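
For reference, these are roughly the knobs involved; the sizes and paths below are assumptions, so check the manual for 
your version before copying them:

8<
# bacula-dir.conf - on the backup Job (or JobDefs):
  Spool Data = yes
  Spool Attributes = yes
  Maximum Concurrent Jobs = 4      # also raise it on the Director, Storage and Client resources

# bacula-sd.conf - on the tape Device:
  Spool Directory = /bacula/spool  # fast local disk, ideally SSD/NVMe
  Maximum Spool Size = 500G
  Maximum Job Spool Size = 100G
  Maximum Concurrent Jobs = 4
8<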


It is not clear that spooling doubles the backup time. That would only 
be true if the client is able to supply data at 300 MB/s, or if multiple 
clients running in parallel can supply a cumulative stream of data at 
300 MB/s. Even then, I am skeptical that LTO9 speeds are possible without spooling.





Anyway, I've hit another problem. It seems that creating the spool file takes an
insane amount of time: the sources to back up are complex directories with millions of
files. The filesystem is ZFS.

'Insane amount' means that with a despool performance of 300MB/s, I get an
overall backup performance of 40MB/s...


It depends greatly on the client, the network, and the type of job. Only 
full jobs will supply a somewhat continuous stream of data. Client 
performance is also a factor. Is the client busy while being backed up? 
How busy is the network?
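
As a rough back-of-the-envelope check, assuming the spool and despool phases of a single job never overlap, the overall 
rate is about 1 / (1/spool_rate + 1/despool_rate). Working backwards from roughly 40 MB/s overall and 300 MB/s despool 
gives a spool rate of about 1 / (1/40 - 1/300), i.e. around 46 MB/s, which points at reading the data from the 
client/filesystem as the bottleneck rather than the tape.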


At the end of data despooling, the attributes are despooled and written 
to the database, so that is also a part of the overall backup 
performance. Check the performance of the database writes.





What can I do to improve the spooling performance? Which factors matter most?


Thanks.







Re: [Bacula-users] missing repositories

2024-05-11 Thread Bill Arlofski via Bacula-users

On 5/11/24 7:46 AM, d...@bornfree.org wrote:


There currently are no "Ubuntu 24.04 LTS (Noble Numbat)" repositories
for Bacula CE versions 15.0.2 and 13.0.4.  Please promptly build the
repositories.  Much appreciated.

---


On 5/9/2024 12:25 PM, d...@bornfree.org wrote:


There are no "Ubuntu 24.04 LTS (Noble Numbat)" repositories for 15.02
and 13.04.  Please promptly build the repositories.  Thank you.

---


Hello,

Repeating the same request every two days is not how this list works.

Please be patient while the people responsible for creating the repositories find time to get this completed. These tasks are 
typically done in peoples' free time. It is possible that they have not even seen your first message yet - they may even be on vacation.


In the meantime, you can easily build and install from source. This is the way I have been using Bacula Community for about 
20 years now. It is not difficult.
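
For example, something along these lines is usually enough to get going (the version number, install prefix and database 
option here are just examples; run ./configure --help for the full list):

8<
tar xzf bacula-15.0.2.tar.gz
cd bacula-15.0.2
./configure --prefix=/opt/bacula --with-postgresql
make
sudo make install
8<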



Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Bacula community repo SHA1 pubkey

2024-05-11 Thread Davide F. via Bacula-users
Hello Rob,

There's already been an issue open for a year on the Bacula bug tracker:

https://gitlab.bacula.org/bacula-community-edition/bacula-community/-/issues/2681

Up to now, I haven't seen any progress or plan to fix this issue, which is
why I've built my own binaries.

If nobody from the Bacula side provides a "fix", I will simply share the RPMs I
have built on my own.

Let's wait a couple of days to see if something happens.

I’ll keep you posted.

Best regards

Davide

On Thu, May 9, 2024 at 17:37 Rob Gerber  wrote:

> Hello,
>
> The bacula community repo currently signs their packages with a SHA1 key.
> SHA1 is deprecated in EL9 onwards, and poses a security risk that only
> increases over time.
>
> Do the community package maintainers have any plans to update the package
> signing process to use a SHA256 or greater SHA cipher? This would be a good
> move for a project which positions itself in the enterprise software space.
>
> I appreciate that this change would entail change and difficulty, and that
> there might be some downsides for users of older bacula distributions, or
> for those who have previously installed bacula using an older key. I do not
> know if it is possible to sign a package with both the old SHA1 key and a
> newer SHA256+ key (I suspect not, but this isn't my field of expertise).
>
> Given that bacula 15.x is in beta, this might be a good time to sign the
> next 15.x release with a new SHA256+ key, so at least packages 15.x onwards
> are signed with a more secure cipher standard.
>
> Here is a brief writeup on the subject. I hope it is useful.
>
> https://www.redhat.com/en/blog/rhel-security-sha-1-package-signatures-distrusted-rhel-9
>
> Regards,
> Robert Gerber
> 402-237-8692
> r...@craeon.net


Re: [Bacula-users] VirtualFull and correct volume management...

2024-05-10 Thread Josh Fisher via Bacula-users



On 5/9/24 10:02, Marco Gaiarin wrote:

I've set up some backup jobs for some (mostly Windows) client computers; I
mean 'client' as in 'not always on'.
...


2) Is there some way I can get the 'jobs in volume X'? I can query jobs for a
  volume, but I've not found a way to query volumes for jobs.


I use the following in my query.sql file:

# 14
:List Jobs stored for a given Volume name
*Enter Volume name:
SELECT DISTINCT Job.JobId as JobId,Job.Name as Name,Job.StartTime as StartTime,
  Job.Type as Type,Job.Level as Level,Job.JobFiles as Files,
  Job.JobBytes as Bytes,Job.JobStatus as Status
 FROM Media,JobMedia,Job
 WHERE Media.VolumeName='%1'
 AND Media.MediaId=JobMedia.MediaId
 AND JobMedia.JobId=Job.JobId
 ORDER by Job.StartTime;

With this, the query command in bconsole will have:

  14: List Jobs stored for a given Volume name

as one of the query command options.
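
If you want to try the query by hand against a PostgreSQL catalog first, something like this should work (the database 
name, user and volume name below are assumptions):

8<
psql -U bacula bacula -c "
SELECT DISTINCT Job.JobId, Job.Name, Job.StartTime, Job.Type, Job.Level,
       Job.JobFiles, Job.JobBytes, Job.JobStatus
  FROM Media, JobMedia, Job
 WHERE Media.VolumeName = 'Vol-0001'
   AND Media.MediaId = JobMedia.MediaId
   AND JobMedia.JobId = Job.JobId
 ORDER BY Job.StartTime;"
8<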







3) In this setup, failed jobs only make noise; is there some way to delete/purge
  failed jobs?

  Or is there some way I can set up the 'RunScript {}' job property to delete
  failed jobs?



Thanks.



Re: [Bacula-users] After configure, make command error fails to install Bacula-Community

2024-05-07 Thread Eric Bollengier via Bacula-users

Hello James,

On 5/3/24 22:57, James Israel via Bacula-users wrote:

I am trying to configure bacula-15.0.2 (from the bacula-15.0.2.tar.gz file 
found at https://www.bacula.org/source-download-center, the “Download Bacula 
Community” link), and after many installs of software that was needed for the 
configure to work, the make script is stalling out at:

make[1]: Entering directory '/tmp/bacula-15.0.2/src/filed'
Compiling restore.c
restore.c: In function ‘bool decompress_data(JCR*, int32_t, char**, 
u_int32_t*)’:
restore.c:1387:13: error: ‘compress_len’ was not declared in this scope; did 
you mean ‘comp_len’?
  1387 | compress_len = jcr->compress_buf_size;
   | ^~~~
   | comp_len
restore.c:1388:13: error: ‘cbuf’ was not declared in this scope
  1388 | cbuf = (const unsigned char*)*data + 
sizeof(comp_stream_header);
   | ^~~~
restore.c:1389:13: error: ‘real_compress_len’ was not declared in this scope
  1389 | real_compress_len = *length - sizeof(comp_stream_header);
   | ^
make[1]: *** [Makefile:189: restore.o] Error 1

Something about the function isn't working, but I can't figure out how to fix 
it. It says some things were "not declared in this scope"; how do I declare 
them? (compress_len, cbuf, real_compress_len)


This is a problem that I'm going to fix, but I believe that something like gzip
is not installed. I would recommend installing it in order to use the
compression feature.
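
On RPM-based systems the compression libraries and their development headers usually come from packages along these lines 
(the exact package names vary by distribution, so treat this as an example), after which configure should pick them up:

8<
dnf install -y zlib-devel lzo-devel
8<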


Is something more needed in the configure operation? I configured with a script 
that included:


Look at the output of configure; you should see whether gzip is detected properly.

Best Regards,
Eric



CFLAGS="-g -Wall" \
   ./configure \
   --sbindir=/opt/bacula/bin \
   --sysconfdir=/etc/bacula \
   --enable-smartalloc \
   --enable-conio \
   --enable-bat \
   --with-postgresql \
   --with-working-dir=/opt/bacula/working \
   --with-scriptdir=/etc/bacula/scripts \
   --with-plugindir=/etc/bacula/plugins \
   --with-pid-dir=/var/run \
   --with-subsys-dir=/var/run \
   --with-dump-email=em...@humortimes.com \
   --with-job-email=em...@humortimes.com \
   --with-smtp-host=mail.humortimes.com \
   --with-aws \
   --with-baseport=9101

James Israel






Re: [Bacula-users] Dell TL2000 Tape Library with HHLTO7 drive

2024-05-04 Thread Bill Arlofski via Bacula-users

On 5/4/24 6:44 PM, Neil Balchin wrote:

Everything is working but on every tape job I get this error:

bacularis-sd JobId 5: Warning: Alert: Volume="LR7782L7" alert=25: ERR=A 
redundant interface port on the tape drive has failed. Failure of one interface port in a 
dual-port configuration, e.g. Fibrechannel.
Any ideas on how to get rid of it? Backups appear to be working just fine; 
I've tested restore jobs several times.


How is your tape drive connected? i.e., is it actually dual-connected fibre 
channel? Is one link actually down?

The Bacula tapealert script, which calls the `tapeinfo` utility, is reporting that there is a TapeAlert[25] error. This is not 
a Bacula issue specifically; it is just an external script/utility, called by the SD, that reports back to the SD what the 
drive itself tells tapeinfo when queried.
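
You can also run the same utility by hand against the drive's scsi-generic node to see all currently active alerts (the 
device node below is just an example; `lsscsi -g` will show the right one on your system):

8<
tapeinfo -f /dev/sg4 | grep -i tapealert
8<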


Who manages the hardware ie: Tape library, drive(s), and Linux server that this 
SD runs on? Maybe they can assist?

I'd have a look in that direction.

Let us know what you find. It is always nice to see an issue's primary cause 
reported after a solution is found.

Good luck! :)


Thank you,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Install Bacula-Community from repository?

2024-05-03 Thread James Israel via Bacula-users
Where it says “YOURLICENSEKEY”, shouldn’t that be where the hash goes? So, it 
could be a variable, filled by the hash creation, like in the other scripts?

James 

From: Rob Gerber
Sent: Friday, May 3, 2024 1:59 PM
To: James Israel
Cc: Davide F.; Mehrdad Ravanbod; bacula-users
Subject: Re: [Bacula-users] Install Bacula-Community from repository?

Personally, I allowed the use of sha1 to sign packages. This was the only way 
to use the bacula community packages from the repository. The packages are 
signed with SHA1. Can't determine authenticity without SHA1. Once the Bacula 
community project issues a SHA512 key or something similar and signs their 
packages with that, then SHA1 will be unnecessary.

Here is my runbook from my notes on installing bacula on rocky linux 9. secrets 
removed and instances of my username have been replaced by 'YOURUSERNAME'. 

Please note that I run bacula 13.x. Bacula 15.x is in beta and I personally 
decided not to deploy it in production. The text below proceeds from that 
perspective.

Bacula rocky 9 install and configuration process


# most of the following has to be done as root. I'm assuming you've done "sudo 
su" prior to start.
# RL 9 doesn't allow sha1 signing by default. gotta enable it for bacula.
update-crypto-policies --set DEFAULT:SHA1

# import bacula project key
cd /tmp
wget 
https://www.bacula.org/downloads/Bacula-4096-Distribution-Verification-key.asc
rpm --import Bacula-4096-Distribution-Verification-key.asc
rm -f Bacula-4096-Distribution-Verification-key.asc

# Add the following entries to your /etc/yum.repos.d/Bacula.repo file:
nano /etc/yum.repos.d/Bacula.repo
# note this URL is customized from the install guide to be for RHEL 9!
[Bacula-Community]
name=CentOS - Bacula - Community
baseurl=https://www.bacula.org/packages/YOURLICENSEKEY/rpms/13.0.3/el9/x86_64/
enabled=1
protect=0
gpgcheck=1


# correct syntax to find all bacula packages on every repo is dnf list|grep -i 
bacula
# we want to disable all the bacula packages in the RHEL appstream repo. they 
can break bacula installs from the bacula community repo.
# lets try adding some exclude lines to /etc/yum.conf. this is symlinked with 
/etc/dnf/dnf.conf so isn't necessary to edit both
nano /etc/yum.conf
exclude=bacula-common.x86_64 bacula-console.x86_64 bacula-director.x86_64 
bacula-libs-sql.x86_64 bacula-logwatch.noarch bacula-storage.x86_64
# with the above string in yum.conf, yum list|grep -i bacula only shows 13.x 
bacula repo packages and doesn't show any appstream repo bacula packages, which 
were version 11.x 
# same applies to dnf. 
#WARNING: BACULA 15.X APPEARS TO FEATURE A BACULA-CONSOLE PACKAGE, WHICH MIGHT 
BE BLACKLISTED BY THE ABOVE PROCESS DURING AN INSTALLATION/UPGRADE OF 15.X

# install postgresql and bacula
yum install postgresql-server -y
service postgresql initdb
#output: Hint: the preferred way to do this is now "/usr/bin/postgresql-setup 
--initdb --unit postgresql"
yum install chkconfig -y
chkconfig postgresql on
yum install bacula-postgresql -y
systemctl start postgresql.service
su - postgres
/opt/bacula/scripts/create_postgresql_database
/opt/bacula/scripts/make_postgresql_tables
/opt/bacula/scripts/grant_postgresql_privileges
exit

# give bacula user a shell so I can su into that user
chsh -s /bin/bash bacula

# add bacula user to tape group
usermod -a -G tape bacula

# start bacula
/opt/bacula/scripts/bacula start

# give my user rwx access to bacula dir. used so I can filezilla into the 
server and edit stuff from windows
setfacl -R -m YOURUSERNAME:rwx /opt/bacula/

# make symlinks to all bacula programs in /usr/sbin so they can be run without 
a full path
cp /opt/bacula/bin/* /usr/sbin -s



Robert Gerber
402-237-8692
r...@craeon.net

On Fri, May 3, 2024, 3:40 PM James Israel via Bacula-users wrote:
Thanks for the suggestion, Davide.
 
However, I had tried that script before (used the one for CentOS, as that OS is 
pretty close to RHEL), and I get the following errors. (I tried it again just 
now, same result):
 
First, SHA1 checksums don't work on this RHEL 9 server, as on many other 
modern OSes, since they've been deemed insecure. So, I get:
 
warning: Signature not supported. Hash algorithm SHA1 not available.
error: /tmp/Bacula-4096-Distribution-Verification-key.asc: key 1 import failed.
 
As a workaround, I downloaded the .asc file to my local Windows machine, which 
can still do SHA1, and used the resulting hash in the URL in the script, 
commenting out the hash creation parts.
 
After doing that and running the script again, I get:
 
Errors during downloading metadata for repository 'Bacula-Community':
  - Status code: 404 for 
https://www.bacula.org/packages/bf417a80d9108b58a8a3fc8b78110f9f5b181ae1/rpms/11.0.5/el7/repodata/repomd.xml
 (IP: 94.103.98.87)
Error: Failed to download metadata for repo 'Bacula-Community': Cannot download 
repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried

Re: [Bacula-users] Install Bacula-Community from repository?

2024-05-03 Thread James Israel via Bacula-users
Thanks, Robert, I’ll give this a try. I assume this should be saved as an 
executable script?

James 

