Re: [Bacula-users] Again LTO9 and performances...

2024-06-25 Thread Josh Fisher via Bacula-users


On 6/24/24 11:04, Marco Gaiarin wrote:

Hello! Josh Fisher via Bacula-users wrote:


Except when the MaximumSpoolSize for the Device resource is reached or
the Spool Directory becomes full. When there is no more storage space
for data spool files, all jobs writing to that device are paused and the
spool files for each of those jobs are written to the device, one at a
time. I'm not sure if every job's spool file is written to tape at that
time, or if the forced despooling stops once some threshold for
sufficient space has been freed up. So, more accurately, other jobs
continue spooling IFF there is sufficient space in the spool directory.

Ok, doing some more tests in real conditions (and still wondering why log
rows are not ordered by date...) I've split a job:

...


Sincerely, I hoped there was some 'smarter' way to manage despooling in
Bacula; the only way to boost performance seems to be tuning MaximumSpoolSize
so that despooling to tape takes more or less the same time as the other
jobs take to spool, so that the interleaved dead time is minimized.

Seems a hard task. ;-)



When you set MaximumSpoolSize in the Device resource, that sets the 
maximum storage available for all spool files, not for each spool file. 
When that is reached, it means there is no more space for any job's 
spool file, so they all must pause and write their spool files to tape.


Another way that might be better for your case is to leave 
MaximumSpoolSize = 0 (unlimited) and specify a MaximumJobSpoolSize in 
the Device resource instead. The difference is that when one job reaches 
the MaximumJobSpoolSize it will begin writing its spool file to tape, 
but it will not cause the other job(s) to pause until they also reach 
the MaximumJobSpoolSize. Then by starting one job a few minutes after 
the other it should be possible to mostly avoid both jobs pausing 
(de-spooling) at the same time.
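
A minimal Device resource sketch of that arrangement (the device name, paths,
and sizes here are illustrative, not taken from your setup):

  Device {
    Name = LTO9Drive
    Media Type = LTO-9
    Archive Device = /dev/nst0
    Spool Directory = /var/spool/bacula
    # Per the above: no overall cap, so one job's despool
    # does not force the other jobs to pause
    Maximum Spool Size = 0
    # Each job despools on its own once it has spooled this much
    Maximum Job Spool Size = 200 GB
  }

Staggering the job start times then keeps the per-job despool cycles from
lining up.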


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Again LTO9 and performances...

2024-06-21 Thread Josh Fisher via Bacula-users




On 6/20/24 18:58, Bill Arlofski via Bacula-users wrote:

On 6/20/24 8:58 AM, Marco Gaiarin wrote:

Once that is hit, the spool files are written to tape, during which active
jobs have to wait because the spool is full.

There's no way to 'violate' this behaviour, right?! A single SD process
cannot spool and despool at the same time?


An SD can be spooling multiple jobs while *one* and only one job spool 
file is despooling to one drive.


Add another drive and the same is still true, but the SD can now 
be despooling two jobs at the same time while other jobs are spooling, 
and so on as you add drives.




Except when the MaximumSpoolSize for the Device resource is reached or 
the Spool Directory becomes full. When there is no more storage space 
for data spool files, all jobs writing to that device are paused and the 
spool files for each of those jobs are written to the device, one at a 
time. I'm not sure if every job's spool file is written to tape at that 
time, or if the forced despooling stops once some threshold for 
sufficient space has been freed up. So, more accurately, other jobs 
continue spooling IFF there is sufficient space in the spool directory.




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Again LTO9 and performances...

2024-06-13 Thread Josh Fisher via Bacula-users



On 6/13/24 08:13, Gary R. Schmidt wrote:

On 13/06/2024 20:12, Stefan G. Weichinger wrote:


interested as well, I need to speed up my weekly/monthly FULL runs 
(with LTO6, though: way slower anyway).


Shouldn't the file daemon do multiple jobs in parallel?

To tape you can only write ONE stream of data.

To the spooling disk there could be more than one stream.


Yes, that seems wrong:
$ grep Concurrent *.conf
bacula-dir.conf:  Maximum Concurrent Jobs = 50
bacula-dir.conf:  Maximum Concurrent Jobs = 50
bacula-fd.conf:  Maximum Concurrent Jobs = 50
bacula-sd.conf:  Maximum Concurrent Jobs = 50


Sorry, I still don't understand what to adjust ;-)

that interleaving to tape sounds dangerous to me.


That's how Bacula works - and has since day one.

We've been using it like that since 2009, starting with an LTO-4 
autoloader, currently using an LTO-6, and I'm about to start agitating 
to upgrade to LTO-9.


Interleaving is not really an issue when data spooling is enabled. Data 
is despooled to tape one job at a time. Only when the spool size is too 
small will there be any interleaving. Even then, the interleaving will 
be a whole bunch of one job's blocks followed by a whole bunch of 
another. It's not a problem, and with sufficient disk space for the 
spool, it doesn't even happen.
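
For readers following along, data spooling is enabled per job; a minimal
sketch (the directive names are standard Bacula, values illustrative):

  JobDefs {
    Name = DefaultJob
    # Spool job data to the SD's disk first; it is despooled to tape
    # one job at a time, which avoids interleaving
    Spool Data = yes
    # Spool catalog attribute records as well
    Spool Attributes = yes
  }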




What I want to have: the fd(s) should be able to dump backups to the 
spooling directory WHILE in parallel the sd despools previous backup 
jobs from the spooling directory to tape (assuming I have only one tape 
drive, which is the case)


Bacula does not work that way.  No doubt if you tried really hard with 
priority and concurrency and pools you could maybe make it work like 
that, but just RTFM and use it as designed.


Why not? According to 
https://www.bacula.org/15.0.x-manuals/en/main/Data_Spooling.html it 
works exactly that way already. Most importantly, concurrent jobs 
continue to spool while one job is despooling to tape. Only one job is 
ever despooling at a given time.


On the other hand, the job that is despooling has exclusive control of 
the tape drive. On the last despool for a job (there may be more than one 
if the job data exceeds the maximum spool size), the job has to also 
despool the job's spooled attributes to the catalog database before it 
releases the tape drive. Thus, even when other concurrent jobs are 
waiting to be despooled, the tape drive will be idle (or at least 
writing from its internal buffer) while the database writes occur. This 
is one of the reasons that database performance is so important in 
Bacula. I believe that the attributes are despooled before releasing the 
tape drive in order to ensure that despooling of both data and 
attributes is an atomic operation at job completion, probably to avoid 
race conditions.






-> parallelizing things more

It all seems quite parallel to me.


Cheers,
    Gary    B-)


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Again LTO9 and performances...

2024-05-18 Thread Josh Fisher via Bacula-users




On 5/17/24 06:29, Marco Gaiarin wrote:

I'm still fiddling with LTO9 and backup performance; I've finally managed
to test a shiny new server with an LTO9 tape drive (a library, actually) and I
can reach 300 MB/s with 'btape test', which is pretty cool, even if the IBM
specification says the tape could perform at 400 MB/s.

Also, following suggestions, I'm using spooling to prevent the tape from
spinning up and down, but this clearly 'doubles' the backup time... is there
some way to do spooling in parallel? E.g., while creating the next spool file,
Bacula writes the current one to tape?


Not for a single job. When the storage daemon is writing a job's spooled 
data to tape, the client must wait. However, if multiple jobs are 
running in parallel, then the other jobs will continue to spool their 
data while one job is despooling to tape.


It is not clear that spooling doubles the backup time. That would only 
be true if the client is able to supply data at 300 MB/s, or if multiple 
clients running in parallel can supply a cumulative stream of data at 
300 MB/s. Even then, I am skeptical that LTO9 rates are possible without spooling.
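
As a rough model (my numbers, purely for illustration): if a single job's
spooling and despooling are strictly serialized, the effective rate is

  rate = (spool_rate x despool_rate) / (spool_rate + despool_rate)

so with both phases at 300 MB/s you get 150 MB/s, i.e. the elapsed time
doubles. Overlapping other jobs' spooling with one job's despooling is what
claws that time back.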





Anyway, I've hit another problem. It seems that creating the spool file takes an
insane amount of time: the sources to back up are complex directory trees with
millions of files. The filesystem is ZFS.

'Insane amount' means that with a despool performance of 300 MB/s, I get an
overall backup performance of 40 MB/s...


It depends greatly on the client, the network, and the type of job. Only 
full jobs will supply a somewhat continuous stream of data. Client 
performance is also a factor. Is the client busy while being backed up? 
How busy is the network?


At the end of data despooling, the attributes are despooled and written 
to the database, so that is also a part of the overall backup 
performance. Check the performance of the database writes.
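
One crude way to get a feel for catalog performance (assuming a PostgreSQL
catalog database named bacula; adjust names for your setup):

  $ sudo -u postgres psql bacula
  bacula=# \timing on
  bacula=# SELECT count(*) FROM File;

If even a simple scan of the File table is slow, attribute despooling at the
end of the job will be slow too.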





What can I do to improve the spooling performance? Which factors matter most?


Thanks.





___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] VirtualFull and correct volume management...

2024-05-10 Thread Josh Fisher via Bacula-users



On 5/9/24 10:02, Marco Gaiarin wrote:

I've set up some backup jobs for some (mostly Windows) client computers; I
mean 'client' as in 'not always on'.
...


2) Is there some way I can get the 'jobs in volume X'? I can query the volumes
  used by a job, but I've not found a way to query the jobs stored on a volume


I use the following in my query.sql file:

# 14
:List Jobs stored for a given Volume name
*Enter Volume name:
SELECT DISTINCT Job.JobId as JobId,Job.Name as Name,Job.StartTime as StartTime,
  Job.Type as Type,Job.Level as Level,Job.JobFiles as Files,
  Job.JobBytes as Bytes,Job.JobStatus as Status
 FROM Media,JobMedia,Job
 WHERE Media.VolumeName='%1'
 AND Media.MediaId=JobMedia.MediaId
 AND JobMedia.JobId=Job.JobId
 ORDER by Job.StartTime;

With this, the query command in bconsole will have:

  14: List Jobs stored for a given Volume name

as one of the query command options.







3) In this setup failed jobs only make noise; is there some way to delete/purge
  failed jobs?

  Or is there some way I can set up the 'RunScript {}' job property to delete
  failed jobs?



Thanks.

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] advice about tape drives

2024-04-22 Thread Josh Fisher via Bacula-users
Why not migrate the LTO-2 volumes to disk, then install whatever version 
of tape drive you wish and migrate the disk volumes to the new LTO tapes?



On 4/22/24 11:29, Gary R. Schmidt wrote:

On 23/04/2024 00:58, Alan Polinsky wrote:
I have used Bacula for many years, since version 5. In the past, I 
have mentioned that my two NASes, along with various Windows and Linux 
machines, get backed up on a nightly basis to tape. Currently that 
tape drive is an LTO3-based drive. Some of the older backups are on 
LTO2 tapes. My tape drive is starting to show its age, and within a 
period of time it will have to be replaced. (Since I am a retired 
programmer on a fixed income, cost, as always, becomes an issue.) I 
need to understand the backward compatibility of more recent drives. 
How high could I go with LTO-based machines while still maintaining 
the ability to read (and hopefully write) those old LTO2 tapes?



Thank you everyone for your help.



All anyone could ever want to know about LTO tapes is on the Wikipedia
Linear Tape-Open page.


The rule of thumb is read two generations back, and write one back, but that 
changed with LTO-8.  Sort of.  Sigh.  Read the Wikipedia page.


Cheers,
    Gary    B-)


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Any suggestions for fail2ban jail for Bacula Director ?

2024-04-03 Thread Josh Fisher via Bacula-users
Nothing against fail2ban, which is quite good at mitigating brute force 
and dictionary attacks against password protection, but for opening Dir 
to the public internet, I would most definitely suggest looking into 
using TLS certificates issued by your own private CA instead.
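
For reference, the Director side of that looks roughly like this (paths are
illustrative; clients need matching TLS directives with certificates issued
by the same CA):

  Director {
    Name = bacula-dir
    ...
    TLS Enable = yes
    TLS Require = yes
    TLS Verify Peer = yes
    TLS CA Certificate File = /opt/bacula/etc/ssl/ca.crt
    TLS Certificate = /opt/bacula/etc/ssl/bacula-dir.crt
    TLS Key = /opt/bacula/etc/ssl/bacula-dir.key
  }

With TLS Require and Verify Peer set, a connection without a certificate
signed by your CA is rejected before any password exchange happens.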



On 4/2/24 19:05, MylesDearBusiness via Bacula-users wrote:

I nailed this.

I created a cron job that, every ten minutes or so, runs "journalctl 
-u bacula-dir > /opt/bacula/log/bacula-dir-journal.log" (since I 
opened bacula-dir's firewall port up to the public internet).


I then created a fail2ban jail that scanned for authentication failure 
patterns and banned (via temporary firewall rules) users who 
repeatedly failed to log in successfully.


root:/etc/fail2ban/jail.d# cat bacula.conf
[bacula]
enabled  = true
port = 9101
filter   = bacula
logpath  = /opt/bacula/log/bacula-dir-journal.log
maxretry = 10
findtime = 3600
bantime  = 900
action = iptables-allports

root:/etc/fail2ban/filter.d# cat /etc/fail2ban/filter.d/bacula.conf

# Fail2Ban filter for Bacula Director
[Definition]
failregex = Hello from client: is invalid
ignoreregex =

root:/etc/fail2ban/filter.d#
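
To confirm the jail is loaded and watching the log, standard fail2ban-client
usage works:

  $ fail2ban-client status bacula

which lists the currently failed and banned addresses for the jail.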

Best,



On 2023-12-04 12:22 p.m., MylesDearBusiness wrote:

Hello,

I just installed Bacula director on one of my cloud servers.

I have set the firewall to allow traffic in/out of port 9101 to allow 
it to be utilized to orchestrate remote backups as well.


What I want to do is to identify the potential attack surface and 
create a fail2ban jail configuration.


Does anybody have an exemplar that I can work with?

Also, is there a way to simulate a failed login attempt with a tool 
such as netcat?  I could possibly use Postman and dig into the REST 
API spec, but I was hoping the community would be able to shortcut 
this effort.


What say you?

Thanks,






___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Again vchanger and volumes in error...

2024-04-03 Thread Josh Fisher via Bacula-users


I have found the problem. See below.

On 4/2/24 05:33, Marco Gaiarin wrote:

Hello! Josh Fisher via Bacula-users wrote:


This is the Easter weekend in Italy, so backups will fail in most of my
sites; I'm enabling debug for the sites, I'll come back here on Monday...

When the magazine is ejected and no magazine is in the drive, the output of
the 'list media' command from bconsole should be saved to see if it shows all
volumes as not in changer, if that is possible for you.

...


So, trying to determine what happened... cartridge 1 (magazine 0) ejected on
Friday morning:

Mar 29 07:00:02:  [30941]: restored state of magazine 0
Mar 29 07:00:02:  [30941]: filesystem ea70e0c4-8076-4448-9b7e-4bb268a56c18 has 
udev assigned device /dev/sdc1
Mar 29 07:00:02:  [30941]: filesystem ea70e0c4-8076-4448-9b7e-4bb268a56c18 
(device /dev/sdc1) mounted at /mnt/vchanger/ea70e0c4-8076-4448-9b7e-4bb268a56c18
Mar 29 07:00:02:  [30941]: magazine 0 has 10 volumes on 
/mnt/vchanger/ea70e0c4-8076-4448-9b7e-4bb268a56c18
Mar 29 07:00:02:  [30941]: 10 volumes on magazine 0 assigned slots 1-10
Mar 29 07:00:02:  [30941]: magazine 1 is not mounted
Mar 29 07:00:02:  [30941]: magazine 2 is not mounted
Mar 29 07:00:02:  [30941]: saved state of magazine 0
Mar 29 07:00:02:  [30941]: saved dynamic configuration (max used slot: 10)
Mar 29 07:00:02:  [30941]: found symlink for drive 0 -> 
/mnt/vchanger/ea70e0c4-8076-4448-9b7e-4bb268a56c18/VIPVE2RDX__0001
Mar 29 07:00:02:  [30941]: drive 0 previously loaded from slot 2 
(VIPVE2RDX__0001)
Mar 29 07:00:02:  [30941]: found symlink for drive 1 -> 
/mnt/vchanger/ea70e0c4-8076-4448-9b7e-4bb268a56c18/VIPVE2RDX__0006
Mar 29 07:00:02:  [30941]: drive 1 previously loaded from slot 7 
(VIPVE2RDX__0006)
Mar 29 07:00:02:  [30941]: found symlink for drive 2 -> 
/mnt/vchanger/ea70e0c4-8076-4448-9b7e-4bb268a56c18/VIPVE2RDX__0007
Mar 29 07:00:02:  [30941]: drive 2 previously loaded from slot 8 
(VIPVE2RDX__0007)
Mar 29 07:00:02:  [30941]:  preforming UNLOAD command
Mar 29 07:00:02:  [30941]: deleted symlink for drive 0
Mar 29 07:00:02:  [30941]: deleted state file for drive 0
Mar 29 07:00:02:  [30941]: unloaded drive 0
Mar 29 07:00:02:  [30941]:   SUCCESS unloading slot 2 from drive 0
Mar 29 07:00:05:  [31075]: restored state of magazine 0
Mar 29 07:00:05:  [31075]: filesystem ea70e0c4-8076-4448-9b7e-4bb268a56c18 has 
udev assigned device /dev/sdc1
Mar 29 07:00:05:  [31075]: device /dev/sdc1 not found in system mounts, 
searching all udev device aliases
Mar 29 07:00:05:  [31075]: filesystem ea70e0c4-8076-4448-9b7e-4bb268a56c18 
(device /dev/sdc1) not mounted
Mar 29 07:00:05:  [31075]: magazine 0 is not mounted
Mar 29 07:00:05:  [31075]: update slots needed. magazine 0 no longer mounted; 
previous: 10 volumes in slots 1-10
Mar 29 07:00:05:  [31075]: magazine 1 is not mounted
Mar 29 07:00:05:  [31075]: magazine 2 is not mounted
Mar 29 07:00:05:  [31075]: saved dynamic configuration (max used slot: 10)
Mar 29 07:00:05:  [31075]: drive 0 previously unloaded
Mar 29 07:00:05:  [31075]: volume VIPVE2RDX__0006 no longer available, 
unloading drive 1
Mar 29 07:00:05:  [31075]: deleted symlink for drive 1
Mar 29 07:00:05:  [31075]: volume VIPVE2RDX__0007 no longer available, 
unloading drive 2
Mar 29 07:00:05:  [31075]: deleted symlink for drive 2
Mar 29 07:00:05:  [31075]:  preforming REFRESH command
Mar 29 07:00:05:  [31075]: running '/usr/sbin/bconsole -n -u 30'
Mar 29 07:00:05:  [31075]: popen: child stdin uses pipe (4 -> 5)
Mar 29 07:00:05:  [31075]: popen: child stdout uses pipe (6 -> 7)
Mar 29 07:00:05:  [31075]: popen: forking now
Mar 29 07:00:05:  [31075]: popen: parent closing pipe ends 4,7,-1 used by child
Mar 29 07:00:05:  [31075]: popen: parent writes child's stdin to 5
Mar 29 07:00:05:  [31075]: popen: parent reads child's stdout from 6
Mar 29 07:00:05:  [31075]: popen: parent returning pid=31076 of child
Mar 29 07:00:05:  [31075]: sending bconsole command 'update slots storage="VIPVE2RDX" 
drive="0"'
Mar 29 07:00:05:  [31076]: popen: child closing pipe ends 5,6,-1 used by parent
Mar 29 07:00:05:  [31076]: popen: child will read stdin from 4
Mar 29 07:00:05:  [31076]: popen: child will write stdout to 7
Mar 29 07:00:05:  [31076]: popen: child executing '/usr/sbin/bconsole'
Mar 29 07:00:06:  [31079]: filesystem ea70e0c4-8076-4448-9b7e-4bb268a56c18 has 
udev assigned device /dev/sdc1
Mar 29 07:00:06:  [31079]: device /dev/sdc1 not found in system mounts, 
searching all udev device aliases
Mar 29 07:00:06:  [31079]: filesystem ea70e0c4-8076-4448-9b7e-4bb268a56c18 
(device /dev/sdc1) not mounted
Mar 29 07:00:06:  [31079]: magazine 0 is not mounted
Mar 29 07:00:06:  [31079]: magazine 1 is not mounted
Mar 29 07:00:06:  [31079]: magazine 2 is not mounted
Mar 29 07:00:06:  [31079]: saved dynamic configuration (max used slot: 10)
Mar 29 07:00:06:  [31079]: drive 0 previously unloaded
Mar 29 07:00:06:  [31079]:  preforming SLO

Re: [Bacula-users] Again vchanger and volumes in error...

2024-03-28 Thread Josh Fisher via Bacula-users



On 3/28/24 09:30, Marco Gaiarin wrote:

I can't explain that, unless the volumes that are in changer, those
assigned to a slot > 0, are not getting their current slot set to zero
when the magazine is ejected.

Exactly.


This is the Easter weekend in Italy, so backups will fail in most of my
sites; I'm enabling debug for the sites, I'll come back here on Monday...


When the magazine is ejected and no magazine is in the drive, the output of
the 'list media' command from bconsole should be saved to see if it shows all
volumes as not in changer, if that is possible for you.
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Again vchanger and volumes in error...

2024-03-26 Thread Josh Fisher via Bacula-users



On 3/21/24 16:36, Marco Gaiarin wrote:

Hello! Josh Fisher via Bacula-users wrote:


'Log Level = LOG_DEBUG' in the vchanger.conf file. That will log everything

You mean 'log level = 7', right?

Yes. The configuration parser will also understand "LOG_DEBUG" (same as 7).



I've found an installation where I'd forgotten 'log level = 7' enabled; so, last
Friday:

...

Mar 15 07:00:06:  [6778]: bconsole output:
Connecting to Director bacula.lnf.it:9101
1000 OK: 103 lnfbacula-dir Version: 9.4.2 (04 February 2019)
Enter a period to cancel a command.
update slots storage="SDPVE2RDX" drive="0"
Automatically selected Catalog: BaculaLNF
Using Catalog "BaculaLNF"
Connecting to Storage daemon SDPVE2RDX at sdpve2.sd.lnf.it:9103 ...
3306 Issuing autochanger "slots" command.
Device "RDXAutochanger" has 10 slots.
Connecting to Storage daemon SDPVE2RDX at sdpve2.sd.lnf.it:9103 ...
3306 Issuing autochanger "list" command.
No Volumes found to label, or no barcodes.
You have messages.

Mar 15 07:00:06:  [6778]: bconsole update slots command success
...


OK. That looks correct. I wish we knew what Bacula thought was 
in-changer at this point in time. The 'update slots' command succeeded 
and the LIST command that Bacula sent to vchanger succeeded and listed no 
volumes. Bacula should show that no volumes are in changer at this point, 
and the slot number of every volume should be zero.




And then the operator inserts the new cartridge:

Mar 15 11:01:59:  [31055]: filesystem 5ba3d8a2-80e2-4322-9d83-839d2f6dc8b4 has 
udev assigned device /dev/sdc1
Mar 15 11:01:59:  [31055]: filesystem 5ba3d8a2-80e2-4322-9d83-839d2f6dc8b4 
(device /dev/sdc1) mounted at /mnt/vchanger/5ba3d8a2-80e2-4322-9d83-839d2f6dc8b4
Mar 15 11:01:59:  [31055]: magazine 0 has 10 volumes on 
/mnt/vchanger/5ba3d8a2-80e2-4322-9d83-839d2f6dc8b4
Mar 15 11:01:59:  [31055]: update slots needed. magazine 0 has 10 volumes, 
previously had 0
Mar 15 11:01:59:  [31055]: magazine 1 is not mounted
Mar 15 11:01:59:  [31055]: magazine 2 is not mounted
Mar 15 11:01:59:  [31055]: 10 volumes on magazine 0 assigned slots 1-10
Mar 15 11:01:59:  [31055]: saved state of magazine 0
Mar 15 11:01:59:  [31055]: saved dynamic configuration (max used slot: 10)
Mar 15 11:01:59:  [31055]: drive 0 previously unloaded
Mar 15 11:01:59:  [31055]:  preforming REFRESH command
Mar 15 11:01:59:  [31055]: running '/usr/sbin/bconsole -n -u 30'
Mar 15 11:01:59:  [31055]: popen: child stdin uses pipe (4 -> 5)
Mar 15 11:01:59:  [31055]: popen: child stdout uses pipe (6 -> 7)
Mar 15 11:01:59:  [31055]: popen: forking now
Mar 15 11:01:59:  [31055]: popen: parent closing pipe ends 4,7,-1 used by child
Mar 15 11:01:59:  [31055]: popen: parent writes child's stdin to 5
Mar 15 11:01:59:  [31055]: popen: parent reads child's stdout from 6
Mar 15 11:01:59:  [31055]: popen: parent returning pid=31056 of child
Mar 15 11:01:59:  [31055]: sending bconsole command 'update slots storage="SDPVE2RDX" 
drive="0"'
Mar 15 11:01:59:  [31056]: popen: child closing pipe ends 5,6,-1 used by parent
Mar 15 11:01:59:  [31056]: popen: child will read stdin from 4
Mar 15 11:01:59:  [31056]: popen: child will write stdout to 7
Mar 15 11:01:59:  [31056]: popen: child executing '/usr/sbin/bconsole'
Mar 15 11:02:00:  [31076]: restored state of magazine 0
Mar 15 11:02:00:  [31076]: filesystem 5ba3d8a2-80e2-4322-9d83-839d2f6dc8b4 has 
udev assigned device /dev/sdc1
Mar 15 11:02:00:  [31076]: filesystem 5ba3d8a2-80e2-4322-9d83-839d2f6dc8b4 
(device /dev/sdc1) mounted at /mnt/vchanger/5ba3d8a2-80e2-4322-9d83-839d2f6dc8b4
Mar 15 11:02:00:  [31076]: magazine 0 has 10 volumes on 
/mnt/vchanger/5ba3d8a2-80e2-4322-9d83-839d2f6dc8b4
Mar 15 11:02:00:  [31076]: 10 volumes on magazine 0 assigned slots 1-10
Mar 15 11:02:00:  [31076]: magazine 1 is not mounted
Mar 15 11:02:00:  [31076]: magazine 2 is not mounted
Mar 15 11:02:00:  [31076]: saved state of magazine 0
Mar 15 11:02:00:  [31076]: saved dynamic configuration (max used slot: 10)
Mar 15 11:02:00:  [31076]: drive 0 previously unloaded
Mar 15 11:02:00:  [31076]:  preforming SLOTS command
Mar 15 11:02:00:  [31076]:   SUCCESS reporting 10 slots
Mar 15 11:02:00:  [31078]: restored state of magazine 0
Mar 15 11:02:00:  [31078]: filesystem 5ba3d8a2-80e2-4322-9d83-839d2f6dc8b4 has 
udev assigned device /dev/sdc1
Mar 15 11:02:00:  [31078]: filesystem 5ba3d8a2-80e2-4322-9d83-839d2f6dc8b4 
(device /dev/sdc1) mounted at /mnt/vchanger/5ba3d8a2-80e2-4322-9d83-839d2f6dc8b4
Mar 15 11:02:00:  [31078]: magazine 0 has 10 volumes on 
/mnt/vchanger/5ba3d8a2-80e2-4322-9d83-839d2f6dc8b4
Mar 15 11:02:00:  [31078]: 10 volumes on magazine 0 assigned slots 1-10
Mar 15 11:02:00:  [31078]: magazine 1 is not mounted
Mar 15 11:02:00:  [31078]: magazine 2 is not mounted
Mar 15 11:02:00:  [31078]: saved state of magazine 0
Mar 15 11:02:00:  [31078]: saved dynamic configuration (max used slot: 10)
M

Re: [Bacula-users] Again vchanger and volumes in error...

2024-03-20 Thread Josh Fisher via Bacula-users



On 3/19/24 08:31, Marco Gaiarin wrote:

Hello! Josh Fisher via Bacula-users wrote:


This looks like the volume is marked as being in a slot in the Bacula
catalog, but the RDX cartridge containing that volume is not actually
mounted. This can happen if a cartridge is removed but an 'update slots'
command is never run, or else failed due to an error.

Also replying to Bill: no, the scripts seem to work as expected, and so do
the udev rules that run them.

Apart from strange things that sometimes happen, what happens is simple: on
Friday morning I eject the cartridge via a script; the operator then finds the
cartridge ejected, and changes it.

If the wrong cartridge is changed (e.g., removing '3' and putting in '2' instead
of '1') it could be that the umount script/udev rule does not act, but surely
the mount script/rule acts as expected: I found the volumes of cartridge 2
correctly 'inchanger'.


It would be a bug if Bacula is trying to mount a volume not inchanger, but
even though it seems that way, that might not be true. You can set
'Log Level = LOG_DEBUG' in the vchanger.conf file. That will log everything
that vchanger does. The udev script will run vchanger with the REFRESH command.
If you don't see a REFRESH command being logged in the vchanger log file when
the cartridge is removed, then Bill is correct, the RDX device is not
generating an ACTION="remove" event in udev when the cartridge is removed.



Simply, they are not purgeable, so Bacula starts to purge volumes in cartridge
1 (right) and mount them (wrong), putting them in error.

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Again vchanger and volumes in error...

2024-03-19 Thread Josh Fisher via Bacula-users




On 3/18/24 14:36, Bill Arlofski via Bacula-users wrote:

This is in response to Josh...

In my experience, with RDX, the docking bay itself shows up as a 
device... (/dev/usbX or /dev/sdX, I forget)


But plugging/unplugging an RDX cartridge does not notify the kernel in 
any way, so udev rules are not possible to do anything automatically 
with RDX.


This was my experience about 8 or more years ago, which is why I 
abandoned any attempts to use RDX with my own customers and went with 
plain old removable eSATA drives, fully encrypted with LUKS, and 
auto-mounted with autofs.


Do you remember if you checked for an ACTION="change" event on media 
change? That would be sufficient to trigger a launch of vchanger REFRESH 
to perform the update slots. It would be a feature of the device driver 
and may or may not exist. If not, then there's definitely no way to 
automate it and the update slots must be run manually from bconsole any 
time a cartridge is inserted (or removed).
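
For reference, the manual bconsole equivalent is the same command vchanger
issues itself; substitute your own Director Storage resource name (the logs
elsewhere in this thread use names like SDPVE2RDX):

  * update slots storage="SDPVE2RDX" drive="0"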





I'd love to know if something has changed in this regard in the past 8 
years or so. :)




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Again vchanger and volumes in error...

2024-03-18 Thread Josh Fisher via Bacula-users



On 3/15/24 12:28, Marco Gaiarin wrote:

Following the hint on:

https://gitlab.bacula.org/bacula-community-edition/bacula-community/-/issues/2683

i (re)post here, seeking feedback.


Situation: Bacula 9.4 (Debian buster), using RDX cassettes/disks for backup,
using the wonderful 'vchanger' virtual autochanger script.

Following the vchanger doc:
https://vchanger.sourceforge.io/

https://sourceforge.net/projects/vchanger/files/vchangerHowto.html/download

one needs to create a virtual changer device for every 'media' in the
'media pool'; so my SD configuration is:

  Autochanger {
Name = RDXAutochanger
Description = "RDX Virtual Autochanger on ODPVE2"
Device = RDXStorage0
Device = RDXStorage1
Device = RDXStorage2
Changer Command = "/usr/bin/vchanger %c %o %S %a %d"
Changer Device = "/etc/vchanger/ODPVE2RDX.conf"
  }
  Device {
Name = RDXStorage0
Description = "RDX 0 File Storage on ODPVE2"
Drive Index = 0
Device Type = File
Media Type = RDX
RemovableMedia = no
RandomAccess = yes
Maximum Concurrent Jobs = 1
Archive Device = "/var/spool/vchanger/ODPVE2RDX/0"
  }
  Device {
Name = RDXStorage1
Description = "RDX 1 File Storage on ODPVE2"
Drive Index = 1
Device Type = File
Media Type = RDX
RemovableMedia = no
RandomAccess = yes
Maximum Concurrent Jobs = 1
Archive Device = "/var/spool/vchanger/ODPVE2RDX/1"
  }
  Device {
Name = RDXStorage2
Description = "RDX 2 File Storage on ODPVE2"
Drive Index = 2
Device Type = File
Media Type = RDX
RemovableMedia = no
RandomAccess = yes
Maximum Concurrent Jobs = 1
Archive Device = "/var/spool/vchanger/ODPVE2RDX/2"
  }

every 'media' in the 'media pool' has some volumes on it, more or less
like inserting a set of tapes into a (real) autochanger.

So when I insert a cartridge, I get:

  root@odpve2:~# bconsole
  Connecting to Director bacula.lnf.it:9101
  1000 OK: 103 lnfbacula-dir Version: 9.4.2 (04 February 2019)
  Enter a period to cancel a command.
  *list media pool=VEN-OD-ODPVE2RDXPool
  Automatically selected Catalog: BaculaLNF
  Using Catalog "BaculaLNF"
  
+---------+---------------------+-----------+---------+-----------------+----------+--------------+---------+------+-----------+-----------+---------+----------+---------------------+-----------+
  | mediaid | volumename          | volstatus | enabled | volbytes        | volfiles | volretention | recycle | slot | inchanger | mediatype | voltype | volparts | lastwritten         | expiresin |
  +---------+---------------------+-----------+---------+-----------------+----------+--------------+---------+------+-----------+-----------+---------+----------+---------------------+-----------+
  |      25 | ODPVE2RDX__         | Used      |       1 |  15,258,511,119 |        3 |    1,728,000 |       1 |    0 |         0 | RDX       |       1 |        0 | 2024-03-04 23:09:08 |   798,842 |
  |      26 | ODPVE2RDX__0001     | Used      |       1 |  17,769,884,030 |        4 |    1,728,000 |       1 |    0 |         0 | RDX       |       1 |        0 | 2024-03-06 23:11:49 |   971,803 |
  |      27 | ODPVE2RDX__0002     | Used      |       1 |  65,296,705,760 |       15 |    1,728,000 |       1 |    0 |         0 | RDX       |       1 |        0 | 2024-03-03 02:09:00 |   636,834 |
  |      28 | ODPVE2RDX__0003     | Used      |       1 |  14,995,402,621 |        3 |    1,728,000 |       1 |    0 |         0 | RDX       |       1 |        0 | 2024-03-02 23:09:13 |   626,047 |
  |      29 | ODPVE2RDX__0004     | Used      |       1 |  16,099,504,717 |        3 |    1,728,000 |       1 |    0 |         0 | RDX       |       1 |        0 | 2024-03-05 23:12:59 |   885,473 |
  |      30 | ODPVE2RDX__0005     | Used      |       1 |  15,067,862,578 |        3 |    1,728,000 |       1 |    0 |         0 | RDX       |       1 |        0 | 2024-03-03 23:11:20 |   712,574 |
  |      31 | ODPVE2RDX__0006     | Used      |       1 |  15,359,960,121 |        3 |    1,728,000 |       1 |    0 |         0 | RDX       |       1 |        0 | 2024-03-08 09:55:38 | 1,096,832 |
  |      32 | ODPVE2RDX__0007     | Used      |       1 | 259,203,030,230 |       60 |    1,728,000 |       1 |    0 |         0 | RDX       |       1 |        0 | 2024-03-01 23:35:29 |   541,223 |
  |      55 | ODPVE2RDX_0001_     | Used      |       1 |  16,354,496,268 |        3 |    1,728,000 |       1 |    0 |         0 | RDX       |       1 |        0 | 2024-03-12 23:10:10 | 1,486,504 |
  |      56 | ODPVE2RDX_0001_0001 | Used      |       1 |  15,253,608,839 |        3 |    1,728,000 |       1 |    0 |         0 | RDX       |       1 |        0 | 2024-03-10 23:11:32 | 1,317,386 |
  |      57 | ODPVE2RDX_0001_0002 | Used      |       1 |  65,139,795,652 |       15 |    1,728,000 |       1 |    0 |         0 | RDX       |       1 |        0 | 2024-03-10 02:09:59 | 1,241,693 |
  +---------+---------------------+-----------+---------+-----------------+----------+--------------+---------+------+-----------+-----------+---------+----------+---------------------+-----------+

Re: [Bacula-users] LTO tape performances, again...

2024-01-25 Thread Josh Fisher via Bacula-users


On 1/24/24 12:48, Marco Gaiarin wrote:

My new IBM LTO9 tape unit has a data sheet performance of:


https://www.ibm.com/docs/it/ts4500-tape-library/1.10.0?topic=performance-lto-specifications

so in the worst case (compression disabled) the drive should perform at 400 MB/s
on an LTO9 tape.


In practice with Bacula I get 70-80 MB/s. I've:


1) followed:


https://www.bacula.org/9.6.x-manuals/en/problems/Testing_Your_Tape_Drive_Wit.html#SECTION00422000

  getting 237.7 MB/s on random data (worst case).


2) checked disk performance (data comes only from local disk); I currently have
  3 servers, some performing better, some worse, but the best one has pretty
decent read performance, at least 200 MB/s on random access (1500 MB/s
sequential).



Disk that is local to the server does not mean it is local to the 
bacula-sd process or tape drive. If the connection is 1 gigabit 
Ethernet, then max rate is going to be 125 MB/s.



3) disabled data spooling, of course; as just stated, data comes only from
  local disks. Enabled attribute spooling.



That is probably not what you want to do. You want the bacula-sd 
process to spool data on its local disk so that when it is despooled to 
the tape drive it is reading only from local disk, not from a small RAM 
buffer that is being filled through a network socket. Even with a 10 G 
Ethernet network it is better to spool data for LTO tape drives, since 
the client itself might not be able to keep up with the tape drive, or 
is busy, or the network is congested, etc.






Clearly I can expect some performance penalty with Bacula and mixed files, but
really, 70 MB/s is slow...


What else can I tackle?


Thanks.
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Having difficulty mounting curlftpfs on bacula : "Device is BLOCKED waiting for mount of volume"

2023-11-29 Thread Josh Fisher via Bacula-users


On 11/29/23 05:47, MylesDearBusiness via Bacula-users wrote:


Hello, Bacula experts.

Due to message length limitations of this mailing list, I have been 
unable to post the majority of necessary details, which is why I was 
using my GitHub gist system to store them; apologies for the confusion or 
inconvenience this caused.  I just thought it would be more confusing 
to break up the details into multiple messages.


The latest after following up on some of Bill's suggestions, I added a 
second device in my File Changer and now bconsole shows I am being 
asked to execute the "label" command, which is failing.


As a reminder, I'm running bacula-dir under user "bacula" (which does 
not have access to the storage mount /mnt/my_backup).
I'm running bacula-sd and bacula-fd under user "backupuser" which has 
sole permission to read/write files under this mount.




Is SELinux or AppArmor enabled? That could block writes even if Unix 
permissions are correct.



Please see 
https://gist.github.com/mdear/99ed7d56fd5611216ce08ecff6244c8b for 
more, I just added a new comment with additional details.


Thanks,






___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] fd lose connection

2023-11-08 Thread Josh Fisher via Bacula-users


On 11/8/23 13:32, Martin Simmons wrote:

On Wed, 8 Nov 2023 11:09:44 -0500, Josh Fisher via Bacula-users said:

On 11/7/23 19:26, Lionel PLASSE wrote:

I'm sorry, but there's no Wi-Fi nor 5G in our factory, and I don't use my phone
to back up my servers :) .
I was talking about ssh (scp) transfers just to show that I have no
problem when uploading big continuous data using other tools through this WAN.
The WAN connection is quite stable.

"So it is fine when the NIC is up. Since this is Windows,"
No Windows. I discarded the Windows problem hypothesis by using a migration job,
so from Linux SD to Linux SD.

OK. I see that now. You also tried without compression and without
encryption. Have you tried reducing Maximum Network Buffer Size back to
the default 32768?

Are you sure it is 32768?

I thought the default comes from this in bacula/src/baconfig.h:

#define DEFAULT_NETWORK_BUFFER_SIZE (64 * 1024)



In the docs it says the default is 32768, but if it's in the source, 
then that's what it is. :)






___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] fd lose connection

2023-11-08 Thread Josh Fisher via Bacula-users


On 11/7/23 19:26, Lionel PLASSE wrote:

I’m sorry but no wi-fi nor 5G in our factory, and don't use my phone too to 
backup my servers :) .
I was talking about ssh (scp) transfer just to Just to show out  I have no 
problem when uploading big continuous data using other tools through this wan. 
The wan connection is quite stable.

"So it is fine when the NIC is up. Since this is Windows,"
no windows. I discarder windows problem hypothesis by using a migration job, so 
from linux sd to linux sd


OK. I see that now. You also tried without compression and without 
encryption. Have you tried reducing Maximum Network Buffer Size back to 
the default 32768? There must be some reason why the client seems to be 
sending 30 bytes more than its Maximum Network Buffer Size. Bacula first 
tries the Maximum Network Buffer Size, but if the OS does not accept 
that size, then it adjusts the value down until the OS accepts it. Maybe 
the actual buffer size gets calculated differently on Debian 12? Why is 
the send size exceeding the buffer size? Or could there be a typo in the 
Maximum Network Buffer Size setting on one side?
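
That is, something like this on both ends (a sketch; the directive is valid
in the FileDaemon and Storage resources):

  # bacula-fd.conf
  FileDaemon {
    ...
    Maximum Network Buffer Size = 32768
  }

  # bacula-sd.conf
  Storage {
    ...
    Maximum Network Buffer Size = 32768
  }

so the two sides cannot end up negotiating different sizes.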





Thanks for all, I will find out a solution
Best regards

PLASSE Lionel | Networks & Systems Administrator
221 Allée de Fétan
01600 TREVOUX - FRANCE
Tel : +33(0)4.37.49.91.39
pla...@cofiem.fr
www.cofiem.fr | www.cofiem-robotics.fr

  






-----Original Message-----
From: Josh Fisher via Bacula-users 
Sent: Tuesday, November 7, 2023 18:01
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] fd lose connection


On 11/7/23 04:34, Lionel PLASSE wrote:

Hello,

Could encryption have any impact on my problem?

I am testing without any encryption between SD/DIR/bconsole or FD and
it seems to be more stable. (The short job completed fine; the longest job
is still running: 750 GB to migrate.)

My WAN connection seems to be quite good; I manage to transfer big and
small raw files via scp/ssh and don't have ping latency or troubles with the
IPsec connection.


So it is fine when the NIC is up. Since this is Windows, the first thing to do 
is turn off power saving for the network interface device in Device Manager. 
Make sure that the NIC doesn't ever power down its PHY.
If any switch, router, or VPN doesn't handle Energy-Efficient Ethernet in the 
same way, then it can look like a dropped connection to the other side.

Also, you don't say what type of WAN connection this is. Many wireless 
services, 5G, etc. can and will drop open sockets due to inactivity (or 
perceived inactivity) to free up channels.



I tried too with NAT, by not using IPsec and setting the Bacula SD & DIR
directly in front of the WAN.
And the same occurs (wrote X bytes but only Y accepted).

I also tried a migration job, migrating from SD to SD through the WAN
instead of SD -> FD through the WAN, and the result was the same (to see if the
win32 FD could be involved):
   - DIR and SD in the same LAN.
   - Back up the remote FD through the remote SD, the two in the same LAN for
fast backup: step OK.
   - Then migrate from the remote SD to the SD that is in the DIR area
through the WAN to outsource the volumes' physical media: step NOK. The final
goal: outsourcing volumes.
I then discarded the gzip compression (just in case).

The errors are quite disturbing:
*   Error: lib/bsock.c:397 Wrote 65566 bytes to client:192.168.0.4:9102, 
but only 65536 accepted
  Fatal error: filed/backup.c:1008 Network send error to SD.
ERR=Input/output error
Or (when increasing MaximumNetworkBuffer):
*   Error: lib/bsock.c:397 Wrote 130277 bytes to client:192.168.0.17:9102, 
but only 98304 accepted.
  Fatal error: filed/backup.c:1008 Network send error to SD.
ERR=Input/output error
Or (migration job):
*   Fatal error: append.c:319 Network error reading from FD. ERR=Erreur 
d'entrée/sortie
  Error: bsock.c:571 Read expected 131118 got 114684 from
Storage daemon:192.168.10.54:9103

It looks like there is a gap between the send and receive buffers, and looking
at the source code, encryption could affect the buffer size.
So I think Bacula-SD could be the cause (maybe).
Could it be a bug?
What could I do to determine the problem (activating debug in the SD
daemon?)

I use Bacula 13.0.3 on Debian 12, with SSL 1.1.

Thanks for helping. Backup scenarios must include a step of relocating the 
backup media to be reliable.

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users

Re: [Bacula-users] fd lose connection

2023-11-07 Thread Josh Fisher via Bacula-users


On 11/7/23 04:34, Lionel PLASSE wrote:

Hello ,

Could  Encryption have any impact in my problem.

I am testing without any encryption between SD/DIR/BConsole or FD and it seems 
to be more stable. (short sized  job right done , longest job already running : 
750Gb to migrate)

My WAN connection seems  to be quite good,  I  achieve transferring big and 
small  raw files by scp ssh and don't have ping latency or troubles with the 
ipsec connection.



So it is fine when the NIC is up. Since this is Windows, the first thing 
to do is turn off power saving for the network interface device in 
Device Manager. Make sure that the NIC doesn't ever power down its PHY. 
If any switch, router, or VPN doesn't handle Energy-Efficient Ethernet 
in the same way, then it can look like a dropped connection to the other 
side.


Also, you don't say what type of WAN connection this is. Many wireless 
services, 5G, etc. can and will drop open sockets due to inactivity (or 
perceived inactivity) to free up channels.





I tried too with NAT, by not using IPsec and setting the Bacula SD & DIR
directly in front of the WAN.
And the same occurs (wrote X bytes but only Y accepted).

I also tried a migration job, migrating from SD to SD through the WAN
instead of SD -> FD through the WAN, and the result was the same (to see if the
win32 FD could be involved):
  - DIR and SD in the same LAN.
  - Back up the remote FD through the remote SD, the two in the same LAN for
fast backup: step OK.
  - Then migrate from the remote SD to the SD that is in the DIR area through
the WAN to outsource the volumes' physical media: step NOK.
The final goal: outsourcing volumes.
I then discarded the gzip compression (just in case).

The errors are quite disturbing:
*   Error: lib/bsock.c:397 Wrote 65566 bytes to client:192.168.0.4:9102, 
but only 65536 accepted
 Fatal error: filed/backup.c:1008 Network send error to SD. 
ERR=Input/output error
Or (when increasing MaximumNetworkBuffer):
*   Error: lib/bsock.c:397 Wrote 130277 bytes to client:192.168.0.17:9102, 
but only 98304 accepted.
 Fatal error: filed/backup.c:1008 Network send error to SD. 
ERR=Input/output error
Or (migration job):
*   Fatal error: append.c:319 Network error reading from FD. ERR=Erreur 
d'entrée/sortie
 Error: bsock.c:571 Read expected 131118 got 114684 from Storage 
daemon:192.168.10.54:9103

It looks like there is a gap between the send and receive buffers, and looking
at the source code, encryption could affect the buffer size.
So I think Bacula-SD could be the cause (maybe).
Could it be a bug?
What could I do to determine the problem (activating debug in the SD daemon?)

I use Bacula 13.0.3 on Debian 12, with SSL 1.1.

Thanks for helping. Backup scenarios must include a step of relocating the 
backup media to be reliable.

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Progressive Virtual Fulls...

2023-09-29 Thread Josh Fisher via Bacula-users


On 9/29/23 06:07, Marco Gaiarin wrote:

Hello! Josh Fisher via Bacula-users wrote:


I'm really getting mad. This makes sense for the behaviour (the first
VirtualFull worked because it read the full and incrementals from the same pool)
but still the docs confuse me.
  https://www.bacula.org/9.4.x-manuals/en/main/Migration_Copy.html

No. Not because they were in the same pool, but rather because the
volumes were all loadable and readable by the same device.

OK.



readable by that particular device. Hence, all volumes must have the
same Media Type, because they must all be read by the same read device.

OK.



In a nutshell, you must have multiple devices and you must ensure that
one device reads all of the existing volumes and another different
device writes the new virtual full volumes. This is why it is not
possible to do virtual fulls or migration with only a single tape drive.

OK. Try to keep it simple.

The storage daemon has two devices that differ only in name:

   Device {
 Name = FileStorage
 Media Type = File
 LabelMedia = yes;
 Random Access = Yes;
 AutomaticMount = yes;
 RemovableMedia = no;
 AlwaysOpen = no;
 Maximum Concurrent Jobs = 10
 Volume Poll Interval = 3600
 Archive Device = /rpool-backup/bacula
  }
  Device {
 Name = VirtualFileStorage
 Media Type = File
 LabelMedia = yes;
 Random Access = Yes;
 AutomaticMount = yes;
 RemovableMedia = no;
 AlwaysOpen = no;
 Maximum Concurrent Jobs = 10
 Volume Poll Interval = 3600
 Archive Device = /rpool-backup/bacula
  }

On the Director side I've defined two storages:

  Storage {
 Name = SVPVE3File
 Address = svpve3.sv.lnf.it
 SDPort = 9103
 Password = "ClearlyNotThis."
 Maximum Concurrent Jobs = 25
 Maximum Concurrent Read Jobs = 5
 Device = FileStorage
 Media Type = File
  }
  Storage {
 Name = SVPVE3VirtualFile
 Address = svpve3.sv.lnf.it
 SDPort = 9103
 Password = "ClearlyNotThis."
 Maximum Concurrent Jobs = 25
 Maximum Concurrent Read Jobs = 5
 Device = VirtualFileStorage
 Media Type = File
}


Then for the client I've defined a single pool:

  Pool {
 Name = FVG-SV-ObitoFilePoolIncremental
 Pool Type = Backup
 Storage = SVPVE3File
 Maximum Volume Jobs = 6
 Volume Use Duration = 1 week
 Recycle = yes
 AutoPrune = yes
 Action On Purge = Truncate
 Volume Retention = 20 days
  }

and a single job:

  Job {
 Name = FVG-SV-Obito
 JobDefs = DefaultJob
 Storage = SVPVE3File
 Pool = FVG-SV-ObitoFilePoolIncremental
 Messages = StandardClient
 NextPool = FVG-SV-ObitoFilePoolIncremental
 Accurate = Yes
 Backups To Keep = 2
 DeleteConsolidatedJobs = yes
 Schedule = VirtualWeeklyObito
 Reschedule On Error = yes
 Reschedule Interval = 30 minutes
 Reschedule Times = 8
 Max Run Sched Time = 8 hours
 Client = fvg-sv-obito-fd
 FileSet = ObitoTestStd
 Write Bootstrap = "/var/lib/bacula/FVG-SV-Obito.bsr"
  }



The NextPool needs to be specified in the 
FVG-SV-ObitoFilePoolIncremental pool resource, not in the job resource. 
In the Copy/Migration/VirtualFull documentation discussing the applicable 
Pool resource directives for these job types, it states under Important 
Migration Considerations that:


The Next Pool = ... directive must be defined in the *Pool* referenced 
in the Migration Job to define the Pool into which the data will be 
migrated.
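
Applied to the configuration above, that would be something like (a sketch
reusing your names):

  Pool {
    Name = FVG-SV-ObitoFilePoolIncremental
    ...
    # Moved here from the Job resource
    Next Pool = FVG-SV-ObitoFilePoolIncremental
  }

with the NextPool line removed from the Job resource.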


Other than that, you are specifically telling the job to run on a single 
Storage resource. That will not work, unless the single Storage resource 
is an autochanger with multiple devices. You need to somehow ensure that 
Bacula can select a different device for writing the new virtual full 
job. If you are using version 13.x, then you can define the job's 
Storage directive as a list of Storage resources to select from. For 
example:


Job {
 Name = FVG-SV-Obito
 Storage = SVPVE3File,SVPVE3VirtualFile
 ...
}

I believe that a virtual full job will only select a single read device, 
so the above may be all that is needed.


Otherwise, you can use a virtual disk autochanger.




If I run manually:

run job=FVG-SV-Obito

it works as expected, i.e. runs an incremental job. If I try to run:

run job=FVG-SV-Obito level=VirtualFull storage=SVPVE3VirtualFile

the job runs, seemingly correctly:

  *run job=FVG-SV-Obito level=VirtualFull storage=SVPVE3VirtualFile
  Using Catalog "BaculaLNF"
  Run Backup job
  JobName:  FVG-SV-Obito
  Level:VirtualFull
  Client:   fvg-sv-obito-fd
  FileSet:  ObitoTestStd
  Pool: FVG-SV-ObitoFilePoolIncrementa

Re: [Bacula-users] Progressive Virtual Fulls...

2023-09-27 Thread Josh Fisher via Bacula-users



On 9/26/23 12:48, Marco Gaiarin wrote:

Hello! Radosław Korzeniewski wrote:


Because of this. To make a VirtualFull, Bacula needs to read all backup jobs,
starting from the Full, the Diff, and all Incrementals. Your Full (as the volume
name suggests) is stored on Obito_Full_0001, which has an improper media type.
Correct your configuration and backups and start again.

I'm really getting mad. This makes sense for the behaviour (the first
VirtualFull worked because it read the full and incrementals from the same pool)
but still the docs confuse me.

 https://www.bacula.org/9.4.x-manuals/en/main/Migration_Copy.html



No. Not because they were in the same pool, but rather because the 
volumes were all loadable and readable by the same device.




It says in 'Migration and Copy':

  For migration to work properly, you should use Pools containing only Volumes 
of the same Media Type for all migration jobs.

but in 'Important Migration Considerations':

  * Each Pool into which you migrate Jobs or Volumes must contain Volumes of 
only one Media Type.



While true, I find that misleading. It is true only because a Device 
resource can specify only a single Media Type. What is really going on 
is that Bacula makes a one-time decision when the job starts as to which 
device it will read from and which device it will write to. Once that 
read device is established, all volumes that need to be read must be 
readable by that particular device. Hence, all volumes must have the 
same Media Type, because they must all be read by the same read device.




  [...]
  * Bacula currently does only minimal Storage conflict resolution, so you must 
take care to ensure that you don't try to read and write to the same device, 
or Bacula may block waiting to reserve a drive that it will never find.
In general, ensure that all your migration pools contain only one Media 
Type, and that you always migrate to pools with different Media Types.

and in 'Virtual Backup Consolidation':

  In some respects the Virtual Backup feature works similar to a Migration job, 
in that Bacula normally reads the data from the pool
  specified in the Job resource, and writes it to the Next Pool specified in 
the Job resource. Note, this means that usually the output
  from the Virtual Backup is written into a different pool from where your 
prior backups are saved. Doing it this way guarantees that you
  will not get a deadlock situation attempting to read and write to the same 
volume in the Storage daemon. If you then want to do
  subsequent backups, you may need to move the Virtual Full Volume back to your 
normal backup pool. Alternatively, you can set your
  Next Pool to point to the current pool. This will cause Bacula to read and 
write to Volumes in the current pool. In general, this will
  work, because Bacula will not allow reading and writing on the same Volume. 
In any case, once a VirtualFull has been created, and a
  restore is done involving the most current Full, it will read the Volume or 
Volumes by the VirtualFull regardless of in which Pool the
  Volume is found.



In a nutshell, you must have multiple devices and you must ensure that 
one device reads all of the existing volumes and another different 
device writes the new virtual full volumes. This is why it is not 
possible to do virtual fulls or migration with only a single tape drive.




So, after an aspirin, it seems to me that:

1) for a virtual backup, the reading part needs to READ jobs from volumes
  with the same 'Media Type'; so I can use different pools/storages, but I have
to use the same 'Media Type'.



Yes. Also, if you have multiple devices with the same Media Type, you 
must make certain that any volume with that Media Type can be loaded 
into any of those devices.




2) I can use the same pool, but clearly I cannot guarantee that I'll have
  only TWO full backups; an error in rotation, etc. could sooner or later lead
to a VirtualFull job writing to an empty volume, making another copy of the
data; surely I need TWO full copies (one for reading, one for writing).



A new full backup is made each time the virtual full job runs. You can 
purge volumes in a RunAfter script, or you can limit the number of volumes 
in the pool, but I guess the only way to guarantee there are only two 
copies is to manually purge old volumes.
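
A sketch of the RunAfter approach (which volume to purge is hypothetical
here; substitute your own selection logic):

  RunScript {
    RunsWhen = After
    RunsOnClient = no
    # Hypothetical example: purge one specific older volume
    Console = "purge volume=Obito_Full_0001"
  }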




3) I can use a different pool, but with the same media type; in this way I can
  somewhat limit the number of full backups to two (using two media in the
full pool). But I still think there's no way to have ONE full copy...
clearly not without 'scripting' something around it (create a scratch
pool/volume, consolidate, migrate back to the original pool/volume, delete
the scratch).



I don't understand the goal, I think, but MaximumVolumeJobs in the Pool 
resource might work.






Am I missing something? Thanks.




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users

Re: [Bacula-users] Slow spooling and hashing speed

2023-09-15 Thread Josh Fisher via Bacula-users


On 9/14/23 15:35, Rob Gerber wrote:
Bacula is transferring data at a fraction of the available link speed. 
I am backing up an SMB share hosted on a fast NAS appliance. The share 
is mounted on the bacula server in /mnt/NAS/sharename. I have 
dedicated 10gbe copper interfaces on the NAS and the bacula server.


When backing up the NAS, cifsiostat shows around 250MB/s during the 
spooling phase (and 0 kb/s during the despool phase). When using cp to 
copy files from the NAS to the Bacula server, I can easily saturate my 
10gbe link (avg throughput around 1GB/s, or a little lower).



So that tells you that there's nothing wrong with the underlying SMB 
file system. The Bacula client just reads the files like any other 
directory it's backing up.





I think the problem lies in Bacula because I can copy data much faster 
using cp instead of bacula. Obviously bacula is doing a lot more than 
cp, so there will be differences. However I would hope for transfer 
speeds closer to the available link speed.


top shows that a couple cores are maxed out during the spooling 
process. Maybe hashing speed is the limitation here? If so, could 
multicore hashing support speed this up? I have two e5-2676 v3 
processors in this server. I am using SHA512 right now, but I saw 
similar speeds from bacula when using MD5.



The hashing speed doesn't account for a 4x slower transfer, and likely 
not for saturating 2 cores. Do you have compression enabled for the job? 
Or encryption? You definitely do not want compression, since the tape 
drive will handle compression itself. Also, the client and sd are the 
same machine in this case, but make sure it is not configured to use TLS 
connections.





Average write speed to LTO-8 media winds up being about 120-150MB/s 
once the times to spool and despool are considered.


My spool is on a 76GB ramdisk (spool size is 75G in bacula dir conf), 
so I don't think spool disk access speed is a factor.



Might be overkill. An NVMe SSD is plenty fast enough for both the 10G 
network and for despooling to the LTO8 drive. If the catalog DB is also 
on this server, then you might be better off with the spool on SSD and 
far more RAM dedicated to postgresql. If the DB is on another server, 
then the attributes are being despooled to the DB over the 1G network.
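
As a rough example of the latter, in postgresql.conf (values are illustrative;
size them to the server's actual RAM):

  shared_buffers = 8GB
  effective_cache_size = 24GB

The point is simply that attribute inserts benefit from giving PostgreSQL
generous memory.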





I have not tested to see if bacula could back up faster if it wasn't 
accessing a share via SMB. I don't think SMB should be an issue here 
but I have to consider every possibility. The SMB share I'm backing up 
is mounted on /mnt/NAS/sharename. Bacula is backing that mount folder up.


Currently, my only access to the NAS appliance is via SMB. The 
appliance does support iSCSI in read-only mode but I'm not sure if 
there would be any performance improvements.


I don't think the traffic could be going out through the wrong 
interface. The NAS is directly attached to my bacula server using a 
short cat6 cable. The NAS and my server each have 10gbe copper 
interfaces. The relevant interfaces have ip addresses statically 
assigned. These addresses are unique to the LAN configuration (local 
lan is 10.1.1.0/24 , 10gbe interfaces assigned to 
192.168.6.25 and 192.168.6.100). My bacula server's only other 
connection is to the gigabit LAN switch.


Is there any information that I could provide to help the list help 
me, or does anyone have any thoughts for me?


Regards,
Robert Gerber
402-237-8692
r...@craeon.net





___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Restore job forward space volume - usb file volume

2023-08-25 Thread Josh Fisher via Bacula-users



On 8/25/23 12:06, Martin Simmons wrote:

On Thu, 24 Aug 2023 15:51:18 -0400, Josh Fisher via Bacula-users said:


   Probably you have compression and/or encryption turned on. In
that case, Bacula cannot simply fseek to the offset. It has to
decompress and/or decrypt all data in order to find it, making restores
far slower than backups.

The compression and/or encryption is done within each block, so that doesn't
affect seek time.



Interesting. So after decompression and decryption, does the 
uncompressed/decrypted data contain partial blocks, or are the 
compressed/encrypted blocks originally written with variable block sizes 
so that the original data is handled as fixed-size blocks?





__Martin



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Restore job forward space volume - usb file volume

2023-08-24 Thread Josh Fisher via Bacula-users



On 8/24/23 05:32, Lionel PLASSE wrote:

Hello,

For USB hard drives and file media volumes, when I do a restore job I get a 
long waiting step:
"Forward spacing Volume "T1" to addr=249122650420"
I remember I managed to configure the storage resource to quickly restore SSD 
drives.

Should I use FastForward, BlockPositioning and HardwareEndOfFile for USB disks 
and file volumes?
How do I avoid this long forward spacing when it is not a tape device?



I don't think any of those affect file type storage (random access 
devices). Probably you have compression and/or encryption turned on. In 
that case, Bacula cannot simply fseek to the offset. It has to 
decompress and/or decrypt all data in order to find it, making restores 
far slower than backups.






___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] compression with lto?

2023-05-18 Thread Josh Fisher via Bacula-users

On 5/17/23 14:14, Phil Stracchino wrote:

On 5/17/23 12:52, Marco Gaiarin wrote:


I've LTO tape that work in very dusty environment by at least 6 years.

Istead of dust, i've found that setting properly spooling and buffers
prevent the spinup/spindown effect, that effectively can be very
stressful...


Yes, I went to great lengths to try to keep mine streaming and avoid 
shoe-shining, but with only moderate success.


I have had many fewer problems, as well as much better performance, 
since I abandoned tape and went to disk-to-disk-to-removable-disk 
(with both of the destination disk stages being RAID).  Full backup 
cycles that used to take 18 hours and two or three media changes, with 
about a 10% failure rate due to media errors, now take 3 or 4 hours 
with no media changes and nearly 100% success.



However, we are getting further and further off the subject of 
compression.



That approach actually affects both tape and disk. Software compression 
happens on the client, so performance greatly depends on the type of 
clients being backed up. For example, there may be NAS boxes with 
low-power processors; software compression will definitely slow the backup 
of such clients.
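
Since compression is a per-FileSet Options setting, it can be enabled 
selectively. A hedged sketch (values are illustrative):

   Options {
      signature = MD5
      compression = GZIP6   # moderate CPU cost; omit this line entirely
                            # in FileSets for low-power clients or for
                            # jobs going to a compressing tape drive
   }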




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula, Autochangers, insist on loading 'not in changer' media...

2023-05-12 Thread Josh Fisher via Bacula-users
I would add that this problem appears to occur when all in-changer tapes 
are unavailable due to volume status AND the job's pool does contain 
available tapes, but those tapes are not in-changer. Bacula will attempt 
to load a tape that is not in-changer, where it should send an operator 
intervention notice.


On 5/12/23 06:22, Marco Gaiarin wrote:

We have some setups using bacula (debian buster, 9.4.2-2+deb10u1) and RDX
media, by the way of the 'vchanger' (Virtual Changer) script.

All works as expected, until the currently mounted media exhausts the 'in
changer' volumes (because they are used up, or because users simply load the
incorrect media...).

After that, Bacula tries to mount expired (purging them) or generally
available volumes from other media that are 'not in changer', putting them
into error.
We have extensively debugged the vchanger script, which seems to behave correctly.

Bacula seems to have a current and correct state of the 'in changer' volumes,
and in any case an 'update volumes' in the console does not solve the trouble.


On director we have:

Autochanger {
Name = SDPVE2RDX
Address = sdpve2.sd.lnf.it
SDPort = 9103
Password = "unknown"
Maximum Concurrent Jobs = 5
Device = RDXAutochanger
Media Type = RDX
}

Pool {
Name = VEN-SD-SDPVE2RDXPool
Pool Type = Backup
Volume Use Duration = 1 days
Maximum Volume Jobs = 1
Recycle = yes
AutoPrune = yes
Action On Purge = Truncate
Volume Retention = 20 days
}


On SD we have:

Autochanger {
   Name = RDXAutochanger
   Device = RDXStorage0
   Device = RDXStorage1
   Device = RDXStorage2
   Changer Command = "/usr/bin/vchanger %c %o %S %a %d"
   Changer Device = "/etc/vchanger/SDPVE2RDX.conf"
}

Device {
   Name = RDXStorage0
   Drive Index = 0
   Device Type = File
   Media Type = RDX
   RemovableMedia = no
   RandomAccess = yes
   Maximum Concurrent Jobs = 1
   Archive Device = "/var/spool/vchanger/SDPVE2RDX/0"
}

Device {
   Name = RDXStorage1
   Drive Index = 1
   Device Type = File
   Media Type = RDX
   RemovableMedia = no
   RandomAccess = yes
   Maximum Concurrent Jobs = 1
   Archive Device = "/var/spool/vchanger/SDPVE2RDX/1"
}

Device {
   Name = RDXStorage2
   Drive Index = 2
   Device Type = File
   Media Type = RDX
   RemovableMedia = no
   RandomAccess = yes
   Maximum Concurrent Jobs = 1
   Archive Device = "/var/spool/vchanger/SDPVE2RDX/2"
}


Someone have some clue? Thanks.




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Virtual Full backups where nextpool is a different sd

2023-03-22 Thread Josh Fisher via Bacula-users

On 3/22/23 13:29, Bill Arlofski via Bacula-users wrote:

On 3/22/23 10:23, Mark Guz wrote:

Hi,

I'm trying to get virtual full backups working.  I'm running 13.0.2.

The differentials are being backed-up to a tape library in 1 location
which is not usually manned, so I'm trying to get the virtual fulls to
be sent to a tape library in our main location.

However when the virtual full starts, it tries to mount volumes from
the differential pool in the full pool library, which of course is
physically impossible.

...


Hello Mark,

Unfortunately, this is not possible currently with Virtual Fulls.

Migration and Copy jobs can be between different SDs, but not Virtual 
Fulls.


I have also been informed by the developers that it will not be 
supported any time soon.


Sorry for the bad news.


Best regards,
Bill



I have long wished that Bacula would pick next volume and drive as a 
pair in an atomic operation. This could solve other race conditions 
caused by certain configurations, for example concurrent jobs writing to 
the same pool with a multi-drive autochanger. As long as a single read 
drive is assigned statically at job start, things such as utilizing 
multiple SDs or multiple autochanger drives for a single job will not be 
possible. Unfortunately, that is not a simple task.





___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] SD stalled?!

2022-12-19 Thread Josh Fisher via Bacula-users


On 12/19/22 04:55, Marco Gaiarin wrote:

Mandi! Josh Fisher
   In chel di` si favelave...


Does the firewall on sdpve2.sd.lnf.it allow connections on TCP 9103? Is
there an SD process on sdpve2.sd.lnf.it listening on port 9103? You can try
to 'telnet sdpve2.sd.lnf.it 9103' command from the machine that bacula-dir
runs on to see if it is connecting. Check that the SD password set in
bacul-dir.conf matches the one in bacula-sd.conf. Something is preventing
bacula-dir from connecting to bacula-sd.

Communication between the Dir and the SD works as expected; there are no
firewall or other connection troubles in between.

I can send mount or umount commands; they are simply ignored:



Try checking vchanger functions directly. Use the -u and -g flags to run 
vchanger as the same user and group that bacula-sd runs as. As root, try:


        vchanger -u bacula -g tape /path/to/vchanger.conf load 5 /dev/null 0


The 3rd positional argument is the slot number, the 5th the drive number.

It could be a permission issue. When vchanger is invoked by bacula-sd, 
it will run as the same user and group as bacula-sd. So the vchanger 
work directory needs to be writable by that user.


Also, the filesystem that the backup is being written to needs to be 
writable by the user that bacula-sd runs as.
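
A quick way to check both, as a sketch (the archive path is from this 
thread; the work directory path and the user/group are assumptions, 
adjust them to whatever bacula-sd actually runs as):

        sudo -u bacula test -w /var/spool/vchanger/VIPVE2RDX/0 && echo OK
        sudo -u bacula test -w /var/spool/vchanger/work && echo OK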





  Device status:
  Autochanger "RDXAutochanger" with devices:
"RDXStorage0" (/var/spool/vchanger/VIPVE2RDX/0)
"RDXStorage1" (/var/spool/vchanger/VIPVE2RDX/1)
"RDXStorage2" (/var/spool/vchanger/VIPVE2RDX/2)

  Device File: "RDXStorage0" (/var/spool/vchanger/VIPVE2RDX/0) is not open.
Device is being initialized.
Drive 0 is not loaded.
  ==

  Device File: "RDXStorage1" (/var/spool/vchanger/VIPVE2RDX/1) is not open.
Slot 6 was last loaded in drive 1.
  ==

  Device File: "RDXStorage2" (/var/spool/vchanger/VIPVE2RDX/2) is not open.
Drive 2 is not loaded.
  ==
  

  Used Volume status:
  Reserved volume: VIPVE2RDX_0002_0003 on File device "RDXStorage0" 
(/var/spool/vchanger/VIPVE2RDX/0)
 Reader=0 writers=0 reserves=2 volinuse=1 worm=0



  *umount storage=VIPVE2RDX
  Automatically selected Catalog: BaculaLNF
  Using Catalog "BaculaLNF"
  Enter autochanger drive[0]:
  3901 Device ""RDXStorage0" (/var/spool/vchanger/VIPVE2RDX/0)" is already 
unmounted.


It seems that the SD is simply in an 'unknown' state: it claims to have a
volume 'reserved' (what does that mean?) and does nothing.

After a restart of the SD, the current (stalled) job got canceled, but if I
rerun it, it works as expected...



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] SD stalled?!

2022-12-16 Thread Josh Fisher via Bacula-users

On 12/15/22 04:37, Marco Gaiarin wrote:

Mandi! Josh Fisher via Bacula-users
   In chel di` si favelave...


Also, you have tried using 'umount' and 'update slots' in bconsole, but
did you try the 'mount' command? It is the mount command that would
cause bacula-dir to choose a volume and invoke vchanger to load it.

After restarting the SD in logs i got:

  15-Dec 10:30 sdinny-fd JobId 2318: Fatal error: job.c:2004 Bad response to 
Append Data command. Wanted 3000 OK data, got 3903 Error append data: 
vol_mgr.c:420 Cannot reserve Volume=SDPVE2RDX_0001_0003 because drive is busy 
with Volume=SDPVE2RDX_0002_0004 (JobId=0).
  15-Dec 10:30 lnfbacula-dir JobId 0: Fatal error: authenticate.c:123 Director unable to 
authenticate with Storage daemon at "sdpve2.sd.lnf.it:9103". Possible causes:
  Passwords or names not the same or
  Maximum Concurrent Jobs exceeded on the SD or
  SD networking messed up (restart daemon).
  For help, please see: 
http://www.bacula.org/rel-manual/en/problems/Bacula_Frequently_Asked_Que.html


Does the firewall on sdpve2.sd.lnf.it allow connections on TCP 9103? Is 
there an SD process on sdpve2.sd.lnf.it listening on port 9103? You can 
try to 'telnet sdpve2.sd.lnf.it 9103' command from the machine that 
bacula-dir runs on to see if it is connecting. Check that the SD 
password set in bacul-dir.conf matches the one in bacula-sd.conf. 
Something is preventing bacula-dir from connecting to bacula-sd.





___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] SD stalled?!

2022-12-13 Thread Josh Fisher via Bacula-users



On 12/12/22 10:47, Marco Gaiarin wrote:

I'm using the 'vchanger' script to define some virtual changers.


I don't see that the Dir is reporting any errors invoking vchanger, no 
timeouts or vchanger errors, but are there any vchanger processes still 
running? The vchanger processes should be very short-lived.


Are you sure that the filesystem is mounted at the mount point pointed 
to by /var/spool/vchanger/SDPVE2RDX/0 ? Versions of vchanger before 
1.0.3 used the nohup command in udev scripts which does not work as 
expected when invoked by udev and can cause the udev auto-mounting to fail.


Another problem with versions before 1.0.3 is that the locking used to 
serialize concurrent vchanger processes had a race condition that could 
prevent a vchanger instance from running and cause a LOAD or UNLOAD 
command to fail, although that should be logged as a timeout error by 
bacula-dir. As a diagnostic aid, you can turn off this behavior in the 
vchanger config by setting bconsole="". That will prevent vchanger from 
invoking bconsole at all and eliminate the possibility of the race 
condition.


Also, you have tried using 'umount' and 'update slots' in bconsole, but 
did you try the 'mount' command? It is the mount command that would 
cause bacula-dir to choose a volume and invoke vchanger to load it.
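
For example, in bconsole (the storage name is from this thread; the slot 
and drive numbers are illustrative):

        *mount storage=SDPVE2RDX slot=5 drive=0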





Sometimes the SD 'stalls'; I've looked but found nothing in the logs. A
typical situation:

*status storage=SDPVE2RDX
Connecting to Storage daemon SDPVE2RDX at sdpve2.sd.lnf.it:9103

sdpve2-sd Version: 9.4.2 (04 February 2019) x86_64-pc-linux-gnu debian 10.5
Daemon started 19-Sep-22 13:01. Jobs: run=90, running=1.
  Heap: heap=139,264 smbytes=799,681 max_bytes=1,701,085 bufs=209 max_bufs=375
  Sizes: boffset_t=8 size_t=8 int32_t=4 int64_t=8 mode=0,0 newbsr=0
  Res: ndevices=3 nautochgr=1

Running Jobs:
Writing: Incremental Backup job Sdinny JobId=2263 Volume=""
 pool="SDPVE2RDXPool" device="RDXStorage0" (/var/spool/vchanger/SDPVE2RDX/0)
 spooling=0 despooling=0 despool_wait=0
 Files=0 Bytes=0 AveBytes/sec=0 LastBytes/sec=0
 FDReadSeqNo=6 in_msg=6 out_msg=4 fd=5
Writing: Full Backup job SDPVE2-VMs JobId=2274 Volume=""
 pool="SDPVE2RDXPool" device="RDXStorage0" (/var/spool/vchanger/SDPVE2RDX/0)
 spooling=0 despooling=0 despool_wait=0
 Files=0 Bytes=0 AveBytes/sec=0 LastBytes/sec=0
 FDReadSeqNo=6 in_msg=6 out_msg=5 fd=7


Jobs waiting to reserve a drive:


Terminated Jobs:
  JobId  LevelFiles  Bytes   Status   FinishedName
===
   2120  Incr  7,388168.4 M  OK   01-Dec-22 23:01 Sdinny
   2137  Full620,898179.7 G  OK   02-Dec-22 22:07 Sdinny
   2147  Incr94993.80 M  OK   03-Dec-22 23:02 Sdinny
   2158  Full  646.26 G  OK   04-Dec-22 02:05 SDPVE2-VMs
   2168  Incr54293.40 M  OK   04-Dec-22 23:02 Sdinny
   2185  Incr  8,063227.5 M  OK   05-Dec-22 23:02 Sdinny
   2202  Incr  8,497257.1 M  OK   06-Dec-22 23:06 Sdinny
   2219  Incr  9,638228.3 M  OK   07-Dec-22 23:02 Sdinny
   2236  Incr98693.80 M  OK   08-Dec-22 23:02 Sdinny
   2253  Full  0 0   Error09-Dec-22 20:01 Sdinny


Device status:
Autochanger "RDXAutochanger" with devices:
"RDXStorage0" (/var/spool/vchanger/SDPVE2RDX/0)
"RDXStorage1" (/var/spool/vchanger/SDPVE2RDX/1)
"RDXStorage2" (/var/spool/vchanger/SDPVE2RDX/2)

Device File: "RDXStorage0" (/var/spool/vchanger/SDPVE2RDX/0) is not open.
Device is being initialized.
Drive 0 is not loaded.
==

Device File: "RDXStorage1" (/var/spool/vchanger/SDPVE2RDX/1) is not open.
Drive 1 is not loaded.
==

Device File: "RDXStorage2" (/var/spool/vchanger/SDPVE2RDX/2) is not open.
Drive 2 is not loaded.
==


Used Volume status:
Reserved volume: SDPVE2RDX_0002_0004 on File device "RDXStorage0" 
(/var/spool/vchanger/SDPVE2RDX/0)
 Reader=0 writers=0 reserves=2 volinuse=1 worm=0


Attr spooling: 0 active jobs, 0 bytes; 86 total jobs, 178,259,171 max bytes.



e.g., there are jobs stalled waiting for a volume; in the director:

Running Jobs:
Console connected at 12-Dec-22 14:31
  JobId  Type Level Files Bytes  Name  Status
==
   2263  Back Incr  0 0  Sdinnyis running
   2274  Back Full  0 0  SDPVE2-VMsis running


but it seems that the virtual changer is still stuck on the 'Reserved volume:
SDPVE2RDX_0002_0004'.

If I try 'umount', 'update slots', ... nothing changes. The volume status
seems OK:

*list media pool=SDPVE2RDXPool
Using Catalog "BaculaLNF"
+-+-+---+-+-+--+--+-+--+---+---+-+--+-+---+
| mediaid | volumename  | volstatus | 

Re: [Bacula-users] VirtualFull, file storage, rsnapshot-like...

2022-11-03 Thread Josh Fisher via Bacula-users



On 11/2/22 14:13, Marco Gaiarin wrote:

Mandi! Marco Gaiarin
   In chel di` si favelave...


Pool definition:
  Pool {
 Name = VDMTMS1FilePool
 Pool Type = Backup
 Volume Use Duration = 1 week
 Recycle = yes # Bacula can automatically 
recycle Volumes
 AutoPrune = yes   # Prune expired volumes
 Volume Retention = 21 days# 3 settimane
 NextPool = VDMTMS1FilePool
  }

OK, i've added to 'pool' definition:

Storage = PPPVE3File

and now i'm on the next error:

  02-Nov 19:06 lnfbacula-dir JobId 1596: Start Virtual Backup JobId 1596, 
Job=VDMTMS1.2022-11-02_19.06.49_02
  02-Nov 19:06 lnfbacula-dir JobId 1596: Warning: This Job is not an Accurate 
backup so is not equivalent to a Full backup.



This warning is because the job definition should have Accurate=yes when 
using virtual full backups, where a virtual full is made by 
consolidating a previous full (or virtual full) backup with a string of 
subsequent incremental backups. If Accurate=no, then deleted and renamed 
files will not be handled properly. A renamed file may be restored 
twice, once with each name, etc.




  02-Nov 19:06 lnfbacula-dir JobId 1596: Warning: Insufficient Backups to Keep.



I believe this is because you have a non-zero value for the Backups to 
Keep directive and there aren't enough backup jobs to both create the 
virtual full and leave that many jobs out of the consolidation. For 
example, if you have Backups to Keep = 7 and there have been fewer than 
8 incremental jobs since the last full, then the job cannot run. It has 
to leave the last 7 out of the consolidation, so that doesn't leave any 
jobs to consolidate.
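
A hedged sketch of the relevant Job settings (values are illustrative, 
not a recommendation for your data set):

   Job {
      Name = "VDMTMS1"
      ...
      Accurate = yes        # needed for correct handling of deleted
                            # and renamed files in the virtual full
      Backups To Keep = 0   # consolidate everything; raise this only
                            # once enough incrementals have accumulated
   }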




  02-Nov 19:06 lnfbacula-dir JobId 1596: Error: Bacula lnfbacula-dir 9.4.2 
(04Feb19):
   Build OS:   x86_64-pc-linux-gnu debian 10.5
   JobId:  1596
   Job:VDMTMS1.2022-11-02_19.06.49_02
   Backup Level:   Virtual Full
   Client: "vdmtms1-fd" 7.4.4 (202Sep16) 
x86_64-pc-linux-gnu,debian,9.13
   FileSet:"DebianBackup" 2022-06-21 17:53:53
   Pool:   "VDMTMS1FilePool" (From Job Pool's NextPool resource)
   Catalog:"BaculaLNF" (From Client resource)
   Storage:"PPPVE3File" (From Job Pool's NextPool resource)
   Scheduled time: 02-Nov-2022 19:06:38
   Start time: 02-Nov-2022 19:06:51
   End time:   02-Nov-2022 19:06:51
   Elapsed time:   1 sec
   Priority:   10
   SD Files Written:   0
   SD Bytes Written:   0 (0 B)
   Rate:   0.0 KB/s
   Volume name(s):
   Volume Session Id:  0
   Volume Session Time:0
   Last Volume Bytes:  0 (0 B)
   SD Errors:  0
   SD termination status:
   Termination:*** Backup Error ***

So, two warnings, no errors, but the backup ends in error.


What am I missing now?! Thanks.




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula Client Behind NAT

2022-10-19 Thread Josh Fisher via Bacula-users



On 10/18/22 17:51, Bill Arlofski via Bacula-users wrote:

Hello Josh (everyone else too),

I can confirm that if the FD --> DIR connection is opened, then the 
Director does use this socket to communicate to the FD.



Excellent!


However, the "Connecting to Client..." message does not change, and 
incorrectly (IMHO) reports that it is making an outbound
connection to the IP:port specified in the Director's resource for 
this Client:

8<
*s client=stinky-fd
Connecting to Client stinky-fd at 10.1.1.222:9102
8<

I did an `estimate` and then ran a job. Packet traces confirm that the 
connection(s) created by the client are used and the
Director does not actually call out to it. A nice feature request 
would be to change this Connecting message to something like:

8<
*s client=stinky-fd
Connecting to Client stinky-fd on inbound connection opened by Client
8<



Yes. That would be less confusing.




Interestingly, if the Client's connection into the Director is down 
(ie: kill the FD), then the Director does actually make the attempt to 
connect to the Client on its defined IP:port, which of course fails. :)

I think this is also incorrect behavior, or at least it is not 
behaving as documented, or in the way I would expect/want it
to for Clients behind NAT when we know the Director will never be able 
to make the connection.




That seems like a bug to me too. If 'Connect to Director = yes', the Dir 
should never attempt to open a connection to FD.





This is all nice information and it proves this feature is (mostly) 
working as documented, but now we still need to solve

Wanderlei's issue. :)



Yes. It seems like the FD --> Dir connection is not active or is 
firewalled. Yet, Rodrigo tested with telnet from the client to port 9101 
on the Director and that worked, even though 'status client' in bconsole 
times out. This is why I still wonder if 802.3az EEE is not waking up as 
expected on the client's router or the client is on WiFi and going into 
sleep mode. I don't know if that is the case, but it might explain why 
telnet from the client works, but bconsole commands do not. A quick test 
would be to use bconsole from the client machine and see if a status 
client is possible.
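
For example, on the client machine (assuming bconsole is installed there 
and its bconsole.conf points at the Director):

        echo "status client=stinky-fd" | bconsole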






I am guessing one or more of a few things are the possible culprit:

- Port forwarding at the firewall is not working as expected
- The `Address = ` in the Director{} section of the FD is not correct
- The FD has not been restarted (I have seen systemctl restart 
bacula-fd not always work)



@Wanderlei, I would recommend to do a tcpdump on the Director when 
things are quiet (ie: no jobs running) to see if this
inbound connection from the client is actually making it to the 
Director through your firewall:


First stop the FD.

Then start a tcpdump session as root on the Director:
8<
# tcpdump -i any tcp port 9101 or 9102 -vvv -w bacula.dump
8<

Then, start the FD. The "Got #" message should increment. If it does 
not, we have our answer. If it does, do a 'status
client=' in bconsole for this client. The "Got #" should increment 
some...


Kill the tcpdump and open the dump file in Wireshark to see what was 
happening.



You can even start the FD in foreground and debug mode to see what it 
is doing and look for its outbound connection

attempt(s) to the Director to confirm it is trying to do the right thing:
8<
# killall bacula-fd

# /path/to/bacula-fd -f -d100 -c /path/to/bacula-fd.conf
8<

Let us know what you find!


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula Client Behind NAT

2022-10-18 Thread Josh Fisher via Bacula-users


On 10/17/22 20:09, Bill Arlofski via Bacula-users wrote:

On 10/17/22 13:14, Rodrigo Reimberg via Bacula-users wrote:
>
Telnet from the DIR to the FD on port 9102 is not OK; I’m using the 
ConnectToDirector parameter.


Hello Rodrigo,


Of course this will not work. The client is behind NAT, and the 
Director cannot connect to it on port 9102 (or any other port :).


As you know, with the 'ConnectToDirector' feature enabled, the FD 
calls into the Dir on port 9101 (default). There is no requirement to 
set any connection Schedule. The Client will remain connected until 
the `ReconnectionTime` is reached (default 40 mins), at which point 
the connection will be dropped and immediately re-established.


Per the documentation, this FD -> DIR connection *should be* used for 
all communications between the Director and this client:

8<
ConnectToDirector = <yes|no>  When the ConnectToDirector directive is 
set to true, the Client will contact the Director according to the 
rules. The connection initiated by the Client will then be used by the 
Director to start jobs or issue bconsole commands.

8<


I just ran some tests, and it looks like there is a bug.



Maybe. However, it depends on what the client is doing. If the client 
goes into sleep-mode, it will not re-establish the connection after the 
ReconnectionTime. In that case, the Director would not have an active 
socket and so may well try to establish one.




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] VirtualFull, file storage, rsnapshot-like...

2022-10-17 Thread Josh Fisher via Bacula-users


On 10/16/22 12:21, Marco Gaiarin wrote:

Mandi! Radosław Korzeniewski
   In chel di` si favelave...

...

I do not understand your requirements. What is an "initial backup" you want to
make? Are you referring to the first Full backup which has to be executed on
the client?

Exactly. VirtualFull can be (at least for me) a very good way to back up 6TB
of data on a 10 Mbit/s link, because the data varies very little.
But I still need a way to do the first full...



If the client is on the other end of a 10Mbps link, then the options are 
to make the initial full backup over the slow link or temporarily move 
the client to the site where Dir/SD runs just to make the initial full 
backup. Another more convoluted way that doesn't involve moving the 
client machine or taking it offline for a long time is:


Clone the client's data disks, making sure that the filesystem UUIDs are 
identical, and take them to the server's site


Create a basic VM and install bacula client, using the same client 
config as the real client


Attach the cloned disks to the VM, making sure that they are mounted at 
the same mountpoints as the real client.


Alter the Director's config for the client to reflect the VM's address

Run a full backup of the VM

Change the Director's config for the client back to the client's real 
address


The first incremental backup will be larger than normal because the 
basic VM's root partition isn't a clone of the real client, but I assume 
that most of the data is on the cloned disk partitions.
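
The only Director-side change is the Client's Address, roughly like this 
sketch (all values are placeholders; keep the real Name and Password so 
the catalog entries stay attached to the same client):

   Client {
      Name = remote-client-fd
      Address = vm.local.lan   # temporarily the VM's address; switch
                               # back after the initial full completes
      Password = "xxx"
      Catalog = MyCatalog
   }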




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula Client Behind NAT

2022-10-17 Thread Josh Fisher via Bacula-users


On 10/16/22 11:44, Rodrigo Reimberg via Bacula-users wrote:


Hello,

Can someone help me?

I did the configuration of the client behind nat.

The client is communicating with the director as there is no error in 
the "working" directory.


When I access bconsole in the director and run the status client 
command, the timeout error occurs.




Because the status client command is the opposite direction, director 
contacting client.




I have a question, does the storage need to be public too?

Below the configuration files:

bacula-fd.conf

Director {

  Name = man-ind-1004-dir

  Password = "  "    # Director must know this 
password


  Address = public-IP      # Director address to connect

  Connect To Director = yes   # FD will call the Director

}

bacula-dir.conf

Client {

  Name = "gfwv-brerpsql01-fd"

  Password = ""

  Catalog = "MyCatalog"

  AllowFDConnections = yes

}

*From:*Jose Alberto 
*Sent:* domingo, 5 de dezembro de 2021 11:20
*To:* Wanderlei Huttel 
*Cc:* bacula-users@lists.sourceforge.net
*Subject:* Re: [Bacula-users] Bacula Client Behind NAT

When a job runs:

Bacula-dir --> FD on port 9102

and

FD --> SD on port 9103 (through NAT), with a public DNS name or IP.


Try telnet from the client FD to the IP or DNS name on port 9103. Does it connect?

On Thu, Dec 2, 2021 at 10:59 AM Wanderlei Huttel 
 wrote:


I'm trying to configure the new feature in Bacula, but the manual
is not clear about it.

https://www.bacula.org/11.0.x-manuals/en/main/New_Features_in_11_0_0.html#SECTION00230
In the company we have some employees who sometimes work at
home with their laptops and most of the time work on-site.

So, I've thought of using "Client Behind NAT" to back up their
laptops when they are remote.

1) I've created 2 rules in the firewall to forward ports 9101 and 9103
from the FW server to the Bacula server (the connection looks OK)

2) I've configured the laptop client (bacula-fd.conf)

Director {

  Name = bacula-dir

  Password = "mypassword"

  Address = mydomain.com 

  Connect To Director = yes

}

3) In bacula-dir.conf on client-XXX I've configured the option:

Allow FD Connections = yes

Should I include "FD Storage Address = mydomain.com" so that backups
work when the employee is remote?


4) If I want to modify the ports that the client behind NAT connects to,
how do I do that? Is it possible?

5) Will this kind of configuration work both when the employee is on
the local network and on a remote network?

I've made a test using the configuration from the manual and it
didn't work.

==

2021-12-02 11:45:02   bacula-dir JobId 28304: Start Backup JobId
28304, Job=Backup_Maquina_Remota.2021-12-02_11.45.00_03

2021-12-02 11:45:02   bacula-dir JobId 28304: Using Device
"DiscoLocal1" to write.

2021-12-02 11:48:02   bacula-dir JobId 28304: Fatal error: No Job
status returned from FD.

2021-12-02 11:48:02   bacula-dir JobId 28304: Error: Bacula
bacula-dir 11.0.5 (03Jun21):

  Build OS:  x86_64-pc-linux-gnu debian 9.13

  JobId:               28304

  Job: Backup_Maquina_Remota.2021-12-02_11.45.00_03

  Backup Level:           Incremental, since=2021-12-01 17:30:01

  Client:                "remota-fd" 11.0.5 (03Jun21) Microsoft
Windows 8 Professional (build 9200), 64-bit,Cross-compile,Win64

  FileSet:               "FileSet_Remota" 2015-03-12 16:05:45

  Pool:                "Diaria" (From Run Pool override)

  Catalog:               "MyCatalog" (From Client resource)

  Storage:               "StorageLocal1" (From Pool resource)

  Scheduled time:         02-Dec-2021 11:45:00

  Start time:             02-Dec-2021 11:45:02

  End time:  02-Dec-2021 11:48:02

  Elapsed time:           3 mins

Priority:               10

  FD Files Written:       0

  SD Files Written:       0

  FD Bytes Written:       0 (0 B)

  SD Bytes Written:       0 (0 B)

  Rate:                0.0 KB/s

  Software Compression:   None

  Comm Line Compression:  None

Snapshot/VSS:           no

Encryption:             no

Accurate:               yes

  Volume name(s):

  Volume Session Id:      80

  Volume Session Time:    1637867221

  Last Volume Bytes: 2,064,348,469 (2.064 GB)

  Non-fatal FD errors:    1

  SD Errors:              0

  FD termination status:  Error

  SD termination status:  Waiting on FD

Termination:            *** Backup Error ***

2021-12-02 11:48:02   bacula-dir JobId 28304: shell command: run
AfterJob "/etc/bacula/scripts/_webacula_update_filesize.sh 28304
Backup Error"

2021-12-02 11:48:02   bacula-dir JobId 28304: AfterJob: The
JobSize and FileSize of JobId 28304 were updated 

Re: [Bacula-users] best practices in separating servers support Bacula processes

2022-10-13 Thread Josh Fisher via Bacula-users


On 10/12/22 15:10, Robert M. Candey wrote:


I've been using Bacula to back up many servers and desktops to a tape 
library since early on, but always had one server running all of the 
Bacula processes except for the individual file servers.


I'm setting up a new tape library and have new data servers, so I'm 
wondering if there is a more efficient architecture for backing up 
1PB, mostly stored on one server and NFS-mounted to the other data 
servers.


Does it make sense to run the PostgreSQL database server and storage 
servers on their own servers dedicated to Bacula?


Is there value in running the Director on one or the other?



Yes. When the Director and PostgreSQL run on the same server, database 
writes do not require a network transfer.



Should I continue to run the storage daemon on the server that hosts 
the large data?


I'm thinking that the NFS server might be more efficient if run on its 
own, and transfer its data over the network (100GbE) to the Bacula 
storage server attached to the tape library.  And perhaps PostgreSQL 
could have dedicated memory and CPU. I don't know what if anything is 
slowing down our backups.  Full backups take 4-6 weeks for 500 TB now.


Ideas?  Thoughts?  Suggestions?



Use fast, direct-attach SSD for the spooling storage. De-spooling must 
be able to sustain sequential reads fast enough to exceed the maximum 
LTO write speed.
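
Roughly like this sketch in the SD's Device resource (names and sizes 
are assumptions; tune them to your spool disk and drive count):

   Device {
      Name = LTO-Drive-0
      Media Type = LTO
      Archive Device = /dev/nst0
      Spool Directory = /mnt/nvme-spool   # direct-attach NVMe SSD
      Maximum Spool Size = 800G
      Maximum Job Spool Size = 200G       # lets jobs despool in turns
   }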




Thank you

Robert Candey





___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Problem Importing Tapes Into Catalog

2022-09-16 Thread Josh Fisher via Bacula-users
The volumes have been purged from the catalog, so the catalog needs to be 
rebuilt using the bscan utility. See 
https://www.bacula.org/13.0.x-manuals/en/utility/Volume_Utility_Tools.html#SECTION00172
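
Roughly (a sketch; the volume name, config path, and device are typical 
examples, not your site's values):

        bscan -v -s -m -c /etc/bacula/bacula-sd.conf -V Volume0001 /dev/nst0

Here -s tells bscan to actually create the catalog records, -m updates 
the media records, and -V names the volume(s) to scan. Make sure the 
drive is idle (or bacula-sd is stopped) before running it.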


On 9/9/22 07:02, Charles Tassell wrote:

Hello Everyone,

  I'm having a problem importing some existing tapes in my autochanger 
into bacula.  I am pretty sure these tapes were in use before but were 
expired, so I want to get them back into the pool.  They show up when 
I do an mtx status, but not in my media list.  I can't label them 
since they are already labeled and that causes an error, and when I 
try to import them with "update slots" it recognizes them but doesn't 
add them to the catalog so they still don't show up.  Here is the 
output of update slots:

...



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] LTO via USB

2022-08-23 Thread Josh Fisher via Bacula-users
I don't think it can be done by USB. The SAS-to-USB chips operate in 
"mass storage" mode, suitable for SAS hard drives. The LTO drive will 
need an adapter that operates in SCSI emulation mode. I don't know of 
any adapters that aren't strictly for hard drives, but maybe there are.


Another approach is to use a Thunderbolt port instead. Thunderbolt is 
essentially an encapsulation of PCI-e. There are many Thunderbolt PCI-e 
Expansion chassis available that are simply one or more PCI-e slots 
connected via Thunderbolt (but with their own power supply). You can 
insert a PCI-e SAS controller in the expansion chassis and the server 
will treat it exactly like one of its own PCI-e slots, so the OS will 
see the tape drive in exactly the same way and so then would Bacula.



On 8/22/22 11:29, Jose Alberto wrote:

USB is supported by Bacula.

I have used DDS3 USB drives for lab and training.

You detect them the same as SCSI (lsscsi -g). Of course, here you don't 
see the "mediumx" changer, you only see the TAPE, so you don't declare 
an autochanger, only the devices.






On Thu, Aug 18, 2022 at 8:44 AM Adam Weremczuk 
 wrote:


Hi all,

This might be considered slightly off topic, but has anybody tried
installing a USB3 PCIe card in an external LTO tape drive?

Many models appear to have 2 slots with only one occupied by SAS by
default, e.g:

https://www.bechtle.com/de-en/shop/quantum-lto-7-hh-sas-external-drive--4038311--p

The idea would be for the tape drive to operate via SAS 99% of the
time, but occasionally to move it elsewhere and access it easily via
USB (from any desktop or laptop).

Somebody has done this before at the factory level:

https://www.fujifilm.com/in/en/business/data-management/datastorage/lto-tape-drive/brochure
but I would prefer not to be limited to and locked into this one
particular model + software.

I'm assuming that I should still be able to run Bacula with the above?

Any advise?

Regards,
Adam



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users



--
#
#   Sistema Operativo: Debian      #
#        Caracas, Venezuela          #
#


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users