Re: [Bacula-users] Remote Client Security Alerts

2024-09-17 Thread Martin Simmons
They are being sent to the Director by the client (nuc2).

I suggest adding some firewall rules on nuc2 to only allow connections to port
9102 from the Director.
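
For example, with iptables on nuc2 this could look something like the following (a sketch only; the Director address is a placeholder you would replace with your real one):

```shell
# Placeholder: replace with your Director's actual IP address.
DIR_IP=192.0.2.10

# Accept FD connections (port 9102) only from the Director, drop the rest.
iptables -A INPUT -p tcp --dport 9102 -s "$DIR_IP" -j ACCEPT
iptables -A INPUT -p tcp --dport 9102 -j DROP
```

The same effect can be achieved with nftables or a front-end such as ufw; the point is that only the Director should be able to reach the FD port.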

__Martin


> On Tue, 17 Sep 2024 11:41:14 +0100, Chris Wilkinson said:
> 
> I keep getting security alerts from a remote client backup. The backups
> always complete successfully. The IPs listed in the job log are different
> every time and in various locations including some in Russia but also in
> London and European data centres. There are no entries at all in the remote
> client bacula log. This only happens with remote client backups, never with
> local client backups.
> 
> It's not clear to me whether these alerts are coming from the DIR or being
> sent to the Director by the client.
> 
> I'm not sure whether to just ignore these or take some steps to block them.
> Is there an FD directive that would reject these perhaps?
> 
> Any advice welcomed.
> 
> Thanks
> 
> -Chris Wilkinson
> 
> -- Forwarded message -
> From: Bacula 
> Date: Tue, 17 Sep 2024, 03:50
> Subject: Bacula: Backup OK of Client:nuc2 Fileset:nuc2 Incremental
> To: 
> 
> 
> 17-Sep 03:50 raspberrypi-dir JobId 7536: Start Backup JobId 7536, 
> Job=nuc2.2024-09-17_03.50.00_03
> 17-Sep 03:50 raspberrypi-dir JobId 7536: Using Device "qnap-usb3" to write.
> 17-Sep 03:50 raspberrypi-dir JobId 7536: Sending Accurate information to the 
> FD.
> 17-Sep 03:50 raspberrypi-sd JobId 7536: Volume "nuc2-incremental6040" 
> previously written, moving to end of data.
> 17-Sep 03:50 raspberrypi-sd JobId 7536: Ready to append to end of Volume 
> "nuc2-incremental6040" size=162,983,874
> 16-Sep 07:25 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got 
> bad command from 87.236.176.167. Len=-4.
> 17-Sep 03:50 raspberrypi-sd JobId 7536: Elapsed time=00:00:01, Transfer 
> rate=90.58 K Bytes/second
> 16-Sep 07:26 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got 
> bad command from 87.236.176.159. Len=-4.
> 16-Sep 07:26 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got 
> bad command from 87.236.176.148. Len=-4.
> 16-Sep 07:26 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got 
> bad command from 87.236.176.154. Len=-4.
> 16-Sep 07:26 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got 
> bad command from 87.236.176.155. Len=-2147483608.
> 16-Sep 07:27 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got 
> bad command from 87.236.176.163. Len=49.
> 16-Sep 07:27 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got 
> bad command from 87.236.176.163. Len=110.
> 16-Sep 07:27 nuc2 JobId 0: Security Alert: bsock.c:560 Read error from 
> client:87.236.176.156:9102: ERR=No data available
> 16-Sep 07:27 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got 
> bad command from 87.236.176.156. Len=0.
> 16-Sep 07:27 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got 
> bad command from 87.236.176.161. Len=-4.
> 16-Sep 07:28 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got 
> bad command from 87.236.176.178. Len=-4.
> 16-Sep 07:28 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got 
> bad command from 87.236.176.156. Len=-4.
> 16-Sep 07:28 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got 
> bad command from 87.236.176.170. Len=-4.
> 16-Sep 07:29 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got 
> bad command from 87.236.176.159. Len=-4.
> 16-Sep 07:29 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got 
> bad command from 87.236.176.152. Len=-4.
> 16-Sep 07:29 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got 
> bad command from 87.236.176.156. Len=-4.
> 16-Sep 07:30 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got 
> bad command from 87.236.176.170. Len=-4.
> 16-Sep 07:30 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got 
> bad command from 87.236.176.168. Len=0.
> 16-Sep 07:30 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got 
> bad command from 87.236.176.171. Len=0.
> 16-Sep 07:30 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got 
> bad command from 87.236.176.166. Len=-4.
> 16-Sep 19:54 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got 
> bad command from 80.66.76.134. Len=-4.
> 17-Sep 03:50 raspberrypi-sd JobId 7536: Sending spooled attrs to the 
> Director. Despooling 6,131 bytes ...
> 17-Sep 03:50 raspberrypi-dir JobId 7536: Bacula raspberrypi-dir 11.0.6 
> (10Mar22):
>   Build OS:   aarch64-unknown-linux-gnu debian 11.3
>   JobId:  7536
>   Job:nuc2.2024-09-17_03.50.00_03
>   Backup Level:   Incremental, since=2024-09-16 03:50:03
>   Client: "nuc2" 11.0.6 (10Mar22) 
> x86_64-pc-linux-gnu,debian,12.7
>   FileSet:"nuc2" 2023-09-26 03:50:00
>   Pool:   "nuc2-incremental" (From Job IncPool override)
>   Catalog:"MyCatalog" (Fro

Re: [Bacula-users] Warning: Unexpected stream="0"

2024-09-09 Thread Martin Simmons
> On Sat, 7 Sep 2024 15:54:37 +0200, Dragan Milivojević said:
> 
> > I was planning to run the director and SD in debug mode a bit later
> > as well, and will post the debug logs if I'm able to trigger the issue.
> 
> Update to this issue is stuck in moderation due to message size, so
> posting it again
> with links to files.
> 
> bacula fd, dir and sd were run with -d 300 -v -m parameters

It looks like you have set accurate=mcpino5 in the fileset.  I suspect there
is a bug when combining the o option with one of the signature checking
options (1, 2, 3, or 5).

You could try removing the o and 5 options to see if that fixes it.
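
In the FileSet, that change would look something like this (a sketch; the surrounding resource is omitted):

```conf
Options {
  # Original, suspected to trigger the bug ('o' combined with
  # the signature-checking option '5'):
  #   accurate = mcpino5
  # Suggested test: drop the 'o' and '5' options:
  accurate = mcpin
}
```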

__Martin


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Remote Client Backup Failing

2024-09-09 Thread Martin Simmons
Yes, I always remove that /etc/hosts line on Debian.

There is usually no need to specify fdaddress.  It could be useful for
security, e.g. on a machine with multiple network cards or set to localhost if
you are also running the Director and Storage Daemon on the same machine.
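
As a sketch (the resource name is a placeholder), the relevant part of bacula-fd.conf would be:

```conf
FileDaemon {
  Name = nuc2-fd
  FDPort = 9102
  # Omit FDAddress to bind to all interfaces (0.0.0.0), or set it
  # explicitly to restrict the FD to one interface, e.g.:
  # FDAddress = 127.0.0.1   # only if DIR and SD run on this same machine
}
```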

__Martin


> On Sun, 8 Sep 2024 03:41:17 +0100, Chris Wilkinson said:
> 
> I have answered my own question. My error was to set the fdaddress
> directive in the fd(s) to the remote hostname(s). My /etc/hosts files have
> the line
> 
> 127.0.1.1 
> 
> The fdaddress directive therefore resolved to 127.0.1.1. This is a typical
> /etc/hosts on Debian and derivative systems, I understand. The directive
> should be omitted to force the fd to bind to address 0.0.0.0. With that
> change, port 9102 is open and the remote client backup is working again.
> 
> I can't think of a situation where you would not want the fd to bind to
> the 0.0.0.0 address.
> 
> On Sat, 7 Sep 2024, 13:02 Chris Wilkinson,  wrote:
> 
> > I have a remote client whose backup is now failing because the FD doesn't
> > return a status. Till now it was OK.
> >
> >  2024-09-07 06:09:28 raspberrypi-dir JobId 7330: Error: Bacula
> > raspberrypi-dir 11.0.6 (10Mar22):
> >  Build OS: aarch64-unknown-linux-gnu debian 11.3
> >  JobId: 7330 Job: nuc2.2024-09-07_06.04.16_55
> >  Backup Level: Incremental, since=2024-09-06 03:50:03
> >  Client: "nuc2" 11.0.6 (10Mar22) x86_64-pc-linux-gnu,debian,12.1
> >  FileSet: "nuc2" 2023-09-26 03:50:00
> >  Pool: "nuc2-incremental" (From Job IncPool override)
> >  Catalog: "MyCatalog" (From Pool resource)
> >  Storage: "remote-clients" (From Command input)
> >  Scheduled time: 07-Sep-2024 06:04:16
> >  Start time: 07-Sep-2024 06:04:18
> >  End time: 07-Sep-2024 06:09:28
> >  Elapsed time: 5 mins 10 secs
> >  Priority: 10
> >  FD Files Written: 0
> >  SD Files Written: 0
> >  FD Bytes Written: 0 (0 B)
> >  SD Bytes Written: 0 (0 B)
> >  Rate: 0.0 KB/s
> >  Software Compression: None
> >  Comm Line Compression: None
> >  Snapshot/VSS: no
> >  Encryption: no
> >  Accurate: no
> >  Volume name(s):
> >  Volume Session Id: 205
> >  Volume Session Time: 1724622798
> >  Last Volume Bytes: 156,319,421 (156.3 MB)
> >  Non-fatal FD errors: 1
> >  SD Errors: 0
> >  FD termination status: Error
> >  SD termination status: Waiting on FD
> >  Termination: *** Backup Error ***
> >  2024-09-07 06:09:28 raspberrypi-dir JobId 7330: Fatal error: No Job
> > status returned from FD.
> >  2024-09-07 06:04:18 raspberrypi-dir JobId 7330: Using Device "qnap-usb3"
> > to write.
> >  2024-09-07 06:04:18 raspberrypi-dir JobId 7330: Start Backup JobId 7330
> > Job=nuc2.2024-09-07_06.04.16_55
> >
> > I have port 9102 (FD) open on the remote router but when I look at netstat
> > on the remote client I see the local address is 127.0.1.1 for port 9102
> > whereas it is 0.0.0.0 for other ports.
> >
> > ~$ netstat -tln
> > Active Internet connections (only servers)
> > Proto Recv-Q Send-Q Local Address Foreign Address State
> > tcp 0 0 127.0.0.1:61209 0.0.0.0:* LISTEN
> > tcp 0 0 0.0.0.0:8088 0.0.0.0:* LISTEN
> > tcp 0 0 0.0.0.0:1 0.0.0.0:* LISTEN
> > tcp 0 0 0.0.0.0:5201 0.0.0.0:* LISTEN
> > tcp 0 0 0.0.0.0:22000 0.0.0.0:* LISTEN
> > tcp 0 0 127.0.1.1:9102 0.0.0.0:* LISTEN
> > tcp 0 0 0.0.0.0:631 0.0.0.0:* LISTEN
> > tcp 0 0 0.0.0.0:12865 0.0.0.0:* LISTEN
> > tcp 0 0 0.0.0.0:873 0.0.0.0:* LISTEN
> > tcp 0 0 0.0.0.0:8384 0.0.0.0:* LISTEN
> > tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN
> > tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN
> > tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
> >
> > The other data point is that pointing a port open tool at the public IP of
> > the client comes back as 9102 closed which explains the failure. The other
> > opened ports are correctly showing as open. I'm guessing that the reason
> > for that is the 127.0.1.1.
> >
> > This all came about after the remote client router was changed from an
> > ADSL one to VDSL. I haven't been able to figure out why this one port would
> > have a different local address than the others.
> >
> > This is probably more of a networking question but perhaps someone here
> > might have an insight.
> >
> > Many Thanks
> >
> > -Chris Wilkinson
> >
> -Chris Wilkinson
> 




Re: [Bacula-users] Warning: Unexpected stream="0"

2024-09-06 Thread Martin Simmons
Try running the bacula-fd with -d 300 to capture much more information when it
fails.
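
For example (the config path is an assumption; adjust -c for your installation):

```shell
# Run the FD in the foreground (-f) with debug level 300 and verbose output.
bacula-fd -f -d 300 -v -c /etc/bacula/bacula-fd.conf
```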

Also, what is the FileSet definition?

Is the Storage Daemon also running Bacula version 15.0.2?

__Martin


> On Thu, 5 Sep 2024 19:44:40 +0200, Dragan Milivojević said:
> 
> Hi all
> 
> An issue with Bacula version 15.0.2, basically Bacula fails to save files, 
> all that is
> stored is file names and no data, not even metadata.
> 
> For example:
> 
> 
> *list jobid=81
> +-------+------------+---------------------+------+-------+----------+----------+-----------+
> | jobid | name       | starttime           | type | level | jobfiles | jobbytes | jobstatus |
> +-------+------------+---------------------+------+-------+----------+----------+-----------+
> |    81 | GalileoJob | 2024-09-03 21:29:10 | B    | I     |      480 |    9,459 | T         |
> +-------+------------+---------------------+------+-------+----------+----------+-----------+
> *list files jobid=81
> ++
> | filename
>  |
> ++
> | /boot/grub2/grubenv 
>  |
> | 
> /home/galileo/.local/share/keybase/kbfs_settings/v1/kbfsSettings.leveldb/004100.log
>   |
> 
> 
> Clipped for verbosity ...
> 
> | 
> /home/galileo/.config/google-chrome/CertificateRevocation/9084/manifest.json  
>|
> | /home/hs/tmp/drvo/  
>  |
> | /home/hs/tmp/drvo/drvo_notes.txt
>  |
> | /home/hs/tmp/drvo/fplrn008.pdf  
>  |
> | 
> /home/galileo/.config/google-chrome/TrustTokenKeyCommitments/2024.9.3.1/_metadata/
>|
> | 
> /home/galileo/.config/google-chrome/TrustTokenKeyCommitments/2024.9.3.1/_metadata/verified_contents.json
>  |
> | /home/galileo/.config/google-chrome/CertificateRevocation/9082/_metadata/   
>  |
> | 
> /home/galileo/.config/google-chrome/CertificateRevocation/9082/_metadata/verified_contents.json
>   |
> | /home/galileo/.config/google-chrome/Profile 
> 1/blob_storage/dfd7d552-d0fb-4fdd-8b75-ef7a9d27db71/ |
> | /home/galileo/.config/google-chrome/TrustTokenKeyCommitments/2024.9.3.1/
>  |
> | 
> /home/galileo/.config/google-chrome/TrustTokenKeyCommitments/2024.9.3.1/LICENSE
>   |
> | 
> /home/galileo/.config/google-chrome/TrustTokenKeyCommitments/2024.9.3.1/keys.json
> |
> | 
> /home/galileo/.config/google-chrome/TrustTokenKeyCommitments/2024.9.3.1/manifest.fingerprint
>  |
> | 
> /home/galileo/.config/google-chrome/TrustTokenKeyCommitments/2024.9.3.1/manifest.json
> |
> | /home/galileo/.config/google-chrome/TpcdMetadata/2024.9.2.1/_metadata/  
>  |
> | 
> /home/galileo/.config/google-chrome/TpcdMetadata/2024.9.2.1/_metadata/verified_contents.json
>  |
> | /home/galileo/.config/google-chrome/TpcdMetadata/2024.9.2.1/
>  |
> | 
> /home/galileo/.config/google-chrome/TpcdMetadata/2024.9.2.1/manifest.fingerprint
>  |
> | /home/galileo/.config/google-chrome/TpcdMetadata/2024.9.2.1/manifest.json   
>  |
> | /home/galileo/.config/google-chrome/TpcdMetadata/2024.9.2.1/metadata.pb 
>  |
> ++
> +-------+------------+---------------------+------+-------+----------+----------+-----------+
> | jobid | name       | starttime           | type | level | jobfiles | jobbytes | jobstatus |
> +-------+------------+---------------------+------+-------+----------+----------+-----------+
> |    81 | GalileoJob | 2024-09-03 21:29:10 | B    | I     |      480 |    9,459 | T         |
> +-------+------------+---------------------+------+-------+----------+----------+-----------+
> *restore jobid=81
> You have selected the following JobId: 81
> 
> Building directory tree for JobId(s) 81 ...  +
> 54 files inserted into the tree.
> 
> You are now entering file selection mode where you add (mark) and
> remove (unmark) files to be restored. No files are initially added, unless
> you used the "all" keyword on the command line.
> Enter "done" to leave this mode.
> 
> cwd is: /
> $ cd /home/hs/tmp/drvo/
> cwd is: /home/hs/tmp/drvo/
> $ dir
> --   0 root root   0  1970-01-01 01:00:00  
> /home/hs/tmp/drvo/drvo_notes.txt/
> --   0 root root   0  

Re: [Bacula-users] Director migration question

2024-09-03 Thread Martin Simmons
> On Mon, 2 Sep 2024 12:45:22 -0400, Phil Stracchino said:
> 
> Since nobody else but me will be starting or stopping jobs, I don't see 
> that being an issue.  It looks like nothing went amiss up to the point 
> of verifying that everything is correctly configured.  It looks as 
> though there may be some issues around two Directors talking to the same 
> SD at the same time, even if one of them is only "watching".

The Director updates various database tables on startup to match the resources
in the conf file, so that might cause issues.

__Martin




Re: [Bacula-users] wrong config files defining storage

2024-08-27 Thread Martin Simmons
If you made any changes to the SD config file, did you restart the SD
afterwards?

What is the output of:

status storage=DiskFile

in bconsole?

__Martin


> On Tue, 27 Aug 2024 12:54:51 +0200, Mehrdad Ravanbod said:
> 
> Hi guys
> 
> I am trying to do some test backups with Bacula and at the moment it is
> not working.  The error I get is:
> 
> "Storage daemon "DiskFile" didn't accept Device "DiskAC_Dev0" because: 
> 3924 Device "DiskAC_Dev0" not in SD Device resources or no matching 
> Media Type or is disabled."
> 
> I am kinda stuck; I think it looks OK, so I am hoping that someone here
> can give me some pointers as to what is wrong.
> 
> I have a Bacula installation on a RockyLinux 9 server, trying to back up
> files from a remote Windows machine (to a local directory on that
> machine), but I cannot get it to work; I seem to have some error in
> defining storage.  Btw, the Director can connect to the FD and SD on the
> remote machine, so the client definition in bacula-dir and the conf files
> on the remote machine seem to be working, no problems there.
> 
> here are rel. parts of my config files:
> 
> #from bacula-dir
> 
> Storage {
>    Name = "DiskFile"
>    SdPort = 9103
>    Address = "192.168.1.210"
>    FdStorageAddress = "192.168.1.210"
>    Password = "GySzPw9oniyUY6Cevt5OlHJaV7lst9Dsod+v97Xk/fc5"
>    Device = DiskAC_Dev0
>    MediaType = DiskVolume
>    Autochanger = no
>    MaximumConcurrentJobs = 10
> }
> 
> Pool {
>    Name = "BckupDisk_Pool1"
>    PoolType = "Backup"
>    Storage=DiskFile
>    LabelFormat = "\"BckTest-${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}\""
>    LabelType = "Bacula"
>    MaximumVolumes = 100
>    MaximumVolumeBytes = 53687091200
>    VolumeRetention = 31536000
>   AutoPrune = no
>    Recycle = no
>    ScratchPool = "Scratch"
> }
> 
> 
> Job {
>    Name = "Test"
>    Type = "Backup"
>    Level = "Full"
>    Pool = "BckupDisk_Pool1"
>    JobDefs = "WinBckupToDisk"
>    Storage = "DiskFile"
> }
> 
> JobDefs {
>    Name = "WinBckupToDisk"
>    Type = "Backup"
>    Messages = "Standard"
>    Storage = "DiskFile"
>    Pool = "BckupDisk_Pool1"
>    Client = "bckuptest-fd"
>    Fileset = "FS_BckupDisk1"
>    Schedule = "Weekly1"
>    WriteBootstrap = "/opt/bacula/working/%c.bsr"
>    SpoolAttributes = yes
>    Priority = 10
> }
> 
> #part of  SD conf file
> 
> Storage { # definition of myself
>    Name = bckuptest-sd
>    SDPort = 9103  # Director's port
>    WorkingDirectory = "C:\\Program Files\\Bacula\\working"
>    Pid Directory = "C:\\Program Files\\Bacula\\working"
>    Maximum Concurrent Jobs = 10
> }
> 
> Device {
>    Name = DiskAC_Dev0
>    MediaType = DiskVolume
>    ArchiveDevice = "D:\Backup1"
>    LabelMedia = Yes
>    RandomAccess = Yes
>    AutomaticMount = Yes
>    RemovableMedia = No
>    AlwaysOpen = No
>    MaximumConcurrentJobs = 5
> }
> 
> So this is what i get when i try to run the job Test
> 
> localhost-dir JobId 56: Fatal error: Failed to start job on any of the 
> storages defined!
> localhost-dir JobId 56: Storage daemon "DiskFile" didn't accept Device 
> "DiskAC_Dev0" because: 3924 Device "DiskAC_Dev0" not in SD Device 
> resources or no matching Media Type or is disabled.
> localhost-dir JobId 56: Connected to Storage "DiskFile" at 
> 192.168.1.210:9103 with TLS
> 
> 
> -- 
> 
> Mehrdad Ravanbod System administrator
> 
> 
> 
> 




Re: [Bacula-users] Volume names, time stamping and creation

2024-08-22 Thread Martin Simmons
> On Thu, 22 Aug 2024 10:15:23 +0200, Mehrdad Ravanbod said:
> 
> - The Bacula documentation states that it is possible to use variables in
> volume names, but I am having no success with it.  Does anyone have this
> working and care to share a couple of examples from .conf files?

I'm using a counter variable to make volumes IncrB001, IncrB002 etc like this:

Pool {
  Name = IncrPool
  Pool Type = Backup
  Recycle = yes   # Bacula can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 100 years# forever
  Maximum Volume Bytes = 1g
  Label Format = "IncrB${NextIncrB+:p/3/0/r}"
}

Counter {
  Name = NextIncrB
  Minimum = 1
  Catalog = MyCatalog
}

> - Is it possible to direct Bacula to create a new volume on a particular
> weekday, say Sunday?  The background is that I would like to see if it is
> possible to have a full backup and the following diff backups during the
> week in the same volume, i.e. one volume for a whole week's backups.

I think the answer is "not directly", but you can set Volume Use Duration and
then ensure that you start using the volume on a Sunday.  You would need to
set the Volume Use Duration to 7 days minus a few hours to allow for timing
differences.  It is difficult to make this work reliably though, for example
if the backups fail to run on the Sunday then it would get out of sync.

Another way would be to use a script to set VolStatus = Used for the last
volume and schedule that to run just before the backups on Sunday (either as a
cron job or as a Bacula job with Type = Admin).
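
A minimal sketch of such a script (the volume name is a placeholder; in practice you would first look up the current Append volume in the catalog):

```shell
#!/bin/sh
# Mark the volume Used via bconsole so that Sunday's first backup is
# forced onto a fresh volume.  VOL is hypothetical; substitute the name
# of the last Append volume in the pool.
VOL="Vol-0042"
echo "update volume=${VOL} VolStatus=Used" | bconsole
```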

__Martin




Re: [Bacula-users] Very long retention time

2024-08-14 Thread Martin Simmons
I have archival backups going back 10 years without any problems.

If you want to be able to restore any single file from the backup, then you
need to explicitly configure File Retention and Job Retention in the Client
resource, because they default to 60 and 180 days respectively.  I've set them
both to 50 years.

You also need to set the Volume Retention in the pool (which defaults to 1
year).

After changing any of these, update the db and volumes using the bconsole
update command.
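
As a sketch (resource names and values are placeholders), the relevant directives would be:

```conf
Client {
  Name = archive-fd            # placeholder name
  Address = archive.example.org
  Password = "xxx"
  File Retention = 50 years    # default is 60 days
  Job Retention = 50 years     # default is 180 days
}

Pool {
  Name = ArchivePool           # placeholder name
  Pool Type = Backup
  Volume Retention = 50 years  # default is 1 year
}
```

Remember that these only take effect for existing records after running the bconsole update command as mentioned above.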

__Martin


> On Wed, 14 Aug 2024 14:21:48 +0200, Mehrdad Ravanbod said:
> 
> Thanks for the response, I appreciate it.
> 
> As to media, the plan is to put it on disk, with some sort of RAID (1, 5,
> or 6) to ensure safety/integrity; that is where the files are right now.
> 
> As to the amount of data, it is not that huge: weekly full backups are
> around 50 GB, and it lends itself well to compression.  The data does not
> change that much, so incremental backups should be small.  My concern is
> mainly the database (using PostgreSQL) and whether Bacula can handle
> retention times of, say, 3650+ days.
> 
> I have seen retention times of one year, but that is about it.  If anyone
> has experience of handling longer retention times and cares to share their
> experiences or config files for the jobs, I would be grateful.
> 
> 
> Regards /Mehrdad
> 
> On 2024-08-14 13:36, Gary R. Schmidt wrote:
> > On 14/08/2024 16:19, Mehrdad Ravanbod wrote:
> >>
> >> hello everyone
> >>
> >> I need to set up backups for a set of files that need to be saved and
> >> remain accessible for a very long time (approx. 10 years, for
> >> compliance reasons).  Is this possible in Bacula, or even advisable?
> >> Or do we need to solve this some other way?
> >>
> > The first problem you have is finding media that is guaranteed to last 
> > for 10+ years.
> >
> > Tape is generally considered the best for this, but...  YMMV if you 
> > don't store it correctly.
> >
> > Another option is long-term optical disk, which is not the same as DVD 
> > or Blu-Ray.
> >
> > Talk to the suppliers of archival services in your country for 
> > information on what is available and sensible.
> >
> > The second problem is storage for the database.  If you have millions 
> > of files that change frequently you will need a lot of space for the 
> > database.
> >
> > Non-Bacula solution: Back when I was at SGI we offered HFS - 
> > Hierarchical File System - for people who had this sort of problem.  
> > And very, very deep pockets.
> > I don't know if Rackable kept it alive when they purchased SGI, maybe 
> > talk to them?
> >
> > Cheers,
> >     Gary    B-)
> >
> >
> -- 
> 
> Mehrdad Ravanbod    System administrator
> 
> 
> 
> 




Re: [Bacula-users] migrate job

2024-08-05 Thread Martin Simmons
>>>>> On Mon, 5 Aug 2024 13:52:52 +0200, Stefan G Weichinger said:
> 
> Am 02.08.24 um 10:04 schrieb Martin Simmons:
> >>>>>> On Fri, 2 Aug 2024 07:36:36 +0200, Stefan G Weichinger said:
> >>
> >> Am 01.08.24 um 18:08 schrieb Martin Simmons:
> >>
>>>>> finds no volumes to migrate
> >>>>>
>>>>> "media list" lists ~40 volumes in Pool "File"
> >>>>
> >>>> I would still appreciate some help here.
> >>>
> >>> Maybe the problem is the VolStatus?  It must be "Full", "Used" or "Error" 
> >>> to
> >>> be considered for migration with SelectionType = "OldestVolume".
> >>
> >> Well, the source volumes are in Status "Full", re-checked that right now.
> > 
> > That looks OK.
> > 
> > Maybe the job records have been pruned from the catalog?
> > 
> > SelectionType = "OldestVolume" uses this SQL query to find the source 
> > volume:
> > 
> > SELECT Media.MediaId FROM Media,Pool,JobMedia WHERE
> >  Media.MediaId in (SELECT DISTINCT MediaId from JobMedia) AND
> >  Media.VolStatus in ('Full','Used','Error') AND Media.Enabled=1 AND
> >  Media.PoolId=Pool.PoolId AND Pool.Name='File'
> >  ORDER BY LastWritten ASC LIMIT 1;
> 
> hmm, some volumes are without jobs on them, right (cleaning up now)
> 
> But I did a fresh test:
> 
> run backup job to file-based volumes (in Pool "File) successfully
> 
> and without touching (or pruning) anything I immediately started a 
> migration job from "File" to "Daily" after the successful backup -> no 
> volumes found
> 
> strange

So the SELECT query above returns no results?  If so, you will have to pick
apart the various tests in the WHERE clause to understand why.
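
For example, each condition could be checked separately in psql, or via bconsole's sqlquery command (a sketch):

```sql
-- Are there volumes in the pool at all, and what state are they in?
SELECT Media.MediaId, Media.VolStatus, Media.Enabled, Media.LastWritten
  FROM Media, Pool
 WHERE Media.PoolId = Pool.PoolId AND Pool.Name = 'File';

-- Which volumes still have job records attached?
SELECT DISTINCT MediaId FROM JobMedia;
```

Comparing the two result sets should show whether the volumes are being excluded by VolStatus, Enabled, or missing JobMedia records.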

__Martin




Re: [Bacula-users] migrate job

2024-08-02 Thread Martin Simmons
>>>>> On Fri, 2 Aug 2024 07:36:36 +0200, Stefan G Weichinger said:
> 
> Am 01.08.24 um 18:08 schrieb Martin Simmons:
> 
> >>> finds no volumes to migrate
> >>>
> >>> "media list" lists ~40 volumes in Pool "File"
> >>
> >> I would still appreciate some help here.
> > 
> > Maybe the problem is the VolStatus?  It must be "Full", "Used" or "Error" to
> > be considered for migration with SelectionType = "OldestVolume".
> 
> Well, the source volumes are in Status "Full", re-checked that right now.

That looks OK.

Maybe the job records have been pruned from the catalog?

SelectionType = "OldestVolume" uses this SQL query to find the source volume:

SELECT Media.MediaId FROM Media,Pool,JobMedia WHERE
Media.MediaId in (SELECT DISTINCT MediaId from JobMedia) AND
Media.VolStatus in ('Full','Used','Error') AND Media.Enabled=1 AND
Media.PoolId=Pool.PoolId AND Pool.Name='File'
ORDER BY LastWritten ASC LIMIT 1;

__Martin




Re: [Bacula-users] migrate job

2024-08-01 Thread Martin Simmons
> On Thu, 1 Aug 2024 17:23:30 +0200, Stefan G Weichinger said:
> 
> Am 09.07.24 um 13:16 schrieb Stefan G. Weichinger:
> > 
> > I have:
> > 
> > Pool {
> >    Name = "Daily"
> >    Description = "daily backups"
> >    PoolType = "Backup"
> >    MaximumVolumes = 30
> >    VolumeRetention = 864000
> >    VolumeUseDuration = 432000
> >    Storage = "HP-Autoloader"
> > }
> > 
> > Pool {
> >    Name = "File"
> >    PoolType = "Backup"
> >    LabelFormat = "Vol-"
> >    MaximumVolumes = 45
> >    MaximumVolumeBytes = 53687091200
> >    VolumeRetention = 432000
> >    NextPool = "Daily"
> >    Storage = "File"
> >    AutoPrune = yes
> >    Recycle = yes
> > }
> > 
> > Job {
> >    Name = "migrate-to-tape"
> >    Type = "Migrate"
> >    Messages = "Standard"
> >    Pool = "File"
> >    Client = "samba-fd"
> >    Fileset = "Full Set"
> >    PurgeMigrationJob = yes
> >    MaximumConcurrentJobs = 4
> >    SelectionType = "OldestVolume"
> > }
> > Storage {
> >    Name = "File"
> >    SdPort = 9103
> >    Address = "samba"
> >    Password = "xx"
> >    Device = "FileStorage"
> >    MediaType = "File"
> > }
> > Storage {
> >    Name = "HP-Autoloader"
> >    SdPort = 9103
> >    Address = "samba"
> >    Password = "xx"
> >    Device = "HP-Autoloader"
> >    MediaType = "LTO-6"
> >    Autochanger = "HP-Autoloader"
> >    MaximumConcurrentJobs = 2
> > }
> > 
> > 
> > 
> > 
> > 
> > the run:
> > 
> > Run Migration job
> > JobName:   migrate-to-tape
> > Bootstrap: *None*
> > Client:    samba-fd
> > FileSet:   Full Set
> > Pool:  File (From Job resource)
> > NextPool:  Daily (From Job Pool's NextPool resource)
> > Read Storage:  File (From Pool resource)
> > Write Storage: HP-Autoloader (From Job Pool's NextPool resource)
> > JobId: *None*
> > When:  2024-07-09 13:12:52
> > Catalog:   MyCatalog
> > Priority:  10
> > 
> > 
> > finds no volumes to migrate
> > 
> > "media list" lists ~40 volumes in Pool "File"
> 
> I would still appreciate some help here.

Maybe the problem is the VolStatus?  It must be "Full", "Used" or "Error" to
be considered for migration with SelectionType = "OldestVolume".

__Martin




Re: [Bacula-users] volumes, pools, media types and labels

2024-06-25 Thread Martin Simmons
>>>>> On Tue, 25 Jun 2024 18:16:42 +0200, Stefan G Weichinger said:
> 
> Am 25.06.24 um 17:43 schrieb Martin Simmons:
> >>>>>> On Tue, 25 Jun 2024 12:59:03 +0200, Stefan G Weichinger said:
> >>
> >> I see various issues, maybe related to defective tapes also.
> >>
> >> Volumes that are in mode "Append" are loaded and not written to as in:
> >>
> >>
> >> 25-Jun 12:52 samba-sd JobId 2840: 3304 Issuing autochanger "load Volume
> >> CMR945L6, Slot 6, Drive 0" command.
> >> 25-Jun 12:52 samba-sd JobId 2840: 3305 Autochanger "load Volume
> >> CMR945L6, Slot 6, Drive 0", status is OK.
> >> 25-Jun 12:53 samba-sd JobId 2840: Volume "CMR945L6" previously written,
> >> moving to end of data.
> >> 25-Jun 12:53 samba-sd JobId 2840: Warning: For Volume "CMR945L6":
> >> The number of files mismatch! Volume=350 Catalog=349
> >> Correcting Catalog
> >> 25-Jun 12:53 samba-sd JobId 2840: [SI0202] End of Volume "CMR945L6" at
> >> 350:1 on device "HP-Ultrium" (/dev/nst0). Write of 262144 bytes got -1.
> >> 25-Jun 12:53 samba-sd JobId 2840: Re-read of last block succeeded.
> >> 25-Jun 12:53 samba-sd JobId 2840: End of medium on Volume "CMR945L6"
> >> Bytes=1,310,721 Blocks=1 at 25-Jun-2024 12:53.
> >> 25-Jun 12:53 samba-sd JobId 2840: 3307 Issuing autochanger "unload
> >> Volume CMR945L6, Slot 6, Drive 0" command.
> >> 25-Jun 12:54 samba-sd JobId 2840: 3304 Issuing autochanger "load Volume
> >> CMR915L6, Slot 5, Drive 0" command.
> >>
> >>
> >> I "Purge" them, mark them as "Append" ... doesn't work.
> > 
> > Don't manually mark them as "Append" because that tells Bacula to wind 
> > forward
> > to the end of the data before writing.  The "Purge" operation just affects 
> > the
> > catalog, not the data on the tape.
> >   
> > If configured correctly, Bacula should automatically reuse tapes that have
> > been marked as purged (either explicitly marked or implicitly after the
> > retention time has expired).
> 
> So I only "Purge" a "Full" tape?

No, you can purge a volume in "Append", "Full", "Used" or "Error" state.

Bacula sets the "Used" status if a tape reaches its "Volume Use Duration".

I suggest looking at
https://www.bacula.org/15.0.x-manuals/en/main/Automatic_Volume_Recycling.html




Re: [Bacula-users] volumes, pools, media types and labels

2024-06-25 Thread Martin Simmons
> On Tue, 25 Jun 2024 12:59:03 +0200, Stefan G Weichinger said:
> 
> I see various issues, maybe related to defective tapes also.
> 
> Volumes that are in mode "Append" are loaded and not written to as in:
> 
> 
> 25-Jun 12:52 samba-sd JobId 2840: 3304 Issuing autochanger "load Volume 
> CMR945L6, Slot 6, Drive 0" command.
> 25-Jun 12:52 samba-sd JobId 2840: 3305 Autochanger "load Volume 
> CMR945L6, Slot 6, Drive 0", status is OK.
> 25-Jun 12:53 samba-sd JobId 2840: Volume "CMR945L6" previously written, 
> moving to end of data.
> 25-Jun 12:53 samba-sd JobId 2840: Warning: For Volume "CMR945L6":
> The number of files mismatch! Volume=350 Catalog=349
> Correcting Catalog
> 25-Jun 12:53 samba-sd JobId 2840: [SI0202] End of Volume "CMR945L6" at 
> 350:1 on device "HP-Ultrium" (/dev/nst0). Write of 262144 bytes got -1.
> 25-Jun 12:53 samba-sd JobId 2840: Re-read of last block succeeded.
> 25-Jun 12:53 samba-sd JobId 2840: End of medium on Volume "CMR945L6" 
> Bytes=1,310,721 Blocks=1 at 25-Jun-2024 12:53.
> 25-Jun 12:53 samba-sd JobId 2840: 3307 Issuing autochanger "unload 
> Volume CMR945L6, Slot 6, Drive 0" command.
> 25-Jun 12:54 samba-sd JobId 2840: 3304 Issuing autochanger "load Volume 
> CMR915L6, Slot 5, Drive 0" command.
> 
> 
> I "Purge" them, mark them as "Append" ... doesn't work.

Don't manually mark them as "Append", because that tells Bacula to wind forward
to the end of the existing data before writing instead of reusing the tape from
the start.  The "Purge" operation just affects the catalog, not the data on the
tape.
 
If configured correctly, Bacula should automatically reuse tapes that have
been marked as purged (either explicitly marked or implicitly after the
retention time has expired).

__Martin




Re: [Bacula-users] bpipe problems on ver. 15.0.2

2024-06-17 Thread Martin Simmons
The trace output says:

> Please install a debugger (gdb) to receive a traceback.

so that's the first thing to try.  The traceback might give more information
about where the problem is.

__Martin


> On Fri, 14 Jun 2024 13:41:18 +, Žiga Žvan  said:
> 
> Hi!
> I'm using bacula to backup some virtual machines from my esxi hosts. It 
> worked on version 9.6.5 (Centos), however I'm having problems on version 
> 15.0.2 (Ubuntu). Backup job ends with a success, however bacula-fd service 
> gets killed in the process...
> Does anybody experience similar problems?
> Any suggestion how to fix this?
> 
> Kind regards,
> Ziga Zvan
> 
> 
>  Relevant part of conf 
> Job {
> Name = "esxi_donke_SomeHost-backup"
> JobDefs = "SomeHost-job"
> ClientRunBeforeJob = "sshpass -p 'SomePassword' ssh -o 
> StrictHostKeyChecking=no SomeUser@esxhost.domain.local 
> /ghettoVCB-master/ghettoVCB.sh -g /ghettoVCB-master/ghettoVCB.conf -m 
> SomeHost"
> ClientRunAfterJob = "sshpass -p 'SomePassword' ssh -o 
> StrictHostKeyChecking=no SomeUser@esxhost.domain.local rm -rf 
> /vmfs/volumes/ds2_raid6/backup/SomeHost"
> }
> 
> 
> FileSet {
> Name = "SomeHost-fileset"
> Include {
> Options {
> signature = MD5
> Compression = GZIP1
> }
> Plugin = "bpipe:/mnt/bkp_SomeHost.tar:sshpass -p 'SomePassword' ssh -o 
> StrictHostKeyChecking=no SomeUser@esxhost.domain.local /bin/tar -c 
> /vmfs/volumes/ds2_raid6/backup/SomeHost:/bin/tar -C 
> /storage/bacula/imagerestore -xvf -"
> }
> Exclude {
> }
> }
> 
>  Bacula-fd state after backup finished ###
> 
> × bacula-fd.service - Bacula File Daemon service
>  Loaded: loaded (/lib/systemd/system/bacula-fd.service; enabled; vendor 
> preset: enabled)
>  Active: failed (Result: signal) since Tue 2024-06-11 13:00:08 CEST; 20h 
> ago
> Process: 392733 ExecStart=/opt/bacula/bin/bacula-fd -fP -c 
> /opt/bacula/etc/bacula-fd.conf (code=killed, signal=SEGV)
>Main PID: 392733 (code=killed, signal=SEGV)
> CPU: 3h 33min 48.142s
> 
> Jun 11 13:00:08 bacula bacula-fd[392733]: Bacula interrupted by signal 11: 
> Segmentation violation
> Jun 11 13:00:08 bacula bacula-fd[393952]: bsmtp: bsmtp.c:508-0 Failed to 
> connect to mailhost localhost
> Jun 11 13:00:08 bacula bacula-fd[392733]: The btraceback call returned 1
> Jun 11 13:00:08 bacula bacula-fd[392733]: LockDump: 
> /opt/bacula/working/bacula.392733.traceback
> Jun 11 13:00:08 bacula bacula-fd[392733]: bacula-fd: smartall.c:418-1791 
> Orphaned buffer: bacula-fd 280 bytes at 55fad3bdf278>
> Jun 11 13:00:08 bacula bacula-fd[392733]: bacula-fd: smartall.c:418-1791 
> Orphaned buffer: bacula-fd 280 bytes at 55fad3bdff08>
> Jun 11 13:00:08 bacula bacula-fd[392733]: bacula-fd: smartall.c:418-1791 
> Orphaned buffer: bacula-fd 536 bytes at 55fad3beb678>
> Jun 11 13:00:08 bacula systemd[1]: bacula-fd.service: Main process exited, 
> code=killed, status=11/SEGV
> Jun 11 13:00:08 bacula systemd[1]: bacula-fd.service: Failed with result 
> 'signal'.
> Jun 11 13:00:08 bacula systemd[1]: bacula-fd.service: Consumed 3h 33min 
> 48.142s CPU time.
> 
> 
> # Trace output##
> Check the log files for more information.
> 
> Please install a debugger (gdb) to receive a traceback.
> Attempt to dump locks
> threadid=0x7f16f1023640 max=2 current=-1
> threadid=0x7f16f1824640 max=2 current=-1
> threadid=0x7f16f202d640 max=0 current=-1
> threadid=0x7f16f2093780 max=0 current=-1
> Attempt to dump current JCRs. njcrs=0
> List plugins. Hook count=1
> Plugin 0x55fad3b0bf28 name="bpipe-fd.so"
> 
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
> 




Re: [Bacula-users] HP 1/8 G2 Autoloader

2024-06-06 Thread Martin Simmons
> On Thu, 6 Jun 2024 09:50:06 +0200, Stefan G Weichinger said:
> 
> I have questions around migrate jobs and scheduling. Unsure if to start 
> a new thread, it gets off-topic regarding the subject of the original 
> posting ...

I would start a new thread for unrelated discussion.

__Martin




Re: [Bacula-users] VirtualFull and correct volume management...

2024-05-29 Thread Martin Simmons
> On Fri, 24 May 2024 10:39:06 +0200, Marco Gaiarin said:
> 
> > I suspect that 'job counter' get resetted if and only if all jobs in a
> > volume get purged; this lead me to think that my configuration simpy does
> > not work in a real situation, because sooner or later jobs get 'scattered'
> > between volumes and virtual job of consolidation stop to work, so jobs and
> > volume purging.
> 
> Sorry, i need feedback on that. I restate this.
> 
> 
> Seems to me that if i use 'job based retention' on volumes, eg:
> 
>   Maximum Volume Jobs = 6
> 
> on the pool, simply does not work. Because the 'job counter' on the volume
> get resetted if and only if *ALL* job on that volume get purged.
> 
> If i have a volume in state 'Used' because got 6 job within, and i
> purge/delete jobs but not all, media state does not switch to 'Append', and
> even if i put manually in 'Append' mode, bacula reject the volume and put on
> 'Used' state because have reached 'Maximum Volume Jobs'.
> If i delete *ALL* job in that volume, get correctly recycled.
> 
> 
> It is right?

Yes, that is how volumes work.  Bacula can only append at the end of a volume,
so the volume size would increase forever if it could switch back to Append
after purging some jobs.  To reuse a volume, it needs to be recycled, which
only happens when all jobs have been purged.
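That rule can be modeled in a few lines (a simplified sketch for illustration, not Bacula's actual implementation):

```python
# Simplified model of Bacula's volume lifecycle: purging individual jobs
# never reopens a volume for appending; only purging *all* jobs on the
# volume makes it eligible for recycling.

class Volume:
    def __init__(self, name, max_jobs):
        self.name = name
        self.max_jobs = max_jobs
        self.jobs = set()
        self.status = "Append"

    def add_job(self, job_id):
        assert self.status == "Append"
        self.jobs.add(job_id)
        if len(self.jobs) >= self.max_jobs:
            self.status = "Used"    # Maximum Volume Jobs reached

    def purge_job(self, job_id):
        self.jobs.discard(job_id)
        if not self.jobs:
            self.status = "Purged"  # now eligible for recycling

vol = Volume("vol-0001", max_jobs=6)
for j in range(6):
    vol.add_job(j)
assert vol.status == "Used"

vol.purge_job(0)                 # purging some jobs is not enough
assert vol.status == "Used"

for j in range(1, 6):
    vol.purge_job(j)
assert vol.status == "Purged"    # only now can Bacula recycle it
```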


> There's some 'knob' i can tackle with to make volume management more
> 'aggressive'?

I think you need to configure it somehow to put all jobs with the same
lifetime on the same volumes, so they can all expire at the same time and
those volumes will then be recycled.
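One way to express that in bacula-dir.conf is a separate pool per retention class (a sketch; the names and retention periods here are invented):

```
# Jobs with the same lifetime share a pool, so every job on a given
# volume expires at the same time and the volume can be recycled.
Pool {
  Name = "Daily-Pool"
  Pool Type = Backup
  Volume Retention = 14 days
  AutoPrune = yes
  Recycle = yes
}

Pool {
  Name = "Monthly-Pool"
  Pool Type = Backup
  Volume Retention = 6 months
  AutoPrune = yes
  Recycle = yes
}
```

Each Job (or each level in a Schedule's Run directives) then points its Pool directive at the pool matching its lifetime.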

__Martin




Re: [Bacula-users] Error syncing volume "IMS0502L6" on device "DRIVE3"

2024-05-16 Thread Martin Simmons
Does this happen with every job?

You could try adding

SyncOnClose = no

to the device settings to prevent it from doing the sync.
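In the Device resource that would look like this (a sketch; only the added line matters, existing settings stay as they are):

```
Device {
  Name = "DRIVE3"
  ...                   # existing settings unchanged
  SyncOnClose = no      # skip the sync-on-close that triggers ERR=Invalid argument
}
```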

__Martin


> On Thu, 16 May 2024 10:56:54 -0400, Jose Alberto said:
> 
> Hi.
> 
> I have the log  when bacula finish Job.
> 
> the Job finish  with stauts OK  but with Warning.
> 
> LOG:
> 15-May 10:05 backup-sd JobId 92: Error: Error syncing volume "IMS0502L6" on
> device "DRIVE3" (/dev/tape/by-id/scsi-3200a000e11161f33-nst). ERR=Invalid
> argument.
> 
> Library IBM ts3200  3 drive lto6 and 48 slots.   conect:  FC
> 
> It happens with the 4 drive.
> 
> My SD:
> 
> Autochanger {
>   Name = "TS3200"
>   Device = "DRIVE0"
>   Device = "DRIVE1"
>   Device = "DRIVE2"
>   Device = "DRIVE3"
>   ChangerDevice = "/dev/tape/by-id/scsi-1IBM_3573-TL_00L4U78W4076_LL0"
>   ChangerCommand = "/opt/bacula/scripts/mtx-changer %c %o %S %a %d"
> }
> 
> N+4
> Device {
>   Name = "DRIVE"
>   MediaType = "LTO-6"
>   DeviceType = "Tape"
>   ArchiveDevice = "/dev/tape/by-id/scsi-3200a000e11161f33-nst"
>   RemovableMedia = yes
>   RandomAccess = no
>   AutomaticMount = yes
>   AlwaysOpen = yes
>   Autochanger = yes
>   ChangerCommand = "/opt/bacula/scripts/mtx-changer %c %o %S %a %d"
>   AlertCommand = "sh -c 'smartctl -H -l error %c'"
>   DriveIndex = 3
>   AutoSelect = yes
> }
> 
> 
> thanks.
> 
> 
> 
> -- 
> #
> #   Sistema Operativo: Debian  #
> #Caracas, Venezuela  #
> #
> 




Re: [Bacula-users] bacula-fd appearing to use wrong storage server

2024-04-25 Thread Martin Simmons
It would be useful to see the bacula-dir.conf Job and Client resources for
this job and also the full job log.

__Martin


> On Thu, 25 Apr 2024 11:17:18 +0200, gaston gloesener--- via Bacula-users 
> said:
> 
> Until now I did run bacula in a virtual machine running the director and 
> storage deamon. The storage daemon was stroing data to files on a shared 
> directory as the storage is on a NAS.
>  
> Now I have build bacula-sd for the NAS to avoid this duplicate transfer. I 
> have configured one client to use the new storage but while it uses it, it 
> claims to still contact the “old” storage daemon on the bacula node.
>  
> Using bacula 13.0.4
>  
> Here is the storage definition in bacula-dir.conf (vTape1 is the original 1, 
> vTape2 the new one): 
> Storage {
>   Name = "vTape1"
>   SdPort = 9103
>   Address = "bacula.home "
>   Password = "…deleted…"
>   Device = "vChanger1"
>   MediaType = "vtape1"
>   Autochanger = "vTape1"
>   MaximumConcurrentJobs = 2
> }
> Storage {
>   Name = "vTape2"
>   SdPort = 9103
>   Address = "nas1.home "
>   Password = "…deleted…"
>   Device = "vChanger2"
>   MediaType = "vtape2"
>   Autochanger = "vTape2"
>   MaximumConcurrentJobs = 2
> }
>  
> The client pool definition looks now like:
>  
> Pool {
>   Name = "james1-Full-Pool"
>   Description = "Pool for client james1 full backups"
>   PoolType = "Backup"
>   LabelFormat = "james1-full-"
>   MaximumVolumeJobs = 1
>   MaximumVolumeBytes = 200
>   VolumeRetention = 8726400
>   Storage = "vTape2"
>   Catalog = "MyCatalog"
> }
>  
> I have tried a manual and the scheduled job with same result:
>  
> *(James1-fd says:
>  
> 25-Apr-2024 01:00:00 james1-fd: bsockcore.c:472-7062 OK connected to server  
> Storage daemon bacula.home:9103. socket=10.1.10.111.60144:10.1.200.12.9103 
> s=0x7fa01c01ac88
> 2
>  
> The full log:
>  
> 25-Apr-2024 01:00:00 james1-fd: bnet_server.c:235-0 Accept 
> socket=10.1.10.111.9102:10.1.200.12.45200 s=0x563bfba72728
> 25-Apr-2024 01:00:00 james1-fd: authenticate.c:67-0 authenticate dir: Hello 
> Director bacula-dir calling 10002 tlspsk=100
> 25-Apr-2024 01:00:00 james1-fd: authenticatebase.cc:365-0 TLSPSK Remote need 
> 100
> 25-Apr-2024 01:00:00 james1-fd: authenticate.c:90-0 *** No FD compression to 
> DIR
> 25-Apr-2024 01:00:00 james1-fd: authenticatebase.cc:335-0 TLSPSK Local need 
> 100
> 25-Apr-2024 01:00:00 james1-fd: authenticatebase.cc:563-0 TLSPSK Start PSK
> 25-Apr-2024 01:00:00 james1-fd: bnet.c:96-0 TLS server negotiation 
> established.
> 25-Apr-2024 01:00:00 james1-fd: cram-md5.c:68-0 send: auth cram-md5 challenge 
> <2032624038.1713999600@james1-fd> ssl=0
> 25-Apr-2024 01:00:00 james1-fd: cram-md5.c:156-0 sending resp to challenge: 
> L6/ZMB/xti+re9kmB4sR+D
> 25-Apr-2024 01:00:00 james1-fd: events.c:48-0 Events: code=FC0002 
> daemon=james1-fd ref=0x7fa01c00b0a8 type=connection source=bacula-dir 
> text=Director connection
> 25-Apr-2024 01:00:00 james1-fd: fd_plugins.c:1714-7062 Instantiate 
> plugin_ctx=563bfbb34378 JobId=7062
> 25-Apr-2024 01:00:00 james1-fd: fd_plugins.c:254-7062 plugin_ctx=563bfbb34378 
> JobId=7062
> 25-Apr-2024 01:00:00 james1-fd: fd_plugins.c:147-7062 name= len=0 
> plugin=bpipe-fd.so plen=5
> 25-Apr-2024 01:00:00 james1-fd: job.c:2499-7062 level_cmd: level = full  
> mtime_only=0
> 25-Apr-2024 01:00:00 james1-fd: fd_plugins.c:254-7062 plugin_ctx=563bfbb34378 
> JobId=7062
> 25-Apr-2024 01:00:00 james1-fd: fd_plugins.c:147-7062 name= len=0 
> plugin=bpipe-fd.so plen=5
> 25-Apr-2024 01:00:00 james1-fd: fd_plugins.c:254-7062 plugin_ctx=563bfbb34378 
> JobId=7062
> 25-Apr-2024 01:00:00 james1-fd: fd_plugins.c:147-7062 name= len=0 
> plugin=bpipe-fd.so plen=5
> 25-Apr-2024 01:00:00 james1-fd: bsockcore.c:472-7062 OK connected to server  
> Storage daemon bacula.home:9103. socket=10.1.10.111.60144:10.1.200.12.9103 
> s=0x7fa01c01ac88
> 25-Apr-2024 01:00:00 james1-fd: authenticatebase.cc:335-7062 TLSPSK Local 
> need 100
> 25-Apr-2024 01:00:05 james1-fd: hello.c:183-7062 Recv caps from SD failed. 
> ERR=Success
> 25-Apr-2024 01:00:05 james1-fd: hello.c:185-7062 Recv caps from SD failed. 
> ERR=Success
> 25-Apr-2024 01:00:05 james1-fd: events.c:48-7062 Events: code=FC0001 
> daemon=james1-fd ref=0x7fa01c00b0a8 type=connection source=bacula-dir 
> text=Director disconnection
> 25-Apr-2024 01:00:05 james1-fd: fd_plugins.c:1749-7062 Free instance 
> plugin_ctx=563bfbb34378 JobId=7062
>  
> The job log on the director confirms:
>  
> 2024-04-25 01:00:00 bacula-dir JobId 7062: Connected to Storage "vTape2" at 
> bacula.home:9103 with TLS
>  
> So correct storage but bad server. Note that vTape 2 has always been pointing 
> to nas1 (not bacula) and the director (as well as james1-fd and both storage 
> deamons) was restarted several times
>  
> The storage daemon on nas1 reports only a connection from the director, the 
> ip address of the client (10.1.10.111) is never seen:
>  
> 25-Apr-2024 01:00:00 nas1-sd: bnet_server.c:235-0 Accept 
> socket=1

Re: [Bacula-users] Fix documentation on deduplication

2024-04-24 Thread Martin Simmons
> On Wed, 24 Apr 2024 23:40:31 +1000, Gary R Schmidt said:
> 
> On 24/04/2024 22:33, Gary R. Schmidt wrote:
> > On 24/04/2024 21:30, Roberto Greiner wrote:
> >>
> >> Em 24/04/2024 04:30, Radosław Korzeniewski escreveu:
> >>> Hello,
> >>>
> >>> wt., 23 kwi 2024 o 13:33 Roberto Greiner  
> >>> napisał(a):
> >>>
> >>>
> >>>     Em 23/04/2024 04:34, Radosław Korzeniewski escreveu:
>      Hello,
> 
>      śr., 17 kwi 2024 o 14:01 Roberto Greiner 
>      napisał(a):
> 
> 
>      The error is at the end of the page, where it says that you
>      can see how
>      much space is being used using 'df -h', but the problem is
>      that df can't
>      actually see the space gain from dedup, it shows how much
>      would be used
>      without dedup.
> 
> 
>      This command (df -h) shows how much allocated and free space is
>      available on the filesystem. So when you have a dedup ratio 20:1,
>      and you wrote 20TB, then your df command shows 1TB allocated.
> >>>
> >>>     But that is the exact problem I had. df did NOT show 1TB
> >>>     allocated. It indicated 20TB allocated (yes, in ZFS).
> >>>
> >>> I have not used ZFS Dedup for a long time (I'm a ZFS user from the 
> >>> first beta in Solaris), so I'm curious - if your zpool is 2TB in size 
> >>> and you have a 20:1 dedup ratio with 20TB saved and 1TB allocated 
> >>> then what df shows for you?
> >>> Something like this?
> >>> Size: 2TB
> >>> Used: 20TB
> >>> Avail: 1TB
> >>> Use%: 2000%
> >>>
> >> No, the values are quite different. I wrote 20tb to stay with the 
> >> example previously given. My actual numbers are:
> >>
> >> df: 2,9TB used
> >> zpool list: 862GB used, 3.4x dedup level.
> >> Actual partition size: 7.2TB
> >>
> > You use zpool list to examine filespace.
> > Or zfs list.

On FreeBSD at least, zfs list will show the same as df (i.e. will include all
copies of the deduplicated data in the USED column).

I think the reason is that deduplication is done at the pool level, so there
is no single definition of which dataset owns each deduplicated block.  As a
result, the duplicates have to be counted multiple times.  This is different
from a cloned dataset, where the original dataset owns any blocks that are
shared.
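A tiny arithmetic sketch of why df's numbers look inflated (a simplified model using the thread's example figures; real ZFS accounting has more detail):

```python
# Simplified model of ZFS dedup accounting (illustration only).
# zpool list reports physical allocation; df and zfs list report logical
# usage, counting every reference to a deduplicated block.

pool_capacity = 2.0      # TB of raw pool space
logical_written = 20.0   # TB written by applications
dedup_ratio = 20.0       # 20:1

physical = logical_written / dedup_ratio   # zpool list ALLOC: 1 TB
avail = pool_capacity - physical           # free space left: 1 TB

df_used = logical_written                  # df "Used": 20 TB
df_size = df_used + avail                  # df "Size": 21 TB, as observed

assert physical == 1.0
assert df_size == 21.0
```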

__Martin




Re: [Bacula-users] Fix documentation on deduplication

2024-04-24 Thread Martin Simmons
> On Wed, 24 Apr 2024 09:30:15 +0200, Radosław Korzeniewski said:
> 
> Hello,
> 
> wt., 23 kwi 2024 o 13:33 Roberto Greiner  napisał(a):
> 
> >
> > Em 23/04/2024 04:34, Radosław Korzeniewski escreveu:
> >
> > Hello,
> >
> > śr., 17 kwi 2024 o 14:01 Roberto Greiner  napisał(a):
> >
> >>
> >> The error is at the end of the page, where it says that you can see how
> >> much space is being used using 'df -h', but the problem is that df can't
> >> actually see the space gain from dedup, it shows how much would be used
> >> without dedup.
> >>
> >>
> > This command (df -h) shows how much allocated and free space is available
> > on the filesystem. So when you have a dedup ratio 20:1, and you wrote 20TB,
> > then your df command shows 1TB allocated.
> >
> > But that is the exact problem I had. df did NOT show 1TB allocated. It
> > indicated 20TB allocated (yes, in ZFS).
> >
> I have not used ZFS Dedup for a long time (I'm a ZFS user from the first
> beta in Solaris), so I'm curious - if your zpool is 2TB in size and you
> have a 20:1 dedup ratio with 20TB saved and 1TB allocated then what df
> shows for you?
> Something like this?
> Size: 2TB
> Used: 20TB
> Avail: 1TB
> Use%: 2000%

No, the Size will say 21TB in that situation (on FreeBSD at least).

__Martin




Re: [Bacula-users] Fix documentation on deduplication

2024-04-23 Thread Martin Simmons
> On Tue, 23 Apr 2024 08:31:59 -0300, Roberto Greiner said:
> 
> Em 23/04/2024 04:34, Radosław Korzeniewski escreveu:
> > Hello,
> >
> > śr., 17 kwi 2024 o 14:01 Roberto Greiner  napisał(a):
> >
> >
> > The error is at the end of the page, where it says that you can
> > see how
> > much space is being used using 'df -h', but the problem is that df
> > can't
> > actually see the space gain from dedup, it shows how much would be
> > used
> > without dedup.
> >
> >
> > This command (df -h) shows how much allocated and free space is 
> > available on the filesystem. So when you have a dedup ratio 20:1, and 
> > you wrote 20TB, then your df command shows 1TB allocated.
> 
> But that is the exact problem I had. df did NOT show 1TB allocated. It 
> indicated 20TB allocated (yes, in ZFS).

Yes, that is how df works with ZFS unfortunately (it doesn't know about
dedup).  See also
https://c0t0d0s0.org/oracle/solaris/english/2009/12/02/df-considered-problematic.c0t0d0s0.html

__Martin




Re: [Bacula-users] IBM TS3500 LTO7 ULT3580-TD7 Sense Key : Data Protect [current] Add. Sense: Operator selected write protect

2024-04-19 Thread Martin Simmons
It sounds like the "append-only mode" described in:

https://www.dell.com/community/en/conversations/celerra/ndmp-backups-to-ibm-ultrium-td5-drive/647f195bf4ccf8a8dec3aa95
https://www.ibm.com/docs/en/ts4300-tape-library?topic=features-data-safe-append-only-mode
https://support.oracle.com/knowledge/Sun%20Microsystems/1368914_1.html

__Martin


> On Thu, 18 Apr 2024 18:41:12 -0300 (BRT), Heitor Faria said:
> 
> Dear Bacula Users,
> 
> 
> 
> I wonder if someone had faced a similar problem that have been haunting me 
> for weeks and I found no exact similar case on the internet.
> 
> This partitioned tape library with two drivers for the Bacula machine cannot 
> perform a weof diretcly:
> 
> 
> 
> [root@localhost Device]# mt -f /dev/tape/by-id/scsi-35005076044064c01-nst weof
> /dev/tape/by-id/scsi-35005076044064c01-nst: Input/output error
> mt: The tape is write-protected.
> 
> 
> 
> 
> And the dmesg displays the error:
> 
> 
> 
> [Thu Apr 18 17:32:01 2024] st 14:0:0:0: [st0] Sense Key : Data Protect 
> [current]
> [Thu Apr 18 17:32:01 2024] st 14:0:0:0: [st0] Add. Sense: Operator selected 
> write protect
> 
> 
> 
> 
> However, if we do an eod before weof, it displays no error.
> 
> 
> 
> [root@localhost Device]# mt -f /dev/tape/by-id/scsi-35005076044064c01-nst eod
> [root@localhost Device]# mt -f /dev/tape/by-id/scsi-35005076044064c01-nst weof
> 
> 
> 
> 
> But still, I cannot use the tape with Bacula, or test it with btape:
> 
> 
> 
> [root@localhost Device]# /opt/bacula/bin/btape  
> /dev/tape/by-id/scsi-35005076044064c01-nst
> Tape block granularity is 1024 bytes.
> btape: butil.c:296-0 Using device: 
> "/dev/tape/by-id/scsi-35005076044064c01-nst" for writing.
> btape: btape.c:475-0 open device "Drive-0" 
> (/dev/tape/by-id/scsi-35005076044064c01-nst): OK
> *label
> Enter Volume Name: test
> btape: block.c:301-0 [SE0201] Write error at 0:0 on device "Drive-0" 
> (/dev/tape/by-id/scsi-35005076044064c01-nst) Vol=test. ERR=Input/output error.
> 18-Apr 17:39 btape JobId 0: Error: block.c:301 [SE0201] Write error at 0:0 on 
> device "Drive-0" (/dev/tape/by-id/scsi-35005076044064c01-nst) Vol=test. 
> ERR=Input/output error.
> 
> 
> 
> 
> And the same dmesg Data Protect message appears. 
> 
> The curious thing is we are able to write and restore some files using tar 
> using the same drive, but only if we manually move the tape:
> 
> 
> 
> [root@localhost Device]# tar cvf /dev/tape/by-id/scsi-35005076044064c01-nst 
> /etc/resolv.conf
> tar: Removing leading `/' from member names
> /etc/resolv.conf
> [root@localhost Device]# tar tvf /dev/tape/by-id/scsi-35005076044064c01-nst
> tar: This does not look like a tar archive
> tar: Exiting with failure status due to previous errors
> [root@localhost Device]# mt -f /dev/tape/by-id/scsi-35005076044064c01-nst bsf 
> 2
> [root@localhost Device]# mt -f /dev/tape/by-id/scsi-35005076044064c01-nst fsf
> [root@localhost Device]# tar tvf /dev/tape/by-id/scsi-35005076044064c01-nst
> -rw-r--r-- root/root        74 2024-04-18 16:24 etc/resolv.conf
> 
> 
> 
> 
> mt status as follows:
> 
> 
> 
> [root@localhost Device]# mt -f /dev/tape/by-id/scsi-35005076044064c01-nst 
> status
> SCSI 2 tape drive:
> File number=20, block number=1, partition=0.
> Tape block size 0 bytes. Density code 0x5c (LTO-7).
> Soft error count since last status=0
> General status bits on (101):
>  ONLINE IM_REP_EN
> 
> 
> 
> 
> The tapes are new, LTO7 such as the drive, the red read-only physical tape 
> switches are open, no encryption is used by the drivers.
> 
> We already tried two different OSes with the same behavior. Ubuntu and Rocky 
> Linux.
> 
> There are no relevant tape library error logs.
> 
> Any hints?
> 
> 
> 
> Rgds.
> 
> 
> MSc,MBA Heitor Faria (Miami/USA)
> Bacula LATAM CIO
> 
> mobile1: + 1 909 655-8971
> mobile2: + 55 61 98268-4220
> 
>   
> 
> bacula.lat | bacula.com.br
> 
> 
> 




Re: [Bacula-users] MD5 Fatal Error

2024-04-15 Thread Martin Simmons
Please Cc replies to the list so other people can see them.

Did that job read from Vol0159 as well as write to it?  If so, then
messages like this help to show why:

pbckall004-dir JobId 5387: There are no more Jobs associated with Volume 
"Vol0159". Marking it purged.

To prevent that, I think you need to increase the Job Retention in the pool or
client resource so it keeps the job records for longer.
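For example, in the Client resource, with illustrative values (the client name is a placeholder, and the right retention periods depend on your rotation scheme):

```
Client {
  Name = "myclient-fd"
  ...
  Job Retention = 6 months    # keep job records long enough that volumes
  File Retention = 2 months   # still being read are not purged mid-job
  AutoPrune = yes
}
```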

BTW, the job log lines below are in reverse order (latest first), which
makes them confusing to read!

__Martin


>>>>> On Mon, 15 Apr 2024 09:56:53 +0200, Mohamed AIT EL HADJ said:
> 
> Yes, i think its the same problem but i dont know how to fix it. Do you 
> have any ideas ?
> 
> Le 12/04/2024 à 17:48, Martin Simmons a écrit :
> > Maybe the same problem as reported in
> > https://sourceforge.net/p/bacula/mailman/bacula-users/thread/c3bc9fed-3c84-46d3-2f40-9dd78590a773%40baculasystems.com/#msg37296305
> > ?
> >
> > I assume the "cycle in their volume list" comment there means it is writing 
> > to
> > a volume that it previous read.
> >
> > __Martin
> >
> >
> >>>>>> On Mon, 8 Apr 2024 11:27:20 +0200, Mohamed AIT EL HADJ via 
> >>>>>> Bacula-users said:
> >> Hello,
> >> My backups were working fine but yesterday I had a problem with one of
> >> my jobs which apparently has a problem with MD5.
> >>
> >> Here are the job logs:
> >>
> >> pbckall004-dir JobId 5387: Error: Unable to flush file records!
> >> pbckall004-dir JobId 5387: Fatal error: Error detected between 
> >> digest[466172]="zxdWYCUt+beniiSsYO+d6w" and 
> >> name[466171]="/var/opt/gitlab/gitlab-rails/shared/artifacts/12/53/1253e9373e781b7500266caa55150e08e210bc8cd8cc70d89985e3600155e860/2024_04_01/"
> >> pbckall004-dir JobId 5387: Fatal error: MD5 digest not same 
> >> FileIndex=466172 as attributes FI=466171
> >> pbckall004-sd JobId 5387: Sending spooled attrs to the Director. 
> >> Despooling 237,369,878 bytes ...
> >> pbckall004-sd JobId 5387: Elapsed time=02:20:45, Transfer rate=7.303 M 
> >> Bytes/second
> >> pbckall004-sd JobId 5387: End of Volume "Vol0197" at addr=898996114 on 
> >> device "DEV-dedup-DRV19" (/zfs/bacula).
> >> pbckall004-sd JobId 5387: Forward spacing Volume "Vol0197" to 
> >> addr=893762478
> >> pbckall004-sd JobId 5387: Ready to read from volume "Vol0197" on Aligned 
> >> device "DEV-dedup-DRV19" (/zfs/bacula).
> >> pbckall004-sd JobId 5387: End of Volume "Vol0165" at addr=515616168 on 
> >> device "DEV-dedup-DRV19" (/zfs/bacula).
> >> pbckall004-sd JobId 5387: New volume "Vol0159" mounted on device 
> >> "DEV-dedup-DRV1" (/zfs/bacula) at 07-avril-2024 18:22.
> >> pbckall004-sd JobId 5387: Recycled volume "Vol0159" on Aligned device 
> >> "DEV-dedup-DRV1" (/zfs/bacula), all previous data lost.
> >> pbckall004-dir JobId 5387: Using Volume "Vol0159" from 'Scratch' pool.
> >> pbckall004-dir JobId 5387: Recycled volume "Vol0159"
> >> pbckall004-dir JobId 5387: All records pruned from Volume "Vol0131"; 
> >> marking it "Purged"
> >> pbckall004-dir JobId 5387: New Pool is: Scratch
> >> pbckall004-dir JobId 5387: There are no more Jobs associated with Volume 
> >> "Vol0131". Marking it purged.
> >> pbckall004-dir JobId 5387: All records pruned from Volume "Vol0124"; 
> >> marking it "Purged"
> >> pbckall004-dir JobId 5387: New Pool is: Scratch
> >> pbckall004-dir JobId 5387: There are no more Jobs associated with Volume 
> >> "Vol0124". Marking it purged.
> >> pbckall004-dir JobId 5387: All records pruned from Volume "Vol0184"; 
> >> marking it "Purged"
> >> pbckall004-dir JobId 5387: New Pool is: Scratch
> >> pbckall004-dir JobId 5387: There are no more Jobs associated with Volume 
> >> "Vol0184". Marking it purged.
> >> pbckall004-dir JobId 5387: All records pruned from Volume "Vol0186"; 
> >> marking it "Purged"
> >> pbckall004-dir JobId 5387: New Pool is: Scratch
> >> pbckall004-dir JobId 5387: There are no more Jobs associated with Volume 
> >> "Vol0186". Marking it purged.
> >> pbckall004-dir JobId 5387: All records pruned from Volume "Vol0159"; 
> >> marking it "Purged"
> >> pbckall004

Re: [Bacula-users] MD5 Fatal Error

2024-04-12 Thread Martin Simmons
Maybe the same problem as reported in
https://sourceforge.net/p/bacula/mailman/bacula-users/thread/c3bc9fed-3c84-46d3-2f40-9dd78590a773%40baculasystems.com/#msg37296305
?

I assume the "cycle in their volume list" comment there means it is writing to
a volume that it previously read.

__Martin


> On Mon, 8 Apr 2024 11:27:20 +0200, Mohamed AIT EL HADJ via Bacula-users 
> said:
> 
> Hello,
> My backups were working fine but yesterday I had a problem with one of 
> my jobs which apparently has a problem with MD5.
> 
> Here are the job logs:
> 
> pbckall004-dir JobId 5387: Error: Unable to flush file records!
> pbckall004-dir JobId 5387: Fatal error: Error detected between 
> digest[466172]="zxdWYCUt+beniiSsYO+d6w" and 
> name[466171]="/var/opt/gitlab/gitlab-rails/shared/artifacts/12/53/1253e9373e781b7500266caa55150e08e210bc8cd8cc70d89985e3600155e860/2024_04_01/"
> pbckall004-dir JobId 5387: Fatal error: MD5 digest not same FileIndex=466172 
> as attributes FI=466171
> pbckall004-sd JobId 5387: Sending spooled attrs to the Director. Despooling 
> 237,369,878 bytes ...
> pbckall004-sd JobId 5387: Elapsed time=02:20:45, Transfer rate=7.303 M 
> Bytes/second
> pbckall004-sd JobId 5387: End of Volume "Vol0197" at addr=898996114 on device 
> "DEV-dedup-DRV19" (/zfs/bacula).
> pbckall004-sd JobId 5387: Forward spacing Volume "Vol0197" to addr=893762478
> pbckall004-sd JobId 5387: Ready to read from volume "Vol0197" on Aligned 
> device "DEV-dedup-DRV19" (/zfs/bacula).
> pbckall004-sd JobId 5387: End of Volume "Vol0165" at addr=515616168 on device 
> "DEV-dedup-DRV19" (/zfs/bacula).
> pbckall004-sd JobId 5387: New volume "Vol0159" mounted on device 
> "DEV-dedup-DRV1" (/zfs/bacula) at 07-avril-2024 18:22.
> pbckall004-sd JobId 5387: Recycled volume "Vol0159" on Aligned device 
> "DEV-dedup-DRV1" (/zfs/bacula), all previous data lost.
> pbckall004-dir JobId 5387: Using Volume "Vol0159" from 'Scratch' pool.
> pbckall004-dir JobId 5387: Recycled volume "Vol0159"
> pbckall004-dir JobId 5387: All records pruned from Volume "Vol0131"; marking 
> it "Purged"
> pbckall004-dir JobId 5387: New Pool is: Scratch
> pbckall004-dir JobId 5387: There are no more Jobs associated with Volume 
> "Vol0131". Marking it purged.
> pbckall004-dir JobId 5387: All records pruned from Volume "Vol0124"; marking 
> it "Purged"
> pbckall004-dir JobId 5387: New Pool is: Scratch
> pbckall004-dir JobId 5387: There are no more Jobs associated with Volume 
> "Vol0124". Marking it purged.
> pbckall004-dir JobId 5387: All records pruned from Volume "Vol0184"; marking 
> it "Purged"
> pbckall004-dir JobId 5387: New Pool is: Scratch
> pbckall004-dir JobId 5387: There are no more Jobs associated with Volume 
> "Vol0184". Marking it purged.
> pbckall004-dir JobId 5387: All records pruned from Volume "Vol0186"; marking 
> it "Purged"
> pbckall004-dir JobId 5387: New Pool is: Scratch
> pbckall004-dir JobId 5387: There are no more Jobs associated with Volume 
> "Vol0186". Marking it purged.
> pbckall004-dir JobId 5387: All records pruned from Volume "Vol0159"; marking 
> it "Purged"
> pbckall004-dir JobId 5387: New Pool is: Scratch
> pbckall004-dir JobId 5387: There are no more Jobs associated with Volume 
> "Vol0159". Marking it purged.
> pbckall004-sd JobId 5387: End of medium on Volume "Vol0179" 
> Bytes=53,686,844,017 Blocks=409,600 at 07-avril-2024 18:22.
> pbckall004-sd JobId 5387: User defined maximum volume size 53,687,091,200 
> will be exceeded on device "DEV-dedup-DRV1" (/zfs/bacula).    Marking Volume 
> "Vol0179" as Full.
> 
> I have no idea how this could have happened or how to resolve the error.
> 
> Any helps will be great.
> 
> Regards,
> 
> 
> 
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
> 




Re: [Bacula-users] Building the S3/Amazon options for Bacula 15

2024-04-12 Thread Martin Simmons
> On Thu, 11 Apr 2024 16:34:48 -0400, Dan Langille said:
> 
> A problem with building the S3 options on Bacula 15.0.2 has been reported[1] 
> and I'm trying to figure it out. I'm not an s3 user myself, and I am the 
> maintainer of the FreeBSD port/package.
> 
> When building 13.0.4, see:
> 
> src/stored/.libs/bacula-sd-cloud-s3-driver.so
> src/stored/.libs/bacula-sd-cloud-s3-driver-13.0.4.so
> 
> When building 15.0.2, I see
> 
> src/stored/.libs/bacula-sd-cloud-driver.so
> src/stored/.libs/bacula-sd-cloud-driver-15.0.2.so
> 
> Of note, the '-s3' part of the name has been removed. That doesn't seem to be 
> my problem though.

These are different libraries and you need both of them
(bacula-sd-cloud-driver loads bacula-sd-cloud-s3-driver if required).  I think
this is the same in 13.0.4.

> The problem is the files do not get installed, and I'm not sure why.

The build log contains:

checking libs3.h usability... no
checking libs3.h presence... no
checking for libs3.h... no

so that looks like the problem.  Where is your libs3.h?

__Martin


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Issue with size of the backup

2024-04-11 Thread Martin Simmons
> On Thu, 11 Apr 2024 12:00:24 +, Borut Rozman via Bacula-users said:
> 
> The underlying storage is ~800gb in size. All of the sudden incremental
> backups started to increase exponentially 
> 
> 1.4. - 93M
> 2.4. - 574.5M
> 3.4. - 115.5GB
> 4.4. - 951.3GB
> 5.4. - 1.7TB
> 6.4. - 2.5TB
> 7.4. - 3.4TB
> 8.4. - 4.1TB
> 9.4. - 4.7TB
> 10.4.- 5.4TB
> 
> so last backup says 5,471,496,436,363 (5.471 TB) written, and last 4
> backups from this client with several others were written to this tape
> which is LTO6 - so max 2.5TB should be written (6.25 compressed). 
> 
> For some reason estimation of the size of the backup is way off and
> data written on the tape as now that tape says it has 18T written -
> Vol. bytes18.3TB 
> 
> which is absolutely wrong...
> 
> even if I move that data to some other folder on that server I get the
> same result.
> 
> Any ideas?

Does the bconsole estimate command also show the same effect?

Maybe you are backing up a sparse file?  You could look for large files using
/usr/bin/find with the -size option.
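To illustrate the sparse-file effect (a self-contained demo; the /tmp file exists only for illustration, and the GNU du/find options assumed here are the Linux ones):

```shell
# A sparse file has a huge apparent size but occupies almost no disk;
# this demo file is created purely for illustration.
truncate -s 1G /tmp/sparse_demo
du -h /tmp/sparse_demo                   # on-disk usage (tiny)
du -h --apparent-size /tmp/sparse_demo   # apparent size (1.0G)

# Hunting for candidates in the backed-up tree (path is an example;
# find -size tests the apparent size):
find /tmp -xdev -type f -size +500M 2>/dev/null || true
rm -f /tmp/sparse_demo
```

If estimate reports something close to the apparent size while the on-disk usage is far smaller, a sparse file is the likely culprit; sparse=yes in the FileSet Options makes Bacula store such files efficiently.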

__Martin


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Error: cannot run a job from a RunScript

2024-04-09 Thread Martin Simmons
> On Tue, 9 Apr 2024 10:57:24 +0100, Chris Wilkinson said:
> 
> I tried to get a copy job to run after completion of the job using a
> Runscript. The Job is shown below. I can run this copy job OK from within
> bconsole/baculum but it fails when run from a Runscript.
> 
> I get an error "09-Apr 09:52 bsvr-dir JobId 0: Can't use run command in a
> runscript09-Apr 09:52 bsvr-dir JobId 0: run: is an invalid command."
> 
> Is it not permissible to use a run command within a Runscript block?

The run command is not allowed.  You can only use the commands documented
under RunScript in the manual.
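A common workaround is to have the Director call bconsole itself from an after-job RunScript; a sketch (the job name and bconsole path are examples, not from the original post):

```
Job {
  Name = "LocalBackup"
  ...
  RunScript {
    RunsWhen = After
    RunsOnClient = No
    # Queue the copy job via bconsole once this job finishes.
    Command = "sh -c 'echo \"run job=CopyJob yes\" | /opt/bacula/bin/bconsole'"
  }
}
```

Alternatively, giving the Copy job its own Schedule avoids the shell round-trip entirely.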

__Martin


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Mistakenly erased label headers from LTO with btape test / need help to salvage content

2024-03-26 Thread Martin Simmons
Bscan will not help.  Eric is correct: you can't recover an overwritten tape
using normal tape drive software because it will not allow you to read beyond
the end of the last-written data.

__Martin


> On Mon, 25 Mar 2024 19:15:50 +0200, Pedro Oliveira said:
> 
> Please could try to use Bacula Volume utility tool  bscan
> 
> https://www.bacula.org/2.4.x-manuals/en/main/Volume_Utility_Tools.html
> 
> ‌
> 
> 
> Dedy Yohann  escreveu em seg., 25/03/2024 às 19:12 :
> 
> > So I managed to relabel the tape, using the "btape label" command. But as
> > you all implied, that's unfortunately not enough.
> >
> > When I run bextract, no files are restored and the command ends after a
> > couple of seconds
> > Here's the log (the volume ID is PAT031L7)
> >
> > # bextract -c /etc/bacula/bacula-sd.conf -t -V PAT031L7 Drive-0
> > /tmp/restore/ -v
> > bextract: butil.c:292-0 Using device: "Drive-0" for reading.
> > 25-Mar 18:01 bextract JobId 0: No slot defined in catalog (slot=0) for
> > Volume "PAT031L7" on "Drive-0" (/dev/nst0).
> >
> > 25-Mar 18:01 bextract JobId 0: Cartridge change or "update slots" may be
> > required.
> > 25-Mar 18:01 bextract JobId 0: Ready to read from volume "PAT031L7" on
> > Tape device "Drive-0" (/dev/nst0).
> >
> > 25-Mar 18:01 bextract JobId 0: End of Volume "PAT031L7" at addr=0:0 on
> > device "Drive-0" (/dev/nst0).
> > 0 files restored.
> >
> >
> > I am somewhat familiar with the disk recovery for block devices with dd.
> > I might give this a try after we figure if a recovery is absolutely needed.
> >
> > Thanks for the hints
> >
> >
> > Yohann DEDY
> > ---
> > Tél: 06 23 91 46 00
> > --
> > *De :* Rob Gerber 
> > *Envoyé :* lundi 25 mars 2024 16:35
> > *À :* Pedro Oliveira 
> > *Cc :* Dedy Yohann ; bacula-users <
> > bacula-users@lists.sourceforge.net>
> > *Objet :* Re: [Bacula-users] Mistakenly erased label headers from LTO
> > with btape test / need help to salvage content
> >
> > Standard data recovery processes are to take a bit for bit image of the
> > troubled media, them attempt all recovery against a copy of the image. This
> > process is used in disk recovery for block devices but I think it could
> > apply in your case also.
> >
> > At minimum, I would write some data to a scratch tape with bacula (at
> > least 20gb or something somewhat substantial consisting of known files
> > which you have hashed so you can verify the success of the recovery),
> > repeat the previous mistaken run of 'btape test' with reasonably quick
> > cancellation (but not too quick as to be overly optimistic!), then attempt
> > recovery.
> >
> > How recovery is done in this case is something I'm not super familiar
> > with. As suggested by Pedro, maybe label + bextract (spelling uncertain,
> > check bacula bin folder)? When dealing with a tape whose data is not in the
> > bacula catalog we typically want to run bscan, but I don't know if it will
> > handle this case well.
> >
> > The wisest case may be to set this tape aside and do a new full backup. If
> > a recovery is needed then you can attempt recovery of data from this tape.
> > If no recovery is ever needed, then no problem.
> >
> >
> >
> > Robert Gerber
> > 402-237-8692
> > r...@craeon.net
> >
> > On Mon, Mar 25, 2024, 10:25 AM Pedro Oliveira 
> > wrote:
> >
> > try to label again the tape and then try to use bexcrtac
> >
> >
> >
> > Dedy Yohann  escreveu em seg., 25/03/2024 às
> > 17:15 :
> >
> > Dear bacula community,
> > I made a mistake today while making maintenance tasks.
> >
> > I unintentionaly ran the "btape test" command on a LTO cartidge that was
> > already labeled and assigned to our main backup pool.
> >
> > When I realized data was written on the tape after a couple of seconds, I
> > stopped the operation (ctrl-c) in panic...
> > Unfortunately the evil was already done, now when I try to restore data
> > from this volume, the cartidge is correctly loaded in the drive (barcode
> > checked by the autochanger) but I get this error once data is read from the
> > tape :
> >
> > bacula-sd JobId 425: Warning: acquire.c:279 Read acquire: Could not
> > unserialize Volume label: ERR=label.c:987 Expecting Volume Label, got FI=0
> > Stream=0 len=64412
> >
> > Is there a way to manually restore the content of the tape or force the
> > relabelling of the tape without erasing the content of the cartidge?
> >
> > Thanks in advance for your suggestions,
> >
> > Yohann
> >
> > Bacula version : 9.6.7
> > Debian 10 (Buster)
> > Kernel 4.19.0-25-amd64
> >
> >

Re: [Bacula-users] Tapes suddenly unlabeled

2024-03-22 Thread Martin Simmons
Were there any errors written to the syslog during the backup?

Does the tape work with mt and dd after stopping bacula-sd?  E.g. something
like

mt -f /dev/tape/by-id/scsi-C3EB53C004-nst rewind
dd if=/dev/tape/by-id/scsi-C3EB53C004-nst count=1 | od -c

__Martin


> On Fri, 22 Mar 2024 14:35:53 +0100, MMag Dr Karl Kashofer said:
> 
> Dear all !
> 
> I try to work with a 50 slot Quantum Scalar i3 Library with an IBM LTO7 
> drive. I have a set of 10 LTO7 tapes as a
> storage pool in bacula and it has been backing up user directories for a 
> while.
> 
> Recently we needed to swap the library chassis due to drive issues. I 
> unmounted the library, shut down bacula,
> transferred the tapes to the new library (again a Quantum Scalar i3 with the 
> drives from the old one), adjusted the scsi
> paths of the storage demon config and fired up bacula. During the change I 
> did switch from the "st" to the "nst" devices
> as apparently these are preferred.
> 
> Bacula now sees the changer device, sees the pool of tapes, but when it tried 
> to do the next backup it horribly died on
> the current append volume with:
> 
> 22-Mar 08:42 bacula-sd JobId 1245: Error: [SE0203] The Volume=MU0014L7 on 
> device="LTO7-0" (/dev/tape/by-id/scsi-
> C3EB53C004-nst) appears to be unlabeled.
> 
> There is already 45GB of data on that tape and i really dont see why it 
> suddenly calls it "unlabeled".
> 
> Any idea what could be wrong ?
> 
> I did find this thread and am unsure if thats related:
> https://bacula-users.narkive.com/JWcvt8eZ/bacula-loses-tape-label
> 
> Please help,
> Thanks,
> Karl
> 
>  
> -- 
> MMag. Dr. Karl Kashofer 
> 
> 
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
> 


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Again autochangers, mtx-changer script and LTO9...

2024-03-01 Thread Martin Simmons
> On Fri, 1 Mar 2024 11:52:31 +0100, Marco Gaiarin said:
> 
>  *unmount storage=CNPVE3Autochanger
>  Using Catalog "BaculaLNF"
>  Connecting to Storage daemon CNPVE3Autochanger at cnpve3.cn.lnf.it:9103 ...
>  3307 Issuing autochanger "unload Volume AAJ663L9, Slot 4, Drive 0" command.
>  3995 Bad autochanger "unload Volume AAJ663L9, Slot 4, Drive 0": ERR=Child 
> died from signal 15: Termination Results=Program killed by Bacula (timeout)
>  3002 Device ""LTO9Storage0" (/dev/nst0)" unmounted.
> 
> So, this seems solved only by:
> 
> 1) increasing bacula timeout when run autochanger script; i've look into the
>  docs but i've not found the knob; i've not looked at code, but i suppose is
> hardcoded somewhere...

Isn't it Maximum Changer Wait?

See 
https://www.bacula.org/13.0.x-manuals/en/main/Storage_Daemon_Configuratio.html#SECTION002430
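In the SD's Device resource that would look something like this (the 600-second value is only an example; per the manual the default is 5 minutes):

```
Device {
  Name = "LTO9Storage0"
  ...
  # Give the changer up to 10 minutes before Bacula kills mtx-changer.
  Maximum Changer Wait = 600
}
```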

__Martin


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] bacula-fd hangs in pipe during backup

2024-02-15 Thread Martin Simmons
Ah, so it is a bug that happens when "honor nodump flag=yes" on Linux.  I
couldn't repeat it because FreeBSD implements nodump differently.

You could work around it either by not using that option or by having another
options clause that sets "honor nodump flag=no" for the specific places where
you have pipes or sockets.
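A sketch of that second Options clause (the path is an example; note that Options clauses are matched in order, so the more specific one goes first):

```
Include {
  Options {
    # First match wins: skip the nodump handling where the pipes live
    # (path is an example).
    Honor NoDump Flag = no
    Wild = "/tmp/debug/*"
  }
  Options {
    Signature = MD5
    Honor NoDump Flag = yes
  }
  File = "/"
}
```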

__Martin


>>>>> On Thu, 15 Feb 2024 09:14:18 -0500, Peter Sjoberg said:
> 
> Sorry for late reply, had to let my normal backups finish up first.
> 
> I did ask google how to run gdb and managed to get something out that I
> now attached.
> No symbols since I'm running the distro version instead of github.
> Did see that a new version is released - 13.0.4. Running some more
> backups now (copy to external drive) but after that I will update and
> see if anything changed.
> 
> /ps
> 
> 
> On 2024-02-13 10:55, Martin Simmons wrote:
> > It works for me on FreeBSD with Bacula 15 from git.
> >
> > Can you attach gdb to the bacula-fd while it is running and issue the gdb
> > command:
> >
> > thread apply all bt
> >
> > Also, try running bacula-fd with -d 150 -dt -v -fP which will make it print
> > the debug info to the terminal.  Level 150 should show what it is doing for
> > the fifo.
> >
> > __Martin
> >
> >
> >>>>>> On Tue, 13 Feb 2024 09:07:43 -0500, Peter Sjoberg said:
> >> On 2024-02-13 02:49, Eric Bollengier wrote:
> >>> Hello Peter,
> >>>
> >>> Without the ReadFifo directive, it's unlikely to cause a problem,
> >> Unlikely maybe but that is the problem and I can even reproduce it!
> >> My setup is based on ubuntu 22.04 LTS (was trying debian but align is
> >> broken there) using the community repo
> >>
> >>    deb https://www.bacula.org/packages//debs/13.0.3 jammy main
> >>
> >>> and the file daemon output is pretty clear, we are not at this file.
> >> The file daemon output shows last file that worked, not the file it is
> >> trying to backup.
> >>
> >> To reproduce I did
> >>
> >> *1 - create a fileset that backups just /tmp/debug*
> >>
> >> FileSet {
> >>   Name = "debugfs2"
> >>   Ignore FileSet Changes = yes
> >>   Include {
> >> Options {
> >>   signature=MD5
> >>   honor nodump flag=yes
> >>   noatime=yes
> >>   keepatime = no
> >>   sparse=yes
> >>   exclude = yes
> >>   wild = *~
> >>   wild = *.tmp
> >>   }
> >> File = "/tmp/debug"
> >> }
> >>   }
> >>
> >> *2 - create a pipe ("mkfifo random_pipe") and a plain file ("date
> >>   >a_red_herring") in /tmp/debug*
> >>
> >> peters@quark:/tmp/debug$ find /tmp/debug/ -ls
> >>   11017  0 drwxr-xr-x   2 peters   peters 80 Feb 13 08:47 /tmp/debug/
> >>   11031  4 -rw-r--r--   1 peters   peters 32 Feb 13 08:47 /tmp/debug/a_red_herring
> >>   11029  0 prw-r--r--   1 peters   peters  0 Feb 13 08:46 /tmp/debug/random_pipe
> >> peters@quark:/tmp/debug$
> >>
> >> *3 - start a backup*
> >>
> >> root@quark:~#echo run BackupQ_quark FileSet="debugfs2" Level=Full yes|bconsole
> >>
> >> *4 - confirm it hangs*
> >>
> >> root@quark:~# echo stat client=quark-fd|bconsole #CLIENTSTAT
> >> Connecting to Director quark:9101
> >> 1000 OK: 10002 techwiz-dir Version: 13.0.3 (02 May 2023)
> >> Enter a period to cancel a command.
> >> stat client=quark-fd
> >> Connecting to Client quark-fd at quark:9102
> >>
> >> quark-fd Version: 13.0.3 (02 May 2023)  x86_64-pc-linux-gnu-bacula-enterprise ubuntu 22.04
> >> Daemon started 12-Feb-24 23:55. Jobs: run=5 running=1.
> >>    Heap: heap=856,064 smbytes=603,907 max_bytes=1,219,047 bufs=178 max_bufs=429
> >>    Sizes: boffset_t=8 size_t=8 debug=0 trace=0 mode=0,0 bwlimit=0kB/s
> >>    Crypto: fips=N/A crypto=OpenSSL 3.0.2 15 Mar 2022
> >>    Plugin: bpipe-fd.so(2)
> >>
> >> Running Jobs:
> >> JobId 315 Job BackupQ_quark.2024-02-13_08.48.07_46 is running.
> >>   Full Backup Job started: 13-Feb-24 08:48
> >>   Files=1 Bytes=

Re: [Bacula-users] Incr/Diff not backing up mv'd files

2024-02-13 Thread Martin Simmons
Yes, that's what I expect.  I wasn't sure what Daniel was claiming.

__Martin


>>>>> On Tue, 13 Feb 2024 16:23:11 +, Chris Wilkinson said:
> 
> Well, yes and no. From what I see;
> 
> mv on a file will update ctime as expected.
> 
> mv of a directory will update the directory ctime but not that of any
> contained files or directories.
> 
> This is on debian 11.
> 
> -Chris-
> 
> On Tue, 13 Feb 2024, 16:01 Martin Simmons,  wrote:
> 
> > Are you saying that mv on a file doesn't change the ctime?  If so, which
> > filesystem is that please?
> >
> > __Martin
> >
> >
> > >>>>> On Mon, 12 Feb 2024 22:52:08 +0100, Daniel Etter said:
> > >
> > > mv does not change the dates, only cp change the time, because it is a
> > new
> > > file.
> > >
> > > Daniel
> > >
> > >
> > >
> > > Am Mo., 12. Feb. 2024 um 22:39 Uhr schrieb Chris Wilkinson <
> > > winstonia...@gmail.com>:
> > >
> > > > I don't have mtimeonly set.
> > > >
> > > > I mv'd a sample file in my home directory as an experiment and saw that
> > > > stat reports that atime, mtime remain unchanged but ctime does change
> > as
> > > > you expected.
> > > >
> > > > When I stat one of the files I moved before, I see that ctime did not
> > > > change.
> > > >
> > > > I had previously mv'd a whole subdirectory containing several
> > > > subdirectories, each with a dozen or so files. I see that the mv'd
> > > > subdirectory ctime changes but ctime for the contained subdirectories
> > and
> > > > files does not.
> > > >
> > > > This presumably is why they did not get backed up. I set the accurate
> > flag
> > > > in the job resource and then the files do get backed up.
> > > >
> > > > -Chris-
> > > >
> > > > On Mon, 12 Feb 2024, 20:33 Martin Simmons, 
> > wrote:
> > > >
> > > >> Do you have the mtimeonly option set in the FileSet?
> > > >>
> > > >> I would expect mv to change the ctime.  Can you repeat this (mv not
> > > >> changing
> > > >> the ctime)?
> > > >>
> > > >> __Martin
> > > >>
> > > >>
> > > >> >>>>> On Sun, 11 Feb 2024 09:17:10 +, Chris Wilkinson said:
> > > >> >
> > > >> > I'm seeing that files that are mv'd within the same folder are not
> > being
> > > >> > backed up by incr or diff backups. This is on Bacula v11, Debian 11.
> > > >> >
> > > >> > I'm guessing that this is because mtime and ctime are unchanged by
> > mv
> > > >> and
> > > >> > that the full path is not used.
> > > >> >
> > > >> > This is not a big problem I suppose because the files are still
> > there in
> > > >> > the last full and will get backed up in the next full.
> > > >> >
> > > >> > I'm just wondering if this is intended functionality or a
> > > >> mis-configuration
> > > >> > on my part?
> > > >> >
> > > >> > Chris Wilkinson
> > > >> >
> > > >>
> > > > ___
> > > > Bacula-users mailing list
> > > > Bacula-users@lists.sourceforge.net
> > > > https://lists.sourceforge.net/lists/listinfo/bacula-users
> > > >
> > >
> >
> >
> > ___
> > Bacula-users mailing list
> > Bacula-users@lists.sourceforge.net
> > https://lists.sourceforge.net/lists/listinfo/bacula-users
> >
> 


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Incr/Diff not backing up mv'd files

2024-02-13 Thread Martin Simmons
Are you saying that mv on a file doesn't change the ctime?  If so, which
filesystem is that please?

__Martin


>>>>> On Mon, 12 Feb 2024 22:52:08 +0100, Daniel Etter said:
> 
> mv does not change the dates, only cp change the time, because it is a new
> file.
> 
> Daniel
> 
> 
> 
> Am Mo., 12. Feb. 2024 um 22:39 Uhr schrieb Chris Wilkinson <
> winstonia...@gmail.com>:
> 
> > I don't have mtimeonly set.
> >
> > I mv'd a sample file in my home directory as an experiment and saw that
> > stat reports that atime, mtime remain unchanged but ctime does change as
> > you expected.
> >
> > When I stat one of the files I moved before, I see that ctime did not
> > change.
> >
> > I had previously mv'd a whole subdirectory containing several
> > subdirectories, each with a dozen or so files. I see that the mv'd
> > subdirectory ctime changes but ctime for the contained subdirectories and
> > files does not.
> >
> > This presumably is why they did not get backed up. I set the accurate flag
> > in the job resource and then the files do get backed up.
> >
> > -Chris-
> >
> > On Mon, 12 Feb 2024, 20:33 Martin Simmons,  wrote:
> >
> >> Do you have the mtimeonly option set in the FileSet?
> >>
> >> I would expect mv to change the ctime.  Can you repeat this (mv not
> >> changing
> >> the ctime)?
> >>
> >> __Martin
> >>
> >>
> >> >>>>> On Sun, 11 Feb 2024 09:17:10 +, Chris Wilkinson said:
> >> >
> >> > I'm seeing that files that are mv'd within the same folder are not being
> >> > backed up by incr or diff backups. This is on Bacula v11, Debian 11.
> >> >
> >> > I'm guessing that this is because mtime and ctime are unchanged by mv
> >> and
> >> > that the full path is not used.
> >> >
> >> > This is not a big problem I suppose because the files are still there in
> >> > the last full and will get backed up in the next full.
> >> >
> >> > I'm just wondering if this is intended functionality or a
> >> mis-configuration
> >> > on my part?
> >> >
> >> > Chris Wilkinson
> >> >
> >>
> > ___
> > Bacula-users mailing list
> > Bacula-users@lists.sourceforge.net
> > https://lists.sourceforge.net/lists/listinfo/bacula-users
> >
> 


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Incr/Diff not backing up mv'd files

2024-02-13 Thread Martin Simmons
Ah, yes, it won't work for the inner files if you mv a subdirectory.  Using
the accurate flag is the way to go.

__Martin


>>>>> On Mon, 12 Feb 2024 21:36:07 +, Chris Wilkinson said:
> 
> I don't have mtimeonly set.
> 
> I mv'd a sample file in my home directory as an experiment and saw that
> stat reports that atime, mtime remain unchanged but ctime does change as
> you expected.
> 
> When I stat one of the files I moved before, I see that ctime did not
> change.
> 
> I had previously mv'd a whole subdirectory containing several
> subdirectories, each with a dozen or so files. I see that the mv'd
> subdirectory ctime changes but ctime for the contained subdirectories and
> files does not.
> 
> This presumably is why they did not get backed up. I set the accurate flag
> in the job resource and then the files do get backed up.
> 
> -Chris-
> 
> On Mon, 12 Feb 2024, 20:33 Martin Simmons,  wrote:
> 
> > Do you have the mtimeonly option set in the FileSet?
> >
> > I would expect mv to change the ctime.  Can you repeat this (mv not
> > changing
> > the ctime)?
> >
> > __Martin
> >
> >
> > >>>>> On Sun, 11 Feb 2024 09:17:10 +, Chris Wilkinson said:
> > >
> > > I'm seeing that files that are mv'd within the same folder are not being
> > > backed up by incr or diff backups. This is on Bacula v11, Debian 11.
> > >
> > > I'm guessing that this is because mtime and ctime are unchanged by mv and
> > > that the full path is not used.
> > >
> > > This is not a big problem I suppose because the files are still there in
> > > the last full and will get backed up in the next full.
> > >
> > > I'm just wondering if this is intended functionality or a
> > mis-configuration
> > > on my part?
> > >
> > > Chris Wilkinson
> > >
> >
> 


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] bacula-fd hangs during backup

2024-02-13 Thread Martin Simmons
It works for me on FreeBSD with Bacula 15 from git.

Can you attach gdb to the bacula-fd while it is running and issue the gdb
command:

thread apply all bt

Also, try running bacula-fd with -d 150 -dt -v -fP which will make it print
the debug info to the terminal.  Level 150 should show what it is doing for
the fifo.

__Martin


> On Tue, 13 Feb 2024 09:07:43 -0500, Peter Sjoberg said:
> 
> On 2024-02-13 02:49, Eric Bollengier wrote:
> > Hello Peter,
> >
> > Without the ReadFifo directive, it's unlikely to cause a problem,
> Unlikely maybe but that is the problem and I can even reproduce it!
> My setup is based on ubuntu 22.04 LTS (was trying debian but align is 
> broken there) using the community repo
> 
>    deb https://www.bacula.org/packages//debs/13.0.3 jammy main
> 
> > and the file daemon output is pretty clear, we are not at this file.
> 
> The file daemon output shows last file that worked, not the file it is 
> trying to backup.
> 
> To reproduce I did
> 
> *1 - create a fileset that backups just /tmp/debug*
> 
> FileSet {
>  Name = "debugfs2"
>  Ignore FileSet Changes = yes
>  Include {
>Options {
>  signature=MD5
>  honor nodump flag=yes
>  noatime=yes
>  keepatime = no
>  sparse=yes
>  exclude = yes
>  wild = *~
>  wild = *.tmp
>  }
>File = "/tmp/debug"
>}
>  }
> 
> *2 - create a pipe ("mkfifo random_pipe") and a plain file ("date 
>  >a_red_herring") in /tmp/debug*
> 
> peters@quark:/tmp/debug$ find /tmp/debug/ -ls
>  11017  0 drwxr-xr-x   2 peters   peters 80 Feb 13 08:47 
> /tmp/debug/
>  11031  4 -rw-r--r--   1 peters   peters 32 Feb 13 08:47 
> /tmp/debug/a_red_herring
>  11029  0 prw-r--r--   1 peters   peters  0 Feb 13 08:46 
> /tmp/debug/random_pipe
> peters@quark:/tmp/debug$
> 
> *3 - start a backup;
> *
> 
> root@quark:~#echo run BackupQ_quark FileSet="debugfs2" Level=Full yes|bconsole
> 
> *4 - confirm it hangs*
> 
> root@quark:~# echo stat client=quark-fd|bconsole #CLIENTSTAT
> Connecting to Director quark:9101
> 1000 OK: 10002 techwiz-dir Version: 13.0.3 (02 May 2023)
> Enter a period to cancel a command.
> stat client=quark-fd
> Connecting to Client quark-fd at quark:9102
> 
> quark-fd Version: 13.0.3 (02 May 2023)  x86_64-pc-linux-gnu-bacula-enterprise 
> ubuntu 22.04
> Daemon started 12-Feb-24 23:55. Jobs: run=5 running=1.
>   Heap: heap=856,064 smbytes=603,907 max_bytes=1,219,047 bufs=178 max_bufs=429
>   Sizes: boffset_t=8 size_t=8 debug=0 trace=0 mode=0,0 bwlimit=0kB/s
>   Crypto: fips=N/A crypto=OpenSSL 3.0.2 15 Mar 2022
>   Plugin: bpipe-fd.so(2)
> 
> Running Jobs:
> JobId 315 Job BackupQ_quark.2024-02-13_08.48.07_46 is running.
>  Full Backup Job started: 13-Feb-24 08:48
>  Files=1 Bytes=40 AveBytes/sec=0 LastBytes/sec=2 Errors=0
>  Bwlimit=0 ReadBytes=32
>  Files: Examined=1 Backed up=1
>  Processing file: /tmp/debug/a_red_herring
>  SDReadSeqNo=8 fd=5 SDtls=1
> Director connected using TLS at: 13-Feb-24 08:55
> 
> 
> *5 - release the job by sending something to the pipe; *
> 
> root@quark:~# echo >/tmp/debug/random_pipe
> 
> *6 - confirm the job finished*
> 
> root@quark:~# echo 'llist jobid=315'|bconsole
> Connecting to Director quark:9101
> 1000 OK: 10002 techwiz-dir Version: 13.0.3 (02 May 2023)
> Enter a period to cancel a command.
> llist jobid=315
> Automatically selected Catalog: MyCatalog
> Using Catalog "MyCatalog"
> JobId: 315
>   Job: BackupQ_quark.2024-02-13_08.48.07_46
>  Name: BackupQ_quark
>   PurgedFiles: 0
>  Type: B
> Level: F
>  ClientId: 32
>ClientName: quark-fd
> JobStatus: T
> SchedTime: 2024-02-13 08:48:07
> StartTime: 2024-02-13 08:48:09
>   EndTime: 2024-02-13 08:56:40
>   RealEndTime: 2024-02-13 08:56:40
>  JobTDate: 1,707,832,600
>  VolSessionId: 66
>VolSessionTime: 1,707,779,559
>  JobFiles: 3
>  JobBytes: 40
> ReadBytes: 32
> JobErrors: 0
>   JobMissingFiles: 0
>PoolId: 2
>  PoolName: File-Full
>PriorJobId: 0
>  PriorJob:
> FileSetId: 7
>   FileSet: debugfs2
>  HasCache: 0
>   Comment:
>  Reviewed: 0
> 
> You have messages.
> root@quark:~#
> 
> /ps
> 
> >
> > The problem can be somewhere else, and a good start is a "status dir"
> > and "status storage".
> >
> > Best Regards,
> > Eric
> >
> > On 2/13/24 06:23, Peter Sjoberg wrote:
> >>
> >> Actually, I think I found the root cause - a pipe!
> >>
> >> The file listed in the client status is not the problem but close to 
> >> it is a pipe (maybe next file) and that is what causing the issue in 
> >> all cases.
> >> I striped down the directory to just one file and it still fails
> >>
> >> root@defiant1:/home/debug# find .zoom -ls
> >>   23855106  4 drwx--   4 sy

Re: [Bacula-users] bacula-fd hangs during backup

2024-02-13 Thread Martin Simmons
> On Tue, 13 Feb 2024 12:03:50 +1100, Gary R Schmidt said:
> 
> On 13/02/2024 11:08, Phil Stracchino wrote:
> > On 2/12/24 18:35, Peter Sjoberg wrote:
> >> Hi all
> >>
> >> I have a strange problem and (on my system) reproducible problem. When 
> >> I do backup of some directories then bacula-fd just hangs and never 
> >> complete.
> >> The directories in question are not very strange and backup of them 
> >> works find with older versions of -fd
> > 
> > 
> > Silly question:  Do the problem directories contain named pipes or sockets?
> > 
> Another possibly silly question: Are there any soft links that may cause 
> a loop?

Soft links can't cause a loop because Bacula doesn't follow them.

__Martin


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Incr/Diff not backing up mv'd files

2024-02-12 Thread Martin Simmons
Do you have the mtimeonly option set in the FileSet?

I would expect mv to change the ctime.  Can you repeat this (mv not changing
the ctime)?
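On Linux this is easy to check with GNU stat (a throwaway demo, not from the thread):

```shell
# mv within a filesystem is a rename(2): mtime stays, ctime is updated.
f=$(mktemp)
m1=$(stat -c %Y "$f"); c1=$(stat -c %Z "$f")
sleep 2
mv "$f" "$f.moved"
m2=$(stat -c %Y "$f.moved"); c2=$(stat -c %Z "$f.moved")
echo "mtime delta: $((m2 - m1)), ctime delta: $((c2 - c1))"
rm -f "$f.moved"
```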

__Martin


> On Sun, 11 Feb 2024 09:17:10 +, Chris Wilkinson said:
> 
> I'm seeing that files that are mv'd within the same folder are not being
> backed up by incr or diff backups. This is on Bacula v11, Debian 11.
> 
> I'm guessing that this is because mtime and ctime are unchanged by mv and
> that the full path is not used.
> 
> This is not a big problem I suppose because the files are still there in
> the last full and will get backed up in the next full.
> 
> I'm just wondering if this is intended functionality or a mis-configuration
> on my part?
> 
> Chris Wilkinson
> 


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Clarification on incremental files...

2024-01-26 Thread Martin Simmons
>>>>> On Fri, 26 Jan 2024 11:35:21 +0100, Marco Gaiarin said:
> 
> Mandi! Martin Simmons
>   In chel di` si favelave...
> 
> > 'i' is the st_ino field in the stat, i.e. the number that uniquely 
> > identifies
> > the data for the file in the file system.  Note that there is always 
> > exactly 1
> > inode that references the data for each file in a UNIX file system.
> 
> OK. Because data get copied/linked BY .sync dir, option 'i' can be kept,
> because inode does not change.

Yes.

__Martin


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Clarification on incremental files...

2024-01-25 Thread Martin Simmons
> On Wed, 24 Jan 2024 18:00:56 +0100, Marco Gaiarin said:
> 
> Suppose that setting 5 or 1 options depend on setting options; eg, it is
> totally unuseful to have:
>   Options {
> Signature = MD5
> accurate = <...>1
>   }
> 
> so, calculating MD5 and checking SHA1.

In fact, I think that will check MD5.  The implementation can only store one
type of checksum in the catalog and incremental backups just check whatever was
stored (and assume it is the same type as in the original backup).

> But options 'i' what mean? Compare THE inode number? Or the number of inodes
> of that particolar file?

'i' is the st_ino field in the stat, i.e. the number that uniquely identifies
the data for the file in the file system.  Note that there is always exactly 1
inode that references the data for each file in a UNIX file system.

> Also, 'n' mean soft or hard link? What is the interrelation between 'i' and
> 'n' options?

'n' is the st_nlinks field in the stat, i.e. the number of hard links to the
inode.  Nothing records the number of soft links.

> Anyway, i'm currently trying:
> 
>   Options {
> Signature = MD5
> accurate = pugsmcd5
>   }

I think you need to remove 'c' otherwise you will get the same results as
before when the ctime changes.
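With 'c' dropped, the Options clause would become something like (glossing only the letters discussed above; the other letters keep their meanings from the manual):

```
Options {
  Signature = MD5
  # p=permissions, u=uid, g=gid, s=size, m=mtime, 5=MD5 signature;
  # 'c' (ctime) removed so a ctime-only change no longer triggers a re-backup
  accurate = pugsmd5
}
```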

__Martin


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula: Backup OK -- with warnings - what warning?

2024-01-23 Thread Martin Simmons
>>>>> On Wed, 17 Jan 2024 09:54:23 -0500, Dan Langille said:
> 
> On Fri, Dec 29, 2023, at 6:26 PM, Martin Simmons wrote:
> >>>>>> On Fri, 29 Dec 2023 12:35:59 -0500, Dan Langille said:
> >> 
> >> On Fri, Dec 29, 2023, at 12:10 PM, Martin Simmons wrote:
> >> > 9.6.6 certainly displayed them for me, so I suspect a config issue.
> >> >
> >> > The messages would be omitted if !notsaved is in the Messages resource 
> >> > (but
> >> > they would still be counted as "Non-fatal FD errors" which makes it add 
> >> > "with
> >> > warnings" to the status).
> >> >
> >> > Maybe that changed in the client's bacula-fd.conf when you upgraded it?
> >> 
> >> That's a good idea.
> >> 
> >> [17:29 r730-01 dvl ~] % sudo ls -l /usr/local/etc/bacula/bacula-fd.conf
> >> -rw-r-  1 root bacula 1497 Feb 25  2023 
> >> /usr/local/etc/bacula/bacula-fd.conf
> >> 
> >> [17:31 r730-01 dvl ~] % sudo md5 /usr/local/etc/bacula/bacula-fd.conf
> >> MD5 (/usr/local/etc/bacula/bacula-fd.conf) = 
> >> e41a7d835766f563253c0a93418a1c61
> >> 
> >> 
> >> No change since February.
> >> 
> >> Let's look at snapshots taken before Dec 25, the date of the job in 
> >> question.
> >> 
> >> [17:32 r730-01 dvl /.zfs/snapshot] % cd autosnap_2023-12-20_00:00:09_daily
> >> [17:32 r730-01 dvl /.zfs/snapshot/autosnap_2023-12-20_00:00:09_daily] % 
> >> sudo md5 usr/local/etc/bacula/bacula-fd.conf 
> >> MD5 (usr/local/etc/bacula/bacula-fd.conf) = 
> >> e41a7d835766f563253c0a93418a1c61
> >> 
> >> 
> >> I'm confident this file has not changed.
> >
> > Hmm, looking at src/lib/message.h, I suspect this change broke version
> > compatibility in the message filtering infrastructure:
> >
> > commit fd926fc4671b054234fd3d5957bc05d303d87763
> > Author: Eric Bollengier 
> > Date:   Fri Nov 6 21:27:05 2020 +0100
> >
> > Fix unexpected connection event sent by the FD when the Message 
> > resource is not configured
> >
> > The problem is that the message types have been renumbered by moving 
> > M_EVENTS
> > higher up, but messages sent to the Director from other daemons use the
> > numeric value of the type so this is an incompatible change in the wire
> > protocol.
> >
> > Despite the date of this change, it looks like it first appeared in Bacula 
> > 13,
> > so will cause problems if a Client < 13 sends a message to a Director >= 13 
> > as
> > in your case.
> 
> That is concerning. It means backups may be incomplete and you don't know it.
> 
> This has happened at home, and today I noticed it at $WORK.
> 
> It is no longer the case that versions can follow the rule:
> 
>   bacula-dir >= bacula-sd >= bacula-fd
> 
> That rule has been in effect as long as I've been associated with the project 
> (about 20 years). Is there any possibility of this being fixed? Or is this 
> change irrevocable?

The change to the numbering is probably irrevocable (changing it back would
create a new set of incompatibilities), but some hack might be possible using
the jcr->FDVersion.


> If irrevocable, the users really need to be notified via an announcement. 
> Clients must be upgraded or the risk of data loss is present. In my case, 
> things I expected to be backed up were not being backed up. I could not tell 
> because the warnings were not presented to me.

I suggest you create a bug report about it at
https://gitlab.bacula.org/bacula-community-edition/bacula-community/-/issues
so it can be tracked.


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] How to save /dev tree and base directories only?

2024-01-23 Thread Martin Simmons
>>>>> On Mon, 22 Jan 2024 23:08:05 +0100, Pierre Bernhardt said:
> 
> Am 22.01.24 um 16:41 schrieb Martin Simmons:
> >>>>>> On Mon, 22 Jan 2024 10:44:49 +0100, Pierre Bernhardt said:
> > Are you using udev (see the output of "df /dev")?  If so, then I would 
> > expect
> > that to recreate the contents of /dev so no backup is wanted.
> Debian Buster and Bullseye should have it. By the way, the system did not come
> up until I recreated the inodes manually. Maybe only some of them are essential
> for booting the system, so not all need to be recreated, but I think it is not
> a problem to restore them from a backup.
> 
> Here is a list of the files found in a fresh Bullseye system installed with
> debootstrap, before its first boot.
> 
> root@backup:/media/file# ls dev
> console  fd  full  null  ptmx  pts  random  shm  stderr  stdin  stdout  tty  
> urandom  zero
> root@backup:/media/file# ls -lR dev
> dev:
> insgesamt 8
> crw-rw-rw- 1 root root 5, 1 Jan 22 22:25 console
> lrwxrwxrwx 1 root root   13 Jan 22 22:25 fd -> /proc/self/fd
> crw-rw-rw- 1 root root 1, 7 Jan 22 22:25 full
> crw-rw-rw- 1 root root 1, 3 Jan 22 22:25 null
> crw-rw-rw- 1 root root 5, 2 Jan 22 22:25 ptmx
> drwxr-xr-x 2 root root 4096 Jan 22 22:25 pts
> crw-rw-rw- 1 root root 1, 8 Jan 22 22:25 random
> drwxr-xr-x 2 root root 4096 Jan 22 22:25 shm
> lrwxrwxrwx 1 root root   15 Jan 22 22:25 stderr -> /proc/self/fd/2
> lrwxrwxrwx 1 root root   15 Jan 22 22:25 stdin -> /proc/self/fd/0
> lrwxrwxrwx 1 root root   15 Jan 22 22:25 stdout -> /proc/self/fd/1
> crw-rw-rw- 1 root root 5, 0 Jan 22 22:25 tty
> crw-rw-rw- 1 root root 1, 9 Jan 22 22:25 urandom
> crw-rw-rw- 1 root root 1, 5 Jan 22 22:25 zero
> 
> dev/pts:
> insgesamt 0
> 
> dev/shm:
> insgesamt 0
> 
> I think this is the minimum set of /dev inodes needed to start the system.

Ah, it looks like there is a real /dev directory containing those minimal
files, which is hidden by udev after booting.
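One way to verify this is to bind-mount the root filesystem elsewhere: a bind mount shows the underlying directory contents rather than the udev tmpfs mounted over /dev. This is a sketch only (the mount point is an example and the commands need root):

```shell
# Bind-mount / so the real on-disk /dev is visible, not the udev tmpfs
mkdir -p /mnt/realroot
mount --bind / /mnt/realroot
ls -l /mnt/realroot/dev
umount /mnt/realroot
```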

__Martin


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] How to save /dev tree and base directories only?

2024-01-22 Thread Martin Simmons
> On Mon, 22 Jan 2024 10:44:49 +0100, Pierre Bernhardt said:
> 
> But I'm unsure what to do with /dev. It is possible that bacula will not
> save the inodes. Instead, it might try to save their content, which is not
> a good idea because their content would be endless. Before I run some tests
> I want to simply ask someone here who knows this. Maybe some special settings
> are needed to prevent me from getting into problems.

Are you using udev (see the output of "df /dev")?  If so, then I would expect
that to recreate the contents of /dev so no backup is wanted.

__Martin


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Clarification on incremental files...

2024-01-22 Thread Martin Simmons
> On Mon, 22 Jan 2024 11:11:44 +0100, Marco Gaiarin said:
> 
> I'm using Bacula to put on tape (LTO9) a backup collected from different
> sources via 'rsnapshot'; if not known, rsnapshot is a Perl wrapper script
> around rsync that, by leveraging UNIX filesystem capabilities (e.g. hard
> links), permits 'snapshots' of filesystems.
> 
> In practice, some rsync invocations sync the source servers to a destination
> folder ('.sync'); after the sync, folders with a predefined retention get
> rotated:
> 
>  [2024-01-21T15:05:23] /usr/bin/rsnapshot daily: started
>  [2024-01-21T15:05:23] Setting locale to POSIX "C"
>  [2024-01-21T15:05:23] echo 32628 > /var/run/rsnapshot.pid
>  [2024-01-21T15:05:23] /usr/bin/rm -rf /rpool-backup/rsnapshot/daily.6/
>  [2024-01-21T15:25:28] mv /rpool-backup/rsnapshot/daily.5/ 
> /rpool-backup/rsnapshot/daily.6/
>  [2024-01-21T15:25:28] mv /rpool-backup/rsnapshot/daily.4/ 
> /rpool-backup/rsnapshot/daily.5/
>  [2024-01-21T15:25:28] mv /rpool-backup/rsnapshot/daily.3/ 
> /rpool-backup/rsnapshot/daily.4/
>  [2024-01-21T15:25:28] mv /rpool-backup/rsnapshot/daily.2/ 
> /rpool-backup/rsnapshot/daily.3/
>  [2024-01-21T15:25:28] mv /rpool-backup/rsnapshot/daily.1/ 
> /rpool-backup/rsnapshot/daily.2/
>  [2024-01-21T15:25:28] mv /rpool-backup/rsnapshot/daily.0/ 
> /rpool-backup/rsnapshot/daily.1/
>  [2024-01-21T15:25:28] /usr/bin/cp -al /rpool-backup/rsnapshot/.sync 
> /rpool-backup/rsnapshot/daily.0
>  [2024-01-21T16:14:44] rm -f /var/run/rsnapshot.pid
>  [2024-01-21T16:14:44] /usr/bin/logger -p user.info -t rsnapshot[32628] 
> /usr/bin/rsnapshot daily: completed successfully
>  [2024-01-21T16:14:44] /usr/bin/rsnapshot daily: completed successfully
> 
> as you can see, the last (daily) backup gets rotated, all backups get moved,
> and the '.sync' folder gets copied (with hard links) to 'daily.0'.
> 
> So it seems to me there is no massive modification of the '.sync' dir.
> 
> 
> But if I do a Full on '.sync', and subsequently an Incremental, I get an
> Incremental with roughly the same files as the Full:
> 
> 20-Jan 00:49 ibrpve3-sd JobId 16078: Elapsed time=04:48:47, Transfer 
> rate=84.64 M Bytes/second
> 20-Jan 00:50 ibrpve3-sd JobId 16078: Sending spooled attrs to the Director. 
> Despooling 731,524,889 bytes ...
> 20-Jan 01:01 lnfbacula-dir JobId 16078: Bacula lnfbacula-dir 9.4.2 (04Feb19):
>   Build OS:   x86_64-pc-linux-gnu debian 10.5
>   JobId:  16078
>   Job:PUG-IBR-IBRPVE3.2024-01-19_20.00.00_37
>   Backup Level:   Full
>   Client: "pug-ibr-ibrpve3-fd" 9.4.2 (04Feb19) 
> x86_64-pc-linux-gnu,debian,10.5
>   FileSet:"PVETerzoNodoStd" 2023-12-28 23:00:00
>   Pool:   "PUG-IBR-IBRPVE3LTOPool" (From Job resource)
>   Catalog:"BaculaLNF" (From Client resource)
>   Storage:"IBRPVE3LTO" (From Job resource)
>   Scheduled time: 19-Jan-2024 20:00:00
>   Start time: 19-Jan-2024 20:00:03
>   End time:   20-Jan-2024 01:01:53
>   Elapsed time:   5 hours 1 min 50 secs
>   Priority:   10
>   FD Files Written:   2,145,479
>   SD Files Written:   2,145,479
>   FD Bytes Written:   1,466,069,549,214 (1.466 TB)
>   SD Bytes Written:   1,466,566,045,401 (1.466 TB)
>   Rate:   80953.6 KB/s
>   Software Compression:   None
>   Comm Line Compression:  None
>   Snapshot/VSS:   no
>   Encryption: no
>   Accurate:   no
>   Volume name(s): IBRPVE3_0004
>   Volume Session Id:  31
>   Volume Session Time:1702656598
>   Last Volume Bytes:  1,467,000,503,296 (1.467 TB)
>   Non-fatal FD errors:0
>   SD Errors:  0
>   FD termination status:  OK
>   SD termination status:  OK
>   Termination:Backup OK
> 
> 22-Jan 03:45 ibrpve3-sd JobId 16148: Elapsed time=04:43:55, Transfer 
> rate=86.07 M Bytes/second
> 22-Jan 03:46 ibrpve3-sd JobId 16148: Sending spooled attrs to the Director. 
> Despooling 672,414,561 bytes ...
> 22-Jan 03:56 lnfbacula-dir JobId 16148: Bacula lnfbacula-dir 9.4.2 (04Feb19):
>   Build OS:   x86_64-pc-linux-gnu debian 10.5
>   JobId:  16148
>   Job:PUG-IBR-IBRPVE3.2024-01-21_23.00.00_49
>   Backup Level:   Incremental, since=2024-01-20 23:00:03
>   Client: "pug-ibr-ibrpve3-fd" 9.4.2 (04Feb19) 
> x86_64-pc-linux-gnu,debian,10.5
>   FileSet:"PVETerzoNodoStd" 2023-12-28 23:00:00
>   Pool:   "PUG-IBR-IBRPVE3LTOPool" (From Job resource)
>   Catalog:"BaculaLNF" (From Client resource)
>   Storage:"IBRPVE3LTO" (From Job resource)
>   Scheduled time: 21-Jan-2024 23:00:00
>   Start time: 21-Jan-2024 23:00:03
>   End time:   22-Jan-2024 03:56:43
>   Elapsed time:   4 hours 56 mins 40 secs
>   Priority:   10
>   FD Files Written:   1,919,122

Re: [Bacula-users] Retention and media archiving...

2024-01-11 Thread Martin Simmons
The Job and File Retention just controls the size of the catalog database (and
affects restores).  The Volume Retention controls recycling.

Therefore I set

  File Retention = 50 years   # let the Volume Retention of the pool 
control it
  Job Retention = 50 years# let the Volume Retention of the pool 
control it

in the client definition and then control the recycling using Volume Retention
in the pools.
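As a sketch, the relevant resources look something like this (resource names and addresses are examples, and required directives such as Password and Catalog are omitted for brevity):

```
Client {
  Name = example-fd
  Address = client.example.org
  # Effectively "never": pruning is governed by the pool's Volume Retention
  File Retention = 50 years
  Job Retention = 50 years
  AutoPrune = yes
}

Pool {
  Name = WeeklyPool
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 28 days
}
```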

__Martin


> On Wed, 10 Jan 2024 16:23:01 +0100, Marco Gaiarin said:
> 
> Suppose I have some 'short rotation' retention for media (e.g. 5 media, each
> used once a week, with volume retention at 28 days).
> Suppose also that job and file retention are set to 28 days.
> 
> 
> But suppose also that I need to 'archive' some volume, changing its status to
> 'Archive'. Clearly, in this way I can still restore data, but I will still
> lose the metadata for that volume after 28 days (because job and file
> retention apply anyway, right?).
> 
> 
> Can I set job and file retention to some (insane) limit like '6 years',
> archive a volume, and keep access to the metadata?
> Do I need '6 years' for both job and file retention, or does file suffice?
> 
> 
> OK, probably a better solution would be to define a different pool with a
> different retention, but... this seems a quick and clean solution, if I've
> understood correctly.
> 
> 
> Thanks.
> 
> -- 
>   Berlusconi: "As of today I am on a diet"
>   The country has been on one for 4 years already (Il Ruggito del Coniglio)
> 
> 
> 
> 
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
> 


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] With version 13.0.3 on Ubuntu 22.04, the Make Compiling Fails with bacula-dir

2024-01-11 Thread Martin Simmons
What command line args are you passing to configure?

It looks like mysql support doesn't work with the --disable-libtool argument
to configure.  In general, --disable-libtool doesn't work anymore for other
reasons too.
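For example, a build with MySQL support would simply drop --disable-libtool; something along these lines (the prefix and flags are illustrative, not a verified recipe for this system):

```shell
./configure --prefix=/opt/bacula --with-mysql
make
sudo make install
```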

__Martin


> On Wed, 10 Jan 2024 13:05:55 -0600, Frank Rogowski said:
> 
> Hi All,
> 
> New to the site and the Bacula community.
> 
> I have been trying to run the Bacula 13.0.4 community install for a couple
> of days now. I am stuck on some library linking errors during the 'make'
> execution. I found this because the director would never start when I ran
> ./bconsole. I think my 'make install' never completed properly, which is
> what got me here.
> 
> I  am using Ubuntu 22.04. I have mysql installed and working just fine. I
> shut it down (mysql) during the Bacula installation per the instructions
> for 13.0.3.
> 
> Here is what I get for a configuration outcome. Then the 'make' to where it
> fails on the bacula-dir linking phase of the compile with the source files.
> I was just going to use the defaultconfig to start with...then replace it.
> 
> If I could get past this, I think I will be 'golden'. It seems to me that
> there is a linking between Bacula-dir and the mysql DB that is not
> happening. I tried several things to overcome the issue in the src/cat
> directory based on others having previous and similar issues, no luck thus
> far.
> 
> Welcome to the community's insights.
> 
> Frank R.
> Madison, MS
> **   Configuration 
> Configuration on Wed Jan 10 13:01:36 CST 2024:
> 
>Host:  x86_64-pc-linux-gnu -- Pop 22.04
>Bacula version:Bacula 13.0.3 (02 May 2023)
>Source code location:  .
>Install binaries:  /home/rogocolo/bacula/bin
>Install libraries: /usr/lib64
>Install config files:  /home/rogocolo/bacula/bin
>Scripts directory: /home/rogocolo/bacula/bin
>Archive directory: /tmp
>Working directory: /home/rogocolo/bacula/bin/working
>PID directory: /home/rogocolo/bacula/bin/working
>Subsys directory:  /home/rogocolo/bacula/bin/working
>Man directory: /usr/share/man
>Data directory:/usr/share
>Plugin directory:  /usr/lib64
>C Compiler:gcc 11.4.0-1ubuntu1~22.04)
>C++ Compiler:  /usr/bin/g++ 11.4.0-1ubuntu1~22.04)
>Compiler flags: -g -Wall -x c++ -fno-strict-aliasing
> -fno-exceptions -fno-rtti
>Linker flags:
>Libraries: -lpthread
>Statically Linked Tools:   yes
>Statically Linked FD:  no
>Statically Linked SD:  no
>Statically Linked DIR: no
>Statically Linked CONS:no
>Database backends: MySQL
>Database port:
>Database name: bacula
>Database user: bacula
>Database SSL options:
> 
>Job Output Email:  root@localhost
>Traceback Email:   root@localhost
>SMTP Host Address: localhost
> 
>Director Port: 9101
>File daemon Port:  9102
>Storage daemon Port:   9103
> 
>Director User:
>Director Group:
>Storage Daemon User:
>Storage DaemonGroup:
>File Daemon User:
>File Daemon Group:
> 
>Large file support:yes
>Bacula conio support:  no
>readline support:  no
>TCP Wrappers support:  no
>TLS support:   yes
>Encryption support:yes
>ZLIB support:  yes
>LZO support:   no
>S3 support:no
>enable-smartalloc: yes
>enable-lockmgr:no
>bat support:   no
>client-only:   no
>build-dird:yes
>build-stored:  yes
>Plugin support:no
>LDAP support:  no
>LDAP StartTLS support: no
>AFS support:   no
>ACL support:   no
>XATTR support: yes
>GPFS support:  no
>systemd support:   no
>Batch insert enabled:  None
> 
>Plugins:
>- Docker:  no
>- Kubernetes:
>- LDAP BPAM:   no
>- CDP: auto
> ***  Make Execution  
> /usr/bin/g++   -L../lib -L../cats -L../findlib -o bacula-dir dird.o admin.o
> authenticate.o autoprune.o backup.o bsr.o catreq.o dir_plugins.o
> dir_authplugin.o dird_conf.o expand.o fd_cmds.o getmsg.o inc_conf.o job.o
> jobq.o mac.o mac_sql.o msgchan.o next_vol.o newvol.o recycle.o restore.o
> run_conf.o scheduler.o store_mngr.o ua_acl.o ua_cmds.o ua_dotcmds.o
> ua_query.o ua_collect.o ua_input.o ua_label.o ua_output.o ua_prune.o
> ua_purge.o ua_restore.o ua_run.o ua_select.o ua_server.o snapshot.o
> ua_status.o ua_tree.o ua_update.o vbackup.o verify.o org_dird_quota.o -lz \
>   -lbacfin

Re: [Bacula-users] jobs intermittently stuck "Dir inserting Attributes" with long running query

2023-12-29 Thread Martin Simmons
Indeed, I can't find anything that does use .bvfs_get_bootstrap.

AFAICS, the query can only be executed by the restore command if you
explicitly specify it with a '?' prefix in the file or directory keywords.

__Martin


>>>>> On Fri, 22 Dec 2023 21:11:49 +0100, Marcin Haba said:
> 
> Hello Everybody,
> 
> Bacularis does not use the .bvfs_get_bootstrap bconsole command.
> 
> I also looked at this query. It seems to me that besides
> .bvfs_get_bootstrap, it is also executed after running some options
> of the bconsole 'restore' command.
> 
> Tom, when your jobs are slowing down, is there any restore preparation
> at this time?
> 
> Best regards,
> Marcin Haba (gani)
> 
> On Fri, 22 Dec 2023 at 18:07, Tom Hodder via Bacula-users
>  wrote:
> >
> > Thanks for the quick response!!
> >
> > On Fri, 22 Dec 2023 at 12:15, Martin Simmons  wrote:
> > >
> > > This query looks like something related to restores or the 
> > > .bvfs_get_bootstrap
> > bconsole command, not the backups.
> >
> > Ah ok.
> >
> > > Were you running some front end or GUI that was querying about jobids 103 
> > > and
> > > 419?
> >
> > I have bacula-web and bacularis installed. I will do some more
> > debugging and see whether anything is locking or the db from their
> > side.
> >
> > Many thanks!
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > >
> > > __Martin
> >
> >
> > ___
> > Bacula-users mailing list
> > Bacula-users@lists.sourceforge.net
> > https://lists.sourceforge.net/lists/listinfo/bacula-users
> 
> 
> 
> -- 
> "Greater love hath no man than this, that a man lay down his life for
> his friends." Jesus Christ
> 
> "Większej miłości nikt nie ma nad tę, jak gdy kto życie swoje kładzie
> za przyjaciół swoich." Jezus Chrystus
> 


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula: Backup OK -- with warnings - what warning?

2023-12-29 Thread Martin Simmons
>>>>> On Fri, 29 Dec 2023 12:35:59 -0500, Dan Langille said:
> 
> On Fri, Dec 29, 2023, at 12:10 PM, Martin Simmons wrote:
> > 9.6.6 certainly displayed them for me, so I suspect a config issue.
> >
> > The messages would be omitted if !notsaved is in the Messages resource (but
> > they would still be counted as "Non-fatal FD errors" which makes it add 
> > "with
> > warnings" to the status).
> >
> > Maybe that changed in the client's bacula-fd.conf when you upgraded it?
> 
> That's a good idea.
> 
> [17:29 r730-01 dvl ~] % sudo ls -l /usr/local/etc/bacula/bacula-fd.conf
> -rw-r-  1 root bacula 1497 Feb 25  2023 
> /usr/local/etc/bacula/bacula-fd.conf
> 
> [17:31 r730-01 dvl ~] % sudo md5 /usr/local/etc/bacula/bacula-fd.conf
> MD5 (/usr/local/etc/bacula/bacula-fd.conf) = e41a7d835766f563253c0a93418a1c61
> 
> 
> No change since February.
> 
> Let's look at snapshots taken before Dec 25, the date of the job in question.
> 
> [17:32 r730-01 dvl /.zfs/snapshot] % cd autosnap_2023-12-20_00:00:09_daily
> [17:32 r730-01 dvl /.zfs/snapshot/autosnap_2023-12-20_00:00:09_daily] % sudo 
> md5 usr/local/etc/bacula/bacula-fd.conf 
> MD5 (usr/local/etc/bacula/bacula-fd.conf) = e41a7d835766f563253c0a93418a1c61
> 
> 
> I'm confident this file has not changed.

Hmm, looking at src/lib/message.h, I suspect this change broke version
compatibility in the message filtering infrastructure:

commit fd926fc4671b054234fd3d5957bc05d303d87763
Author: Eric Bollengier 
Date:   Fri Nov 6 21:27:05 2020 +0100

Fix unexpected connection event sent by the FD when the Message resource is 
not configured

The problem is that the message types have been renumbered by moving M_EVENTS
higher up, but messages sent to the Director from other daemons use the
numeric value of the type so this is an incompatible change in the wire
protocol.

Despite the date of this change, it looks like it first appeared in Bacula 13,
so will cause problems if a Client < 13 sends a message to a Director >= 13 as
in your case.


> 
> Here is it, with some redactions:
> 
> [17:33 r730-01 dvl /.zfs/snapshot/autosnap_2023-12-20_00:00:09_daily] % sudo 
> cat usr/local/etc/bacula/bacula-fd.conf
> Director {
>   Name = bacula-dir
>   Password = "[redacted]"
> 
>   TLS Enable  = yes
>   TLS Require = yes
>   TLS Verify Peer = yes
> 
>   # Allow only the Director to connect
>   TLS Allowed CN  = "bacula.example.org"
>   TLS CA Certificate File = /usr/local/etc/ssl/MyBigCA.crt
> 
>   TLS Certificate = /usr/local/etc/ssl/r730-01.int.unixathome.org.crt
>   TLS Key = 
> /usr/local/etc/ssl/r730-01.int.unixathome.org.nopassword.key
> }
> 
> Director {
>   Name = nagios-mon
>   Password = "[redacted]"
>   Monitor = yes
> }
> 
> FileDaemon {
>   Name= r730-01-fd
>   FDAddress   = 10.55.0.141
>   FDport  = 9102
>   WorkingDirectory= /var/db/bacula
>   Pid Directory   = /var/run
>   Maximum Concurrent Jobs = 20
> 
>   TLS Enable  = yes
>   TLS Require = yes
> 
>   TLS CA Certificate File = /usr/local/etc/ssl/MyBigCA.crt
> 
>   TLS Certificate = /usr/local/etc/ssl/r730-01.int.unixathome.org.crt
>   TLS Key = 
> /usr/local/etc/ssl/r730-01.int.unixathome.org.nopassword.key
> }
> 
> # Send all messages except skipped files back to Director
> Messages {
>   Name = Standard
>   director = bacula-dir = all, !skipped, !restored
> }
> 
> 
> >
> > __Martin
> >
> >
> >>>>>> On Mon, 25 Dec 2023 07:06:58 -0500, Dan Langille said:
> >> 
> >> Hello,
> >> 
> >> This is more for advising others than looking for a fix. 
> >> 
> >> bacula9 client mentions warnings and does not list them.
> >> bacula13 client mentions the warnings.
> >> 
> >> It turns out, the missing warnings are rather important to know. The ZFS 
> >> datasets in question are jailed, and need a different path to the mount 
> >> point. Checking the log file, the warnings do not appear there either.
> >> 
> >> Here are examples of the problem.
> >> 
> >> The following email is from bacula9-client: 9.6.7_3 on FreeBSD 14. The 
> >> subject of this message from Bacula mentions warnings. 
> >> 
> >> No warnings are supplied.
> >> 
> >> From: (Bacula) d...@langille.org
> >> Subject: Bacula: Backup OK -- with warnings of r730-01-fd Incremental
> >> Sender: bac...@bacula.int.unixathome.org
> >> To: [redacted]
>

Re: [Bacula-users] Bacula: Backup OK -- with warnings - what warning?

2023-12-29 Thread Martin Simmons
9.6.6 certainly displayed them for me, so I suspect a config issue.

The messages would be omitted if !notsaved is in the Messages resource (but
they would still be counted as "Non-fatal FD errors" which makes it add "with
warnings" to the status).

Maybe that changed in the client's bacula-fd.conf when you upgraded it?
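For reference, a Messages resource along these lines would filter those messages (a sketch; with !notsaved, "file not saved" messages are suppressed but still counted as non-fatal FD errors):

```
Messages {
  Name = Standard
  # !notsaved hides messages about files that could not be saved
  director = bacula-dir = all, !skipped, !notsaved
}
```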

__Martin


> On Mon, 25 Dec 2023 07:06:58 -0500, Dan Langille said:
> 
> Hello,
> 
> This is more for advising others than looking for a fix. 
> 
> bacula9 client mentions warnings and does not list them.
> bacula13 client mentions the warnings.
> 
> It turns out, the missing warnings are rather important to know. The ZFS 
> datasets in question are jailed, and need a different path to the mount 
> point. Checking the log file, the warnings do not appear there either.
> 
> Here are examples of the problem.
> 
> The following email is from bacula9-client: 9.6.7_3 on FreeBSD 14. The 
> subject of this message from Bacula mentions warnings. 
> 
> No warnings are supplied.
> 
> From: (Bacula) d...@langille.org
> Subject: Bacula: Backup OK -- with warnings of r730-01-fd Incremental
> Sender: bac...@bacula.int.unixathome.org
> To: [redacted]
> Date: Mon, 25 Dec 2023 03:09:16 + (UTC)
> 
> 25-Dec 03:09 bacula-dir JobId 362083: Start Backup JobId 362083, 
> Job=r730-01_snapshots.2023-12-25_03.05.00_24
> 25-Dec 03:09 bacula-dir JobId 362083: Connected to Storage 
> "bacula-sd-04-IncrFile" at bacula-sd-04.int.unixathome.org:9103 with TLS
> 25-Dec 03:09 bacula-dir JobId 362083: There are no more Jobs associated with 
> Volume "IncrAuto-04-14761". Marking it purged.
> 25-Dec 03:09 bacula-dir JobId 362083: All records pruned from Volume 
> "IncrAuto-04-14761"; marking it "Purged"
> 25-Dec 03:09 bacula-dir JobId 362083: Recycled volume "IncrAuto-04-14761"
> 25-Dec 03:09 bacula-dir JobId 362083: Using Device "vDrive-IncrFile-5" to 
> write.
> 25-Dec 03:09 bacula-dir JobId 362083: Connected to Client "r730-01-fd" at 
> r730-01.int.unixathome.org:9102 with TLS
> 25-Dec 03:09 r730-01-fd JobId 362083: shell command: run ClientBeforeJob 
> "/usr/local/sbin/snapshots-for-backup.sh create"
> 25-Dec 03:09 bacula-sd-04 JobId 362083: Recycled volume "IncrAuto-04-14761" 
> on File device "vDrive-IncrFile-5" (/usr/local/bacula/volumes/IncrFile), all 
> previous data lost.
> 25-Dec 03:09 bacula-dir JobId 362083: Max Volume jobs=1 exceeded. Marking 
> Volume "IncrAuto-04-14761" as Used.
> 25-Dec 03:09 r730-01-fd JobId 362083: shell command: run ClientAfterJob 
> "/usr/local/sbin/snapshots-for-backup.sh destroy"
> 25-Dec 03:09 bacula-sd-04 JobId 362083: Elapsed time=00:00:14, Transfer 
> rate=32.35 K Bytes/second
> 25-Dec 03:09 bacula-sd-04 JobId 362083: Sending spooled attrs to the 
> Director. Despooling 10,154 bytes ...
> 25-Dec 03:09 bacula-dir JobId 362083: Bacula bacula-dir 13.0.3 (02May23):
>   Build OS:   amd64-portbld-freebsd14.0 freebsd 14.0-RELEASE-p1
>   JobId:  362083
>   Job:r730-01_snapshots.2023-12-25_03.05.00_24
>   Backup Level:   Incremental, since=2023-12-25 01:51:02
>   Client: "r730-01-fd" 9.6.7 (10Dec20) 
> amd64-portbld-freebsd14.0,freebsd,14.0-RELEASE-p1
>   FileSet:"r730-01 snapshots" 2023-12-25 01:30:49
>   Pool:   "IncrFile-04" (From Job IncPool override)
>   Catalog:"MyCatalog" (From Client resource)
>   Storage:"bacula-sd-04-IncrFile" (From Pool resource)
>   Scheduled time: 25-Dec-2023 03:05:00
>   Start time: 25-Dec-2023 03:09:01
>   End time:   25-Dec-2023 03:09:16
>   Elapsed time:   15 secs
>   Priority:   10
>   FD Files Written:   34
>   SD Files Written:   34
>   FD Bytes Written:   446,284 (446.2 KB)
>   SD Bytes Written:   453,016 (453.0 KB)
>   Rate:   29.8 KB/s
>   Software Compression:   None
>   Comm Line Compression:  56.7% 2.3:1
>   Snapshot/VSS:   no
>   Encryption: no
>   Accurate:   no
>   Volume name(s): IncrAuto-04-14761
>   Volume Session Id:  52
>   Volume Session Time:1703358455
>   Last Volume Bytes:  455,013 (455.0 KB)
>   Non-fatal FD errors:3
>   SD Errors:  0
>   FD termination status:  OK
>   SD termination status:  OK
>   Termination:Backup OK -- with warnings
> 
> 25-Dec 03:09 bacula-dir JobId 362083: Begin pruning Jobs older than 3 years .
> 25-Dec 03:09 bacula-dir JobId 362083: No Jobs found to prune.
> 25-Dec 03:09 bacula-dir JobId 362083: Begin pruning Files.
> 25-Dec 03:09 bacula-dir JobId 362083: No Files found to prune.
> 25-Dec 03:09 bacula-dir JobId 362083: End auto prune.
> 
> 
> I replaced that bacula9-client: 9.6.7_3 with bacula13-client: 13.0.3 and 
> reran the job.
> 
> The warnings are now listed.
> 
> 25-Dec 11:53 bacula-dir JobId 362099: Start Backup JobId 362099, 
> Job=r730-01_snapshots.2023-12-25_11.52.58_26
> 25-Dec 11:53 bacula-dir Job

Re: [Bacula-users] jobs intermittently stuck "Dir inserting Attributes" with long running query

2023-12-22 Thread Martin Simmons
> On Thu, 21 Dec 2023 19:40:57 +, Tom Hodder via Bacula-users said:
> 
> inspecting the bacula and mysql server during the slow jobs, I can see
> no particularly high io or cpu, except that the mysql server has 1 CPU
> stuck at 100% and there is a long running query:
> 
> SELECT Path.Path, File.Filename FROM File JOIN Path USING (PathId)
> JOIN b21197077 AS T ON (File.JobId = T.JobId AND File.FileIndex =
> T.FileIndex) WHERE File.Filename LIKE ':component_info_%' AND
> File.JobId IN (103,419);

This query looks like something related to restores or the .bvfs_get_bootstrap
bconsole command, not the backups.

Were you running some front end or GUI that was querying about jobids 103 and
419?

__Martin


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Restoring a file from an unknown backup

2023-12-14 Thread Martin Simmons
> On Thu, 14 Dec 2023 23:07:18 +1100, Gary R Schmidt said:
> 
> Fire up psql with the appropriate options (e.g. $ psql bacula bacula).

Or you can use the sqlquery command in bconsole.
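As an illustration, a query along these lines lists the jobs that contain a given file (table and column names follow recent Bacula catalog schemas, where the filename is stored in the File table; the filename itself is a placeholder):

```sql
-- Find which jobs backed up a particular file
SELECT Job.JobId, Job.Name, Job.StartTime, Path.Path, File.Filename
FROM File
JOIN Path USING (PathId)
JOIN Job  USING (JobId)
WHERE File.Filename = 'example.txt'
ORDER BY Job.StartTime DESC;
```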

__Martin


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Any suggestions for fail2ban jail for Bacula Director ?

2023-12-05 Thread Martin Simmons
AFAIK, incoming director connections only come from bconsole, monitors and
clients that use "Client Initiated Backup" or "Client Behind NAT" (Connect To
Director) in bacula-fd.conf.

So maybe you don't need to allow incoming connections from everywhere?
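For example, limiting incoming connections on port 9101 to known console and monitor hosts shrinks the attack surface before fail2ban even comes into play. A hedged nftables sketch (the addresses are placeholders, and the rest of the ruleset is assumed):

```
# /etc/nftables.conf fragment (addresses are examples)
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    # bconsole / monitor hosts only
    tcp dport 9101 ip saddr { 192.0.2.10, 192.0.2.11 } accept
  }
}
```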

__Martin


> On Mon, 04 Dec 2023 17:22:29 +, MylesDearBusiness via Bacula-users 
> said:
> 
> Hello,
> 
> I just installed Bacula director on one of my cloud servers.
> 
> I have set the firewall to allow traffic in/out of port 9101 to allow it 
> to be utilized to orchestrate remote backups as well.
> 
> What I want to do is to identify the potential attack surface and create 
> a fail2ban jail configuration.
> 
> Does anybody have an exemplar that I can work with?
> 
> Also, is there a way to simulate a failed login attempt with a tool such 
> as netcat?  I could possibly use PostMan and dig into the REST API spec, 
> but I was hoping the community would be able to shortcut this effort.
> 
> What say you?
> 
> Thanks,
> 
> 
> 
> 
> 
> 
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
> 


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Error closing volume

2023-12-02 Thread Martin Simmons
Maybe something else is using the space temporarily during the backup, which
makes it full and then not full when you look at it later?

Does anything write to /mnt/test or does it share disk space with anything
else?  Does it share space with Bacula's WorkingDirectory?

__Martin


> On Fri, 1 Dec 2023 20:51:57 +0100, Senor Pascual said:
> 
> Hello,
> 
> Thanks for the reply.
> 
> The disk holding these volumes has enough space. The volumes are filled and
> marked as Full (because I have set the limit with Maximum Volume Bytes).
> 
> But I have set it to automatically create a new volume within the same
> pool. Bacula should not return an error because a volume is full; this is the
> normal operation of my system and it has never given an error before.
> It is worth noting that the jobs where this happens (it does not always
> happen, some days yes and some days no) are marked as OK -- with warnings. I
> have tried to restore that type of job and it fails due to the problem with
> that volume.
> 
> The error itself seems logical and intuitive but with my current system, I
> do not understand it.
> 
> Thanks, best regards,
> 
> 
> 
> El vie, 1 dic 2023 a las 20:41, Bill Arlofski via Bacula-users (<
> bacula-users@lists.sourceforge.net>) escribió:
> 
> > On 12/1/23 12:32, Senor Pascual wrote:
> > > Hello everyone,
> > >
> >
> > Hello,
> >
> > The message is clear:
> > 8<
> > ERR=No space left on device.
> > 8<
> >
> > In a shell prompt, do:
> >
> > # df -h
> >
> > And you will see that the partition that `/mnt/test` is on is filled to
> > 100%.
> >
> >
> > Hope this helps,
> > Bill
> >
> > --
> > Bill Arlofski
> > w...@protonmail.com
> >
> > ___
> > Bacula-users mailing list
> > Bacula-users@lists.sourceforge.net
> > https://lists.sourceforge.net/lists/listinfo/bacula-users
> >
> 


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] HOME for client scripts

2023-11-23 Thread Martin Simmons
I've just tested it with the Bacula 15 Beta on FreeBSD 12.4 and found that the
value of HOME in the ClientRunBeforeJob script comes from when bacula-fd was
started and Bacula doesn't change it.

How are you starting Bacula 9.6.7?  Note that FreeBSD's system startup scripts
such as /etc/rc and /usr/sbin/service explicitly set HOME=/ so that might be
the cause of the difference and will also happen if Bacula 13.x is started
that way.

FWIW, make_catalog_backup.pl sets HOME=$wd while running pg_dump.  Maybe
you need to do something similar?
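A minimal sketch of such a ClientRunBeforeJob script that does not depend on how the daemon was started (the paths and the commented pg_dump line are assumptions, not part of Bacula itself):

```shell
#!/bin/sh
# Set HOME explicitly instead of inheriting it from bacula-fd's environment,
# so tools like pg_dump find files such as ~/.pgpass regardless of whether
# the daemon was started with HOME=/ by the system startup scripts.
HOME=/root
export HOME
echo "HOME is $HOME"
# pg_dump -U pgsql mydb > /var/db/bacula/mydb.sql   # hypothetical dump step
```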

__Martin


> On Thu, 23 Nov 2023 09:02:36 -0500, Dan Langille said:
> 
> Hello,
> 
> One of the features of a Job is running a script. I frequently use ClientRunBeforeJob to 
> invoke pg_dump - I'm sure others do similar.
> 
> For Bacula 9.6.7, when in that script, the value for $HOME is /
> 
> The UID is 0, i.e. root.  On FreeBSD, root's home directory is /root.
> 
> I'm not sure why that differs.  I've been told that Bacula 13.x does the 
> right thing and HOME is /root - I was looking through the commits trying to 
> find something which fixed this. I failed. I also checked the release notes 
> and GitLab issues; no mention.
> 
> Does anyone recall this change?
> 
> To aid in tracking down this issue, could you add "echo $HOME" to your 
> script. Is it / ?  Regardless, what version are you running and what HOME 
> directory is reported?
> 
> NOTE:
> 
> - I'm not asking for bug fix
> - I'm looking for a commit which made a deliberate change to the behavior
> - Knowing the history as to why it changed might be useful
> 
> Thank you.
> 
> -- 
>   Dan Langille
>   d...@langille.org
> 
> 
> 




Re: [Bacula-users] Ubuntu 22.04: bacula-fd from official repo cannot talk to src-compiled server on 20.04

2023-11-15 Thread Martin Simmons
>>>>> On Wed, 15 Nov 2023 16:47:34 +0100, Uwe Schuerkamp said:
> 
> Hi again,
> 
> On Tue, Nov 14, 2023 at 03:29:26PM +, Martin Simmons wrote:
> 
> 
> > The --with-openssl=..path-to-openssl-install.. option works for me (Debian
> > 11), where ..path-to-openssl-install.. is the path containing files like:
> > 
> > include/openssl/ssl.h
> > lib/libssl.so
> > 
> > Also, LD_LIBRARY_PATH needs to be set at runtime.
> > 
> 
> That's weird, this definitely did not work on my end when using the
> --with-openssl flag. Configure would pick up the openssl binary in
> /usr/bin, so I had to modify LD_LIBRARY_PATH and PATH and set these flags
> 
> LDFLAGS=-L/server/openssl-3.0.12/lib64
> CFLAGS=-I/server/openssl-3.0.12/include/
> 
> in order to successfully compile bacula using the new ssl libraries.

Yes, I noticed that configure picks up the openssl binary in /usr/bin, but
that is a separate issue.  The binary is used to generate random passwords
during configure, but it is not used by the compiled Bacula code.

__Martin




Re: [Bacula-users] Ubuntu 22.04: bacula-fd from official repo cannot talk to src-compiled server on 20.04

2023-11-14 Thread Martin Simmons
> On Tue, 14 Nov 2023 09:42:44 +0100, Uwe Schuerkamp said:
> 
> Anyway, I compiled openssl-3.0.12 from source on the 20.04 director /
> stored and provided the location to bacula's configure script using
> the "--with-openssl" directive in order to recompile it using the new
> libraries / binaries in /server/openssl-3.0.12 (the chosen installation
> location for the modern openssl).
> 
> Sadly, bacula's "configure" failed to pick up the correct location and
> insisted on using the system-provided binaries and libraries from the
> old, system-provided ssl installation, so I guess something is not
> right w/r to bacula's configure option for openssl.

The --with-openssl=..path-to-openssl-install.. option works for me (Debian
11), where ..path-to-openssl-install.. is the path containing files like:

include/openssl/ssl.h
lib/libssl.so

Also, LD_LIBRARY_PATH needs to be set at runtime.
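Putting the pieces of this thread together, a build sketch (the /server/openssl-3.0.12 prefix is taken from Uwe's setup and is purely illustrative):

```shell
# Illustrative sketch only -- adjust the prefix to your OpenSSL install.
OSSL=/server/openssl-3.0.12

# configure wants the prefix that contains include/openssl/ssl.h and
# lib/libssl.so (or lib64/ on some distributions):
./configure --with-openssl="$OSSL"
make && make install

# At runtime the daemons must be able to find the matching shared
# libraries, so export this before starting them:
export LD_LIBRARY_PATH="$OSSL/lib:$OSSL/lib64"
```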

__Martin




Re: [Bacula-users] fd lose connection

2023-11-08 Thread Martin Simmons
> On Wed, 8 Nov 2023 11:09:44 -0500, Josh Fisher via Bacula-users said:
> 
> On 11/7/23 19:26, Lionel PLASSE wrote:
> > I'm sorry, but there is no wi-fi or 5G in our factory, and I don't use my 
> > phone to back up my servers :) .
> > I was talking about ssh (scp) transfers just to show that I have no problem 
> > uploading big continuous data through this WAN using other tools. The WAN 
> > connection is quite stable.
> >
> > "So it is fine when the NIC is up. Since this is Windows,"
> > not Windows. I discarded the Windows-problem hypothesis by using a migration 
> > job, i.e. from Linux SD to Linux SD
> 
> OK. I see that now. You also tried without compression and without 
> encryption. Have you tried reducing Maximum Network Buffer Size back to 
> the default 32768?

Are you sure it is 32768?

I thought the default comes from this in bacula/src/baconfig.h:

#define DEFAULT_NETWORK_BUFFER_SIZE (64 * 1024)

> There must be some reason why the client seems to be 
> sending 30 bytes more than its Maximum Network Buffer Size. Bacula first 
> tries the Maximum Network Buffer Size, but if the OS does not accept 
> that size, then it adjusts the value down until the OS accepts it. Maybe 
> the actual buffer size gets calculated differently on Debian 12? Why is 
> the send size exceeding the buffer size? Or could there be a typo in the 
> Maximum Network Buffer Size setting on one side?

It's an interesting question.  OTOH, if the connection is using TLS, then the
underlying number of bytes transmitted doesn't match the application layer
anyway.

Also, can you explain why the network buffer size would cause data loss on a
TCP connection?

> 
> 
> >
> > Thanks for all, I will find out a solution
> > Best regards
> >
> > PLASSE Lionel | Networks & Systems Administrator
> > 221 Allée de Fétan
> > 01600 TREVOUX - FRANCE
> > Tel : +33(0)4.37.49.91.39
> > pla...@cofiem.fr
> > www.cofiem.fr | www.cofiem-robotics.fr
> >
> >   
> >
> >
> >
> >
> >
> > -Message d'origine-
> > De : Josh Fisher via Bacula-users 
> > Envoyé : mardi 7 novembre 2023 18:01
> > À : bacula-users@lists.sourceforge.net
> > Objet : Re: [Bacula-users] fd lose connection
> >
> >
> > On 11/7/23 04:34, Lionel PLASSE wrote:
> >> Hello ,
> >>
> >> Could encryption have any impact on my problem?
> >>
> >> I am testing without any encryption between SD/DIR/bconsole or FD and it 
> >> seems to be more stable (short jobs completed correctly; the longest job, 
> >> 750GB to migrate, is still running).
> >>
> >> My WAN connection seems to be quite good: I can transfer big and small raw 
> >> files over scp/ssh and don't see ping latency or trouble with the ipsec 
> >> connection.
> >
> > So it is fine when the NIC is up. Since this is Windows, the first thing to 
> > do is turn off power saving for the network interface device in Device 
> > Manager. Make sure that the NIC doesn't ever power down its PHY.
> > If any switch, router, or VPN doesn't handle energy-efficient Ethernet in 
> > the same way, then it can look like a dropped connection to the other side.
> >
> > Also, you don't say what type of WAN connection this is. Many wireless 
> > services, 5G, etc. can and will drop open sockets due to inactivity (or 
> > perceived inactivity) to free up channels.
> >
> >
> >> I also tried with NAT, not using IPSEC and putting Bacula SD & DIR
> >> directly in front of the WAN, and the same occurs (wrote X bytes but
> >> only Y accepted).
> >>
> >> I also tried a migration job, from SD to SD through the WAN instead of
> >> SD -> FD through the WAN, and the result was the same (this was to rule
> >> out the win32 FD):
> >>   - DIR and SD in the same LAN.
> >>   - Backup of the remote FD through the remote SD, both in the same LAN
> >> for a fast backup: step OK.
> >>   - Then migration from the remote SD to the SD in the DIR's LAN through
> >> the WAN, to move the volumes' physical media off-site: step NOK.
> >> The final goal is outsourcing volumes.
> >> I then discarded the gzip compression (just in case).
> >>
> >> The errors are quite disturbing :
> >> *   Error: lib/bsock.c:397 Wrote 65566 bytes to 
> >> client:192.168.0.4:9102, but only 65536 accepted
> >>   Fatal error: filed/backup.c:1008 Network send error to SD.
> >> ERR=Input/output error Or  (when increasing MaximumNetworkBuffer)
> >> *   Error: lib/bsock.c:397 Wrote 130277 bytes to 
> >> client:192.168.0.17:9102, but only 98304 accepted.
> >>   Fatal error: filed/backup.c:1008 Network send error to SD.
> >> ERR=Input/output error Or (Migration job)
> >> *   Fatal error: append.c:319 Network error reading from FD. 
> >> ERR=Erreur d'entrée/sortie
> >>   Error: bsock.c:571 Read expected 131118 got 114684 from
> >> Storage daemon:192.168.10.54:9103
> >>
> >> It looks like there is a gap between the send and receive buffers and, 
> >> looking at the source code, encryption could affect the buffer size.
> >> So I  think Ba

Re: [Bacula-users] Windows filesets setup...

2023-11-06 Thread Martin Simmons
> On Sun, 5 Nov 2023 11:56:24 +0100, Marco Gaiarin said:
> 
> or still i need to explicitly match all the path, like:
> 
> Options {
>   Signature = MD5
>   Ignore Case = yes
> 
>   WildDir= "C:/Program Files/Hocoma"
>   WildDir= "C:/Program Files (x86)/Hocoma"
>   Wild = "C:/Program Files/Hocoma/*"
>   Wild = "C:/Program Files (x86)/Hocoma/*"
> }

Yes, like the above.

__Martin




Re: [Bacula-users] Windows filesets setup...

2023-11-03 Thread Martin Simmons
> On Thu, 2 Nov 2023 13:08:30 +0100, Marco Gaiarin said:
> 
> OK, it worked and finally I've understood. Thanks! ;-)

Great to hear!

> > Alternatively, assuming you only have the two normal Program Files
> > directories, then you could list them both explicitly to avoid ending in *:
> 
> Sure, but in this way the same fileset on a 32bit Windows box emits a warning
> on every backup, because "C:/Program Files (x86)" does not exist on 32bit.
> 
> This, although minor, disturbs me...

I don't see why WildDir in the Options clause will cause a warning on 32bit
Windows (it will just never match).  It would only give a warning if you used
File = "C:/Program Files (x86)" at the start of the Include clause.

__Martin




Re: [Bacula-users] Windows filesets setup...

2023-11-01 Thread Martin Simmons
>>>>> On Tue, 31 Oct 2023 16:33:47 +0100, Marco Gaiarin said:
> 
> Mandi! Martin Simmons
>   In chel di` si favelave...
> 
> > The problem is that your second options clauses matches the directory
> > "C:/Program Files" so that is excluded and the first options clause is never
> > used.
> > Have a look at the example about "My Pictures" in
> > https://www.bacula.org/13.0.x-manuals/en/main/Configuring_Director.html#SECTION0022110
> > to see how to set it up.
> 
> OK, now i use:
> 
> FileSet {
>   Name = ArmeoStd
>   Description = "Backup dati Hocoma/Armeo"
>   Enable VSS = yes
> 
>   Include {
> File = "C:/"
> 
> Options {
>   Signature = MD5
>   Ignore Case = yes
> 
>   WildDir = "C:/Program Files*"
>   WildDir = "C:/Program Files*/Hocoma"
>   Wild = "C:/Program Files*/Hocoma/*"
> }
> 
> Options {
>   Exclude = yes
>   Ignore Case = yes
> 
>   Wild = "C:/*"
> }
>   }
> 
>   Include {
> File = "D:/"
> 
> Options {
>   Signature = MD5
>   Ignore Case = yes
> }
>   }
> }
> 
> 
> And it worked, but the backup also contains *ALL* the folders/directories in
> 'C:/Program Files' and 'C:/Program Files (x86)'. Only the folders, not the
> files. Files are correctly present only in the 'Hocoma' dir.

Yes, that is expected.  The problem with

  WildDir = "C:/Program Files*"

is that it matches all directories starting with "C:/Program Files" (see the
comment about using RegExDir in the example in the manual).

I think this will work:

  RegExDir = "^C:/Program Files[^/]*$"
  WildDir = "C:/Program Files*/Hocoma"
  Wild = "C:/Program Files*/Hocoma/*"

Alternatively, assuming you only have the two normal Program Files
directories, then you could list them both explicitly to avoid ending in *:

  WildDir = "C:/Program Files"
  WildDir = "C:/Program Files (x86)"
  WildDir = "C:/Program Files*/Hocoma"
  Wild = "C:/Program Files*/Hocoma/*"

> reading the manual in the link above, i supposed that the row:
> 
>   WildDir = "C:/Program Files*"
>   WildDir = "C:/Program Files*/Hocoma"
> 
> was needed to ''descend'' into the Hocoma dir, but this behaviour makes me
> suppose that probably:
> 
>   Wild = "C:/Program Files*/Hocoma/*"
> 
> suffices... I've tried removing them, and it does not work: the 'C:/' virtual
> file in bacula was empty.

Yes, the reason that a single Wild does not suffice is the same problem as I
mentioned originally.

__Martin




Re: [Bacula-users] Windows filesets setup...

2023-10-27 Thread Martin Simmons
The problem is that your second options clauses matches the directory
"C:/Program Files" so that is excluded and the first options clause is never
used.

Have a look at the example about "My Pictures" in
https://www.bacula.org/13.0.x-manuals/en/main/Configuring_Director.html#SECTION0022110
to see how to set it up.

__Martin


> On Thu, 26 Oct 2023 13:28:39 +0200, Marco Gaiarin said:
> 
> Still there are some things in Bacula that I really don't understand,
> evidently. ;-)
> 
> 
> I need to backup a windows box; i need to backup entirely the D: drive and
> partially the C: drive. Following documentation and examples, i've written:
> 
>  FileSet {
>Name = ArmeoStd
>Description = "Backup dati Hocoma/Armeo"
>Enable VSS = yes
>  
>Include {
>  File = "C:/"
>  
>  Options {
>Signature = MD5
>Ignore Case = yes
>  
>WildDir = "C:/Program Files*/Hocoma"
>  }
>  
>  Options {
>Exclude = yes
>  
>WildDir = "C:/*"
>  }
>}
>  
>Include {
>  File = "D:/"
>  
>  Options {
>Signature = MD5
>Ignore Case = yes
>  }
>}
>  }
> 
> 
> Backup on D: works as expected, on C: i got:
> 
>  cwd is: /
>  $ dir
>  drwxrwx--T   1 root root   12288  2023-10-23 08:45:20  C:/
>  drwxrwx--T   1 root root4096  2023-10-23 08:45:17  D:/
>  cd C:/
>  cwd is: C:/
>  dir:/
>  -rwxrwxrwx   1 root root1024  2023-10-18 13:09:20  C:/.rnd
>  -rwxrwx--T   1 root root  3126231040  2023-10-26 11:41:44  
> C:/hiberfil.sys
>  -rwxrwx--T   1 root root  4168310784  2023-10-26 11:41:44  
> C:/pagefile.sys
> 
> What am I doing wrong?!
> 
> 
> Thanks...
> 
> -- 
>   Sai come fanno i Serbi ad abbattere i caccia americani?
>   Disegnando una funivia sul terreno...
> 
> 
> 
> 
> 




Re: [Bacula-users] Ceph S3 support in Bacula Community 13.0.3

2023-10-18 Thread Martin Simmons
I hope it was worth all of the effort you put into it :-)

The issue with needing libs3-devel to build libs3 seems strange to me.
Moreover, wouldn't installing libs3-devel also install the EPEL libs3?

The error when running the s3 binary looks like it expects to find libs3 in
the shared library cache or in the standard places (see man ld.so).

__Martin


>>>>> On Mon, 16 Oct 2023 18:18:28 +, Levi Wilbert said:
> 
> Thanks for this. I got it figured out!
> 
> I was able to patch the Bacula libs3 bucket_multipath.c file w/ the 
> libs3-openssl3.patch from the EPEL libs3 RPM.
> 
> Bacula built ok, however, after installing, attempting to run the s3 command 
> still lead to:
> 
> [root@bacula-dev software]# s3
> s3: error while loading shared libraries: libs3.so.4: cannot open shared 
> object file: No such file or directory
> 
> After chasing some red herrings, and going through some troubleshooting, I 
> was able to get Bacula working w/ our S3 storage.
> 
> First, I discovered a few issues:
> 
> 1). I had not built Bacula w/ the "--with-s3" option pointing to the 
> directory where the library was installed (/usr/local/).
> 2). To build libs3 you need the libs3-devel package installed
> 3). I had not included the correct Bacula plugin folder in my bacula-sd. 
> Bacula has its own plugin folder, so I fixed w/ by updating the Plugin 
> Directory specification in bacula-sd.conf:
> 
> Storage { # definition of myself
>   Name = bacula-dev-sd
>   SDPort = 9103  # Director's port
>   WorkingDirectory = "/usr/local/bacula/working"
>   Pid Directory = "/usr/local/bacula/working"
>   Heartbeat Interval = 60
>   Plugin Directory = "/usr/local/bacula/lib/"
> }
> 
> So, the process that worked for me was:
> 
> Download Bacula libs3 plugin:
> wget https://www.bacula.org/downloads/libs3-20200523.tar.gz tar zxvf 
> libs3-20200523.tar.gz
> 
> Patch w/ libs3-openssl3.patch
> cd libs3-20200523/src/
> patch -p1 < libs3-openssl3.patch
> 
> Install libs3-devel package
> dnf install libs3-devel
> 
> Build libs3:
> cd ../
> make clean
> make distclean
> make install
> 
> Rebuild Bacula including the ./configure option --with-s3="/usr/lib/"
> cd bacula-13.0.3
> make clean
> make distclean
> ./configure  --with-s3="/usr/lib/"
> make
> make install
> 
> Build the Bacula cloud plugin:
> cd bacula-13.0.3/src/stored/
> make install-cloud
> 
> 
> Configure Cloud and Device in bacula-sd.conf:
> Device {
>   Name = cloud_device
>   Device Type = Cloud
>   Cloud = CLOUD # references "Cloud{}" object name
>   Archive Device = /backups/CLOUD
>   Maximum Part Size = 500 MB
>   Media Type = CloudType
>   LabelMedia = yes
>   Random Access = Yes;
>   AutomaticMount = yes
>   RemovableMedia = no
>   AlwaysOpen = no
>   Enabled = yes
> }
> 
> Cloud {
>   Name = CLOUD
>   Driver = "S3"
>   Host Name = ""
>   Bucket Name = ""
>   Access Key = ""
>   Secret Key = ""
>   Protocol = HTTPS
>   UriStyle = Path # Must be set for CEPH
> # Truncate Cache = AtEndOfJob
>   Truncate Cache = AfterUpload
>   Upload = EachPart
> }
> 
> Configure Storage, Pool, and Job in bacula-dir.conf to use the cloud device 
> (follows standard format)
> 
> Restart bacula
> 
> Run the configured job.
> 
> 
> 
> bconsole now shows the following in "stat dir":
> Device Cloud (S3 Plugin): "CLOUD" (/backups/CLOUD) is not open.
>Available Cache Space=264.6 GB
> 
> In the configured device "Archive Device", Bacula will store the backup taken 
> before uploading it to the cloud.
> 
> I actually wonder if I even need to build the libs3 plugin from Bacula, and 
> if I can just install libs3 and libs3-devel, build Bacula and call it a day, 
> but I haven't tested this yet. The above procedure works for me, however.
> 
> Now the only thing I'm working out is how to manage the local storage that is 
> used when taking cloud backups (I believe this is the "cache"). I'm toying 
> with the cache options to see how I can automatically free up this space, as 
> this could easily take up quite a bit of space if you're handling backups 
> from more than a couple machines. But, the cloud plugin is working, so all is 
> good.
> 
> 
> 
> 
> Levi Wilbert
> HPC & Linux Systems Administrator
> ARCC - Division of Research and Economic Development
> Information Technology Ctr 226
> 1000 E. University Avenue, Laramie, WY 82071-200
> 
> 
> 
> 

Re: [Bacula-users] StartTime and EndTime identical

2023-10-18 Thread Martin Simmons
That's strange.

Can you post the full log output for one of those jobs (that might contain
some clue)?
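As an aside, such jobs are easy to enumerate directly in the catalog; a sketch assuming the stock PostgreSQL schema:

```sql
-- List jobs whose recorded duration is zero or negative.
SELECT JobId, Level, StartTime, EndTime, JobStatus
FROM Job
WHERE EndTime <= StartTime
ORDER BY JobId;
```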

__Martin


> On Mon, 16 Oct 2023 12:45:02 +0200, Steffen Schwebel via Bacula-users 
> said:
> 
> Hello all,
> 
> the last few days I've been taking a closer look at our bacula backup
> servers.
> 
> I've noticed that some jobs will have identical StartTime and EndTime
> in the database.
> 
> +-------+-------+---------------------+---------------------+---------------------+------------+----------+-----------+------------+
> | JobId | Level | StartTime           | EndTime             | RealEndTime         | JobTDate   | JobFiles | JobStatus | JobBytes   |
> +-------+-------+---------------------+---------------------+---------------------+------------+----------+-----------+------------+
> | 36967 | I     | 2023-09-16 00:30:15 | 2023-09-16 00:30:15 | 2023-09-16 00:30:15 | 1694824215 | 274      | T         | 124047648  |
> | 37093 | F     | 2023-09-17 03:06:10 | 2023-09-17 03:06:10 | 2023-09-17 03:06:10 | 1694919970 | 87527    | T         | 4105551326 |
> | 37219 | I     | 2023-09-18 00:16:40 | 2023-09-18 00:16:39 | 2023-09-18 00:16:39 | 1694996199 | 236      | T         | 125878792  |
> | 37345 | I     | 2023-09-19 00:31:51 | 2023-09-19 00:31:51 | 2023-09-19 00:31:51 | 1695083511 | 340      | T         | 134566298  |
> | 37471 | I     | 2023-09-20 02:01:41 | 2023-09-20 02:01:41 | 2023-09-20 02:01:41 | 1695175301 | 413      | T         | 106195095  |
> | 37597 | I     | 2023-09-21 00:23:54 | 2023-09-21 00:23:54 | 2023-09-21 00:23:54 | 1695255834 | 258      | T         | 82447182   |
> | 37723 | I     | 2023-09-22 00:28:45 | 2023-09-22 00:28:45 | 2023-09-22 00:28:45 | 1695342525 | 348      | T         | 108880792  |
> +-------+-------+---------------------+---------------------+---------------------+------------+----------+-----------+------------+
> 
> Is this a misconfiguration?
> Does Bacula travel back in time?
> 
> Any hints are appreciated.
> 
> regards,
> Steffen Schwebel
> 
> 
> 




Re: [Bacula-users] Ceph S3 support in Bacula Community 13.0.3

2023-10-10 Thread Martin Simmons
It looks like Bacula's libs3 doesn't support OpenSSL 3.0.  However, you could
try applying the patch (attached) from the src.rpm of the EPEL9 libs3.

cd libs3-20200523
patch -p1 < ...path..to..libs3-openssl3.patch...

I think the Bacula provided libs3 contains changes that are not in the 4.1
version from EPEL (it contains the source at https://github.com/bji/libs3,
where the last change is from Apr 9, 2019, but I assume the Bacula provided
one is from May 23, 2020, based on the file name).

__Martin


>>>>> On Mon, 9 Oct 2023 17:21:35 +, Levi Wilbert said:
> 
> Thanks for the reply, it seems the "make clean" was what was holding it up.
> 
> Now it seems I've run into another error after running make install (after a 
> make clean):
> 
> build/obj/bucket.do: Compiling dynamic object
> build/obj/bucket_metadata.do: Compiling dynamic object
> src/bucket_metadata.c: In function ‘generate_content_md5’:
> src/bucket_metadata.c:489:5: error: ‘MD5_Init’ is deprecated: Since 
> OpenSSL 3.0 [-Werror=deprecated-declarations]
>   489 | MD5_Init(&mdContext);
>   | ^~~~
> In file included from src/bucket_metadata.c:31:
> /usr/include/openssl/md5.h:49:27: note: declared here
>49 | OSSL_DEPRECATEDIN_3_0 int MD5_Init(MD5_CTX *c);
>   |   ^~~~
> src/bucket_metadata.c:490:5: error: ‘MD5_Update’ is deprecated: Since 
> OpenSSL 3.0 [-Werror=deprecated-declarations]
>   490 | MD5_Update(&mdContext, data, size);
>   | ^~
> In file included from src/bucket_metadata.c:31:
> /usr/include/openssl/md5.h:50:27: note: declared here
>50 | OSSL_DEPRECATEDIN_3_0 int MD5_Update(MD5_CTX *c, const void *data, 
> size_t len);
>   |   ^~
> src/bucket_metadata.c:491:5: error: ‘MD5_Final’ is deprecated: Since 
> OpenSSL 3.0 [-Werror=deprecated-declarations]
>   491 | MD5_Final((unsigned char*)md5Buffer, &mdContext);
>   | ^
> In file included from src/bucket_metadata.c:31:
> /usr/include/openssl/md5.h:51:27: note: declared here
>51 | OSSL_DEPRECATEDIN_3_0 int MD5_Final(unsigned char *md, MD5_CTX *c);
>   |   ^
> cc1: all warnings being treated as errors
> make: *** [GNUmakefile:227: build/obj/bucket_metadata.do] Error 1
> 
> 
> 
> I'm wondering however, if the plugin is really the issue I'm running into. I 
> was able to install libs3 4.1 from EPEL, which I've downloaded and am able to 
> use with our Ceph object storage running the commands manually (s3 list, s3 
> get .
> 
> Is the Bacula provided driver different than the EPEL libs3?
> 
> 
> Levi Wilbert
> HPC & Linux Systems Administrator
> ARCC - Division of Research and Economic Development
> Information Technology Ctr 226
> 1000 E. University Avenue, Laramie, WY 82071-200
> 
> 
> 
> 
> 
> From: Martin Simmons 
> Sent: Monday, October 9, 2023 10:44 AM
> To: Levi Wilbert 
> Cc: bacula-users@lists.sourceforge.net 
> Subject: Re: [Bacula-users] Ceph S3 support in Bacula Community 13.0.3
> 
> Firstly, you also need to install whatever provides xml2-config (the libxml2
> development libraries).
> 
> Then try running make clean before make install (or just make).  That should
> remake the dependency files to find your curl/curl.h.
> 
> __Martin
> 
> 
>>>>> On Mon, 9 Oct 2023 14:30:33 +, Levi Wilbert said:
> >
> > BUMP
> >
> > Anyone have any guidance on this?
> >
> > Levi Wilbert
> > HPC & Linux Systems Administrator
> > ARCC - Division of Research and Economic Development
> > Information Technology Ctr 226
> > 1000 E. University Avenue, Laramie, WY 82071-200
> >
> >
> >
> >
> > 
> > From: Levi Wilbert 
> > Sent: Monday, October 2, 2023 4:43 PM
> > To: bacula-users@lists.sourceforge.net 
> > Subject: [Bacula-users] Ceph S3 support in Bacula Community 13.0.3
> >
> >
> > ◆ This message was sent from a non-UWYO address. Please exercise caution 
> > when clicking links or opening attachments from external sources.
> >
> > I'm running Bacula Community 13.0.3 in RHEL9, and having trouble getting 
> > the s3 plugin working w/ Ceph.
> >
> > I've done a bit of reading in the docs, and have been finding info 
> > conflicting/confusing info that S3 Ceph may not be supported under 
> > Community?
> >
> > On this page (for Bacula 11: 
> > https://www.bacula.org/bacula-release-11-0-3/), it says to download and 
> > compile the Cloud driv

Re: [Bacula-users] Ceph S3 support in Bacula Community 13.0.3

2023-10-09 Thread Martin Simmons
Firstly, you also need to install whatever provides xml2-config (the libxml2
development libraries).

Then try running make clean before make install (or just make).  That should
remake the dependency files to find your curl/curl.h.

__Martin


> On Mon, 9 Oct 2023 14:30:33 +, Levi Wilbert said:
> 
> BUMP
> 
> Anyone have any guidance on this?
> 
> Levi Wilbert
> HPC & Linux Systems Administrator
> ARCC - Division of Research and Economic Development
> Information Technology Ctr 226
> 1000 E. University Avenue, Laramie, WY 82071-200
> 
> 
> 
> 
> 
> From: Levi Wilbert 
> Sent: Monday, October 2, 2023 4:43 PM
> To: bacula-users@lists.sourceforge.net 
> Subject: [Bacula-users] Ceph S3 support in Bacula Community 13.0.3
> 
> 
> 
> I'm running Bacula Community 13.0.3 in RHEL9, and having trouble getting the 
> s3 plugin working w/ Ceph.
> 
> I've done a bit of reading in the docs, and have been finding 
> conflicting/confusing info suggesting that Ceph S3 may not be supported under 
> Community.
> 
> On this page (for Bacula 11: https://www.bacula.org/bacula-release-11-0-3/), 
> it says to download and compile the Cloud driver from here: 
> https://www.bacula.org/downloads/libs3-20200523.tar.gz
> 
> I downloaded this file, and untar'd it to a local folder.
> 
> When I attempt to build it w/ "rpmbuild -ta libs3-20200523.tar.gz", I get:
> [root@bacula-dev libs3-20200523]# rpmbuild -ta libs3-20200523.tar.gz
> error: Bad source: /root/software/libs3-20200523/libs3-trunk.tar.gz: No such 
> file or directory
> 
> When I try it w/ "make install" I get:
> [root@bacula-dev libs3-20200523]# make install
> make: xml2-config: No such file or directory
> make: xml2-config: No such file or directory
> make: *** No rule to make target 'curl/curl.h', needed by 
> 'build/obj/bucket.do'.  Stop.
> 
> I have libcurl-devel installed, and curl.h is on the system in 
> /usr/include/curl/curl.h.
> 
> I can use our Ceph S3 storage just fine using rclone, so there are system 
> drivers present, however, I've attempted configuring the cloud storage in 
> bacula-sd.conf:
> 
> # Pathfinder S3 - DEV
> Device {
>   Name = pathfinder_device
>   Device Type = Cloud
>   Cloud = PF_S3 # references "Cloud{}" object name
>   Archive Device = /backups/PF_S3
>   Maximum Part Size = 500 MB
>   Media Type = CloudType
>   LabelMedia = yes
>   Random Access = Yes
>   AutomaticMount = yes
>   RemovableMedia = no
>   AlwaysOpen = no
> }
> 
> Cloud {
>   Name = PF_S3
>   Driver = "S3"
>   Host Name = pathfinder.arcc.uwyo.edu
>   Bucket Name = ""
>   Access Key = ""
>   Secret Key = ""
>   Protocol = HTTPS
>   Upload = EachPart
>   UriStyle = Path # Must be set for CEPH
> }
> 
> After restarting Bacula w/ this config, I try running listing the cloud 
> volumes in the cloud w/ this cloud storage, but I get the following error:
> 3900 Error reserving device pathfinder_device cloud
> 
> 
> The documentation I've read thus far hasn't been incredibly clear, as far as 
> whether Ceph S3 is supported or not in the community edition, or if this is 
> something that can be added to an installation.
> 
> In any case, I'm unable to get our Ceph system hooked up to this server in 
> Bacula! Can anyone provide any insight on what's wrong?
> 
> Thank you.
> 
> 
> 
> Levi Wilbert
> HPC & Linux Systems Administrator
> ARCC - Division of Research and Economic Development
> Information Technology Ctr 226
> 1000 E. University Avenue, Laramie, WY 82071-200
> 
> 
> 
> 




Re: [Bacula-users] Adding missing foreign keys to PostgreSQL - avoiding dbcheck

2023-10-03 Thread Martin Simmons
>>>>> On Tue, 03 Oct 2023 09:35:27 -0400, Dan Langille said:
> 
> On Mon, Oct 2, 2023, at 11:17 AM, Martin Simmons wrote:
> >>>>>> On Sat, 30 Sep 2023 09:54:51 -0400, Dan Langille said:
> >> 
> >> Hello,
> >> 
> >> The Bacula PostgreSQL schema is missing several foreign keys (FK). Foreign 
> >> keys are not a new database concept; they've been around for decades. They 
> >> are reliable and robust.
> >> 
> >> Wednesday, I started a dbcheck on a Bacula database. Granted, that 
> >> database is 19 years old and this is the first time I've run dbcheck (as 
> >> far as I know). That dbcheck is still going. FYI, the dump to disk is 
> >> about 140GB; lots of cruft removal. 
> >> 
> >> When PostgreSQL was first added to Bacula, there was resistance to FK, and 
> >> I did not pursue the issue. Thus, it persists to this day. I hope to 
> >> change that.
> >> 
> >> I would like to take that development work back up (pun intended), and 
> >> start adding foreign keys back into Bacula, at least for PostgresQL. That 
> >> might remove the need for dbcheck (again, at least for Bacula on 
> >> PostgreSQL).
> >
> > What is the performance cost of foreign keys?
> 
> I'm replying so it does not appear as if I am ignoring you. Short answer: I 
> don't know. Yet. That is the purpose of my project.

OK, fair enough.

> 
> I can't answer that in a way which would sound satisfying. I have not started 
> the work. I have only my personal experience - My backups seem fast enough to 
> me.
> 
> It is easy enough to test. There are several ways to optimize foreign keys 
> usage.
> 
> >> For example, there is one index I have been using for years. I find it 
> >> referenced[1] in the 5.x documentation, but it is not part of the 
> >> catalog creation.
> >> 
> >> "file_jobid_idx" btree (jobid)
> >> 
> >> This index vastly improves the construction of the files, often going from 
> >> hours to seconds. I don't recall when that index was added here, but 
> >> building trees has never been an issue here.
> >
> > It was removed in this change:
> >
> > commit 740704c9c66d0b049a7cd548ac1204ef1aaf7356
> > Author: Eric Bollengier 
> > Date:   Mon May 11 17:11:40 2020 +0200
> >
> > BEE Backport bacula/src/cats/make_postgresql_tables.in
> > 
> > Does PostgreSQL use file_jpfid_idx for the query if you don't have
> > file_jobid_idx?
> 
> Testing will show that. I am not at that stage yet. I will be examining the 
> queries used and running them through the PostgreSQL 'EXPLAIN ANALYZE' 
> process. I'll post results at https://explain.depesz.com so progress can be 
> seen and compared. Others will be able to run the same non-destructive 
> commands on their own databases for comparison.
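[The EXPLAIN ANALYZE workflow described above can be sketched as follows. This is illustrative only: the query and column names follow the older Bacula PostgreSQL schema and are not necessarily a query Bacula itself issues.]

```sql
-- Run inside psql connected to the catalog database.
-- EXPLAIN ANALYZE executes the query and reports the actual plan and timings.
EXPLAIN ANALYZE
SELECT pathid, filenameid, lstat, md5
  FROM file
 WHERE jobid = 12345;
-- An "Index Scan using file_jobid_idx" (or file_jpfid_idx) node in the plan
-- means an index is being used; a "Seq Scan on file" node on a table with
-- a billion rows would explain hours-long tree builds.
```

As noted above, this is non-destructive and safe to run against a production catalog.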
> 
> I just checked my database and it has these row counts:
> 
> filename:22,232,549
> file: 1,208,708,804
> path: 8,340,411
> job: 97,139
> jobmedia:   331,379
> media:   12,848

Is this an old version?  The filename table shouldn't exist now.  The new
catalog format could make a big difference to the queries (and foreign key
performance).

__Martin


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Adding missing foreign keys to PostgreSQL - avoiding dbcheck

2023-10-02 Thread Martin Simmons
> On Sat, 30 Sep 2023 09:54:51 -0400, Dan Langille said:
> 
> Hello,
> 
> The Bacula PostgreSQL schema is missing several foreign keys (FK). Foreign 
> keys are not a new database concept; they've been around for decades. They 
> are reliable and robust.
> 
> Wednesday, I started a dbcheck on a Bacula database. Granted, that database 
> is 19 years old and this is the first time I've run dbcheck (as far as I 
> know). That dbcheck is still going. FYI, the dump to disk is about 140GB; 
> lots of cruft removal. 
> 
> When PostgreSQL was first added to Bacula, there was resistance to FK, and I 
> did not pursue the issue. Thus, it persists to this day. I hope to change 
> that.
> 
> I would like to take that development work back up (pun intended), and start 
> adding foreign keys back into Bacula, at least for PostgreSQL. That might 
> remove the need for dbcheck (again, at least for Bacula on PostgreSQL).
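[On a catalog of this size, one low-impact way to retrofit a foreign key in PostgreSQL is to add it as NOT VALID and validate it separately. This is a sketch only, with a made-up constraint name, and not necessarily how the project described above will do it.]

```sql
-- Adding the constraint NOT VALID skips checking existing rows, so the
-- ALTER needs only a brief lock; new and updated rows are checked from
-- this point on.
ALTER TABLE file
  ADD CONSTRAINT file_jobid_fkey
  FOREIGN KEY (jobid) REFERENCES job (jobid)
  NOT VALID;

-- Validation scans the existing rows but takes only a SHARE UPDATE
-- EXCLUSIVE lock on "file", so inserts from running backups can continue.
ALTER TABLE file VALIDATE CONSTRAINT file_jobid_fkey;
```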

What is the performance cost of foreign keys?


> For example, there is one index I have been using for years. I find it 
> referenced[1] in the 5.x documentation, but it is not part of the catalog 
> creation.
> 
> "file_jobid_idx" btree (jobid)
> 
> This index vastly improves the construction of the files, often going from 
> hours to seconds. I don't recall when that index was added here, but building 
> trees has never been an issue here.
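[For reference, the index quoted above can be recreated on a live catalog with something like the following sketch; CONCURRENTLY cannot run inside a transaction block.]

```sql
-- Build the index without taking a lock that blocks writes to "file";
-- the build itself can still take a long time on a billion-row table.
CREATE INDEX CONCURRENTLY file_jobid_idx ON file (jobid);
```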

It was removed in this change:

commit 740704c9c66d0b049a7cd548ac1204ef1aaf7356
Author: Eric Bollengier 
Date:   Mon May 11 17:11:40 2020 +0200

BEE Backport bacula/src/cats/make_postgresql_tables.in
 
Does PostgreSQL use file_jpfid_idx for the query if you don't have
file_jobid_idx?

__Martin




Re: [Bacula-users] Progressive Virtual Fulls...

2023-09-22 Thread Martin Simmons
> On Fri, 22 Sep 2023 12:23:29 +0200, Marco Gaiarin said:
> 
> I've not found info on this in the (current) 9.4 docs; I've followed:
> 
>   https://www.bacula.org/9.4.x-manuals/en/main/Migration_Copy.html
> 
> 
> ...
> 
> Ah. You defined TWO devices, with different media types; I've not found
> any notes about the need for different devices/media types; it seems a
> single pool can also be used...

It is mentioned in the bullet point "Bacula currently does only minimal
Storage conflict resolution" of the 9.4 manual page you listed above.

You definitely need two devices (one for reading, one for writing).  It
usually works if the media types are the same, but I've had one case of a
deadlock in a Copy job.

If you have one media type, then you can define the two devices as a virtual
autochanger like this in bacula-sd.conf:

#
# Virtual autochanger to allow multiple volumes to be used at once
#
Autochanger {
  Name = FileStorage
  Device = FileStorage-Dev1, FileStorage-Dev2
  Changer Command = ""
  Changer Device = /dev/null
}
Device {
  Name = FileStorage-Dev1
  Media Type = File
  Archive Device = /var/bacula/volumes
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
}
Device {
  Name = FileStorage-Dev2
  Media Type = File
  Archive Device = /var/bacula/volumes
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
}

and then just have a single storage definition in bacula-dir.conf:

Storage {
  Name = backupserver1-File
  Address = ...
  SDPort = 9103
  Password = "..."
  Device = FileStorage
  Media Type = File
  Maximum Concurrent Jobs = 2
}

__Martin




Re: [Bacula-users] TLS using certs with X509v3 extensions

2023-09-14 Thread Martin Simmons
> On Tue, 12 Sep 2023 08:41:42 -0400, Dan Langille said:
> 
> >  
> >> 
> >> I ask because yesterday I started running some copy jobs. The cert used by 
> >> bacula-sd was acceptable for receiving backups. It was not acceptable for 
> >> copy jobs.
> >> 
> >> 09-Sep 10:19 bacula-sd-04 JobId 358322: Error: openssl.c:68 Connect 
> >> failure: ERR=error:1417C086:SSL 
> >> routines:tls_process_client_certificate:certificate verify failed
> >> 09-Sep 10:19 bacula-sd-04 JobId 358322: Fatal error: bnet.c:75 TLS 
> >> Negotiation failed.
> >> 09-Sep 10:19 bacula-sd-04 JobId 358322: Fatal error: TLS negotiation 
> >> failed with FD at "10.55.0.7:27230"
> >> 09-Sep 10:19 bacula-sd-04 JobId 358322: Fatal error: Incorrect 
> >> authorization key from File daemon at client rejected.
> >> For help, please see: 
> >> http://www.bacula.org/rel-manual/en/problems/Bacula_Frequently_Asked_Que.html
> >> 09-Sep 10:19 bacula-sd-04 JobId 358322: Security Alert: Unable to 
> >> authenticate File daemon
> > 
> > I wonder if your SD connects to itself here, and fails to validate itself? 
> > The log above does mention an FD at 10.55.0.7. Does that FD component have 
> > a certificate? maybe there's mis-match with the CN of that certificate and 
> > the FDAddress directive in the bacula-fd.conf file?
> 
> There is no bacula-fd at 10.55.0.7 - it is not running and not configured. It 
> is bacula-sd only at that IP address.
> 
> Yes, bacula-sd-04 is at  10.55.0.7 - I don't know why FD is mentioned in the 
> error.
> 
> From the docs 
> (https://bacula.org/13.0.x-manuals/en/main/Migration_Copy.html): 
> 
> The Copy and the Migration jobs run without using the File daemon by copying 
> the data from the old backup Volume to a different Volume in a different Pool
> 
> My reading of that: an FD should not be involved here.

My guess is that Copy and Migration jobs work with the reading SD pretending
to be an FD to send data to the writing SD.

__Martin


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Restore job forward space volume - usb file volume

2023-09-04 Thread Martin Simmons
1642266413 FI=1
> 
> BUSTER_BKP-sd: match_bsr.c:498-9868 Enter match_all
> 
> BUSTER_BKP-sd: match_bsr.c:618-9868 OK match_volume=T1
> 
> BUSTER_BKP-sd: match_bsr.c:508-9868 OK bsr match bsr_vol=T1 read_vol=T1
> 
> BUSTER_BKP-sd: match_bsr.c:706-9868 match_voladdr: saddr=0 
> eaddr=1239074631492 recaddr=64728 sfile=0 efile=288 recfile=0
> 
> BUSTER_BKP-sd: match_bsr.c:709-9868 OK match voladdr=64728
> 
> BUSTER_BKP-sd: match_bsr.c:521-9868 Fail on sesstime. bsr=1649883149 
> rec=1642266413
> 
> BUSTER_BKP-sd: match_bsr.c:608-9868 Leave match all 0
> 
>  
> 
>  
> 
>  
> 
> The SD trace for a newer backup job restored instantly :  
> 
>  
> 
> BUSTER_BKP-sd: label.c:191-9867 VolHdr.VerNum=11 OK.
> 
> BUSTER_BKP-sd: label.c:211-9867 Compare Vol names: VolName=HEBDO1 hdr=HEBDO1
> 
>  
> 
> Volume Label:
> 
> Adata : 0
> 
> Id    : Bacula 1.0 immortal
> 
> VerNo : 11
> 
> VolName   : HEBDO1
> 
> PrevVolName   :
> 
> VolFile   : 0
> 
> LabelType : VOL_LABEL
> 
> LabelSize : 184
> 
> PoolName  : DIFF
> 
> MediaType : USB
> 
> PoolType  : Backup
> 
> HostName  : BUSTER-BKP
> 
> Date label written: 14-janv.-2022 19:58
> 
> BUSTER_BKP-sd: label.c:261-9867 Leave read_volume_label() VOL_OK
> 
> BUSTER_BKP-sd: file_dev.c:71-9867 Enter: virtual bool DEVICE::rewind(DCR*)
> 
> BUSTER_BKP-sd: label.c:274-9867 Call reserve_volume=HEBDO1
> 
> BUSTER_BKP-sd: vol_mgr.c:381-9867 enter reserve_volume=HEBDO1 drive="BACKUP" 
> (/BKP/bacubak)
> 
> BUSTER_BKP-sd: vol_mgr.c:286-9867 new Vol=HEBDO1 slot=0 at 7fb444071538 
> dev="BACKUP" (/BKP/bacubak)
> 
> BUSTER_BKP-sd: vol_mgr.c:548-9867 set in_use. vol=HEBDO1 dev="BACKUP" 
> (/BKP/bacubak)
> 
> BUSTER_BKP-sd: label.c:289-9867 Leave: virtual int 
> DEVICE::read_dev_volume_label(DCR*)
> 
> BUSTER_BKP-sd: acquire.c:246-9867 Got correct volume. VOL_OK: HEBDO1
> 
> BUSTER_BKP-sd: acquire.c:354-9867 dcr=7fb444070378 dev=7fb44c001318
> 
> BUSTER_BKP-sd: acquire.c:355-9867 MediaType dcr=USB dev=USB
> 
> BUSTER_BKP-sd: acquire.c:357-9867 Leave: bool acquire_device_for_read(DCR*)
> 
> BUSTER_BKP-sd: read_records.c:436-9867 Enter: BSR* 
> position_to_first_file(JCR*, DCR*, BSR*)
> 
> BUSTER_BKP-sd: match_bsr.c:231-9867 use_pos=1 repos=1
> 
> BUSTER_BKP-sd: match_bsr.c:618-9867 OK match_volume=HEBDO1
> 
> BUSTER_BKP-sd: read_records.c:451-9867 pos_to_first_file from addr=0 to 
> 3634588568
> 
> BUSTER_BKP-sd: file_dev.c:107-9867 = lseek to 3634588568
> 
> BUSTER_BKP-sd: read_records.c:455-9867 Leave: BSR* 
> position_to_first_file(JCR*, DCR*, BSR*)
> 
> BUSTER_BKP-sd: block.c:520-9867 Pos for read=3634588568 3634588568
> 
> BUSTER_BKP-sd: block.c:545-9867 Read() adata=0 vol=HEBDO1 nbytes=256000 
> pos=3634588568
> 
> BUSTER_BKP-sd: block_util.c:456-9867 set block=7fb44406fc40 adata=0 
> binbuf=255976
> 
>  
> 
>  
> 
> From: Radosław Korzeniewski  
> Sent: Thursday, 31 August 2023 18:30
> To: Lionel PLASSE 
> Cc: Martin Simmons ; bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] Restore job forward space volume - usb file volume
> 
>  
> 
> Hi,
> 
>  
> 
> On Mon, 28 Aug 2023 at 10:13, Lionel PLASSE  <mailto:pla...@cofiem.fr> wrote:
> 
> You're right, I said it was slow based on my own impression. 
> The Volume file is 1 TB. And the restoration job  was 65 GB. 
> 
> And unfortunately, I don't have the exact timing for each step, but I was 
> stuck in the "Forward spacing" step for half an hour before I stopped watching.
> 
>  
> 
> Could you share a restore BSR file for this restore? At this file you can 
> check what parts of the volume will be "fast-forwarded" which for a file on a 
> block device (usb disk) means Bacula just executes lseek(2) which takes 
> almost "no time"; and what Bacula will read block by block.
> 
>  
> 
> best regards
> 
> -- 
> 
> Radosław Korzeniewski
> rados...@korzeniewski.net <mailto:rados...@korzeniewski.net> 
> 
> 




Re: [Bacula-users] Restore job forward space volume - usb file volume

2023-08-30 Thread Martin Simmons
It is unlikely that smaller volumes will be faster.  I don't see any evidence
that it depends on the position of the data.

How long did the backup of these file take to run?

External USB drives can be slow, especially if connected by USB 2.0.

__Martin


>>>>> On Mon, 28 Aug 2023 07:53:06 +, Lionel PLASSE said:
> 
> You're right, I said it was slow based on my own impression. 
> The Volume file is 1 TB. And the restoration job  was 65 GB. 
> 
> And unfortunately, I don't have the exact timing for each step, but I was 
> stuck in the "Forward spacing" step for half an hour before I stopped watching.
> 
> Elapsed time:   1 hour 16 mins 30 secs
>   Files Expected: 26
>   Files Restored: 26
>   Bytes Restored: 63,581,189,026 (63.58 GB)
>   Rate:   13852.1 KB/s
>   FD Errors:  0
>   FD termination status:  OK
>   SD termination status:  OK
>   Termination:Restore OK
> BUSTER_BKP-sd JobId 9784: Elapsed time=01:11:20, Transfer rate=14.85 M 
> Bytes/second
> BUSTER_BKP-sd JobId 9784: Forward spacing Volume "T1" to addr=249122650420
> 
> OK, but as I remember it, this step was much faster to perform?
> 
> So, as I understand it, it depends on the position of the data in the restore 
> job in the volume. 
> Can it be faster if I use smaller but more numerous volumes (if a smaller 
> volume contains all the data of the restore job at once, of course)?
> 
> The overall Rate is good for me.
> 
> Thanks for the information,
> 
> 
> -Original Message-
> From: Martin Simmons  
> Sent: Friday, 25 August 2023 18:05
> To: Lionel PLASSE 
> Cc: bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] Restore job forward space volume - usb file volume
> 
>>>>> On Thu, 24 Aug 2023 09:32:38 +, Lionel PLASSE said:
> > 
> > Hello,
> > 
> > For usb harddrive and file media volume,   when I do a restore job I get a 
> > long waiting  step : "Forward spacing Volume "T1" to addr=249122650420" 
> > I remember I managed to configure the storage resource to quickly restore 
> > sdd drives.
> > 
> > Should I use fastforward, blockpositionning and HardwareEndOfFile for USB 
> > disks and file volume?
> > How to avoid this long forwarding when not a tape device?
> 
> How do you know it is slow in the forwarding?  It might have done that 
> quickly and is slow in the next operation.
> 
> How much data are you restoring?
> 
> You can try using 'status storage' and 'status client' a few times in 
> bconsole to monitor the progress of the restore.
> 
> __Martin
> 




Re: [Bacula-users] Restore job forward space volume - usb file volume

2023-08-25 Thread Martin Simmons
>>>>> On Fri, 25 Aug 2023 13:22:47 -0400, Josh Fisher said:
> 
> On 8/25/23 12:06, Martin Simmons wrote:
> >>>>>> On Thu, 24 Aug 2023 15:51:18 -0400, Josh Fisher via Bacula-users said:
> >>>>>>
> >>Probably you have compression and/or encryption turned on. In
> >> that case, Bacula cannot simply fseek to the offset. It has to
> >> decompress and/or decrypt all data in order to find it, making restores
> >> far slower than backups.
> > The compression and/or encryption is done within each block, so that doesn't
> > affect seek time.
> 
> 
> Interesting. So after decompression and decryption, does the 
> uncompressed/decrypted data contain partial blocks, or are the 
> compressed/encrypted blocks originally written with variable block sizes 
> so that the original data is handled as fixed-size blocks?

Except for the last block in a backup, I think file volumes use a fixed block
size (the default is 64512 bytes), but it doesn't really matter.

The compression/encryption occurs within the records, which are then packed
into blocks.  If you run 'bls -v -v -k ..path.to.a.file.volume..' then it will
show the blocks and records like this:

Blk=0 blen=246 bVer=2 SessId=16 SessTim=1691737690
bls: block_util.c:105-0 Dump block  800d68200: adata=0 size=246 BlkNum=0
   Hdrcksum=8adb2881 cksum=8adb2881
bls: block_util.c:130-0Rec: VId=16 VT=1691737690 FI=VOL_LABEL Strm=0 
len=210 reclen=0
Blk=1 blen=64512 bVer=2 SessId=16 SessTim=1691737690
bls: block_util.c:105-0 Dump block  800d68200: adata=0 size=64512 BlkNum=1
   Hdrcksum=d01c4958 cksum=d01c4958
bls: block_util.c:130-0Rec: VId=16 VT=1691737690 FI=SOS_LABEL Strm=98043 
len=177 reclen=0
bls: block_util.c:130-0Rec: VId=16 VT=1691737690 FI=1 Strm=UATTR len=102 
reclen=0
bls: block_util.c:130-0Rec: VId=16 VT=1691737690 FI=1 Strm=GZIP len=226 
reclen=0
bls: block_util.c:130-0Rec: VId=16 VT=1691737690 FI=1 Strm=MD5 len=16 
reclen=0
bls: block_util.c:130-0Rec: VId=16 VT=1691737690 FI=2 Strm=UATTR len=104 
reclen=0
bls: block_util.c:130-0Rec: VId=16 VT=1691737690 FI=2 Strm=GZIP len=1633 
reclen=0
bls: block_util.c:130-0Rec: VId=16 VT=1691737690 FI=2 Strm=MD5 len=16 
reclen=0
bls: block_util.c:130-0Rec: VId=16 VT=1691737690 FI=3 Strm=UATTR len=113 
reclen=0
bls: block_util.c:130-0Rec: VId=16 VT=1691737690 FI=3 Strm=GZIP len=5748 
reclen=0
bls: block_util.c:130-0Rec: VId=16 VT=1691737690 FI=3 Strm=GZIP len=5978 
reclen=0
bls: block_util.c:130-0Rec: VId=16 VT=1691737690 FI=3 Strm=GZIP len=4677 
reclen=0
bls: block_util.c:130-0Rec: VId=16 VT=1691737690 FI=3 Strm=GZIP len=4378 
reclen=0
bls: block_util.c:130-0Rec: VId=16 VT=1691737690 FI=3 Strm=GZIP len=5937 
reclen=0
bls: block_util.c:130-0Rec: VId=16 VT=1691737690 FI=3 Strm=GZIP len=4950 
reclen=0
bls: block_util.c:130-0Rec: VId=16 VT=1691737690 FI=3 Strm=GZIP len=6003 
reclen=0
bls: block_util.c:130-0Rec: VId=16 VT=1691737690 FI=3 Strm=GZIP len=5222 
reclen=0
bls: block_util.c:130-0Rec: VId=16 VT=1691737690 FI=3 Strm=GZIP len=4809 
reclen=0
bls: block_util.c:130-0Rec: VId=16 VT=1691737690 FI=3 Strm=GZIP len=5911 
reclen=0
bls: block_util.c:130-0Rec: VId=16 VT=1691737690 FI=3 Strm=GZIP len=5435 
reclen=0
bls: block_util.c:130-0Rec: VId=16 VT=1691737690 FI=3 Strm=GZIP len=5339 
reclen=0
Blk=2 blen=64512 bVer=2 SessId=16 SessTim=1691737690
bls: block_util.c:105-0 Dump block  800d68200: adata=0 size=64512 BlkNum=2
   Hdrcksum=d28c863e cksum=d28c863e
bls: block_util.c:130-0Rec: VId=16 VT=1691737690 FI=3 Strm=contGZIP 
len=2526 reclen=0
bls: block_util.c:130-0Rec: VId=16 VT=1691737690 FI=3 Strm=GZIP len=5652 
reclen=0
bls: block_util.c:130-0Rec: VId=16 VT=1691737690 FI=3 Strm=GZIP len=6024 
reclen=0
bls: block_util.c:130-0Rec: VId=16 VT=1691737690 FI=3 Strm=GZIP len=6674 
reclen=0
bls: block_util.c:130-0Rec: VId=16 VT=1691737690 FI=3 Strm=GZIP len=4829 
reclen=0
bls: block_util.c:130-0Rec: VId=16 VT=1691737690 FI=3 Strm=MD5 len=16 
reclen=0
bls: block_util.c:130-0Rec: VId=16 VT=1691737690 FI=4 Strm=UATTR len=121 
reclen=0
bls: block_util.c:130-0Rec: VId=16 VT=1691737690 FI=4 Strm=GZIP len=19296 
reclen=0
bls: block_util.c:130-0Rec: VId=16 VT=1691737690 FI=4 Strm=GZIP len=23650 
reclen=0

__Martin




Re: [Bacula-users] Restore job forward space volume - usb file volume

2023-08-25 Thread Martin Simmons
> On Thu, 24 Aug 2023 15:51:18 -0400, Josh Fisher via Bacula-users said:
> 
> On 8/24/23 05:32, Lionel PLASSE wrote:
> > Hello,
> >
> > For usb harddrive and file media volume,   when I do a restore job I get a 
> > long waiting  step : "Forward spacing Volume "T1" to addr=249122650420"
> > I remember I managed to configure the storage resource to quickly restore 
> > sdd drives.
> >
> > Should I use fastforward, blockpositionning and HardwareEndOfFile for USB 
> > disks and file volume?
> > How to avoid this long forwarding when not a tape device?
> 
> 
> I don't think any of those affect file type storage (random access 
> devices).

Agreed.

>   Probably you have compression and/or encryption turned on. In 
> that case, Bacula cannot simply fseek to the offset. It has to 
> decompress and/or decrypt all data in order to find it, making restores 
> far slower than backups.

The compression and/or encryption is done within each block, so that doesn't
affect seek time.

__Martin




Re: [Bacula-users] Restore job forward space volume - usb file volume

2023-08-25 Thread Martin Simmons
> On Thu, 24 Aug 2023 09:32:38 +, Lionel PLASSE said:
> 
> Hello,
> 
> For usb harddrive and file media volume,   when I do a restore job I get a 
> long waiting  step : "Forward spacing Volume "T1" to addr=249122650420" 
> I remember I managed to configure the storage resource to quickly restore sdd 
> drives.
> 
> Should I use fastforward, blockpositionning and HardwareEndOfFile for USB 
> disks and file volume?
> How to avoid this long forwarding when not a tape device?

How do you know it is slow in the forwarding?  It might have done that quickly
and is slow in the next operation.

How much data are you restoring?

You can try using 'status storage' and 'status client' a few times in bconsole
to monitor the progress of the restore.

__Martin




Re: [Bacula-users] Dynamic file list

2023-08-24 Thread Martin Simmons
See the documentation for "less-than sign" in
https://www.bacula.org/13.0.x-manuals/en/main/Configuring_Director.html for
how to make it read the file at backup time.
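[Roughly, per that documentation page, the fix is to change '@' (which splices the list in when the Director parses its configuration) to a File directive with a leading '<' (which makes the Director re-read the list each time the job runs). A sketch of the FileSet from the question, adjusted accordingly:]

FileSet {
  Name = "FileSet_COPY2TAPE"
  Include {
    Options {
      signature=MD5
      Sparse = yes
    }
    # '<' : the Director reads the list when each job starts, so hourly
    # updates to the file are picked up without restarting the Director.
    File = "</Backup/bacula_backup.list"
  }
}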

__Martin


> On Thu, 24 Aug 2023 13:15:47 +, Yakup Kaya said:
> 
> Hello everybody,
> 
> 
> I am trying to backup some files created in the last 2 days, and I am trying 
> to do it with the following fileset. The content of the file 
> "bacula_backup.list" mentioned in the fileset is dynamic, and populated every 
> hour. The problem is, bacula reads that file only when the director starts, 
> and afterwards it does not update the file list until the director is 
> restarted. Does bacula have another mechanism to take backup of a list of 
> files, which is changing regularly? Restarting services is not an option I 
> prefer. Any other idea is also welcome.
> 
> 
> FileSet {
>   Name = "FileSet_COPY2TAPE"
>   Include {
> Options {
>   signature=MD5
>   Sparse = yes
> }
> @/Backup/bacula_backup.list
>   }
> }
> 
> 
> Kind regards,
> 
> Yakup Kaya
> 




Re: [Bacula-users] TR: Copying jobs from one SD to another

2023-08-21 Thread Martin Simmons
bcopy doesn't update the catalog, so restore would be more difficult (it would
require bscan as well).

__Martin


> On Mon, 21 Aug 2023 07:56:44 +, Lionel PLASSE said:
> 
> hello
> wouldn't the bcopy tool and a tweaked bsr file have done the job?
> 
> (just a question, I don't have the answer :) )
> 
> 
> 
> PLASSE Lionel | Networks & Systems Administrator 
> 221 Allée de Fétan
> 01600 TREVOUX - FRANCE | Acces map
> Tel : +33(0)4.37.49.91.39
> pla...@cofiem.fr
> www.cofiem.fr | www.cofiem-robotics.fr
> 
> 
> 
> 
> 
> -Original Message-
> From: Dan Langille  
> Sent: Sunday, 20 August 2023 22:54
> To: bacula-users 
> Subject: [Bacula-users] Copying jobs from one SD to another
> 
> Hello,
> 
> I've started work on migrating backups from one SD with 45TB of backups (of 
> which 18TB are just Catalog).
> 
> My goals:
> 
> * copy over the latest backup for each job - some may be old for jobs no 
> longer run
> * copy over the past 12 months of full backups.
> 
> I'm doing this via SQL query. There will be overlap between the above two 
> goals, however, the query caters for that.
> 
> Details at: 
> https://dan.langille.org/2023/08/20/bacula-copying-the-latest-jobs-over-from-one-sd-to-another/
> 
> I've been estimating space required, and have some ideas to reduce the number 
> of Catalog backups I keep on hand.
> 
> At 160GB for each catalog backup, it may be time for me to run dbcheck in 
> batch mode.
> 
> — 
> Dan Langille
> http://langille.org/
> 
> 
> 
> 
> 
> 
> 
> 




Re: [Bacula-users] Problem with grant_mysql_privileges

2023-08-02 Thread Martin Simmons
> On Wed, 2 Aug 2023 12:21:49 -0400, Phil Stracchino said:
> 
> On 8/2/23 11:12, Graham Dicker via Bacula-users wrote:
> > Hello
> > 
> > I am installing Bacula 13.0.2 on Opensuse 15.5 and get this problem when I 
> > run
> > grant_mysql_privileges:
> > 
> > ERROR 1064 (42000) at line 3: You have an error in your SQL syntax; check 
> > the
> > manual that corresponds to your MariaDB server version for the right syntax 
> > to
> > use near '%{db_user}@"%"' at line 1
> > 
> > Database version 10.6.14-MariaDB
> > 
> > I guess it's complaining about the statement
> > 
> > db_user=${db_user:-bacula}
> > 
> > Can anyone help with this please?
> 
> 
> What does the line 'echo "Created MySQL database user: ${db_user}"' 
> report as the value of ${db_user}?

It looks like a bug in the script to me (maybe % instead of $):

if $bindir/mysql $* -u root -f 

Re: [Bacula-users] Copying jobs from one tape to another

2023-08-01 Thread Martin Simmons
> On Mon, 31 Jul 2023 21:41:15 +, Bill Arlofski via Bacula-users said:
> 
> On 7/31/23 00:55, Matlink wrote:
> > If you have a read error on that tape, I guess there is little you can do.
> > You should try migration jobs, but that won't give you an exact copy of the 
> > tape.
> > You could also do a byte copy of the tape using dd for instance, but you 
> > will also copy errors.
> >
> 
> I would opt to make this as simple as possible, and omit Bacula from the 
> picture entirely.
> 
> 
> # dd if=/dev/nst0 of=/dev/nst1

Will that copy the whole tape?  I think it will only copy the first tape file
(up to the Device Resource's Maximum File Size).

__Martin




Re: [Bacula-users] How to verify downloaded Bacula files

2023-07-31 Thread Martin Simmons
I think the keys/fingerprints are here (from the DOWNLOADS menu on
https://www.bacula.org/):
https://www.bacula.org/bacula-distribution-verification-public-keys/

To unsubscribe, see the List-Unsubscribe link in the email headers (or
https://www.bacula.org/support/email-lists/).

__Martin


> On Mon, 31 Jul 2023 08:42:22 +0100 (BST), GRAHAM DICKER via Bacula-users 
> said:
>
> 
>  I find I have reasons to want to verify that the gzip file I have downloaded 
> is the right file. I downloaded the gzip file for Bacula 11.0.6 and the 
> signature file. But the instructions I have been following to verify the file 
> say that I should compare the fingerprint with that given on the web site. I 
> can't find it on the web site. Does anyone know where I can find it? 
> 
>  Also, how do I unsubscribe my old email address? I now get two copies of 
> each posting. I can receive but not send from the old address. 
> 
> 
> 
>  Thank you 
> 
> 
> 
>  Graham Dicker 




Re: [Bacula-users] Windows PC's

2023-06-19 Thread Martin Simmons
Have you looked at the "My Pictures directory inside the My Documents
directory" example in
https://www.bacula.org/13.0.x-manuals/en/main/Configuring_Director.html ?

__Martin


> On Mon, 19 Jun 2023 06:55:29 +, IT Manager via Bacula-users said:
> 
> I've got to backup a couple of folders on 2 windows PCs. The windows client 
> is installed and connected to the Linux Bacula server, but my problem is the 
> file sets.
> 
> I only need to backup the Documents folder for each user in C:\Users. I've 
> tried with both RegexDIR and WildDIR and can't seem to get the syntax right 
> for either. Having read through various pages online, I've only managed to 
> confuse myself (easily done)!
> 
> For windows, which should I be using, RegexDIR or WildDIR and what syntax?
> 
> Thanks in advance.
> 
> Andy Baker
> 
> 
> 




Re: [Bacula-users] Q: Why is this job queued and not run concurrently although MaximumConcurrentJobs gt 1, AllowMixedPriority=yes and more drives available in autochanger?

2023-04-21 Thread Martin Simmons
All jobs must have an associated client with suitable MaximumConcurrentJobs,
even if they don't use a client.  It is a feature :-(
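[So the fix is to raise Maximum Concurrent Jobs on the Client resource the Admin job's JobDefs names, e.g. in bacula-dir.conf. A sketch with placeholder values, mirroring the thread's configuration:]

Client {
  Name = machine1
  Address = ...
  Password = "..."
  Catalog = MyCatalog
  # Admin jobs count against this limit even though they never contact
  # the FD, so leaving it at 1 serializes them with real backup jobs.
  Maximum Concurrent Jobs = 5
}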

__Martin


>>>>> On Thu, 20 Apr 2023 21:13:00 +0200, Justin Case said:
> 
> This seemed to say that the director’s client limit MaximumConcurrentJobs = 1 
> was hit. When I changed it to MaximumConcurrentJobs = 5 I was able to run the 
> Admin job concurrently. 
> 
> I still don’t understand why this job would count against the client job 
> limit, as I thought that it is a job that does something on the SD, not on 
> the client / FD. Do you understand this?
> 
> > On 20. Apr 2023, at 20:55, Justin Case  wrote:
> > 
> > Hi Martin,
> > 
> >> On 20. Apr 2023, at 20:38, Martin Simmons  wrote:
> >> 
> >> What is the output of the "status dir" command when the Admin job is 
> >> waiting?
> >> 
> > it says for the Admin job: is waiting on max Client jobs
> > what does that mean?
> > 
> >> When you say "Both jobs have set AllowMixedPriority = yes." do you mean all
> >> jobs that are running at the time you want to Admin job to run?
> > 
> > For now I have  job running for testing, and the Admin job, and both have 
> > AllowMixedPriority = yes.
> > 
> >> 
> >>>>>>> On Thu, 20 Apr 2023 12:14:57 +0200, Justin Case said:
> >>> 
> >>> Greetings to all,
> >>> 
> >>> I have the simple Admin job "truncate-pools-all” (see further down) and I 
> >>> would like to be able to run it concurrently while some backup job 
> >>> “backup1" (see further down)  is running. Lets say backup jobs have 
> >>> Priority = 20.
> >>> The Runscript Console command has Priority = 10 and uses drive number 9, 
> >>> which is very likely not in use when the Admin job is started. The backup 
> >>> jobs usually use drive number 0. Both jobs have set AllowMixedPriority = 
> >>> yes.
> >>> While I can successfully run this command in bconsole concurrently when a 
> >>> backup job is already running, when starting the Admin job the Bacula 
> >>> queuing algorithm puts this Admin job in the queue and it is waiting 
> >>> until the currently running backup job has finished. My understanding was 
> >>> that this is normal behaviour when AllowMixedPriority = no (default). 
> >>> However, I have explicitly enabled AllowMixedPriority and still it does 
> >>> not work. The MaximumConcurrentJobs are 5 or 20 in different components, 
> >>> except for the SD file autochanger drives, there it is set to 1.
> >>> 
> >>> My first guess would be, that somehow the SD does not automagically make 
> >>> use of the available unoccupied drives of the autochanger (although the 
> >>> default behaviour should be AutoSelect = yes). So it tells the director 
> >>> that the drive is busy and then the director makes the Admin job wait.
> >>> But I could also be wrong, as I am not an expert on Bacula topics.
> >>> 
> >>> What would I need to change to get this to work as expected and described 
> >>> at the top of this mail?
> >>> 
> >>> Thanks for considering my question and have a good time,
> >>> J/C
> >>> 
> >>> 
> >>> Job {
> >>> Name = "truncate-pools-all"
> >>> Type = "Admin"
> >>> JobDefs = "default1"
> >>> Enabled = no
> >>> Runscript {
> >>> RunsOnClient = no
> >>> RunsWhen = "Before"
> >>> Console = "truncate volume allpools storage=unraid-tier1-storage drive=9"
> >>> }
> >>> Priority = 10
> >>> AllowDuplicateJobs = no
> >>> AllowMixedPriority = yes
> >>> }
> >>> 
> >>> JobDefs {
> >>> Name = "default1"
> >>> Type = "Backup"
> >>> Level = "Full"
> >>> Messages = "Standard"
> >>> Pool = "default1"
> >>> FullBackupPool = "default1"
> >>> IncrementalBackupPool = "default1"
> >>> Client = “machine1"
> >>> Fileset = "EmptyFileset"
> >>> MaxFullInterval = 2678400
> >>> SpoolAttributes = yes
> >>> Priority = 20
> >>> AllowIncompleteJobs = no
> >>> Accurate = yes
> >>> AllowDuplicateJobs = no
> >>> }
> >>> 
> >>> This is the backup job that is already running:
> >>> 
> >>> Job {
> >>> Name = “backup1"
> >>> Pool = “pool1"
> >>> FullBackupPool = “pool1"
> >>> IncrementalBackupPool = “pool1"
> >>> Fileset = “fs1"
> >>> Schedule = “schd1"
> >>> JobDefs = “default2"
> >>> Enabled = yes
> >>> AllowIncompleteJobs = no
> >>> AllowDuplicateJobs = no
> >>> AllowMixedPriority = yes
> >>> }
> >>> 
> >>> JobDefs {
> >>> Name = “default2"
> >>> Type = "Backup"
> >>> Level = "Full"
> >>> Messages = "Standard"
> >>> Pool = "default1"
> >>> Client = “machine1"
> >>> Fileset = "EmptyFileset"
> >>> Schedule = “sched2"
> >>> Priority = 20
> >>> Accurate = yes
> >>> }
> >>> 
> >>> 
> >>> 
> >>> ___
> >>> Bacula-users mailing list
> >>> Bacula-users@lists.sourceforge.net
> >>> https://lists.sourceforge.net/lists/listinfo/bacula-users
> >>> 
> >> 
> > 
> 
> 


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Q: Why is this job queued and not run concurrently although MaximumConcurrentJobs gt 1, AllowMixedPriority=yes and more drives available in autochanger?

2023-04-20 Thread Martin Simmons
What is the output of the "status dir" command when the Admin job is waiting?

When you say "Both jobs have set AllowMixedPriority = yes." do you mean all
jobs that are running at the time you want the Admin job to run?

__Martin


> On Thu, 20 Apr 2023 12:14:57 +0200, Justin Case said:
> 
> Greetings to all,
> 
> I have the simple Admin job "truncate-pools-all" (see further down) and I 
> would like to be able to run it concurrently while some backup job "backup1" 
> (see further down) is running. Let's say backup jobs have Priority = 20.
> The Runscript Console command has Priority = 10 and uses drive number 9, 
> which is very likely not in use when the Admin job is started. The backup 
> jobs usually use drive number 0. Both jobs have set AllowMixedPriority = yes.
> While I can successfully run this command in bconsole concurrently when a 
> backup job is already running, when starting the Admin job the Bacula queuing 
> algorithm puts this Admin job in the queue and it is waiting until the 
> currently running backup job has finished. My understanding was that this is 
> normal behaviour when AllowMixedPriority = no (default). However, I have 
> explicitly enabled AllowMixedPriority and still it does not work. The 
> MaximumConcurrentJobs are 5 or 20 in different components, except for the SD 
> file autochanger drives, there it is set to 1.
> 
> My first guess would be, that somehow the SD does not automagically make use 
> of the available unoccupied drives of the autochanger (although the default 
> behaviour should be AutoSelect = yes). So it tells the director that the 
> drive is busy and then the director makes the Admin job wait.
> But I could also be wrong, as I am not an expert on Bacula topics.
> 
> What would I need to change to get this to work as expected and described at 
> the top of this mail?
> 
> Thanks for considering my question and have a good time,
>  J/C
> 
> 
> Job {
> Name = "truncate-pools-all"
> Type = "Admin"
> JobDefs = "default1"
> Enabled = no
> Runscript {
> RunsOnClient = no
> RunsWhen = "Before"
> Console = "truncate volume allpools storage=unraid-tier1-storage drive=9"
> }
> Priority = 10
> AllowDuplicateJobs = no
> AllowMixedPriority = yes
> }
> 
> JobDefs {
>   Name = "default1"
>   Type = "Backup"
>   Level = "Full"
>   Messages = "Standard"
>   Pool = "default1"
>   FullBackupPool = "default1"
>   IncrementalBackupPool = "default1"
>   Client = "machine1"
>   Fileset = "EmptyFileset"
>   MaxFullInterval = 2678400
>   SpoolAttributes = yes
>   Priority = 20
>   AllowIncompleteJobs = no
>   Accurate = yes
>   AllowDuplicateJobs = no
> }
> 
> This is the backup job that is already running:
> 
> Job {
>   Name = "backup1"
>   Pool = "pool1"
>   FullBackupPool = "pool1"
>   IncrementalBackupPool = "pool1"
>   Fileset = "fs1"
>   Schedule = "schd1"
>   JobDefs = "default2"
>   Enabled = yes
>   AllowIncompleteJobs = no
>   AllowDuplicateJobs = no
>   AllowMixedPriority = yes
> }
> 
> JobDefs {
>   Name = "default2"
>   Type = "Backup"
>   Level = "Full"
>   Messages = "Standard"
>   Pool = "default1"
>   Client = "machine1"
>   Fileset = "EmptyFileset"
>   Schedule = "sched2"
>   Priority = 20
>   Accurate = yes
> }
> 
> 
> 
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
> 


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Deleting Obsolete Volumes

2023-04-14 Thread Martin Simmons
The problem is that the catalog doesn't record which jobs are on each volume.
It uses the jobmedia table to record which files of a job are on each volume
(in groups between firstindex and lastindex).  As a result, if a job has no
files then it has no rows in the jobmedia table and hence is invisible to
'query 14'.
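
As a hedged illustration of this blind spot (table and column names as used by
recent Bacula catalogs; the query itself is not from the thread), the volumes
that 'query 14' cannot associate with any job can be listed directly:

```sql
-- Volumes with no JobMedia rows at all: exactly the ones 'query 14'
-- cannot tie to a job, including label-only volumes written by
-- zero-file incr/diff jobs.
SELECT m.VolumeName, m.VolBytes, m.LastWritten
FROM Media m
LEFT JOIN JobMedia jm ON jm.MediaId = m.MediaId
WHERE jm.MediaId IS NULL;
```

A deletion script could use this as an extra guard before removing volumes.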

Why are you worried about deleting these empty volumes?  There should be no
problem because they will never be used in a restore.

__Martin


> On Thu, 13 Apr 2023 23:17:50 +0100, Chris Wilkinson said:
> 
> Yes that is true. In the other cases I found, these were also zero file
> incr/diff jobs.
> 
> However it appears that the job wrote a Volume as it appears in the list
> media result and there is such a Volume on disk but only 667 bytes so only
> the label I presume.
> 
> I had expected that 'query 14' would recognise that a Volume has an
> associated job even if that job wrote zero files to the volume but it
> doesn't.
> 
> The script needs some additional condition for deletion to avoid deleting
> these zero file volumes. No idea what that might be right now.
> 
> Chris-
> 
> On Thu, 13 Apr 2023, 22:55 Bill Arlofski,  wrote:
> 
> > Hello Chris,
> >
> > Your jobid  2,371 wrote 0 files, 0 bytes, and the job's Summary shows for
> > `Volume name(s):`  No volumes.
> >
> > So, unless there are other jobs on this volume, jobid 2,371 did not
> > actually write to it.
> >
> >
> > Hope this helps,
> > Bill
> >
> > --
> > Bill Arlofski
> > w...@protonmail.com
> >
> > -Chris-
> 


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Multiple Restore jobs

2023-04-13 Thread Martin Simmons
You will also need 1 Bacula SD device for each concurrent job and the data
must be on different volumes.

Unless you intend to keep the backups, you might find it simpler to use cpio
or tar with some temporary directory on a file server.
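
A minimal sketch of the SD side of this, assuming file-based storage (device
names and the archive path are placeholders, not from the thread): two File
devices grouped in a virtual autochanger, so two restore jobs can read
concurrently as long as they need different volumes.

```conf
# bacula-sd.conf sketch: both devices point at the same volume
# directory; the Director picks whichever drive is free.
Autochanger {
  Name = FileChanger
  Device = FileDev1, FileDev2
  Changer Device = /dev/null
  Changer Command = ""
}
Device {
  Name = FileDev1
  Media Type = File
  Device Type = File
  Archive Device = /srv/bacula/volumes
  AutomaticMount = yes
  LabelMedia = yes
  Random Access = yes
}
Device {
  Name = FileDev2
  Media Type = File
  Device Type = File
  Archive Device = /srv/bacula/volumes
  AutomaticMount = yes
  LabelMedia = yes
  Random Access = yes
}
```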

__Martin


> On Thu, 13 Apr 2023 09:51:13 +, Yateen Shaligram Bhagat (Nokia) said:
> 
> Thanks Bill, that helps.
> 
> There is a reason why we need to run multiple restore jobs concurrently.
> 
> We are migrating our computing env (few hundred hosts) to a different  
> flavour of Linux. 
> Prior to migration we need to do data backup and then restore it back. 
> 
> If we go sequentially using one single RestoreFiles job, it is going to be a 
> lengthy exercise.
> 
> Anyway I will try defining and running multiple restore jobs.
> 
> Regards,
> Yateen 
> 
> -Original Message-
> From: Bill Arlofski via Bacula-users  
> Sent: Wednesday, April 12, 2023 9:17 PM
> To: bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] Multiple Restore jobs
> 
> 
> CAUTION: This is an external email. Please be very careful when clicking 
> links or opening attachments. See the URL nok.it/ext for additional 
> information.
> 
> 
> 
> On 4/11/23 20:46, Yateen Shaligram Bhagat (Nokia) wrote:
> > Hi all,
> >
> > The default bacula-dir.conf provided with new Bacula installation has only 
> > one RestoreFile job defined.
> >
> > This was sufficient for us till now, but at the moment there is need to run 
> > multiple Restore jobs simultaneously.
> >
> > Can we achieve this by defining multiple Restore jobs?
> >
> > Thanks
> >
> > Yateen Bhagat
> >
> 
> 
> Hello Yateen,
> 
> The Restore job (Type = Restore) provided is an example/template of a Restore 
> job.
> 
> You only need one, but you can have as many as you want/need.
> 
> The only time I ever create more than one is when I have one for Linux 
> restores and one for Windows restores where I typically only change the 
> `where =` setting.
> 
> For example, for Windows restores, I might set:
> 
> where = "C:/temp"
> 
> and for Linux, I might have
> 
> where = /tmp
> 
> This just saves a small amount of typing when doing a restore.
> 
> If you need to run multiple concurrent restore jobs, you *may* need to set  
> `MaximumConcurrentJobs = x` where x is something greater than 1.  I am not 
> 100% sure about this, it has been a while since I touched the Restore job 
> template I use here, and mine currently has `MaximumConcurrentJobs = 10` so I 
> have a suspicion this is correct. :)
> 
> Also, you will never actually "Run" Jobs of `Type = restore`... You just type 
> `restore` and if you have more than one job of Type = restore, you will be 
> prompted for the one you want to use. You can also specify it on the command 
> line and you will not be prompted:
> 
> * restore restorejob=LinuxRestore_job
> 
> 
> Hope this helps,
> Bill
> 
> --
> Bill Arlofski
> w...@protonmail.com
> 
> 
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
> 


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] .bsr file not updated after backup

2023-03-03 Thread Martin Simmons
My guess is that your file /usr/home/dan/.touch_a_file_to_force_backupes has
zero length.

If so, then it is a "feature" because jobbytes is 0 (see
update_bootstrap_file).
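
A hedged way to spot jobs affected by this, querying the catalog directly
(column names as in recent Bacula catalogs; not part of the original mail):

```sql
-- Jobs that terminated OK but wrote zero bytes; update_bootstrap_file
-- skips these, so their .bsr files keep the old timestamp.
SELECT JobId, Name, JobFiles, JobBytes, EndTime
FROM Job
WHERE JobStatus = 'T' AND JobBytes = 0
ORDER BY JobId DESC;
```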

__Martin


> On Fri, 03 Mar 2023 09:27:02 -0500, Dan Langille said:
> 
> If files were backed up, should the .bsr file be updated?
> 
> [bacula dan /usr/local/bacula/bsr] % ls -l *mydev*
> -rw-r-  1 bacula  bacula  5948 2023.03.03 03:05 mydev-fd_mydev_basic.bsr
> -rw-r-  1 bacula  bacula  3844 2023.03.02 03:05 
> mydev-fd_mydev_home_dir.bsr
> 
> The second file was not updated today.
> 
> Yet, a job ran today and backed up a file:
> 
> *llist files jobid=352170
> +---+
> | filename  |
> +---+
> | /usr/home/dan/.touch_a_file_to_force_backupes |
> +---+
>jobid: 352,170
>  job: mydev_home_dir.2023-03-03_03.05.00_33
> name: mydev home dir
>  purgedfiles: 0
> type: B
>level: I
> clientid: 52
>   clientname: mydev-fd
>jobstatus: T
>schedtime: 2023-03-03 03:05:00
>starttime: 2023-03-03 03:05:08
>  endtime: 2023-03-03 03:05:09
>  realendtime: 2023-03-03 03:05:09
> jobtdate: 1,677,812,709
> volsessionid: 460
>   volsessiontime: 1,676,583,692
> jobfiles: 1
> jobbytes: 0
>readbytes: 0
>joberrors: 0
>  jobmissingfiles: 0
>   poolid: 24
> poolname: IncrFile
>   priorjobid: 0
>filesetid: 294
>  fileset: mydev home dir
>  hasbase: 0
> hascache: 0
>  comment: 
> 
> The job:
> 
> Job {
>   Name= "mydev home dir"
>   JobDefs = "DefaultJob"
>   Client  = mydev-fd 
>   FileSet = "mydev home dir"
> }
> 
> The file set for that job:
> 
> 
> FileSet {
>   Name = "mydev home dir"
>   Include { 
> Options {
>   signature=MD5
> verify=pnugsmcs5
> } 
> Exclude Dir Containing = .NOBACKUP
> 
> File = /usr/home
>   }
>   Exclude {
> File = *~
>   }
> }
> 
> Job defs:
> 
> JobDefs {
>   Name= "DefaultJob"
>   Type= Backup
>   Level   = Incremental
>   Schedule= "WeeklyCycle"
>   Storage = CreyFile
>   Messages= Standard
> 
>   Write Bootstrap = "/usr/local/bacula/bsr/%c_%n.bsr"
> 
>   Pool= FullFile  # required parameter for all Jobs
> 
>   Full Backup Pool = FullFile
>   Differential Backup Pool = DiffFile
>   Incremental  Backup Pool = IncrFile
> 
>   Priority= 10
> 
>   # don't spool data when backing up to disk
>   Spool Data  = no
>   Spool Attributes = yes
> 
>   PreferMountedVolumes = no
> }
> 
> 
> 
> 
> 
> 
> -- 
>   Dan Langille
>   d...@langille.org
> 
> 
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
> 


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Win 11 shadow copy error

2023-02-14 Thread Martin Simmons
If Nextcloud works like OneDrive, then it could be caused by the problem
described here:

https://eazybackup.com/knowledge-base/error-media-is-write-protected-backing-up-onedrive-with-vss/



> On Sat, 11 Feb 2023 10:52:29 -0500, Steven A Falco said:
> 
> I have been using Bacula to back up a Windows 10 virtual machine client in 
> addition to a number of physical Linux clients without any problems for 
> several years.
> 
> Recently I upgraded the win10 vm to Windows 11, and I am now getting an error 
> message from the VSS snapshot system.  I don't know if this is a permission 
> problem.  I also don't know how serious it is - is just one file affected, or 
> is all of C:\ affected?
> 
> The error message is a bit confusing, because it says there is a read error 
> on the file and the media is write protected.
> 
> I've tried deleting shadows using vssadmin but that didn't help.  The file 
> that is triggering the error is on a Nextcloud shared directory, but that was 
> working fine with win10.
> 
> I'm not sure how to troubleshoot this.  Any guidance would be appreciated.
> 
>   Steve
> 
> 11-Feb 09:59 saf-sd JobId 22162: Ready to append to end of Volume 
> "v1_0007_0017" size=40,122,007,412
> 11-Feb 09:59 win11-fd JobId 22162: Generate VSS snapshots. Driver="Win64 VSS"
> 11-Feb 09:59 win11-fd JobId 22162: Snapshot mount point: C:\
> 11-Feb 10:01 win11-fd JobId 22162: Error: Read error on file 
> //?/GLOBALROOT/Device/HarddiskVolumeShadowCopy9/Users/sfalco/Nextcloud/SMS/sms-20230211004914.xml.
>  ERR=The media is write protected.
> 
> 11-Feb 10:03 win11-fd JobId 22162: VSS Writer (BackupComplete): "Task 
> Scheduler Writer", State: 0x1 (VSS_WS_STABLE)
> 11-Feb 10:03 win11-fd JobId 22162: VSS Writer (BackupComplete): "VSS Metadata 
> Store Writer", State: 0x1 (VSS_WS_STABLE)
> 11-Feb 10:03 win11-fd JobId 22162: VSS Writer (BackupComplete): "Performance 
> Counters Writer", State: 0x1 (VSS_WS_STABLE)
> 11-Feb 10:03 win11-fd JobId 22162: VSS Writer (BackupComplete): "System 
> Writer", State: 0x1 (VSS_WS_STABLE)
> 11-Feb 10:03 win11-fd JobId 22162: VSS Writer (BackupComplete): "WMI Writer", 
> State: 0x1 (VSS_WS_STABLE)
> 11-Feb 10:03 win11-fd JobId 22162: VSS Writer (BackupComplete): "MSSearch 
> Service Writer", State: 0x1 (VSS_WS_STABLE)
> 11-Feb 10:03 win11-fd JobId 22162: VSS Writer (BackupComplete): "ASR Writer", 
> State: 0x1 (VSS_WS_STABLE)
> 11-Feb 10:03 win11-fd JobId 22162: VSS Writer (BackupComplete): "Shadow Copy 
> Optimization Writer", State: 0x1 (VSS_WS_STABLE)
> 11-Feb 10:03 win11-fd JobId 22162: VSS Writer (BackupComplete): "COM+ REGDB 
> Writer", State: 0x1 (VSS_WS_STABLE)
> 11-Feb 10:03 win11-fd JobId 22162: VSS Writer (BackupComplete): "Registry 
> Writer", State: 0x1 (VSS_WS_STABLE)
> 11-Feb 10:03 saf-sd JobId 22162: Elapsed time=00:04:05, Transfer rate=37.48 M 
> Bytes/second
> 11-Feb 10:03 saf-sd JobId 22162: Sending spooled attrs to the Director. 
> Despooling 267,081 bytes ...
> 11-Feb 10:03 saf-dir JobId 22162: Bacula saf-dir 13.0.1 (05Aug22):
> 
> 
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
> 


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula 11.0.6 Windows Client IPv6 no starttls

2023-01-03 Thread Martin Simmons
Error 10014 from accept is:

WSAEFAULT  The addrlen parameter is too small or addr is not a valid part of
the user address space.

I think that can only be caused by a bug in the Bacula client (or it is not
compiled correctly for IPv6).
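
If the suspicion is the dual-stack listen socket, one hedged experiment
(addresses are placeholders; this is a sketch of the documented FDAddresses
syntax, not a confirmed fix) is to bind the client explicitly instead of
relying on the default wildcard listener:

```conf
# bacula-fd.conf sketch: listen on an explicit IPv6 address to see
# whether accept() still fails with WSAEFAULT.
FileDaemon {
  Name = server05a-fd
  FDAddresses = {
    ipv6 = { addr = 2001:db8:0:101::2; port = 9102 }
    ipv4 = { addr = 0.0.0.0; port = 9102 }
  }
}
```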

__Martin


> On Mon, 2 Jan 2023 15:04:36 +0100, Radosław Korzeniewski said:
> 
> Thanks!
> 
> In this case as all components are on the same version it is not a version
> inconsistency issue.
> To be honest, I have no clue why a Bacula windows client is unable to
> receive any data on the socket, which is a main reason you can't
> communicate with it.
> I had no issues like this in the past. You need to somehow (I don't know
> how) debug why Bacula cannot read (receive) from a socket which should
> work. Any firewall in the network path? Any antivirus software disrupting
> communication? dunno.
> Maybe other users can help more. Good luck!
> 
> R.
> 
> pon., 2 sty 2023 o 14:13 Eyermann, Frank <
> frank.eyerm...@munich-business-school.de> napisał(a):
> 
> > Hello Radoslaw,
> >
> >
> >
> > 11.0.6 windows binary from bacula org web site.
> >
> >
> >
> > Best regards,
> >
> >
> >
> > Frank
> >
> >
> >
> >
> >
> > *Von:* Radosław Korzeniewski 
> > *Gesendet:* Montag, 2. Januar 2023 11:34
> > *An:* Eyermann, Frank 
> > *Cc:* bacula-users@lists.sourceforge.net
> > *Betreff:* Re: [Bacula-users] Bacula 11.0.6 Windows Client IPv6 no
> > starttls
> >
> >
> >
> > Hi,
> >
> >
> >
> > sob., 31 gru 2022 o 01:30 Eyermann, Frank <
> > frank.eyerm...@munich-business-school.de> napisał(a):
> >
> > Hello Radoslaw,
> >
> >
> >
> > thank you for pointing me at the debug option.
> >
> >
> >
> > Running the director reveals not much interessing (I’ve tried stat client
> > command)
> >
> >
> >
> > backupsrv01-dir: ua_status.c:274-0 status:stat client:
> >
> > backupsrv01-dir: watchdog.c:196-0 Registered watchdog 7f12f001e138,
> > interval 15 one shot
> >
> > backupsrv01-dir: btimers.c:145-0 Start thread timer 7f12f001e378 tid
> > 7f12fbfff640 for 15 secs.
> >
> > backupsrv01-dir: bsockcore.c:354-0 Current
> > [2001:hidden:hidden:101::2]:9102 All [2001:hidden:hidden:101::2]:9102
> >
> > backupsrv01-dir: bsockcore.c:285-0 who=Client: server05a-fd
> > host=2001:hidden:hidden:101::2 port=9102
> >
> > backupsrv01-dir: bsockcore.c:472-0 OK connected to server  Client:
> > server05a-fd 2001:hidden:hidden:101::2:9102.
> > socket=2001:hidden:hidden:101::10.60862:2001:hidden:hidden:101::2.9102
> > s=0x7f12f001e688
> >
> >
> >
> >
> >
> > But FileDaemon shows errors:
> >
> > server05a-fd: lib/watchdog.c:82-0 Initialising NicB-hacked watchdog thread
> >
> > server05a-fd: lib/watchdog.c:197-0 Registered watchdog 311c2c8, interval 30
> >
> > server05a-fd: lib/mem_pool.c:617-0 max_size=512
> >
> > server05a-fd: lib/events.c:48-0 Events: code=FD0001 daemon=server05a-fd
> > ref=0x238e type=daemon source=*Daemon* text=Filed startup
> >
> > server05a-fd: lib/message.c:1534-0 Enter Jmsg type=17
> >
> > server05a-fd: lib/watchdog.c:254-0 NicB-reworked watchdog thread entered
> >
> > server05a-fd: filed/filed.c:295-0 filed: listening on port 9102
> >
> > server05a-fd: filed/filed.c:295-0 filed: listening on port 9102
> >
> > server05a-fd: lib/bnet_server.c:90-0 Addresses 0.0.0.0:9102
> > [0.0.0.0]:9102
> >
> >
> >
> > (Startup successful, now I’m running stat client command on server)
> >
> >
> >
> > server05a-fd: lib/bnet.c:490-0 Socket error: err=10014 Unknown error
> >
> > server05a-fd: lib/bnet_server.c:191-0 Accept=-1 errno=603979776
> >
> > server05a-fd: lib/bnet.c:490-0 Socket error: err=10014 Unknown error
> >
> > server05a-fd: lib/bnet_server.c:191-0 Accept=-1 errno=603979776
> >
> > server05a-fd: lib/bnet.c:490-0 Socket error: err=10014 Unknown error
> >
> > server05a-fd: lib/bnet_server.c:191-0 Accept=-1 errno=603979776
> >
> > server05a-fd: lib/bnet.c:490-0 Socket error: err=10014 Unknown error
> >
> > server05a-fd: lib/bnet_server.c:191-0 Accept=-1 errno=603979776
> >
> > (And those both lines repeat thousands of times within a few seconds.)
> >
> >
> >
> >
> >
> > What version is your client?
> >
> >
> >
> > R.
> >
> >
> >
> > Best regarda,
> >
> >
> >
> > Frank
> >
> >
> >
> >
> >
> > *Von:* Radosław Korzeniewski 
> > *Gesendet:* Donnerstag, 29. Dezember 2022 23:19
> > *An:* Eyermann, Frank 
> > *Cc:* bacula-users@lists.sourceforge.net
> > *Betreff:* Re: [Bacula-users] Bacula 11.0.6 Windows Client IPv6 no
> > starttls
> >
> >
> >
> > Hello,
> >
> >
> >
> > czw., 29 gru 2022 o 15:15 Eyermann, Frank <
> > frank.eyerm...@munich-business-school.de> napisał(a):
> >
> > Hello all,
> >
> >
> >
> > In our network I’ve enabled IPv6 in dual-stack configuration.
> >
> >
> >
> > Bacula-dir and Bacula-SD version 11.0.6, on Ubuntu 22.04
> >
> >
> >
> > With Linux Clients everything works fine.
> >
> >
> >
> > For Windows Clients (tested with 2016, 2019 and 2022 servers):
> >
> > If I state Address in Client resource of Director as an IPv4 address,
> > everything works fine.
> >
> > If I state Addres

Re: [Bacula-users] Setting the to address for Bacula GDB traceback

2022-11-08 Thread Martin Simmons
>>>>> On Tue, 08 Nov 2022 11:22:07 -0500, Dan Langille said:
> 
> On Mon, Nov 7, 2022, at 10:59 AM, Martin Simmons wrote:
> >>>>>> On Sun, 6 Nov 2022 20:00:55 -0500, Dan Langille said:
> >> 
> >> I'm getting some traceback emails like this:
> >> 
> >> From: root@localhost
> >> Subject: Bacula GDB traceback of bacula-dir on bacula.int.example.org
> >> Sender: bac...@bacula.int.example.org
> >> To: root@localhost
> >> 
> >> There is no 'root@localhost' defined with my bacula-dir configuration.
> >> 
> >> [bacula dan /usr/local/etc/bacula] % sudo grep root@localhost * 0:56:01
> >> bacula-dir.conf:#  mail = root@localhost = all, !skipped, saved
> >> bacula-dir.conf.sample:  mail = root@localhost = all, !skipped
> >> bacula-dir.conf.sample:  operator = root@localhost = mount
> >> bacula-dir.conf.sample:  mail = root@localhost = all, !skipped
> >> 
> >> The above are either comments or .sample - none should be active.
> >> 
> >> Where is this address coming from?
> >
> > It is the default for dump_email in configure (used in 
> > scripts/btraceback.in).
> 
> Confirmed, I found it in there. Thank you.
> 
> I will probably modify the FreeBSD port to accommodate local changes to this 
> script.
> 
> While looking around, I found these:
> 
> [pkg01 dan ~/ports/head/sysutils/bacula9-server] % grep -r root *
> Makefile: --with-dump-email=root@localhost \
> Makefile: --with-job-email=root@localhost \
> 
> Those are configuration arguments for building DIR, SD, and FD. If those are 
> specified at build time, it might be difficult for users to modify them. I'm 
> wondering if they are still used or if they are deprecated. Sorry, I can't 
> search the code just now.

--with-dump-email controls the value of dump_email, which is only used for
scripts/btraceback.

--with-job-email controls the value used for the Messages resources in the
generated bacula-dir.conf.

Both of these send emails to the server set by --with-smtp-host, which
defaults to localhost.
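
For reference, a hedged sketch of overriding all three at build time
(addresses are placeholders; flag names as used by the Bacula configure
script):

```conf
./configure \
  --with-dump-email=bacula-admin@example.org \
  --with-job-email=bacula-admin@example.org \
  --with-smtp-host=mail.example.org
```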

__Martin


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Setting the to address for Bacula GDB traceback

2022-11-07 Thread Martin Simmons
> On Sun, 6 Nov 2022 20:00:55 -0500, Dan Langille said:
> 
> I'm getting some traceback emails like this:
> 
> From: root@localhost
> Subject: Bacula GDB traceback of bacula-dir on bacula.int.example.org
> Sender: bac...@bacula.int.example.org
> To: root@localhost
> 
> There is no 'root@localhost' defined with my bacula-dir configuration.
> 
> [bacula dan /usr/local/etc/bacula] % sudo grep root@localhost * 0:56:01
> bacula-dir.conf:#  mail = root@localhost = all, !skipped, saved
> bacula-dir.conf.sample:  mail = root@localhost = all, !skipped
> bacula-dir.conf.sample:  operator = root@localhost = mount
> bacula-dir.conf.sample:  mail = root@localhost = all, !skipped
> 
> The above are either comments or .sample - none should be active.
> 
> Where is this address coming from?

It is the default for dump_email in configure (used in scripts/btraceback.in).

__Martin


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] wilddir not working for exclusion but is for inclusion

2022-10-03 Thread Martin Simmons
Bacula uses the first Options clause that matches (in the order they are
written) to decide whether to include or exclude something.  If no clause
matches, then the item is backed up using the options (e.g. Compression) from
the final clause.

The problem with your clauses is that directories such as
/mnt/CUSTOMER_DATA/ifoo/temp first match /mnt/CUSTOMER_DATA/i* and so will be
included.

You need something like this:

FileSet {
  Name = "CD-i"
  Include {
Options {   # exclude rar/zip files and temp dir
   wildfile = "*.rar"
   wildfile = "*.zip"
   wilddir = "/mnt/CUSTOMER_DATA/i*/temp"
   exclude = yes
  }
Options {   # include some dirs
signature = SHA1
Compression = GZIP9
wilddir = "/mnt/CUSTOMER_DATA/i*"
 }
Options {   # exclude everything else at top level, but not top level itself
   signature = SHA1
   Compression = GZIP9
   Regex = "^/mnt/CUSTOMER_DATA/[^/]+$"
   exclude = yes
  }
# everything else is included by default using the final options
File = /mnt/CUSTOMER_DATA
  }
}

__Martin


> On Mon, 3 Oct 2022 09:26:05 -0500, Dave  said:
> 
> I'm running Bacula 9.0.6 and cannot seem to get a wilddir exclusion to work.
> My fileset is:
> 
> FileSet {
>   Name = "CD-i"
>   Include {
> Options {
> signature = SHA1
> Compression = GZIP9
> wilddir = "/mnt/CUSTOMER_DATA/i*"
>  }
> Options {
>RegexDir = ".*"
>wildfile = "*.rar"
>wildfile = "*.zip"
>wilddir = "/mnt/CUSTOMER_DATA/i*/temp"
>exclude = yes
>   }
> File = /mnt/CUSTOMER_DATA
>   }
> }
> 
> There are a few hundred gigs of data in a few temp subdirectories and it
> continues to be backed up.  Is there some sort of issue with how I have this
> configured?  I did also try the following with the same results:
> 
>wilddir = "/mnt/CUSTOMER_DATA/*/temp"


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Console backup job requests manual label even though configuration enable automatic labeling

2022-09-30 Thread Martin Simmons
> On Thu, 29 Sep 2022 19:04:21 -0700, Bruce Trumbo said:
> 
> Can someone explain to me why a backup started manually with bconsole 
> asked for a label command, when the configuration specifies automatic 
> labeling?
> 
> It seems that the Label Format entry in the Pool directive is being 
> ignored.  Why is this?

It could happen if the pool record in the catalog is out of date.

Check the pool in the catalog with:

show pool=FrancoFull

If it shows LabelFormat=*None* then you need to do:

update pool=FrancoFull

__Martin


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Volume data error at 0:0!

2022-09-23 Thread Martin Simmons
This usually means that the volume is corrupted.  You could try running the
bls command to see what that reports.

Also check for any syslog messages about disk errors.
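
A hedged example of that check, using the volume and device names from the
job log below (the config path is an assumption):

```shell
# Scan the suspect volume and list the jobs bls can still read; read
# errors here point at on-disk corruption of the volume file.
bls -c /opt/bacula/etc/bacula-sd.conf -j -V CopiedIncr-0411 FileStorage
```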

__Martin


> On Thu, 22 Sep 2022 20:12:25 +0200, Matlink  said:
> 
> Hello everyone,
> 
> I have an issue when using copy jobs. I use copy jobs to off-site my 
> backups to another SD. However, for some volumes I get errors when they 
> are read (check forwarded mail from bacula):
> 
> *Volume data error at 0:0! Short block of 3377 bytes on device
> "FileStorage" (/var/spool/bacula) discarded.*
> 
> Such an error happens every night and for a couple of distinct volumes. 
> The volumes are always the same across nights.
> 
> How can I investigate this?
> 
> Thanks,
> 
> Mat.
> 
> 
>  Message transféré 
> Sujet :   Bacula: Copy Error of dir-fd Full
> Date :Thu, 22 Sep 2022 01:13:09 +0200 (CEST)
> De :  (Bacula) 
> Pour :matl...@matlink.fr
> 
> 
> 
> 22-sept. 01:10 dir-sd JobId 33135: Warning: block.c:690 [SE0208] Volume 
> data error at 0:0! Short block of 3377 bytes on device "FileStorage" 
> (/var/spool/bacula) discarded.
> 22-sept. 01:10 dir-sd JobId 33135: Error: read_records.c:172 block.c:690 
> [SE0208] Volume data error at 0:0! Short block of 3377 bytes on device 
> "FileStorage" (/var/spool/bacula) discarded.
> 22-sept. 01:13 dir JobId 33135: Error: Bacula Enterprise dir 13.0.0 
> (04Jul22):
> Build OS: x86_64-pc-linux-gnu-bacula-enterprise debian 11.2
> Prev Backup JobId: 32816
> Prev Backup Job: cloud.2022-09-18_23.05.01_15
> New Backup JobId: 33136
> Current JobId: 33135
> Current Job: copy.2022-09-22_01.10.00_04
> Backup Level: Full
> Client: dir-fd
> FileSet: "Full Set" 2021-07-22 23:05:00
> Read Pool: "Incrementals" (From Command input)
> Read Storage: "File" (From Job resource)
> Write Pool: "CopiedIncrementals" (From Command input)
> Write Storage: "Vekk" (From rollback)
> Catalog: "MyCatalog" (From Pool resource)
> Start time: 22-sept.-2022 01:10:16
> End time: 22-sept.-2022 01:13:08
> Elapsed time: 2 mins 52 secs
> Priority: 20
> SD Files Written: 2,660
> SD Bytes Written: 1,943,077,702 (1.943 GB)
> Rate: 11297.0 KB/s
> Volume name(s): CopiedIncr-0411
> Volume Session Id: 370
> Volume Session Time: 1663429981
> Last Volume Bytes: 4,138,949,248 (4.138 GB)
> SD Errors: 1
> SD termination status: OK
> Termination: *** Copying Error ***
> 
> 


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] restoring from LTO-4

2022-09-21 Thread Martin Simmons
> On Wed, 21 Sep 2022 17:56:53 +0100, Adam Weremczuk said:
> 
> Hi all,
> 
> I'm running a very old Bacula 5.2.6 writing to equally old LTO-4 tapes 
> (pre-LTFS). Data is unencrypted.
> 
> In the past I've successfully tested restoring outside "bconsole" with 
> "bextract" (.bsr files available) or "bls" followed by "bextract" (.bsr 
> files missing).
> 
> This was all done on the box where Bacula runs from so it's not clear to 
> me whether "bextract" and "bls" consulted local Bacula database (MySQL) 
> or any customised local configuration files. My guess is they didn't.

They consult the bacula-sd.conf for info on the Device settings, so you should
keep a copy of that somewhere.


> So - if the Bacula box dies but I have the drive and tapes - will I be 
> able to access all data on tapes by connecting the drive to a new box 
> followed by "bextract" and "bls" commands alone?

Yes, that should work.
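
A hedged sketch of that disaster-recovery path on a fresh box (the config
path, volume and device names are placeholders):

```shell
# 1. List the jobs on the tape to verify the drive and the saved
#    bacula-sd.conf still agree.
bls -c /etc/bacula/bacula-sd.conf -j -V Vol0001 LTO4Drive

# 2. Extract everything on the volume into /restore.
bextract -c /etc/bacula/bacula-sd.conf -V Vol0001 LTO4Drive /restore
```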

__Martin


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Q: Copy jobs dummy client problem

2022-09-08 Thread Martin Simmons
In the log you sent me privately, the two "unable to authenticate with File
Daemon" messages have timestamps 1 minute after the end of the copy jobs.
That is consistent with them having "JobID 0" but it is still a mystery what
triggers them.

__Martin


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Q: is Level a selection criterion for Migration/Copy jobs?

2022-09-08 Thread Martin Simmons
If you can express it in SQL, then you can use that to select a job, e.g. I
use this for testing:

Job {
  Name = "CopyLastBackupJob"
  Type = Copy
  Selection Type = SQLQuery
  Selection Pattern = "select jobid from job where type='B' and jobbytes > 1 
order by jobtdate desc limit 1"
  ...
}

__Martin


> On Wed, 7 Sep 2022 14:48:20 +0200, Justin Case said:
> 
> Hi all,
> 
> I am wondering whether Level is also a selection criterion for migration/copy 
> jobs, i.e. if there are for a given Job Name a full backup from yesterday and 
> an incremental backup from today, will I be able to select the full backup if 
> I use the job name and Level=Full?
> 
> Thanks for considering my question,
>  J/C
> 
> 
> 
> 
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
> 


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Migrating from mariadb to postgresql

2022-09-07 Thread Martin Simmons
>>>>> On Wed, 7 Sep 2022 12:08:10 +0200, Uwe Schuerkamp said:
> 
> On Mon, Sep 05, 2022 at 05:14:28PM +0100, Martin Simmons wrote:
> > >>>>> On Mon, 5 Sep 2022 11:21:52 +0200, Uwe Schuerkamp said:
> > > 
> > > I've tried casting "blob" and "tinyblob" (the mariadb column types for
> > > VolumeName, for example) to "text", but pgloader just hangs when
> > > including those cast statements.
> > 
> > What exact cast statement did you use?
> > 
> 
> Hello Martin,
> 
> I think I used
> 
> cast type tinyblob to text
> 
> for testing purposes, but I don't have the pgloader config file any more, 
> sorry.
> 
> All the best,

This might work but I've not tested it:

cast type tinyblob to text using varbinary-to-string
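In context, that would go in the CAST clause of the pgloader load file, something like this (also untested; the connection strings are placeholders):

```
LOAD DATABASE
     FROM mysql://bacula@localhost/bacula
     INTO postgresql://bacula@localhost/bacula

CAST type tinyblob to text using varbinary-to-string,
     type blob to text using varbinary-to-string;
```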

__Martin




Re: [Bacula-users] Q: Copy jobs dummy client problem

2022-09-06 Thread Martin Simmons
No, it doesn't run the before/after of all the jobs being copied for me (only
for the copy job itself).

It might help if you post the full copy job output rather than just the error
message to show the full sequence of events.

__Martin


>>>>> On Tue, 6 Sep 2022 15:02:50 +0200, Justin Case said:
> 
> I have that on 2 backup jobs that are copied, but not on the copy job itself. 
> Do copy jobs run the before/after of all the jobs being copied?
> 
> > On 6. Sep 2022, at 13:02, Martin Simmons  wrote:
> > 
> > Maybe you have a run before/after directive that is causing it to run a
> > bconsole command?
> > 
> > __Martin
> > 
> > 
> >>>>>> On Mon, 5 Sep 2022 21:02:52 +0200, Justin Case said:
> >> 
> >> It works for me, too, but I get this error.
> >> 
> >>> On 5. Sep 2022, at 20:14, Martin Simmons  wrote:
> >>> 
> >>> A copy job works for me in Bacula 13.0.0 without this problem (even if the
> >>> bacula-fd isn't running).
> >>> 
> >>> __Martin
> >>> 
> >>> 
> >>>>>>>> On Mon, 5 Sep 2022 19:09:18 +0200, Justin Case said:
> >>>> 
> >>>> Yes, it does match. I then replaced the dummy password with the actual 
> >>>> FD password, and the error did not occur any more when starting copy 
> >>>> jobs. This behaves differently from what is documented; that's why I am asking. 
> >>>> 
> >>>>> On 5. Sep 2022, at 17:56, Martin Simmons  wrote:
> >>>>> 
> >>>>> That looks strange to me.  The "JobId 0" maybe means that it was caused by
> >>>>> something else.  Does the time "04-Sep 14:19" match the sequence of times in
> >>>>> the other messages about the copy job?
> >>>>> 
> >>>>> __Martin
> >>>>> 
> >>>>> 
> >>>>>>>>>> On Sun, 4 Sep 2022 14:33:03 +0200, Justin Case said:
> >>>>>> 
> >>>>>> Hi there,
> >>>>>> 
> >>>>>> I took this snippet from the main documentation about copy jobs:
> >>>>>> 
> >>>>>> # Fake client for copy jobs
> >>>>>> Client {
> >>>>>>   Name = None
> >>>>>>   Address = localhost
> >>>>>>   Password = "NoNe"
> >>>>>>   Catalog = MyCatalog
> >>>>>> }
> >>>>>> 
> >>>>>> # Default template for a CopyDiskToTape Job
> >>>>>> JobDefs {
> >>>>>>   Name = CopyDiskToTape
> >>>>>>   Type = Copy
> >>>>>>   Messages = StandardCopy
> >>>>>>   Client = None
> >>>>>>   FileSet = None
> >>>>>>   Selection Type = PoolUncopiedJobs
> >>>>>>   Maximum Concurrent Jobs = 10
> >>>>>>   SpoolData = No
> >>>>>>   Allow Duplicate Jobs = Yes
> >>>>>>   Cancel Queued Duplicates = No
> >>>>>>   Cancel Running Duplicates = No
> >>>>>>   Priority = 13
> >>>>>> }
> >>>>>> 
> >>>>>> It says: "The Copy Job runs without using the File daemon by copying 
> >>>>>> the data from the old backup Volume to a different Volume in a 
> >>>>>> different Pool.” So there is no need for a working Client.
> >>>>>> 
> >>>>>> This does not seem to be entirely factual. I configured it as 
> >>>>>> suggested above, but then the Director gives me:
> >>>>>> 
> >>>>>> 04-Sep 14:19 bacula-dir JobId 0: Fatal error: authenticatebase.cc:435 
> >>>>>> Director unable to authenticate with File Daemon at "localhost:9102". 
> >>>>>> Possible causes: Passwords or names not the same or
> >>>>>> 
> >>>>>> The copy job still works, but I don't think it should provoke such an 
> >>>>>> error.
> >>>>>> 
> >>>>>> Why would the Director actually connect to the client? May I suppress 
> >>>>>> this?
> >>>>>> 
> >>>>>> Thanks for considering my question,
> >>>>>> J/C
> >>>>>> 
> >>>>>> 
> >>>>>> 
> >>>>>> 
> >>>>>> 
> >>>>>> 
> >>>>>> 
> >>>>>> 
>>>>> 
> >>>> 
> >>>> 
> >>> 
> >> 
> >> 
> > 
> 
> 




Re: [Bacula-users] Q: Copy jobs dummy client problem

2022-09-06 Thread Martin Simmons
Maybe you have a run before/after directive that is causing it to run a
bconsole command?

__Martin


>>>>> On Mon, 5 Sep 2022 21:02:52 +0200, Justin Case said:
> 
> It works for me, too, but I get this error.
> 
> > On 5. Sep 2022, at 20:14, Martin Simmons  wrote:
> > 
> > A copy job works for me in Bacula 13.0.0 without this problem (even if the
> > bacula-fd isn't running).
> > 
> > __Martin
> > 
> > 
> >>>>>> On Mon, 5 Sep 2022 19:09:18 +0200, Justin Case said:
> >> 
> >> Yes, it does match. I then replaced the dummy password with the actual FD 
> >> password, and the error did not occur any more when starting copy jobs. 
> >> This behaves differently from what is documented; that's why I am asking. 
> >> 
> >>> On 5. Sep 2022, at 17:56, Martin Simmons  wrote:
> >>> 
> >>> That looks strange to me.  The "JobId 0" maybe means that it was caused by
> >>> something else.  Does the time "04-Sep 14:19" match the sequence of times 
> >>> in
> >>> the other messages about the copy job?
> >>> 
> >>> __Martin
> >>> 
> >>> 
> >>>>>>>> On Sun, 4 Sep 2022 14:33:03 +0200, Justin Case said:
> >>>> 
> >>>> Hi there,
> >>>> 
> >>>> I took this snippet from the main documentation about copy jobs:
> >>>> 
> >>>> # Fake client for copy jobs
> >>>> Client {
> >>>>   Name = None
> >>>>   Address = localhost
> >>>>   Password = "NoNe"
> >>>>   Catalog = MyCatalog
> >>>> }
> >>>> 
> >>>> # Default template for a CopyDiskToTape Job
> >>>> JobDefs {
> >>>>   Name = CopyDiskToTape
> >>>>   Type = Copy
> >>>>   Messages = StandardCopy
> >>>>   Client = None
> >>>>   FileSet = None
> >>>>   Selection Type = PoolUncopiedJobs
> >>>>   Maximum Concurrent Jobs = 10
> >>>>   SpoolData = No
> >>>>   Allow Duplicate Jobs = Yes
> >>>>   Cancel Queued Duplicates = No
> >>>>   Cancel Running Duplicates = No
> >>>>   Priority = 13
> >>>> }
> >>>> 
> >>>> It says: "The Copy Job runs without using the File daemon by copying the 
> >>>> data from the old backup Volume to a different Volume in a different 
> >>>> Pool.” So there is no need for a working Client.
> >>>> 
> >>>> This does not seem to be entirely factual. I configured it as suggested 
> >>>> above, but then the Director gives me:
> >>>> 
> >>>> 04-Sep 14:19 bacula-dir JobId 0: Fatal error: authenticatebase.cc:435 
> >>>> Director unable to authenticate with File Daemon at "localhost:9102". 
> >>>> Possible causes: Passwords or names not the same or
> >>>> 
> >>>> The copy job still works, but I don't think it should provoke such an 
> >>>> error.
> >>>> 
> >>>> Why would the Director actually connect to the client? May I suppress 
> >>>> this?
> >>>> 
> >>>> Thanks for considering my question,
> >>>> J/C
> >>>> 
> >>>> 
> >>>> 
> >>>> 
> >>>> 
> >>>> 
> >>>> 
> >>>> 
> >>> 
> >> 
> >> 
> > 
> 
> 




Re: [Bacula-users] Q: Copy jobs dummy client problem

2022-09-05 Thread Martin Simmons
A copy job works for me in Bacula 13.0.0 without this problem (even if the
bacula-fd isn't running).

__Martin


>>>>> On Mon, 5 Sep 2022 19:09:18 +0200, Justin Case said:
> 
> Yes, it does match. I then replaced the dummy password with the actual FD 
> password, and the error did not occur any more when starting copy jobs. This 
> behaves differently from what is documented; that's why I am asking. 
> 
> > On 5. Sep 2022, at 17:56, Martin Simmons  wrote:
> > 
> > That looks strange to me.  The "JobId 0" maybe means that it was caused by
> > something else.  Does the time "04-Sep 14:19" match the sequence of times in
> > the other messages about the copy job?
> > 
> > __Martin
> > 
> > 
> >>>>>> On Sun, 4 Sep 2022 14:33:03 +0200, Justin Case said:
> >> 
> >> Hi there,
> >> 
> >> I took this snippet from the main documentation about copy jobs:
> >> 
> >> # Fake client for copy jobs
> >> Client {
> >>   Name = None
> >>   Address = localhost
> >>   Password = "NoNe"
> >>   Catalog = MyCatalog
> >> }
> >> 
> >> # Default template for a CopyDiskToTape Job
> >> JobDefs {
> >>   Name = CopyDiskToTape
> >>   Type = Copy
> >>   Messages = StandardCopy
> >>   Client = None
> >>   FileSet = None
> >>   Selection Type = PoolUncopiedJobs
> >>   Maximum Concurrent Jobs = 10
> >>   SpoolData = No
> >>   Allow Duplicate Jobs = Yes
> >>   Cancel Queued Duplicates = No
> >>   Cancel Running Duplicates = No
> >>   Priority = 13
> >> }
> >> 
> >> It says: "The Copy Job runs without using the File daemon by copying the 
> >> data from the old backup Volume to a different Volume in a different 
> >> Pool.” So there is no need for a working Client.
> >> 
> >> This does not seem to be entirely factual. I configured it as suggested 
> >> above, but then the Director gives me:
> >> 
> >> 04-Sep 14:19 bacula-dir JobId 0: Fatal error: authenticatebase.cc:435 
> >> Director unable to authenticate with File Daemon at "localhost:9102". 
> >> Possible causes: Passwords or names not the same or
> >> 
> >> The copy job still works, but I don't think it should provoke such an 
> >> error.
> >> 
> >> Why would the Director actually connect to the client? May I suppress this?
> >> 
> >> Thanks for considering my question,
> >> J/C
> >> 
> >> 
> >> 
> >> 
> >> 
> >> 
> >> 
> >> 
> > 
> 
> 




Re: [Bacula-users] Schedules verify job fails

2022-09-05 Thread Martin Simmons
As you suspect, adding the extra run for level=Data will not work.  I just
have separate schedules without any level specified for verify jobs.
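For example, a verify-only schedule could look like this (the name and run time
are placeholders for whatever fits your setup):

```
Schedule {
  Name = "VerifyCycle"
  # No Level in the Run directive, so the job's own Level (Data) applies
  Run = sun-sat at 23:05
}
```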

__Martin


>>>>> On Mon, 5 Sep 2022 19:06:05 +0200, Justin Case said:
> 
> Martin, excellent advice. It says in the job info:
> 
> level=Data
> 
> then under the schedule info:
> 
> Run Level=Full
> ..
> Run Level=Incremental
> ..
> Run Level=Incremental
> 
> according to the schedule it is Incremental
> 
> The precedence among the declared levels cannot be derived from the "show job" 
> output alone.
> I was assuming that the level declared within the job or jobdef would override 
> levels declared elsewhere. The manual, however, explains that the level 
> declared in the schedule takes precedence. For me this is a bit unfortunate, 
> as I then need a separate set of schedules for backup jobs with runs for Full 
> and Incremental, and another set for verify jobs with a run for level=Data and 
> possibly additional runs for other verify levels. Considering the number of 
> schedules I have defined, this would be a lot.
> 
> Do you see a more elegant solution?
> 
> I guess it wouldn’t work if I added a run for level=Data to all of my 
> schedules, as Bacula wouldn’t know how to match the runs with the correct 
> level to the fitting job type? Or would the Director know that runs with 
> level=Data only apply to jobs with type=Verify, and levels Full and 
> Incremental apply to backup, migrate and copy jobs?
> 
> Cheers,
>  J/C
> 
> 
> > On 5. Sep 2022, at 17:47, Martin Simmons  wrote:
> > 
> > Maybe the output of
> > 
> > show job=job1-verify
> > 
> > will show where the level is being set to Incremental?  It might be the
> > Schedule resource for example.
> > 
> > __Martin
> > 
> > 
> >>>>>> On Sat, 3 Sep 2022 22:13:16 +0200, Justin Case said:
> >> 
> >> Hi
> >> 
> >> I am confused why I am getting this error:
> >> 
> >> 
> >> 03-Sep 22:00 bacula-dir JobId 2481: Fatal error: verify.c:73 Unimplemented 
> >> Verify level 73(I)
> >> 03-Sep 22:00 bacula-dir JobId 2481: Error: Bacula bacula-dir 11.0.6 
> >> (10Mar22):
> >> 
> >> —snip—
> >> 
> >> Verify Level:   Incremental
> >> Verify JobId:   0
> >> Files Examined: 0
> >> Non-fatal FD errors:1
> >> FD termination status:  
> >> Termination:*** Verify Error ***
> >> 
> >> The Verify Level shown above is definitely wrong, here is the definition 
> >> (and it sets Data as Level!)(I removed the schedule resource name):
> >> 
> >> Job {
> >>   Name = "job1-verify"
> >>   Level = "Data"
> >>   Pool = "pool1"
> >>   FullBackupPool = "pool1"
> >>   IncrementalBackupPool = "pool2"
> >>   Fileset = "users"
> >>   VerifyJob = "job1"
> >>   JobDefs = "jdef1-verify"
> >>   Enabled = yes
> >>   AllowIncompleteJobs = no
> >>   AllowDuplicateJobs = no
> >> }
> >> JobDefs {
> >>   Name = "jdef1-verify"
> >>   Type = "Verify"
> >>   Level = "Data"
> >>   Messages = "Standard"
> >>   Storage = "storage1"
> >>   Pool = "Default"
> >>   Client = "client-fd"
> >>   Fileset = "EmptyFileset"
> >>   WriteBootstrap = "/bootstrap/%c_%n.bsr"
> >>   Priority = 40
> >> }
> >> Any ideas?
> >> 
> >> Thanks for your time,
> >> J/C
> >> 
> >> 
> >> 
> >> 
> >> 
> > 
> 
> 




Re: [Bacula-users] Volume purged if no files on it, regardless of retention period.

2022-09-05 Thread Martin Simmons
Yes, that seems possible.  Bacula will find no jobs associated with the volume
so it will look like a pruned volume.

__Martin


> On Mon, 5 Sep 2022 10:49:56 +, Antonino Balsamo said:
> 
> Hello,
> 
> my scenario is one job per volume, with no recycling, a 90-day retention 
> period, and the usual full/diff/incr pools.
> 
> It happens from time to time (if there are no files in the job due to no 
> filesystem change) that Bacula purges a volume within the retention 
> period.
> 
> Is that possible, and is it due to the fact that there are no files in the job?
> 
> thanks
> 
> Ant
> 
> 
> 
> 




  1   2   3   4   5   6   7   8   9   10   >