[Bacula-users] LTO-8 Slow Writes (ITDT Tested) Bad SAS Cable or?

2023-04-21 Thread Drew Von Spreecken

  
  
Hello,

I'm having a hard time tracking down an issue with a new IBM LTO-8
drive/tape.
I'm hoping someone here may have run into this already.

I've used Bacula for around 9 years, most recently with an IBM LTO-6
HH drive without any real issues. I knew I was never able to reach
native speed, but I got close, around 140 MB/s with incompressible
data. Since that was close enough, I never really bothered to
troubleshoot further.

Recently I've moved to a new LTO-8 drive and I'm finding that I again
can't get close to native speed.
I currently max out at 289 MB/s (exactly) with incompressible data
and ~415 MB/s with zeros/highly compressible data.
With the st driver, the block size is set to variable, and changing
this makes very little difference (see below for clarification).

I've tried just about everything I can think of short of swapping
the SAS cable (SFF-8088 to SFF-8644).
I've tried Debian 11 with the built-in st driver on a 5.19.x kernel,
then switched to CentOS 8 with the built-in st driver on a 5.18.x
kernel, and finally installed IBM's lin_tape/lin_taped kernel
module. Zero change in speed.

To further troubleshoot, and to ensure Bacula wasn't the bottleneck, I
tested with IBM's Tape Diagnostic Tool (ITDT). The numbers are nearly
identical to what the Bacula client daemon reports. ITDT can test
multiple block sizes, and while there is a small difference between a
1 MB block and 256 KB, I don't believe block size is the issue here.

I have also tried two different HBAs (both in HBA/IT mode), one a
dedicated Dell 12 Gb/s card (LSI 3008 chip) and the other an Adaptec
ASR8885. No other data flows through the controllers. The firmware is
up to date on both, and the drive firmware is the latest too.

As for the SAS cable itself, I'm reluctant to replace it because the
one I'm using (actually a 2-pack) was purchased in 2016 from Amazon,
and the replacements look to be the exact same thing. While I could
purchase one through ATTO/Dell/etc., my thinking is that since these
have an SFF-8644 end, they should be SAS 2.1/3.0 compliant and
support at least 6 Gb/s per lane.

Has anyone found their SAS cable to not be compliant or have similar
issues where a cable swap worked?

Or maybe I'm missing something really simple? 
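
One way to test the cable hypothesis without buying anything: SAS HBAs
and targets keep per-PHY error counters, and a marginal cable usually
shows up as climbing invalid-DWORD or disparity-error counts. A sketch
using sg3_utils (the /dev/sg4 path is a placeholder for the tape
drive's generic SCSI device):

```shell
# find the sg device that belongs to the tape drive
lsscsi -g

# SAS Protocol-Specific Port log page (0x18): per-PHY error counters
sg_logs --page=0x18 /dev/sg4

# Watch the Invalid DWORD / Running Disparity Error / Loss of DWORD
# sync counts while writing; counts that keep climbing point at the
# cable or connectors rather than the drive or HBA.
```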


Thanks!

-Drew


  


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Slow Spooling / Despooling

2023-04-21 Thread Udo Kaune

On 21.04.23 at 16:59, Dr. Thorsten Brandau wrote:



PS: Despooling seems unusually slow to me.
PPS: Please do not top-post.

I have noticed that both spooling and despooling are very slow. I have 
no idea why; the disk easily does 300 MB/s and there is nothing else 
going on on that RAID.


I have no idea where to even look. When I start other processes in 
parallel, they can use the full bandwidth. iotop goes up to 300 MB/s 
from time to time, then drops back to zero.


On another server, where I go directly from disk to tape, it is faster (LTO-6).



Is your tape block size set to zero?

mt -f /dev/nst0 status | grep size
mt -f /dev/nst0 defblksize 0
mt -f /dev/nst0 setblk 0

Grab an empty tape and run some tests:

Nothing will be faster than this:

# ddrescue -s 128G --force /dev/zero /dev/null
GNU ddrescue 1.21
Press Ctrl-C to interrupt
 ipos:  127999 MB, non-trimmed:    0 B,  current rate: 7029 MB/s
 opos:  127999 MB, non-scraped:    0 B,  average rate: 11636 MB/s
non-tried:    0 B, errsize:    0 B,  run time: 11s
  rescued:  128000 MB,  errors:    0,  remaining time: n/a
percent rescued: 100.00%  time since last successful read:  0s
Finished

How fast can we write to the disk (ever)?

# ddrescue -D -s 128G --force /dev/zero test.bin
GNU ddrescue 1.21
Press Ctrl-C to interrupt
 ipos:  127999 MB, non-trimmed:    0 B,  current rate: 228 MB/s
 opos:  127999 MB, non-scraped:    0 B,  average rate: 727 MB/s
non-tried:    0 B, errsize:    0 B,  run time:  2m 56s
  rescued:  128000 MB,  errors:    0,  remaining time: n/a
percent rescued: 100.00%  time since last successful read:  0s
Finished

How fast can we read from the disk (ever)?

# ddrescue -d --force test.bin /dev/null
GNU ddrescue 1.21
Press Ctrl-C to interrupt
 ipos:  134217 MB, non-trimmed:    0 B,  current rate:    941 MB/s
 opos:  134217 MB, non-scraped:    0 B,  average rate:    986 MB/s
non-tried:    0 B, errsize:    0 B,  run time:  2m 16s
  rescued:  134217 MB,  errors:    0,  remaining time: n/a
percent rescued: 100.00%  time since last successful read:  0s
Finished

Let's generate a random binary file, which is essentially incompressible:

# ddrescue -s 128G --force /dev/urandom test.bin
GNU ddrescue 1.21
Press Ctrl-C to interrupt
 ipos:  127999 MB, non-trimmed:    0 B,  current rate: 43843 kB/s
 opos:  127999 MB, non-scraped:    0 B,  average rate: 171 MB/s
non-tried:    0 B, errsize:    0 B,  run time: 12m 27s
  rescued:  128000 MB,  errors:    0,  remaining time: n/a
percent rescued: 100.00%  time since last successful read:  0s
Finished

# ddrescue -d --force test.bin /dev/nst0
GNU ddrescue 1.21
Press Ctrl-C to interrupt
 ipos:  127999 MB, non-trimmed:    0 B,  current rate: 40435 kB/s
 opos:  127999 MB, non-scraped:    0 B,  average rate: 98497 kB/s
non-tried:    0 B, errsize:    0 B,  run time: 21m 36s
  rescued:  128000 MB,  errors:    0,  remaining time: n/a
percent rescued: 100.00%  time since last successful read:  0s
Finished

These are from our Quantum Superloader 3 LTO-7 definitions. See if they 
make any difference for you. Despooling sustained ~290 MB/s from an 
SSD RAID 1.


bacula-sd.conf:

Device {

  ...
  MaximumSpoolSize = 256G

 # 300 MB/sec = EOF filemark every two minutes
  MaximumFileSize = 36G

  MaximumBlockSize = 1048576
  ...

}
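
The MaximumFileSize comment above encodes a simple relationship; an
illustrative calculation (using decimal units, as Bacula's rate
reporting does):

```python
# Filemark interval implied by MaximumFileSize at a given write rate:
# the SD writes an EOF filemark each time MaximumFileSize bytes have
# been written, so interval = size / rate.
max_file_size = 36 * 10**9   # MaximumFileSize = 36G, in bytes
rate = 300 * 10**6           # target sustained rate, 300 MB/s

interval = max_file_size / rate
print(f"filemark every {interval:.0f} s")  # 120 s = every two minutes
```

If your drive despools slower, a smaller MaximumFileSize keeps the
filemark interval the same; too-frequent filemarks cost a little
throughput, too-rare ones make restarts after errors slower.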


Re: [Bacula-users] Bacula Differential and Incremental Backup not working

2023-04-21 Thread Dr. Thorsten Brandau


Chris Wilkinson wrote on 21.04.23 at 17:00:
One thing I've always found useful is to install Webmin and the Bacula 
module. It gives a quick overview of the config in GUI form and helps 
to spot config errors. One caveat: do not save any configs from it, 
as it doesn't respect the time units that are in the pool configs for 
retention times etc.




Unfortunately I am running SuSE Tumbleweed on that server, and it has not 
worked well with Webmin for many years... Back to the command line for me.


Re: [Bacula-users] Bacula Differential and Incremental Backup not working

2023-04-21 Thread Chris Wilkinson
One thing I've always found useful is to install Webmin and the Bacula module. 
It gives a quick overview of the config in GUI form and helps to spot config 
errors. One caveat: do not save any configs from it, as it doesn't respect the 
time units that are in the pool configs for retention times etc.

Webmin is handy for all sorts of tasks too.

Best
-Chris-




> On 21 Apr 2023, at 15:52, Bill Arlofski via Bacula-users 
>  wrote:
> 
> On 4/21/23 04:39, Dr. Thorsten Brandau wrote:
>> Hi
>> 
>> yes, I commented the incremental out because it also makes a full backup 
>> and therefore I only try to do a differential each
>> weekend. I want to have incrementals daily, but it does not work (it always 
>> does a full).
>> 
>> The last log is:
>> 
>> 07-Apr 23:05 -dir JobId 136: Start Backup JobId 136, 
>> Job=FileServer_Full.2023-04-07_23.05.00_04
> 
> Hello Dr. Thorsten,
> 
> Please show us the Job and Fileset configurations.
> 
> Then, please run:
> 
> * run job=FileServer_Full level=Incremental
> 
> * run job=FileServer_Full level=Differential
> 
> and show us those complete Job logs too - including the Summary information 
> at the end.
> 
> 
> Some things to note:
> 
> - If there is no Full job in the Catalog, then Bacula can't run an 
> Incremental, nor a Differential. These would be
> automatically upgraded to a Full level - you will be notified in the job log 
> in the very first line.
> 
> - If you edit a Fileset, Bacula will notice this and will automatically 
> upgrade the next Inc or Diff to a Full - you will be
> notified in the job log in the very first line.
> 
> - To prevent the second case, you can add `IgnoreFileSetChanges = yes` to the 
> Fileset - but this has its own ramifications
> and is not recommended to use unless you are fully aware - See the docs about 
> this feature for more information.
> 
> 
> Best regards,
> Bill
> 
> --
> Bill Arlofski
> w...@protonmail.com
> 
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users



Re: [Bacula-users] Bacula Differential and Incremental Backup not working

2023-04-21 Thread Dr. Thorsten Brandau




PS: Despooling seems unusually slow to me.
PPS: Please do not top-post.

I have noticed that both spooling and despooling are very slow. I have 
no idea why; the disk easily does 300 MB/s and there is nothing else 
going on on that RAID.


I have no idea where to even look. When I start other processes in 
parallel, they can use the full bandwidth. iotop goes up to 300 MB/s 
from time to time, then drops back to zero.


On another server, where I go directly from disk to tape, it is faster (LTO-6).



Re: [Bacula-users] Bacula Differential and Incremental Backup not working

2023-04-21 Thread Bill Arlofski via Bacula-users

On 4/21/23 04:39, Dr. Thorsten Brandau wrote:

Hi

yes, I commented the incremental out because it also makes a full backup, and 
therefore I only try to do a differential each
weekend. I want to have incrementals daily, but it does not work (it always does 
a full).

The last log is:

07-Apr 23:05 -dir JobId 136: Start Backup JobId 136, 
Job=FileServer_Full.2023-04-07_23.05.00_04


Hello Dr. Thorsten,

Please show us the Job and Fileset configurations.

Then, please run:

* run job=FileServer_Full level=Incremental

* run job=FileServer_Full level=Differential

and show us those complete Job logs too - including the Summary information at 
the end.


Some things to note:

- If there is no Full job in the Catalog, then Bacula can't run an Incremental, 
nor a Differential. These would be
automatically upgraded to a Full level - you will be notified in the job log in 
the very first line.

- If you edit a Fileset, Bacula will notice this and will automatically 
upgrade the next Inc or Diff to a Full - you will be
notified in the job log in the very first line.

- To prevent the second case, you can add `IgnoreFileSetChanges = yes` to the 
Fileset - but this has its own ramifications
and is not recommended to use unless you are fully aware - See the docs about 
this feature for more information.
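
For reference, a minimal sketch of where that directive goes (the
FileSet contents here are placeholders, not taken from the original
poster's configuration):

```conf
FileSet {
  Name = "Full Set"
  # Editing Include/Exclude below no longer forces the next
  # Inc/Diff to be upgraded to a Full - use with care.
  IgnoreFileSetChanges = yes
  Include {
    Options { signature = MD5 }
    File = /data              # placeholder path
  }
}
```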


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Bacula Differential and Incremental Backup not working

2023-04-21 Thread Dr. Thorsten Brandau


Udo Kaune wrote on 21.04.23 at 16:24:
How does the log of one of these Differential (and then allegedly 
unnecessary Full) backups start? Are there any hints, e.g. "No Full 
Backup found!"?


Please always include the list in your postings for other readers to 
learn from the issue.


On 21.04.23 at 15:16, Dr. Thorsten Brandau wrote:
>
> As seen below:
>
> Here is the full log (it is WAY too much data for a differential 
backup):

>
> ->->->
>
> 7-Apr 23:05 -dir JobId 136: Start Backup JobId 136, 
Job=FileServer_Full.2023-04-07_23.05.00_04

> 07-Apr 23:05 -dir JobId 136: Using Device "LTO9-1" to write.
> 07-Apr 23:05 -sd JobId 136: Error: 07-Apr 23:05 -sd JobId 136: 
Volume "20L9" previously written, moving to end of data.
> 07-Apr 23:06 -sd JobId 136: Ready to append to end of Volume 
"20L9" at file=2789.

> 07-Apr 23:06 -sd JobId 136: Spooling data ...
> 08-Apr 02:37 -sd JobId 136: User specified Job spool size reached: 
JobSpoolSize=3,000,000,026,157 MaxJobSpoolSize=3,000,000,000,000
> 08-Apr 02:37 -sd JobId 136: Writing spooled data to Volume. 
Despooling 3,000,000,026,157 bytes ...
> 08-Apr 08:14 -sd JobId 136: Despooling elapsed time = 05:36:38, 
Transfer rate = 148.5 M Bytes/second

> 08-Apr 08:14 -sd JobId 136: Spooling data again ...
> 08-Apr 11:40 -sd JobId 136: User specified Job spool size reached: 
JobSpoolSize=3,000,000,008,520 MaxJobSpoolSize=3,000,000,000,000
> 08-Apr 11:40 -sd JobId 136: Writing spooled data to Volume. 
Despooling 3,000,000,008,520 bytes ...
> 08-Apr 17:59 -sd JobId 136: Despooling elapsed time = 06:18:52, 
Transfer rate = 131.9 M Bytes/second

> 08-Apr 17:59 -sd JobId 136: Spooling data again ...
> 08-Apr 22:57 -sd JobId 136: User specified Job spool size reached: 
JobSpoolSize=3,000,000,059,190 MaxJobSpoolSize=3,000,000,000,000
> 08-Apr 22:57 -sd JobId 136: Writing spooled data to Volume. 
Despooling 3,000,000,059,190 bytes ...
> 09-Apr 05:47 -sd JobId 136: Despooling elapsed time = 06:49:38, 
Transfer rate = 122.0 M Bytes/second

> 09-Apr 05:47 -sd JobId 136: Spooling data again ...
> 09-Apr 10:35 -sd JobId 136: Error: Error writing header to spool 
file. Disk probably full. Attempting recovery. Wanted to write=64512 
got=692
> 09-Apr 10:35 -sd JobId 136: Writing spooled data to Volume. 
Despooling 2,999,394,019,676 bytes ...
> 09-Apr 17:35 -sd JobId 136: Despooling elapsed time = 06:59:31, 
Transfer rate = 119.1 M Bytes/second
> 09-Apr 20:28 -sd JobId 136: Error: Error writing header to spool 
file. Disk probably full. Attempting recovery. Wanted to write=64512 
got=3793
> 09-Apr 20:28 -sd JobId 136: Writing spooled data to Volume. 
Despooling 2,999,392,366,895 bytes ...
> 10-Apr 03:33 -sd JobId 136: Despooling elapsed time = 07:04:44, 
Transfer rate = 117.6 M Bytes/second
> 10-Apr 06:22 -sd JobId 136: Committing spooled data to Volume 
"20L9". Despooling 1,510,003,913,646 bytes ...
> 10-Apr 09:38 -sd JobId 136: Despooling elapsed time = 03:16:01, 
Transfer rate = 128.3 M Bytes/second
> 10-Apr 09:38 -sd JobId 136: Elapsed time=58:31:43, Transfer 
rate=78.27 M Bytes/second
> 10-Apr 09:38 -sd JobId 136: Sending spooled attrs to the Director. 
Despooling 1,885,573,459 bytes ...

> 10-Apr 09:43 -dir JobId 136: Bacula -dir 11.0.6 (10Mar22):
>   Build OS:   x86_64-suse-linux-gnu openSUSE Tumbleweed
>   JobId:  136
>   Job: FileServer_Full.2023-04-07_23.05.00_04
>   Backup Level:   Full
>   Client: "-fd" 11.0.6 (10Mar22) 
x86_64-suse-linux-gnu,openSUSE,Tumbleweed

>   FileSet:    "Full Set" 2023-03-18 23:05:00
>   Pool:   "Tape" (From Job resource)
>   Catalog:    "MyCatalog" (From Client resource)
>   Storage:    "AutoChangerLTO" (From Job resource)
>   Scheduled time: 07-Apr-2023 23:05:00
>   Start time: 07-Apr-2023 23:05:03
>   End time:   10-Apr-2023 09:43:52
>   Elapsed time:   2 days 10 hours 38 mins 49 secs
>   Priority:   10
>   FD Files Written:   6,600,638
>   SD Files Written:   6,600,638
>   FD Bytes Written:   16,491,080,239,221 (16.49 TB)
>   SD Bytes Written:   16,492,258,152,160 (16.49 TB)
>   Rate:   78109.0 KB/s
>  Software Compression:   None
>   Comm Line Compression:  64.5% 2.8:1
>   Snapshot/VSS:   no
>   Encryption: no
>   Accurate:   no
>   Volume name(s): 20L9
>   Volume Session Id:  2
>   Volume Session Time:    1680860753
>   Last Volume Bytes:  19,288,951,428,096 (19.28 TB)
>   Non-fatal FD errors:    0
>   SD Errors:  3
>   FD termination status:  OK
>   SD termination status:  OK
>   Termination:    Backup OK -- with warnings
>
> 10-Apr 09:43 -dir JobId 136: Begin pruning Jobs older than 6 months .
> 10-Apr 09:43 -dir JobId 136: No Jobs found to prune.
> 10-Apr 09:43 -dir JobId 136: Begin pruning Files.
> 10-Apr 09:43 -dir JobId 136: No Files found to prune.
> 10-Apr 

Re: [Bacula-users] Bacula Differential and Incremental Backup not working

2023-04-21 Thread Udo Kaune
How does the log of one of these Differential (and then allegedly 
unnecessary Full) backups start? Are there any hints, e.g. "No Full 
Backup found!"?


Please always include the list in your postings for other readers to 
learn from the issue.


On 21.04.23 at 15:16, Dr. Thorsten Brandau wrote:
>
> As seen below:
>
> Here is the full log (it is WAY too much data for a differential 
backup):

>
> ->->->
>
> 7-Apr 23:05 -dir JobId 136: Start Backup JobId 136, 
Job=FileServer_Full.2023-04-07_23.05.00_04

> 07-Apr 23:05 -dir JobId 136: Using Device "LTO9-1" to write.
> 07-Apr 23:05 -sd JobId 136: Error: 07-Apr 23:05 -sd JobId 136: Volume 
"20L9" previously written, moving to end of data.
> 07-Apr 23:06 -sd JobId 136: Ready to append to end of Volume 
"20L9" at file=2789.

> 07-Apr 23:06 -sd JobId 136: Spooling data ...
> 08-Apr 02:37 -sd JobId 136: User specified Job spool size reached: 
JobSpoolSize=3,000,000,026,157 MaxJobSpoolSize=3,000,000,000,000
> 08-Apr 02:37 -sd JobId 136: Writing spooled data to Volume. 
Despooling 3,000,000,026,157 bytes ...
> 08-Apr 08:14 -sd JobId 136: Despooling elapsed time = 05:36:38, 
Transfer rate = 148.5 M Bytes/second

> 08-Apr 08:14 -sd JobId 136: Spooling data again ...
> 08-Apr 11:40 -sd JobId 136: User specified Job spool size reached: 
JobSpoolSize=3,000,000,008,520 MaxJobSpoolSize=3,000,000,000,000
> 08-Apr 11:40 -sd JobId 136: Writing spooled data to Volume. 
Despooling 3,000,000,008,520 bytes ...
> 08-Apr 17:59 -sd JobId 136: Despooling elapsed time = 06:18:52, 
Transfer rate = 131.9 M Bytes/second

> 08-Apr 17:59 -sd JobId 136: Spooling data again ...
> 08-Apr 22:57 -sd JobId 136: User specified Job spool size reached: 
JobSpoolSize=3,000,000,059,190 MaxJobSpoolSize=3,000,000,000,000
> 08-Apr 22:57 -sd JobId 136: Writing spooled data to Volume. 
Despooling 3,000,000,059,190 bytes ...
> 09-Apr 05:47 -sd JobId 136: Despooling elapsed time = 06:49:38, 
Transfer rate = 122.0 M Bytes/second

> 09-Apr 05:47 -sd JobId 136: Spooling data again ...
> 09-Apr 10:35 -sd JobId 136: Error: Error writing header to spool 
file. Disk probably full. Attempting recovery. Wanted to write=64512 got=692
> 09-Apr 10:35 -sd JobId 136: Writing spooled data to Volume. 
Despooling 2,999,394,019,676 bytes ...
> 09-Apr 17:35 -sd JobId 136: Despooling elapsed time = 06:59:31, 
Transfer rate = 119.1 M Bytes/second
> 09-Apr 20:28 -sd JobId 136: Error: Error writing header to spool 
file. Disk probably full. Attempting recovery. Wanted to write=64512 
got=3793
> 09-Apr 20:28 -sd JobId 136: Writing spooled data to Volume. 
Despooling 2,999,392,366,895 bytes ...
> 10-Apr 03:33 -sd JobId 136: Despooling elapsed time = 07:04:44, 
Transfer rate = 117.6 M Bytes/second
> 10-Apr 06:22 -sd JobId 136: Committing spooled data to Volume 
"20L9". Despooling 1,510,003,913,646 bytes ...
> 10-Apr 09:38 -sd JobId 136: Despooling elapsed time = 03:16:01, 
Transfer rate = 128.3 M Bytes/second
> 10-Apr 09:38 -sd JobId 136: Elapsed time=58:31:43, Transfer 
rate=78.27 M Bytes/second
> 10-Apr 09:38 -sd JobId 136: Sending spooled attrs to the Director. 
Despooling 1,885,573,459 bytes ...

> 10-Apr 09:43 -dir JobId 136: Bacula -dir 11.0.6 (10Mar22):
>   Build OS:   x86_64-suse-linux-gnu openSUSE Tumbleweed
>   JobId:  136
>   Job: FileServer_Full.2023-04-07_23.05.00_04
>   Backup Level:   Full
>   Client: "-fd" 11.0.6 (10Mar22) 
x86_64-suse-linux-gnu,openSUSE,Tumbleweed

>   FileSet:    "Full Set" 2023-03-18 23:05:00
>   Pool:   "Tape" (From Job resource)
>   Catalog:    "MyCatalog" (From Client resource)
>   Storage:    "AutoChangerLTO" (From Job resource)
>   Scheduled time: 07-Apr-2023 23:05:00
>   Start time: 07-Apr-2023 23:05:03
>   End time:   10-Apr-2023 09:43:52
>   Elapsed time:   2 days 10 hours 38 mins 49 secs
>   Priority:   10
>   FD Files Written:   6,600,638
>   SD Files Written:   6,600,638
>   FD Bytes Written:   16,491,080,239,221 (16.49 TB)
>   SD Bytes Written:   16,492,258,152,160 (16.49 TB)
>   Rate:   78109.0 KB/s
>  Software Compression:   None
>   Comm Line Compression:  64.5% 2.8:1
>   Snapshot/VSS:   no
>   Encryption: no
>   Accurate:   no
>   Volume name(s): 20L9
>   Volume Session Id:  2
>   Volume Session Time:    1680860753
>   Last Volume Bytes:  19,288,951,428,096 (19.28 TB)
>   Non-fatal FD errors:    0
>   SD Errors:  3
>   FD termination status:  OK
>   SD termination status:  OK
>   Termination:    Backup OK -- with warnings
>
> 10-Apr 09:43 -dir JobId 136: Begin pruning Jobs older than 6 months .
> 10-Apr 09:43 -dir JobId 136: No Jobs found to prune.
> 10-Apr 09:43 -dir JobId 136: Begin pruning Files.
> 10-Apr 09:43 -dir JobId 136: No Files found to prune.
> 10-Apr 09:43 -dir JobId 136: End auto prune.
>
> 

Re: [Bacula-users] Bacula Differential and Incremental Backup not working

2023-04-21 Thread Udo Kaune

???

On 21.04.23 at 12:39, Dr. Thorsten Brandau wrote:

09-Apr 10:35 -sd JobId 136: Error: Error writing header to spool file. Disk 
probably full. Attempting recovery. Wanted to write=64512 got=692


Re: [Bacula-users] Bacula Differential and Incremental Backup not working

2023-04-21 Thread Dr. Thorsten Brandau

Hi

yes, I commented the incremental out because it also makes a full 
backup, and therefore I only try to do a differential each weekend. I want 
to have incrementals daily, but it does not work (it always does a full).


The last log is:

07-Apr 23:05 -dir JobId 136: Start Backup JobId 136, 
Job=FileServer_Full.2023-04-07_23.05.00_04
07-Apr 23:05 -dir JobId 136: Using Device "LTO9-1" to write.
07-Apr 23:05 -sd JobId 136: Error: 07-Apr 23:05 -sd JobId 136: Volume 
"20L9" previously written, moving to end of data.
07-Apr 23:06 -sd JobId 136: Ready to append to end of Volume "20L9" at 
file=2789.
07-Apr 23:06 -sd JobId 136: Spooling data ...
08-Apr 02:37 -sd JobId 136: User specified Job spool size reached: 
JobSpoolSize=3,000,000,026,157 MaxJobSpoolSize=3,000,000,000,000
08-Apr 02:37 -sd JobId 136: Writing spooled data to Volume. Despooling 
3,000,000,026,157 bytes ...
08-Apr 08:14 -sd JobId 136: Despooling elapsed time = 05:36:38, Transfer rate = 
148.5 M Bytes/second
08-Apr 08:14 -sd JobId 136: Spooling data again ...
08-Apr 11:40 -sd JobId 136: User specified Job spool size reached: 
JobSpoolSize=3,000,000,008,520 MaxJobSpoolSize=3,000,000,000,000
08-Apr 11:40 -sd JobId 136: Writing spooled data to Volume. Despooling 
3,000,000,008,520 bytes ...
08-Apr 17:59 -sd JobId 136: Despooling elapsed time = 06:18:52, Transfer rate = 
131.9 M Bytes/second
08-Apr 17:59 -sd JobId 136: Spooling data again ...
08-Apr 22:57 -sd JobId 136: User specified Job spool size reached: 
JobSpoolSize=3,000,000,059,190 MaxJobSpoolSize=3,000,000,000,000
08-Apr 22:57 -sd JobId 136: Writing spooled data to Volume. Despooling 
3,000,000,059,190 bytes ...
09-Apr 05:47 -sd JobId 136: Despooling elapsed time = 06:49:38, Transfer rate = 
122.0 M Bytes/second
09-Apr 05:47 -sd JobId 136: Spooling data again ...
09-Apr 10:35 -sd JobId 136: Error: Error writing header to spool file. Disk 
probably full. Attempting recovery. Wanted to write=64512 got=692
09-Apr 10:35 -sd JobId 136: Writing spooled data to Volume. Despooling 
2,999,394,019,676 bytes ...
09-Apr 17:35 -sd JobId 136: Despooling elapsed time = 06:59:31, Transfer rate = 
119.1 M Bytes/second
09-Apr 20:28 -sd JobId 136: Error: Error writing header to spool file. Disk 
probably full. Attempting recovery. Wanted to write=64512 got=3793
09-Apr 20:28 -sd JobId 136: Writing spooled data to Volume. Despooling 
2,999,392,366,895 bytes ...
10-Apr 03:33 -sd JobId 136: Despooling elapsed time = 07:04:44, Transfer rate = 
117.6 M Bytes/second
10-Apr 06:22 -sd JobId 136: Committing spooled data to Volume "20L9". 
Despooling 1,510,003,913,646 bytes ...
10-Apr 09:38 -sd JobId 136: Despooling elapsed time = 03:16:01, Transfer rate = 
128.3 M Bytes/second
10-Apr 09:38 -sd JobId 136: Elapsed time=58:31:43, Transfer rate=78.27 M 
Bytes/second
10-Apr 09:38 -sd JobId 136: Sending spooled attrs to the Director. Despooling 
1,885,573,459 bytes ...
10-Apr 09:43 -dir JobId 136: Bacula -dir 11.0.6 (10Mar22):
  Build OS:   x86_64-suse-linux-gnu openSUSE Tumbleweed
  JobId:  136
  Job:FileServer_Full.2023-04-07_23.05.00_04
  Backup Level:   Full
  Client: "-fd" 11.0.6 (10Mar22) 
x86_64-suse-linux-gnu,openSUSE,Tumbleweed
  FileSet:"Full Set" 2023-03-18 23:05:00
  Pool:   "Tape" (From Job resource)
  Catalog:"MyCatalog" (From Client resource)
  Storage:"AutoChangerLTO" (From Job resource)
  Scheduled time: 07-Apr-2023 23:05:00
  Start time: 07-Apr-2023 23:05:03
  End time:   10-Apr-2023 09:43:52
  Elapsed time:   2 days 10 hours 38 mins 49 secs
  Priority:   10
  FD Files Written:   6,600,638
  SD Files Written:   6,600,638
  FD Bytes Written:   16,491,080,239,221 (16.49 TB)
  SD Bytes Written:   16,492,258,152,160 (16.49 TB)
  Rate:   78109.0 KB/s
  Software Compression:   None
  Comm Line Compression:  64.5% 2.8:1
  Snapshot/VSS:   no
  Encryption: no
  Accurate:   no
  Volume name(s): 20L9
  Volume Session Id:  2
  Volume Session Time:1680860753
  Last Volume Bytes:  19,288,951,428,096 (19.28 TB)
  Non-fatal FD errors:0
  SD Errors:  3
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK -- with warnings

10-Apr 09:43 -dir JobId 136: Begin pruning Jobs older than 6 months .
10-Apr 09:43 -dir JobId 136: No Jobs found to prune.
10-Apr 09:43 -dir JobId 136: Begin pruning Files.
10-Apr 09:43 -dir JobId 136: No Files found to prune.
10-Apr 09:43 -dir JobId 136: End auto prune.
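
A quick sanity check on the summary above (an illustrative calculation
using figures taken from the job log; Bacula's "M Bytes" are decimal
megabytes): because spooling and despooling alternate rather than
overlap, the end-to-end rate lands well below the ~120-150 MB/s seen
during despooling alone.

```python
# Reproduce the SD summary line:
# "Elapsed time=58:31:43, Transfer rate=78.27 M Bytes/second"
sd_bytes = 16_492_258_152_160        # SD Bytes Written, from the report
elapsed = 58 * 3600 + 31 * 60 + 43   # 58:31:43 in seconds

rate_mb = sd_bytes / elapsed / 1e6
print(f"{rate_mb:.2f} MB/s")         # ~78.27 MB/s, matching the log
```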

JobDefs {
  Name = "DefaultJob"
  Type = Backup
  Level = Incremental
  Client = -fd
  FileSet = "Full Set"
  Schedule = "WeeklyCycle"
  Storage = AutoChangerLTO
  Messages = Standard
  Pool = Tape
  SpoolAttributes = yes
  Priority = 10
  Write Bootstrap = "/mnt/data5/bacula/%c.bsr"
}

Job {
  Name = 

Re: [Bacula-users] Bacula Differential and Incremental Backup not working

2023-04-21 Thread Dr. Thorsten Brandau

I have already 4 full backups on tape...

Lionel PLASSE wrote on 21.04.23 at 11:43:

Don't forget that you must have performed at least one Full job before doing any 
differential or incremental job.

If Bacula doesn't find any full backup, it starts a full job instead of the 
incremental or differential.

I ran into this problem before.

-


From: Dr. Thorsten Brandau 
Sent: Friday, 21 April 2023 11:09

To: Justin Case
Cc: bacula-users
Subject: Re: [Bacula-users] Bacula Differential and Incremental Backup not 
working

Hi J/C
Thank you.
The configuration is:
Schedule {
   Name = "WeeklyCycle"
   Run = Full 1st fri at 23:05
   Run = Differential 2nd-5th fri at 23:05
#  Run = Incremental sat-thu at 23:05
}
So how should it be configured if that does not work?
Mit freundlichen Grüßen/Best regards/よろしくお願いします
BRACE GmbH

Dr. Thorsten Brandau



Justin Case wrote on 21.04.23 at 10:53:
Hi Thorsten,

the mechanisms for full and incremental backups work for me. It is more likely 
a matter of your settings.

The schedules also influence whether a backup is done as a full or as an 
incremental, so I would look there to see if this is configured correctly.

Hope it helps,
j/c


On 21. Apr 2023, at 09:10, Dr. Thorsten 
Brandau  wrote:

Hi
I am running Bacula with a tape drive in an autoloader. As the full backup 
takes about 70 h (it is a larger filesystem to back up), I want to do 
differential and incremental backups. However, all Differential and Incremental 
jobs are performed as full backups.
I also run an rsync script with hardlinks, where differential backups are no 
problem.
It seems as if Bacula's file checking is not working correctly.
Any idea how to fine-tune this? Is it possible to understand WHY a file is 
considered new? Is there any way to control this?
We have several backup strategies, with rsync on different computers, but also 
symmetric syncing with remote hosts.
Would a blockwise differential backup be possible?
Or at least ignoring things like ACLs?
Thanks
Cheers
Thorsten
___
Bacula-users mailing list
mailto:Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users



Re: [Bacula-users] Q: Why is this job queued and not run concurrently although MaximumConcurrentJobs gt 1, AllowMixedPriority=yes and more drives available in autochanger?

2023-04-21 Thread Martin Simmons
All jobs must have an associated client with suitable MaximumConcurrentJobs,
even if they don't use a client.  It is a feature :-(

__Martin
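
So the practical fix is to raise the per-Client job limit. A minimal
sketch (the resource name and value are placeholders, not from the
original poster's config):

```conf
Client {
  Name = "machine1-fd"          # placeholder
  ...
  # Admin jobs count against this limit too, even though
  # they never contact the FD.
  MaximumConcurrentJobs = 5
}
```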


> On Thu, 20 Apr 2023 21:13:00 +0200, Justin Case said:
> 
> This seemed to say that the director's client limit MaximumConcurrentJobs = 1 
> was hit. When I changed it to MaximumConcurrentJobs = 5, I was able to run the 
> Admin job concurrently. 
> 
> I still don’t understand why this job would count against the client job 
> limit, as I thought that it is a job that does something on the SD, not on 
> the client / FD. Do you understand this?
> 
> > On 20. Apr 2023, at 20:55, Justin Case  wrote:
> > 
> > Hi Martin,
> > 
> >> On 20. Apr 2023, at 20:38, Martin Simmons  wrote:
> >> 
> >> What is the output of the "status dir" command when the Admin job is 
> >> waiting?
> >> 
> > it says for the Admin job: is waiting on max Client jobs
> > what does that mean?
> > 
> >> When you say "Both jobs have set AllowMixedPriority = yes." do you mean all
> >> jobs that are running at the time you want to Admin job to run?
> > 
> > For now I have  job running for testing, and the Admin job, and both have 
> > AllowMixedPriority = yes.
> > 
> >> 
> >>> On Thu, 20 Apr 2023 12:14:57 +0200, Justin Case said:
> >>> 
> >>> Greetings to all,
> >>> 
> >>> I have the simple Admin job "truncate-pools-all” (see further down) and I 
> >>> would like to be able to run it concurrently while some backup job 
> >>> “backup1" (see further down)  is running. Lets say backup jobs have 
> >>> Priority = 20.
> >>> The Runscript Console command has Priority = 10 and uses drive number 9, 
> >>> which is very likely not in use when the Admin job is started. The backup 
> >>> jobs usually use drive number 0. Both jobs have set AllowMixedPriority = 
> >>> yes.
> >>> While I can successfully run this command in bconsole concurrently when a 
> >>> backup job is already running, when starting the Admin job the Bacula 
> >>> queuing algorithm puts this Admin job in the queue and it is waiting 
> >>> until the currently running backup job has finished. My understanding was 
> >>> that this is normal behaviour when AllowMixedPriority = no (default). 
> >>> However, I have explicitly enabled AllowMixedPriority and still it does 
> >>> not work. The MaximumConcurrentJobs are 5 or 20 in different components, 
> >>> except for the SD file autochanger drives, there it is set to 1.
> >>> 
> >>> My first guess would be, that somehow the SD does not automagically make 
> >>> use of the available unoccupied drives of the autochanger (although the 
> >>> default behaviour should be AutoSelect = yes). So it tells the director 
> >>> that the drive is busy and then the director makes the Admin job wait.
> >>> But I could also be wrong, as I am not an expert on Bacula topics.
> >>> 
> >>> What would I need to change to get this to work as expected and described 
> >>> at the top of this mail?
> >>> 
> >>> Thanks for considering my question and have a good time,
> >>> J/C
> >>> 
> >>> 
> >>> Job {
> >>> Name = "truncate-pools-all"
> >>> Type = "Admin"
> >>> JobDefs = "default1"
> >>> Enabled = no
> >>> Runscript {
> >>> RunsOnClient = no
> >>> RunsWhen = "Before"
> >>> Console = "truncate volume allpools storage=unraid-tier1-storage drive=9"
> >>> }
> >>> Priority = 10
> >>> AllowDuplicateJobs = no
> >>> AllowMixedPriority = yes
> >>> }
> >>> 
> >>> JobDefs {
> >>> Name = "default1"
> >>> Type = "Backup"
> >>> Level = "Full"
> >>> Messages = "Standard"
> >>> Pool = "default1"
> >>> FullBackupPool = "default1"
> >>> IncrementalBackupPool = "default1"
> >>> Client = "machine1"
> >>> Fileset = "EmptyFileset"
> >>> MaxFullInterval = 2678400
> >>> SpoolAttributes = yes
> >>> Priority = 20
> >>> AllowIncompleteJobs = no
> >>> Accurate = yes
> >>> AllowDuplicateJobs = no
> >>> }
> >>> 
> >>> This is the backup job that is already running:
> >>> 
> >>> Job {
> >>> Name = "backup1"
> >>> Pool = "pool1"
> >>> FullBackupPool = "pool1"
> >>> IncrementalBackupPool = "pool1"
> >>> Fileset = "fs1"
> >>> Schedule = "schd1"
> >>> JobDefs = "default2"
> >>> Enabled = yes
> >>> AllowIncompleteJobs = no
> >>> AllowDuplicateJobs = no
> >>> AllowMixedPriority = yes
> >>> }
> >>> 
> >>> JobDefs {
> >>> Name = "default2"
> >>> Type = "Backup"
> >>> Level = "Full"
> >>> Messages = "Standard"
> >>> Pool = "default1"
> >>> Client = "machine1"
> >>> Fileset = "EmptyFileset"
> >>> Schedule = "sched2"
> >>> Priority = 20
> >>> Accurate = yes
> >>> }
> >>> 
> >>> 
> >>> 
> >>> ___
> >>> Bacula-users mailing list
> >>> Bacula-users@lists.sourceforge.net
> >>> https://lists.sourceforge.net/lists/listinfo/bacula-users
> >>> 
> >> 
> > 
> 
> 


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula Differential and Incremental Backup not working

2023-04-21 Thread Radosław Korzeniewski
Hi,

Fri, 21 Apr 2023 at 11:10 Dr. Thorsten Brandau 
wrote:

> Hi J/C
>
> Thank you.
>
> The configuration is:
>
> Schedule {
>   Name = "WeeklyCycle"
>   Run = Full 1st fri at 23:05
>   Run = Differential 2nd-5th fri at 23:05
> #  Run = Incremental sat-thu at 23:05
> }
>
> So how should it be configured if that does not work?
>
Please share your logs and job definition, so we can check why it is not
working in your setup.
There are a few situations where incremental or differential backups are
forced to be full.

By the way: you commented out the Incremental level in your schedule; are you
aware of this?
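
For reference, the same schedule with the Incremental line restored (levels and
times taken verbatim from the schedule quoted above) would look like this:

```
Schedule {
  Name = "WeeklyCycle"
  Run = Full 1st fri at 23:05
  Run = Differential 2nd-5th fri at 23:05
  Run = Incremental sat-thu at 23:05
}
```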

Radek
-- 
Radosław Korzeniewski
rados...@korzeniewski.net
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula Differential and Incremental Backup not working

2023-04-21 Thread Richard Laysell

Hello Thorsten,

On Fri, 21 Apr 2023 09:10:16 +0200
"Dr. Thorsten Brandau"  wrote:

> Hi
> 
> I am running Bacula with a tape drive on an Autoloader. As the full 
> Backup takes about 70 h (it is a larger filesystem to backup) I want
> to do differential and incremental backups. However, all Differential
> and Incremental are performed as full backups.
> 
> I am running also an Rsync script with hardlinks, where it is no
> problem to have differential backups.
> 
> It seems as the file checking of Bacula is not correct.
> 
> Any idea how to finetune? Is it possible to understand WHY a file is 
> considered new? Is there any way to control this?

If you are using rsync then the mtime will be set to that of the
original file but the ctime will be the time when rsync copied the file.
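
One quick way to see this effect (a sketch using plain GNU coreutils; `cp -p`
stands in here for rsync's timestamp-preserving copy, it is not rsync itself):

```shell
# A timestamp-preserving copy keeps the original mtime,
# but the destination's ctime is the time of the copy itself.
touch -d '2020-01-01 00:00:00' src.txt   # file with an old mtime
cp -p src.txt dst.txt                    # -p preserves mtime, like rsync -t/-a
mtime=$(stat -c %Y dst.txt)              # last data modification (epoch secs)
ctime=$(stat -c %Z dst.txt)              # last inode change (epoch secs)
echo "mtime=$mtime ctime=$ctime"
[ "$ctime" -gt "$mtime" ] && echo "ctime is newer than mtime"
```

With default settings Bacula sees that newer ctime and backs the file up again.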

Bacula normally looks at both mtime and ctime to determine if a file
has changed.  There is a parameter in Bacula to change this behaviour

- mtimeonly= If enabled, tells the Client that the selection of
- files during Incremental and Differential backups should be based only
- on the st_mtime value in the stat() packet. The default is no, which
- means that the selection of files to be backed up will be based on
- both the st_mtime and the st_ctime values. In general, it is not
- recommended to use this option.

Note the caveat at the end of the description.
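
A minimal sketch of where that directive would sit; the FileSet name and path
below are placeholders, not taken from this thread:

```
FileSet {
  Name = "example-fs"          # placeholder name
  Include {
    Options {
      signature = MD5
      mtimeonly = yes          # compare st_mtime only; see the caveat above
    }
    File = /data               # placeholder path
  }
}
```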
 
> We have several backup strategies, with Rsync on different computers, 
> but also symmetric synching with remote hosts.
> 
> Would a blockwise differential backup be possible?
> 
> Or at last ignoring things like ACL or so?
> 
> Thanks
> 
> Cheers
> 
> Thorsten

Regards,

Richard


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula Differential and Incremental Backup not working

2023-04-21 Thread Lionel PLASSE
Don't forget that at least one Full job must have completed before running any 
differential or incremental job.

If Bacula doesn't find any Full backup, it starts a Full job instead of the 
incremental or differential one.

I ran into this problem before.
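
If no Full exists yet, one can be started by hand from bconsole; a sketch (the
job name is a placeholder, substitute your own):

```
* run job=YourBackupJob level=Full yes
```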

-


From: Dr. Thorsten Brandau 
Sent: Friday, 21 April 2023 11:09
To: Justin Case 
Cc: bacula-users 
Subject: Re: [Bacula-users] Bacula Differential and Incremental Backup not 
working

Hi J/C
Thank you.
The configuration is:
Schedule {
  Name = "WeeklyCycle"
  Run = Full 1st fri at 23:05
  Run = Differential 2nd-5th fri at 23:05
#  Run = Incremental sat-thu at 23:05
}
So how should it be configured if that does not work?
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula Differential and Incremental Backup not working

2023-04-21 Thread Dr. Thorsten Brandau

Hi J/C

Thank you.

The configuration is:

Schedule {
  Name = "WeeklyCycle"
  Run = Full 1st fri at 23:05
  Run = Differential 2nd-5th fri at 23:05
#  Run = Incremental sat-thu at 23:05
}

So how should it be configured if that does not work?

Mit freundlichen Grüßen/Best regards/よろしくお願いします
BRACE GmbH

Dr. Thorsten Brandau






--
+ This Document is confidential + Dieses Dokument ist vertraulich +

->->-> Please register at https://www.brace.de to always get the latest news and 
best information! <-<-<-
* FOCUS Business Innovationschampion 2023, for development, prozesses, business 
model and organisation *
* Awarded as "Employer of the Future"
* WINNER OF THE EXPORT AWARD BAVARIA 2021 *
 Please visit us at those events:
* Food Ingredients Europe, Nov 28-30, 2023, Frankfurt, Germany *
* Industrial Convention on Microencapsulation, Oct. 9-12, 2023, Chicago, USA *
* Bearer of the "High Commendation" at the FI Innovation Award 2019 *
* Finalist of the FI Innovation Award 2019 - Instant Process *


BRACE GmbH
Dr. Thorsten Brandau (thorsten.bran...@brace.de)
President
Am Mittelberg 5
D-63791 Karlstein
Germany
Tel: +49 6188 991757
Fax: +49 6188 991759
https://www.brace.de

HRB5004 (Amtsgericht Aschaffenburg), VAT DE151299833, Managing Directors: Dr. 
Thorsten Brandau

IMPORTANT NOTICE:
This email may be confidential, may be legally privileged, and is for the 
intended recipient only.
Access, disclosure, copying, distribution, or reliance on any of it by anyone 
else is prohibited
and may be a criminal offence. Please delete if obtained in error and email 
confirmation to the sender.
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula Differential and Incremental Backup not working

2023-04-21 Thread Justin Case
Hi Thorsten,

the mechanism for full and incremental backups works for me, so it is more 
likely a matter of your settings.

The schedules also influence whether a backup is run as a full or as an 
incremental, so I would check there whether this is configured correctly.

Hope it helps,
j/c

> On 21. Apr 2023, at 09:10, Dr. Thorsten Brandau  
> wrote:
> 
> Hi
> 
> I am running Bacula with a tape drive on an Autoloader. As the full Backup 
> takes about 70 h (it is a larger filesystem to backup) I want to do 
> differential and incremental backups. However, all Differential and 
> Incremental are performed as full backups.
> 
> I am running also an Rsync script with hardlinks, where it is no problem to 
> have differential backups.
> 
> It seems as the file checking of Bacula is not correct.
> 
> Any idea how to finetune? Is it possible to understand WHY a file is 
> considered new? Is there any way to control this?
> 
> We have several backup strategies, with Rsync on different computers, but 
> also symmetric synching with remote hosts.
> 
> Would a blockwise differential backup be possible?
> 
> Or at last ignoring things like ACL or so?
> 
> Thanks
> 
> Cheers
> 
> Thorsten
> 
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Bacula Differential and Incremental Backup not working

2023-04-21 Thread Dr. Thorsten Brandau

Hi

I am running Bacula with a tape drive in an autoloader. As the full 
backup takes about 70 h (it is a large filesystem to back up), I want to 
do differential and incremental backups. However, all Differential and 
Incremental jobs are performed as full backups.

I am also running an rsync script with hardlinks, where differential 
backups are no problem.

It seems as if Bacula's file checking is not correct.

Any idea how to fine-tune this? Is it possible to understand WHY a file 
is considered new? Is there any way to control this?

We have several backup strategies, with rsync on different computers, 
but also symmetric syncing with remote hosts.

Would a blockwise differential backup be possible?

Or at least ignoring things like ACLs?

Thanks

Cheers

Thorsten
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users