Re: [bareos-users] losing LTO3 tape capacity with bareos. But full tape length can be written with dd or btape speed

2018-08-22 Thread Udo Lembke
Hi Tim,
if dd gets full speed where Bareos doesn't, what about an I/O limitation
on the source, i.e. the spool disk?
In your example, the first 50-GB spoolfile was transferred at 40MB/s,
the later ones at 35MB/s.
Were the spool disks being written to by other backup jobs during
the despooling?

What kind of spool disks are in use (RAID level, how many disks, what
kind of disks/SSDs)?

I once had high load on a backup server with a spool disk (for 3 LTO-4
drives) on RAID 10 over 6 SAS 15k disks.
With RAID 0 (for the spool disks only) the load was OK.
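One way to test this theory is to measure the spool array's raw sequential write rate outside Bareos and watch its utilisation while a job despools. A rough sketch (the paths are assumptions; point SPOOL at the real spool directory):

```shell
# Raw sequential write throughput of the spool disk, measured outside Bareos.
# SPOOL defaults to /tmp here for safety; set it to the real spool directory,
# e.g. SPOOL=/var/lib/bareos/spool, before running.
SPOOL="${SPOOL:-/tmp}"
dd if=/dev/zero of="$SPOOL/throughput-test" bs=1M count=64 conv=fdatasync
rm -f "$SPOOL/throughput-test"

# While a job despools, watch per-disk utilisation (iostat is in the sysstat
# package). A spool disk near 100 %util during despooling points at the
# spool array rather than the tape drive:
#   iostat -xm 5
```

dd prints the achieved MB/s on its last line; that can be compared directly with the despool rate reported in the job log.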

Udo

On 21.08.2018 15:46, Tim Banchi wrote:
> Hello,
>
> I'm using Bareos 17.2 with the following pool, and device configuration:
>
> Pool {
>   Name = tape_automated
>   Pool Type = Backup
>   Recycle = yes                    # Bareos can automatically recycle Volumes
>   AutoPrune = yes # Prune expired volumes
>   #Recycle Oldest Volume = yes
>   RecyclePool = Scratch
>   Maximum Volume Bytes = 0
>   Job Retention = 365 days
>   Volume Retention = 4 weeks
>   Volume Use Duration = 12 days
>   Cleaning Prefix = "CLN"
>   Catalog Files = yes
>   Storage = delaunay_HP_G2_Autochanger #virtualfull test
> #  Next Pool = tape_automated
> }
>
> Device {
>   Name = "Ultrium920"
>   Media Type = LTO
>   Archive Device = /dev/st1
>   Autochanger = yes
>   LabelMedia = no
>   AutomaticMount = yes
>   AlwaysOpen = yes
>   RemovableMedia = yes
>   Maximum Spool Size = 50G
>   Spool Directory = /var/lib/bareos/spool
>   Maximum Block Size = 2097152
> #  Maximum Block Size = 4194304
>   Maximum Network Buffer Size = 32768
>   Maximum File Size = 50G
>   Alert Command = "/bin/bash -c '/usr/sbin/smartctl -H -l error %c'"
> }
>
> I'm closely monitoring tape performance (write speed + capacity marked as 
> full by bareos), and all tapes are losing capacity, and write speed, in an 
> almost linear way.
>
> Over the period of roughly 8 months (I installed bareos early this year) and 
> 3 to 4 write cycles per tape (=every 2 months a tape is rewritten), I've lost 
> on average 25% of the initial capacity per tape. E.g. some tapes started at 
> ~400GB and are now at around 300GB; some started at 160GB and are now 
> somewhere around 120GB.
>
> Write speed is dropping too: the fastest tapes (most often also the tapes 
> with the highest capacity) started out at 50MB/s and are now around 42MB/s. 
> The worst tapes started out around 20MB/s and are now at 16MB/s.
>
> Because most of the tapes were second-hand (we got the tape drive, controller, 
> autoloader, and tapes as a gift, as our NGO doesn't have any money), I blamed 
> the bad state of the tapes (I verify volume to catalog after each job, so I 
> assume I'm still fine; also there are no tape alerts).
> But in the course of this year I also introduced some brand-new (never 
> written) tapes, and they are losing capacity and write speed to the same 
> extent.
>
> Hardware compression, (software) encryption, and software compression have 
> all been ON since the very beginning (so no configuration change). And I don't 
> think this can be relevant, because tapes are losing capacity over time whether 
> they started off at ~406GB (new) or 160GB (used, and in initially bad condition). 
> It also doesn't depend on what I back up (everything is encrypted anyway, so 
> compression shouldn't work at all on the drive's side).
>
> I don't know the logic by which Bareos recognises that a tape is full, so I 
> thought I'd try filling some tapes with zero/urandom to "realign" them. 
> I could fully write those tapes (=400GB), but Bareos continued to only 
> partly use them as before. I then tried to fill the tapes up with btape 
> speed, but the same problem persists when using the tape again in Bareos.
>
> Tape backups typically have job logs like:
> 
> *Joblog*
> Connecting to Director localhost:9101
> 1000 OK: pavlov-dir Version: 17.2.4 (21 Sep 2017)
> Enter a period to cancel a command.
> list joblog jobid=7665
> Automatically selected Catalog: MyCatalog
> Using Catalog "MyCatalog"
>  2018-08-18 06:37:51 pavlov-dir JobId 7665: Start Backup JobId 7665, 
> Job=edite_backup_automated.2018-08-18_05.30.00_34
>  2018-08-18 06:37:52 pavlov-dir JobId 7665: Using Device "Ultrium920" to 
> write.
>  2018-08-18 06:37:53 delaunay-sd JobId 7665: 3307 Issuing autochanger "unload 
> slot 2, drive 0" command.
>  2018-08-18 06:40:22 delaunay-sd JobId 7665: 3304 Issuing autochanger "load 
> slot 1, drive 0" command.
>  2018-08-18 06:41:06 delaunay-sd JobId 7665: 3305 Autochanger "load slot 1, 
> drive 0", status is OK.
>  2018-08-18 06:41:15 delaunay-sd JobId 7665: Volume "XM2705L3" previously 
> written, moving to end of data.
>  2018-08-18 06:41:33 delaunay-sd JobId 7665: Ready to append to end of Volume 
> "XM2705L3" at file=5.
>  2018-08-18 06:41:33 delaunay-sd JobId 7665: Spooling data ...
>  2018-08-18 08:37:57 delaunay-sd JobId 7665: User specified Device spool size 
> reached: DevSpoolSize=53,687,398,

Re: [bareos-users] Bareos Wild and Regex

2018-08-22 Thread Dakota Pilot
No luck.  If I add a separate Include with its own Options, Bareos still will 
not exclude the directory with a WildDir or RegExDir.  Given that bregex and 
bwild work just fine, I think something is broken somewhere in my config, but 
I have no idea where.  Bareos just won't exclude that directory.

-- 
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to bareos-users+unsubscr...@googlegroups.com.
To post to this group, send email to bareos-users@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [bareos-users] Bareos Wild and Regex

2018-08-22 Thread Dakota Pilot
Interesting.  I may split it into two Includes, with one having Exclude = yes 
in it.  I think it's Bareos acting wonky, since bregex and bwild do the 
selections perfectly no matter what depth the directory is at.  So something 
must be happening when it gets parsed.

I notice you put your Files = after the includes.  Any reason?  I assume the 
config engine doesn't care.



Re: [bareos-users] losing LTO3 tape capacity with bareos. But full tape length can be written with dd or btape speed

2018-08-22 Thread Tim Banchi
Hi Stefan,

I thought about disabling hardware compression, but to do that permanently I 
think I would have had to reinstall an OS that runs LTT (other ways via Linux 
didn't seem conclusive enough to me, or I didn't find that information).

I have also read on numerous occasions that LTO-2 and later drives handle 
hardware compression quite well (e.g. not inflating the data when compressing 
uncompressible input such as encrypted data).
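That matches what a quick experiment with gzip shows (gzip as a stand-in for the drive's compressor is only a rough analogy; LTO drives use ALDC-style compression, not DEFLATE):

```shell
# 1 MiB of zeros collapses to a few KiB, while 1 MiB of random data (a good
# proxy for encrypted data) stays at roughly 1 MiB. This is also why filling
# a tape from /dev/zero says little about its capacity for encrypted backups.
head -c 1048576 /dev/zero    | gzip -c | wc -c
head -c 1048576 /dev/urandom | gzip -c | wc -c
```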

As for software compression: we (have to) use a quite slow internal network, 
so software compression on the client seemed reasonable to me.





On Wednesday, August 22, 2018 at 12:22:20 PM UTC+2, Stefan Klatt wrote:
> Hi Tim,
>
> hmmm. "Hardware compression, (software) encryption, and software
> compression" - that's probably overkill. Could you disable hardware
> compression? Hardware compression can slow down the backup if the data
> is already encrypted (and compressed).
>
> Stefan

Re: [bareos-users] losing LTO3 tape capacity with bareos. But full tape length can be written with dd or btape speed

2018-08-22 Thread Tim Banchi
Hi Kai,

thanks, the HP LTT tools contain a diagnostic test (my drive is an HP Ultrium920 
SCSI LTO3 drive). There is also supposed to be a 5-second self-test on every 
power-up. So far this has always succeeded (I have never got an error message 
on the autoloader display or in the logs).

I remember that I ran this LTT diagnostic when configuring the drive in 
January. However, right now I no longer have access to a physical Windows 
server where I could quickly try that out.
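For what it's worth, some of the LTT checks have rough Linux-side equivalents in the mtx and sg3_utils packages; the device nodes below (/dev/sg1, /dev/nst0) are assumptions and need adjusting:

```shell
# Sketch only: requires an attached tape drive, so it is wrapped in a
# function and not run automatically.
check_drive() {
  tapeinfo -f /dev/sg1     # TapeAlert flags and compression status (mtx package)
  sg_logs --all /dev/sg1   # SCSI log pages, incl. read/write error counters (sg3_utils)
  mt -f /dev/nst0 status   # generic SCSI tape driver status
}
```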



On Wednesday, August 22, 2018 at 10:12:23 AM UTC+2, Kai Zimmer wrote:
> Hi Tim,
> 
> does the drive have a self test function in the firmware? If so, try it. 
> If not, maybe there is a test software from the vendor available?
> 
> Best regards,
> Kai
> 

Re: [bareos-users] losing LTO3 tape capacity with bareos. But full tape length can be written with dd or btape speed

2018-08-22 Thread Stefan Klatt
Hi Tim,

hmmm. "Hardware compression, (software) encryption, and software
compression" - that's probably overkill. Could you disable hardware
compression? Hardware compression can slow down the backup if the data
is already encrypted (and compressed).

Stefan
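On Linux, disabling drive-side compression can be sketched as below; this is hedged, since the `compression` subcommand comes from the mt-st package, the device node is an assumption, and some drives revert the setting on power-cycle:

```shell
# Sketch only: requires an attached tape drive, so it is wrapped in a
# function and not run automatically.
disable_hw_compression() {
  mt -f /dev/nst0 compression 0   # clear the drive's compression mode bit (mt-st)
  mt -f /dev/nst0 status          # verify the new drive state
}
# For a persistent setting, an /etc/stinit.def mode entry with compression=0
# (stinit, also from mt-st) is the usual route.
```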


Re: [bareos-users] losing LTO3 tape capacity with bareos. But full tape length can be written with dd or btape speed

2018-08-22 Thread Kai Zimmer

Hi Tim,

does the drive have a self-test function in the firmware? If so, try it. 
If not, maybe test software is available from the vendor?


Best regards,
Kai


Re: [bareos-users] losing LTO3 tape capacity with bareos. But full tape length can be written with dd or btape speed

2018-08-22 Thread Tim Banchi
Hi Stefan,

yes, I use newly purchased cleaning tapes when the autoloader/drive requests 
it. I got cleaning requests when getting I/O errors from really bad tapes 
(which I then dumped right away). Otherwise there have been no cleaning 
requests so far (which is normal in my experience), and the status of the 
drive is OK. I also regularly check logged warnings/errors, and there is 
nothing (except the aforementioned I/O errors).



On Tuesday, August 21, 2018 at 10:38:26 PM UTC+2, Stefan Klatt wrote:
> Hi Tim,
>
> only an idea... did you use cleaning tapes?
>
> Regards
>
> Stefan