Re: [Bacula-users] Again LTO9 and performances...

2024-07-01 Thread Marco Gaiarin
Greetings! Josh Fisher via Bacula-users wrote... > Another way that might be better for your case is to leave > MaximumSpoolSize = 0 (unlimited) and specify a MaximumJobSpoolSize in > the Device resource instead. The difference is that when one job reaches > the
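The distinction being discussed here lives in the Storage Daemon's Device resource. A minimal sketch (the device name, paths, and sizes below are illustrative assumptions, not values from the thread):

```
# bacula-sd.conf -- sketch: unlimited total spool, per-job spool limit
Device {
  Name = "LTO9-Drive"              # hypothetical device name
  Media Type = LTO-9
  Archive Device = /dev/nst0
  Spool Directory = /var/spool/bacula
  Maximum Spool Size = 0           # unlimited total spool, as suggested above
  Maximum Job Spool Size = 200G    # a job despools at 200G; other jobs keep spooling
}
```

With MaximumSpoolSize unlimited, only the per-job limit (or a full spool disk) triggers despooling, so concurrent jobs are less likely to be paused while one job writes to tape.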

Re: [Bacula-users] Again LTO9 and performances...

2024-06-25 Thread Josh Fisher via Bacula-users
On 6/24/24 11:04, Marco Gaiarin wrote: Greetings! Josh Fisher via Bacula-users wrote... Except when the MaximumSpoolSize for the Device resource is reached or the Spool Directory becomes full. When there is no more storage space for data spool files, all jobs writing to that

Re: [Bacula-users] Again LTO9 and performances...

2024-06-24 Thread Marco Gaiarin
Greetings! Josh Fisher via Bacula-users wrote... > Except when the MaximumSpoolSize for the Device resource is reached or > the Spool Directory becomes full. When there is no more storage space > for data spool files, all jobs writing to that device are paused and the > spool

Re: [Bacula-users] Again LTO9 and performances...

2024-06-22 Thread Marco Gaiarin
Greetings! Bill Arlofski via Bacula-users wrote... >> But, now, a question: does this mean that data in the spool gets interleaved too? How is it interleaved? File by file? Block by block? What block size? > No. When you have jobs running, take a look into the SpoolDirectory. You will

Re: [Bacula-users] Again LTO9 and performances...

2024-06-21 Thread Josh Fisher via Bacula-users
On 6/20/24 18:58, Bill Arlofski via Bacula-users wrote: On 6/20/24 8:58 AM, Marco Gaiarin wrote: Once that is hit, the spool files are written to tape, during which active jobs have to wait because the spool is full. There's no way to 'violate' this behaviour, right?! A single SD process

Re: [Bacula-users] Again LTO9 and performances...

2024-06-20 Thread Bill Arlofski via Bacula-users
On 6/20/24 8:58 AM, Marco Gaiarin wrote: But, now, a question: does this mean that data in the spool gets interleaved too? How is it interleaved? File by file? Block by block? What block size? No. When you have jobs running, take a look into the SpoolDirectory. You will see a 'data' *.spool file and

Re: [Bacula-users] Again LTO9 and performances...

2024-06-20 Thread Marco Gaiarin
Greetings! Gary R. Schmidt wrote... >> jobs involved (in the same pool, I think) start to write the spool down to tape. > MaximumSpoolSize is the total space used in the spool area, by all jobs. After posting, I've looked more carefully at the log and understood that. Sorry for the

Re: [Bacula-users] Again LTO9 and performances...

2024-06-17 Thread Gary R. Schmidt
On 17/06/2024 17:45, Marco Gaiarin wrote: [SNIP] > > So, literally, if one of the jobs fills the 'MaximumSpoolSize' buffer, *ALL* jobs involved (in the same pool, I think) start to write the spool down to tape. MaximumSpoolSize is the total space used in the spool area, by all jobs. Once that is

Re: [Bacula-users] Again LTO9 and performances...

2024-06-17 Thread Marco Gaiarin
Greetings! Bill Arlofski via Bacula-users wrote... > With DataSpooling enabled in all jobs, the only "interleaving" that you will > have on your tapes is one big block of Job 1's > de-spooled data, then maybe another Job 1 block, or a Job 2 block, or a Job 3 > block, and so on,
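Enabling data spooling for a job, as discussed here, is a one-line Director directive. A minimal sketch (the job, client, fileset, storage, and pool names are hypothetical):

```
# bacula-dir.conf -- enable data spooling for a job (names are illustrative)
Job {
  Name = "backup-fileserver"       # hypothetical job name
  Type = Backup
  Client = fileserver-fd
  FileSet = "Full Set"
  Storage = LTO9-Library
  Pool = Full-Pool
  Spool Data = yes                 # spool to disk first, despool to tape in one block
  Spool Attributes = yes           # spool catalog attribute updates as well
}
```

With Spool Data = yes in every job writing to the tape device, each job's data lands on tape in large contiguous despooled chunks rather than being interleaved block by block.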

Re: [Bacula-users] Again LTO9 and performances...

2024-06-14 Thread Marco Gaiarin
Greetings! Bill Arlofski via Bacula-users wrote... > Hope this helps! Thanks to all for the hints and the explanations; Bacula is really a tough beast... there's always room for improvement! ;-)

Re: [Bacula-users] Again LTO9 and performances...

2024-06-13 Thread Josh Fisher via Bacula-users
On 6/13/24 08:13, Gary R. Schmidt wrote: On 13/06/2024 20:12, Stefan G. Weichinger wrote: Interested as well, I need to speed up my weekly/monthly FULL runs (with LTO6, though: way slower anyway). Shouldn't the file daemon do multiple jobs in parallel? To tape you can only write ONE

Re: [Bacula-users] Again LTO9 and performances...

2024-06-13 Thread Gary R. Schmidt
On 13/06/2024 20:12, Stefan G. Weichinger wrote: Interested as well, I need to speed up my weekly/monthly FULL runs (with LTO6, though: way slower anyway). Shouldn't the file daemon do multiple jobs in parallel? To tape you can only write ONE stream of data. To the spooling disk there could

Re: [Bacula-users] Again LTO9 and performances...

2024-06-13 Thread Stefan G. Weichinger
Interested as well, I need to speed up my weekly/monthly FULL runs (with LTO6, though: way slower anyway). Shouldn't the file daemon do multiple jobs in parallel? To tape you can only write ONE stream of data. To the spooling disk there could be more than one stream. Yes, that seems wrong:

Re: [Bacula-users] Again LTO9 and performances...

2024-06-11 Thread Bill Arlofski via Bacula-users
On 6/11/24 10:45 AM, Marco Gaiarin wrote: Sorry, I really don't understand and I need feedback... I've read many times that tapes are handled better as what they are, sequential media; so they need on storage: Maximum Concurrent Jobs = 1 Hello Marco, If you are using DataSpooling for all
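The combination implied here (several jobs spooling concurrently, but only one stream ever written to the drive) can be sketched in the SD Device resource. Names and values below are illustrative assumptions, not from the thread:

```
# bacula-sd.conf -- sketch: jobs spool in parallel, despooling is serialized
Device {
  Name = "LTO9-Drive"              # hypothetical device name
  Media Type = LTO-9
  Archive Device = /dev/nst0
  Spool Directory = /var/spool/bacula
  Maximum Concurrent Jobs = 4      # several jobs may spool at once; the SD still
                                   # despools to the tape one job at a time
}
```

With data spooling enabled, raising concurrency on the device does not interleave job data on tape; it only lets jobs fill the spool area in parallel while the drive stays a single sequential writer.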

Re: [Bacula-users] Again LTO9 and performances...

2024-06-11 Thread Marco Gaiarin
>> Not for a single job. When the storage daemon is writing a job's spooled >> data to tape, the client must wait. However, if multiple jobs are >> running in parallel, then the other jobs will continue to spool their >> data while one job is despooling to tape. > > I'm coming back to this. I've

Re: [Bacula-users] Again LTO9 and performances...

2024-06-11 Thread Gary R. Schmidt
On 11/06/2024 18:56, Stefan G. Weichinger wrote: On 06.06.24 at 15:57, Marco Gaiarin wrote: Greetings! Josh Fisher via Bacula-users wrote... Not for a single job. When the storage daemon is writing a job's spooled data to tape, the client must wait. However, if multiple jobs

Re: [Bacula-users] Again LTO9 and performances...

2024-06-11 Thread Stefan G. Weichinger
On 06.06.24 at 15:57, Marco Gaiarin wrote: Greetings! Josh Fisher via Bacula-users wrote... Not for a single job. When the storage daemon is writing a job's spooled data to tape, the client must wait. However, if multiple jobs are running in parallel, then the other jobs will

Re: [Bacula-users] Again LTO9 and performances...

2024-06-06 Thread Marco Gaiarin
Greetings! Josh Fisher via Bacula-users wrote... > Not for a single job. When the storage daemon is writing a job's spooled > data to tape, the client must wait. However, if multiple jobs are > running in parallel, then the other jobs will continue to spool their > data while

Re: [Bacula-users] Again LTO9 and performances...

2024-05-28 Thread Marco Gaiarin
Greetings! Josh Fisher via Bacula-users wrote... > Not for a single job. When the storage daemon is writing a job's spooled > data to tape, the client must wait. However, if multiple jobs are > running in parallel, then the other jobs will continue to spool their > data while

Re: [Bacula-users] Again LTO9 and performances...

2024-05-28 Thread Marco Gaiarin
Greetings! Gary R. Schmidt wrote... > And a sensible amount of RAM - millions of files on ZFS should not be a > problem - unless you're doing it on a system with 32G of RAM or the like. root@bpbkplom:~# free -h  total  used  free  shared

Re: [Bacula-users] Again LTO9 and performances...

2024-05-28 Thread Marco Gaiarin
Greetings! Heitor Faria wrote... > Is the ZFS local? Yep. > Does it have ZFS compression or dedup enabled? Damn. Dedup no, but compression IS enabled... right! I never thought about that... I've created a different mountpoint with compression disabled, I'll provide feedback.

Re: [Bacula-users] Again LTO9 and performances...

2024-05-28 Thread Marco Gaiarin
> Damn. Dedup no, but compression IS enabled... right! I never thought about > that... I've created a different mountpoint with compression disabled, I'll > provide feedback. OK, as suspected, disabling ZFS compression provides some performance improvement, but a small one, nothing dramatic.
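Creating a separate, uncompressed dataset for the spool area, as described here, can be sketched with standard ZFS commands (the pool name 'tank', the dataset name, and the mountpoint are assumptions, not taken from the thread):

```
# Dedicated spool dataset with compression and dedup off;
# 'tank' and the mountpoint are illustrative names.
zfs create -o compression=off -o dedup=off \
    -o mountpoint=/var/spool/bacula tank/bacula-spool

# Verify the properties actually took effect:
zfs get compression,dedup tank/bacula-spool
```

Compression costs CPU on the spool write path for data that is deleted right after despooling, which is why turning it off for the spool dataset can help, even if (as reported here) the gain may be modest.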

Re: [Bacula-users] Again LTO9 and performances...

2024-05-20 Thread Gary R. Schmidt
On 20/05/2024 21:25, Heitor Faria wrote: Hello Marco, > Anyway, I've hit another problem. It seems that creating the spool file takes an insane amount of time: the sources to back up are complex directory trees, with millions of files. The filesystem is ZFS. Is the ZFS local? Does it have ZFS compression or dedup

Re: [Bacula-users] Again LTO9 and performances...

2024-05-20 Thread Heitor Faria
Hello Marco, > Anyway, I've hit another problem. It seems that creating the spool file takes an insane amount of time: the sources to back up are complex directory trees, with millions of files. The filesystem is ZFS. Is the ZFS local? Does it have ZFS compression or dedup enabled? I wouldn't use those options for

Re: [Bacula-users] Again LTO9 and performances...

2024-05-18 Thread Josh Fisher via Bacula-users
On 5/17/24 06:29, Marco Gaiarin wrote: I'm still fiddling with LTO9 and backup performance; I've finally managed to test a shiny new server with an LTO9 tape drive (a library, actually, but...) and with 'btape test' I can reach 300 MB/s, which is pretty cool, even if the IBM specifications say that the tape

[Bacula-users] Again LTO9 and performances...

2024-05-17 Thread Marco Gaiarin
I'm still fiddling with LTO9 and backup performance; I've finally managed to test a shiny new server with an LTO9 tape drive (a library, actually, but...) and with 'btape test' I can reach 300 MB/s, which is pretty cool, even if the IBM specifications say that the tape could perform at 400 MB/s. Also, following
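The btape figures quoted in the thread come from Bacula's standard tape-exercise tool, run against the SD configuration and the drive. Roughly (the config path and device node below are common defaults, adjust to your setup):

```
# Run btape against the SD configuration and the tape device
# (paths are common defaults, not taken from the thread).
btape -c /etc/bacula/bacula-sd.conf /dev/nst0

# At the btape prompt:
#   test   -> runs the standard write/read and positioning tests
#   speed  -> measures raw write throughput, useful to compare against the
#             ~300 MB/s observed here and the ~400 MB/s in the LTO9 specs
```

Raw btape throughput sets the ceiling; the thread's spooling discussion is about keeping real backup jobs close to that ceiling by feeding the drive one large sequential stream at a time.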