On 20/05/2024 21:25, Heitor Faria wrote:
Hello Marco,

> Anyway, I've hit another problem. It seems that creating the spool file takes an
> insane amount of time: the backup sources are complex directories with millions of
> files. The filesystem is ZFS.

Is the ZFS pool local? Does it have compression or dedup enabled? I wouldn't use either option for data spooling. You also have to consider the number of disks: https://arstechnica.com/gadgets/2020/05/zfs-versus-raid-eight-ironwolf-disks-two-filesystems-one-winner/
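To check, something like this shows the current settings (the dataset name "tank/spool" is just a placeholder for your own pool/dataset):

```shell
# Show whether compression and dedup are active on the spool dataset
# ("tank/spool" is a placeholder -- substitute your actual pool/dataset name)
zfs get compression,dedup tank/spool

# For a dedicated spool dataset, turning both off avoids the CPU and
# (for dedup especially) RAM overhead during spooling
zfs set compression=off tank/spool
zfs set dedup=off tank/spool
```

Dedup in particular needs a lot of RAM for its table, which is the last thing you want competing with spooling I/O.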

You can test disk read and write throughput with hdparm and dd/iostat: https://www.bacula.lat/benchmarking-disks-reading-and-writing-capacity/?lang=en
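A rough sequential test looks like this (the path and device names are placeholders; adjust them to your spool filesystem):

```shell
# Sequential write test: write 1 GiB to the spool filesystem,
# bypassing the page cache so the numbers reflect the disks
# ("/mnt/spool/testfile" is a placeholder path)
dd if=/dev/zero of=/mnt/spool/testfile bs=1M count=1024 oflag=direct

# Sequential read test on the same file
dd if=/mnt/spool/testfile of=/dev/null bs=1M iflag=direct

# Buffered read speed of the underlying device ("/dev/sda" is a placeholder)
hdparm -t /dev/sda

# Watch per-device utilization every 5 seconds while a spooling job runs
iostat -xm 5

rm /mnt/spool/testfile
```

If iostat shows the spool disks near 100% utilization during spooling, the disks are the bottleneck; if not, look at the network or the client.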

And the network with iperf: https://www.bacula.lat/testing-bacula-machines-network-capacity-iperf/?lang=en
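A minimal two-machine test (hostnames are placeholders):

```shell
# On the Storage Daemon host: start an iperf3 server
iperf3 -s

# On the client host: run a 30-second throughput test against it
# ("sd-host" is a placeholder for the storage server's address)
iperf3 -c sd-host -t 30

# Same test in the reverse direction (server sends, client receives)
iperf3 -c sd-host -t 30 -R
```

If iperf shows line rate but backups are slow, the network isn't your problem.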

I also wouldn't use Bacula's software compression for tapes; modern tape drives do hardware compression, and doing it again in software just burns CPU on the client.

> How can I improve the spooling performance? Which factors matter most?

Probably a faster network and better/more disks, such as enterprise-grade NVMe/SSDs.
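And make sure spooling actually lands on those fast disks. A sketch of the relevant resources (paths and sizes are examples, not recommendations):

```
# bacula-sd.conf -- Device resource (excerpt)
Device {
  Name = LTO-Drive
  Spool Directory = /nvme/bacula-spool   # example path: put the spool on the fast disks
  Maximum Spool Size = 500G              # example size: cap per-device spool usage
}

# bacula-dir.conf -- Job resource (excerpt)
Job {
  Name = BigBackup
  Spool Data = yes                       # spool to disk before writing to tape
  Spool Attributes = yes                 # spool catalog attributes too
}
```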

And a sensible amount of RAM: millions of files on ZFS shouldn't be a problem, unless you're doing it on a system with 32 GB of RAM or the like.

Oh, and is it Solaris, BSD, or Linux?

        Cheers,
                Gary    B-)


_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
