Deb, Kees,

Yes, I'd done something similar: we have ZFS mounts for each user, and then 
globbed them together by first letter of the username.

Brian

Extract from disklist
The ZFS mounts for Samba shares were backed up per specific share. I'd written 
a script to find the shares that currently existed and update the disklist 
daily with the current list, automatically, so new shares could be created and 
not missed; we seldom took any offline. (See the sketch after the share list 
below.)
I did not bother with spindles for this.
Got great plots from amplot, which was critical in pinpointing bottlenecks.
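(For anyone unfamiliar: amplot is run against the amdump log files, e.g. 
"amplot amdump.1" from the Amanda log directory; the exact location depends 
on the logdir setting in amanda.conf.)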

# Extracted from the home directory list; new home directories are caught
# automatically by the glob.
finsen  /export/home-Y /export/home   {
        user-tar2
        include "./[y]*"
        }
finsen  /export/home-Z /export/home   {
        user-tar2
        include "./[z]*"
        }
finsen  /export2/home-AZ /export2/home   {
        user-tar2
        include "./[A-Z]*"
        }

List of Samba shares, updated by the daily script:
finsen /export2/samba/bdinst   zfs-snapshot2
finsen /export2/samba/bdlshare zfs-snapshot2
finsen /export2/samba/bladder  zfs-snapshot2
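
For what it's worth, a minimal sketch of the idea behind that daily script 
(the disklist path, the marker comments, and the rest here are illustrative 
assumptions, not my actual script):

#!/bin/sh
# Sketch only: regenerate the Samba entries in the disklist from the ZFS
# datasets currently mounted under /export2/samba.
DISKLIST=/etc/amanda/daily/disklist    # illustrative path
TMP="$DISKLIST.new"

# Keep everything except the previously generated Samba block.
sed '/^# BEGIN samba shares/,/^# END samba shares/d' "$DISKLIST" > "$TMP"

echo '# BEGIN samba shares (auto-generated)' >> "$TMP"
zfs list -H -o mountpoint | grep '^/export2/samba/' | sort |
while read -r mnt; do
        printf 'finsen %s zfs-snapshot2\n' "$mnt"
done >> "$TMP"
echo '# END samba shares' >> "$TMP"

mv "$TMP" "$DISKLIST"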

-----Original Message-----
From: owner-amanda-us...@amanda.org <owner-amanda-us...@amanda.org> On Behalf 
Of Debra S Baddorf
Sent: Tuesday, September 21, 2021 1:38 PM
To: Kees Meijs | Nefos <k...@nefos.nl>
Cc: Debra S Baddorf <badd...@fnal.gov>; amanda-users@amanda.org
Subject: Re: CPU eating tar process

Have you experimented with dividing those target disks into smaller pieces, 
using tar?  So that Amanda isn’t doing a level 0 on all the parts on the same 
day?

I’ve divided some disks as far as a*, b*, c*, …, z*, Other (to catch caps, 
numbers, or future additions).  I’ve found that each piece must have SOME 
content, or tar fails.  So Other always contains some small portion, and 
non-existent letters are skipped and are caught by Other if they’re created 
later.
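
In disklist terms, a catch-all piece might look something like this (purely 
illustrative, borrowing Brian's host and dumptype names from above; check how 
your Amanda version handles include and exclude in the same DLE):

finsen  /export/home-Other /export/home   {
        user-tar2
        include "./*"
        exclude "./[a-z]*"
        }

Everything whose name does not start with a lowercase letter then lands in 
Other, and keeping at least one small placeholder there guarantees it is 
never empty.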

It’s a pain for restoring a whole disk, but it helps backups.

Deb Baddorf
Fermilab

> On Sep 21, 2021, at 8:55 AM, Kees Meijs | Nefos <k...@nefos.nl> wrote:
>
> Hi list,
>
> We've got some backup targets with lots (and lots, and then some) of files. 
> There are so many of them that making backups is becoming a problem.
>
> During the backup process, tar(1) is eating up a CPU core. There's little to 
> no I/O wait to be seen. Very likely tar is single-threaded, so there's that. 
> The additional gzip(1) process is doing next to nothing.
>
> Any thoughts on speeding this up? Maybe an alternative to GNU tar, or...?
>
> Thanks all!
>
> Cheers,
> Kees
>
> --
> https://nefos.nl/contact
>
> Nefos IT bv
> Ambachtsweg 25 (industrienummer 4217)
> 5627 BZ Eindhoven
> Nederland
>
> KvK 66494931
>
> Available on Monday, Tuesday, Thursday, and Friday between 09:00 and 17:00.


