Hello,

when I run my borgbackup over a CephFS volume (10 subvolumes, about 1.5 TB),
I see a big increase in OSD space usage: 2 or 3 OSDs go nearfull, then full,
then get marked out, and finally the cluster ends up in an error state.

Any tips to prevent this?
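
For context, this is roughly how I watch OSD usage and the full thresholds
while the backup runs (standard ceph CLI as far as I know, nothing exotic):

  # per-OSD utilisation, laid out along the CRUSH tree
  ceph osd df tree

  # current nearfull / backfillfull / full thresholds
  ceph osd dump | grep ratio

  # pool-level view (STORED vs USED vs MAX AVAIL)
  ceph df detail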

My cluster runs Ceph v15 (Octopus) with:

9 nodes:

each node runs 2x6 TB HDDs and 2x600 GB SSDs
the CephFS has its data on the HDDs and its metadata on the SSDs
the MDS cache memory limit is 32 GB

128 PGs for both data and metadata (this was set by the PG autoscaler)
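
For reference, this is how I read those settings back (standard commands, as
far as I know; the cache limit was set via mds_cache_memory_limit here):

  # MDS cache memory limit (should report ~32 GiB on my setup)
  ceph config get mds mds_cache_memory_limit

  # current pg_num per pool and what the autoscaler wants
  ceph osd pool autoscale-status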

Perhaps I should pin the pg_num for each CephFS pool and prevent the
autoscaler from changing them, as sketched below.
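
Something like this is what I have in mind (the data pool name cephfs-data is
just what I call it here, and the PG targets are only examples):

  # stop the autoscaler from touching the CephFS pools
  ceph osd pool set cephfs-data pg_autoscale_mode off
  ceph osd pool set cephfs-metadata pg_autoscale_mode off

  # then pin the PG count manually, e.g.
  ceph osd pool set cephfs-data pg_num 256
  ceph osd pool set cephfs-metadata pg_num 1024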

What do you think?

Thank you for your help and advice.

UPDATE: I increased the PG count to 256 for data and 1024 for metadata.

Here is the ceph df output, 30 minutes into the backup:

POOL                  ID  STORED   OBJECTS  USED     %USED  MAX AVAIL
cephfs-metadata       12  183 GiB  514.68k  550 GiB   7.16    2.3 TiB

Before the backup, STORED for this pool was 20 GiB.
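
I'm keeping an eye on it with something like this while borg walks the tree
(nothing fancy; the grep pattern just matches my pool name):

  # metadata pool growth every 30s
  watch -n 30 "ceph df detail | grep cephfs-metadata"

  # MDS and pool overview
  ceph fs status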

oau
