> Hello
>
> As previously described here, we have a full-flash NVMe Ceph cluster (16.2.6) 
> with currently only the CephFS service configured.
[...]
> We noticed that the cephfs_metadata pool had only 16 PGs. We set 
> autoscale_mode to off and increased the number of PGs to 256, and with this
> change the number of SLOW messages has decreased drastically.
>
> Is there any mechanism to increase the number of PGs automatically in such a 
> situation? Or is this something to be done manually?
>

https://ceph.io/en/news/blog/2022/autoscaler_tuning/
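For reference, the approach that post describes boils down to a couple of pool settings. A sketch (assuming the pool name from your thread; the `bulk` flag requires 16.2.7 or later, so on 16.2.6 only the manual route applies):

```shell
# Preferred on newer releases: tell the autoscaler this pool is expected
# to be "bulk", so it starts with more PGs instead of scaling up later.
ceph osd pool set cephfs_metadata bulk true

# Manual alternative, as done in the thread: disable autoscaling for the
# pool and raise pg_num yourself.
ceph osd pool set cephfs_metadata pg_autoscale_mode off
ceph osd pool set cephfs_metadata pg_num 256

# Check what the autoscaler currently thinks of each pool.
ceph osd pool autoscale-status
```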


-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io