You also might want to increase mon_max_pg_per_osd since you have a wide spread 
of OSD sizes.

The default is 250; set it to 1000.
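Assuming a release with the centralized config database (Mimic or later), this can be done at runtime with `ceph config`, no restart needed:

```shell
# Check the current value (the default is 250)
ceph config get mon mon_max_pg_per_osd

# Raise the per-OSD PG limit cluster-wide
ceph config set mon mon_max_pg_per_osd 1000
```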

> On Feb 24, 2024, at 10:30 AM, Anthony D'Atri <anthony.da...@gmail.com> wrote:
> 
> Add a 10 TB HDD to the third node as I suggested; that will help your cluster.
> 
> 
>> On Feb 24, 2024, at 10:29 AM, nguyenvand...@baoviet.com.vn wrote:
>> 
>> I will correct some small things:
>> 
>> we have 6 nodes: 3 OSD nodes and 3 gateway nodes (which run the RGW, MDS, 
>> and NFS services).
>> You are correct: 2 of the 3 OSD nodes have ONE NEW 10 TiB disk.
>> 
>> About your suggestion to add another OSD host: we will. But first we need 
>> to end this nightmare; my NFS folder, which holds 10 TiB of data, is down :(
>> 
>> My ratios:
>> ceph osd dump | grep ratio
>> full_ratio 0.95
>> backfillfull_ratio 0.92
>> nearfull_ratio 0.85
>> _______________________________________________
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
