There ya go.

You have 4 hosts, one of which appears to be down and to have a single OSD too small to be useful.  Whatever cephgw03 is, it looks like a mistake.  OSDs much smaller than, say, 1TB often aren’t very useful.
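
If you want to double-check which OSDs live where and how big they are, ceph osd df tree shows size and utilization per OSD, grouped by host.  And if that tiny OSD really is a mistake, something along these lines removes it (the OSD id is a placeholder — substitute the real one, and stop the daemon first):

  ceph osd df tree
  ceph osd out <osd-id>
  ceph osd purge <osd-id> --yes-i-really-mean-it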

Your pools appear to be replicated, size=3.

So each of your cephosd* hosts stores one replica of each RADOS object.
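
You can confirm that with:

  ceph osd pool ls detail

which prints something like "replicated size 3 min_size 2" for each pool.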

You added the 10TB spinners to only two of your hosts, which means that they’re 
only being used as though they were 4TB OSDs.  That’s part of what’s going on.
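
To illustrate with made-up numbers (I’m guessing at your exact drive mix): with size=3 and exactly three OSD hosts, CRUSH has to put one replica on each host, so the pools are full once the smallest host is full:

  cephosd01: 4TB + 10TB = 14TB
  cephosd02: 4TB        =  4TB   <-- caps the pools
  cephosd03: 4TB + 10TB = 14TB

  usable per replica ~ min(14, 4, 14) = 4TB

The extra 10TB on the other two hosts sits mostly idle until cephosd02 catches up.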

You want to add a 10TB spinner to cephosd02.  That will help your situation 
significantly.
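
If you deployed with cephadm, that’s one command once the drive is installed (the device path is a placeholder):

  ceph orch daemon add osd cephosd02:/dev/sdX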

After that, consider adding a cephosd04 host.  Having at least one more failure 
domain than replicas lets you better use uneven host capacities.
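
Again assuming cephadm, adding that host is along the lines of (hostname and IP are placeholders):

  ceph orch host add cephosd04 10.0.0.14

after which your OSD service spec — or ceph orch apply osd --all-available-devices, if that’s what you use — will pick up its drives.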




> On Feb 24, 2024, at 10:06 AM, nguyenvand...@baoviet.com.vn wrote:
> 
> Hi Mr Anthony,
> 
> pls check the output 
> 
> https://anotepad.com/notes/s7nykdmc