[ceph-users] Re: About placement group scrubbing state

2024-06-03 Thread tranphong079
I resolved this problem; the issue stemmed from scrubbing taking too long to complete across the 270 OSDs, so the scrubbing backlog kept growing over time.

Changing osd_scrub_min_interval and osd_scrub_max_interval to 7 days and 14 days, respectively, resolved my problem.
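
For anyone hitting the same issue, the runtime change can be applied roughly like this (assuming min = 7 days and max = 14 days; the values are in seconds and apply to all OSDs):

    ceph config set osd osd_scrub_min_interval 604800     # 7 days
    ceph config set osd osd_scrub_max_interval 1209600    # 14 days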


[ceph-users] About the number of OSD nodes that can fail with erasure code 3+2

2023-11-27 Thread tranphong079
Hi Groups,

I recently set up a Ceph cluster with 10 nodes and 144 OSDs, and I use it for S3 with an erasure-coded pool (EC 3+2).

I have a question: how many OSD nodes can fail with erasure code 3+2 while the cluster keeps working normally (read and write)? And would a different erasure code profile, such as EC 7+3 or 8+2, be a better choice?

My understanding is that erasure coding only guarantees no data loss; it does not guarantee that the cluster keeps operating normally without blocking I/O when OSD nodes go down. Is that right?
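
For reference, the profile and pool were created roughly along these lines (the profile and pool names plus the PG count are illustrative; crush-failure-domain=host is what spreads the chunks across nodes rather than individual OSDs):

    ceph osd erasure-code-profile set ec32 k=3 m=2 crush-failure-domain=host
    ceph osd pool create ecpool 128 128 erasure ec32
    ceph osd pool get ecpool min_size    # EC pools default to min_size = k+1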

Thanks to the community.