Hi Tobias,

April 18, 2024 at 8:08 PM, "Tobias Langner" <tlangner+c...@bitvalve.org> wrote:

> We operate a tiny ceph cluster (v16.2.7) across three machines, each
> running two OSDs and one of each mds, mgr, and mon. The cluster serves
> one main erasure-coded (2+1) storage pool and a few other

I'd assume (without seeing the pool config) that the EC 2+1 setup is what's putting the PGs inactive, since for EC you need n-2 for redundancy and n-1 for availability.
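A quick way to check would be to look at the pool's min_size and the EC profile behind it, for example (pool and profile names below are placeholders):

  ceph osd pool ls detail
  ceph osd pool get <ec-pool-name> min_size
  ceph osd erasure-code-profile get <profile-name>

With the default min_size of k+1, a 2+1 pool already marks PGs inactive as soon as a single chunk is unavailable.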

The output got a bit mangled. Could you please post it to a pastebin instead?

Could you also post the crush rule and pool settings, so we can better understand the data distribution? And what do the logs on one of the affected OSDs show?
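For reference, output along these lines should cover the crush and pool side (rule/pool names are placeholders):

  ceph osd crush rule dump <rule-name>
  ceph osd pool get <ec-pool-name> all
  ceph health detail
  ceph pg dump_stuck inactive

plus a snippet from one affected OSD's log, e.g. /var/log/ceph/ceph-osd.<id>.log or journalctl -u ceph-osd@<id>, depending on how the cluster was deployed.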

Cheers,
Alwin
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
