[ceph-users] Re: Default erasure code profile not working for 3 node cluster?

2022-07-25 Thread Mark S. Holliman
a documentation bug with the hope that they clarify things (I know of at least one other admin who hit the same issue I'm seeing, so I'm not the only one...). Cheers, Mark

[ceph-users] Default erasure code profile not working for 3 node cluster?

2022-07-25 Thread Mark S. Holliman
Dear All, I've recently set up a 3-node Ceph Quincy (17.2) cluster to serve a pair of CephFS mounts for a Slurm cluster. Each Ceph node has 6 x SSD and 6 x HDD, and I've set up the pools and CRUSH rules to create separate CephFS filesystems using the different disk classes. I used the default
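
For context on this thread: as far as I can tell, the stock erasure-code profile on recent releases is k=2, m=2 with the CRUSH failure domain defaulting to host, which needs four hosts, so its PGs can never go active on a 3-node cluster. A rough sketch of a profile that fits three hosts follows; the profile and pool names are made up for illustration, and k=2/m=1 is only one possible choice:

    # inspect what the default profile actually contains
    ceph osd erasure-code-profile get default

    # define a profile that only needs three hosts and is pinned to the HDD class
    ceph osd erasure-code-profile set ec-3host-hdd \
        k=2 m=1 crush-failure-domain=host crush-device-class=hdd

    # create an EC data pool from that profile and allow overwrites (needed for CephFS data)
    ceph osd pool create cephfs_data_hdd erasure ec-3host-hdd
    ceph osd pool set cephfs_data_hdd allow_ec_overwrites true

Note that k=2, m=1 only tolerates a single host failure; whether that trade-off is acceptable is a separate question from the one asked here.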

[ceph-users] Re: How can I recover PGs in state 'unknown', where OSD location seems to be lost?

2020-03-25 Thread Mark S. Holliman
So I've managed to use ceph-objectstore-tool to locate the PGs in 'unknown' state on the OSDs, but how do I tell the rest of the system where to find them? Is there a command for setting the OSDs associated with a PG? Or, less ideally, is there a table somewhere I can hack to do this by
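
For anyone landing on this thread later: once ceph-objectstore-tool can see a PG on an OSD, one way to make the cluster notice it again is to export the PG from the OSD that holds it and import it into an OSD that CRUSH currently maps the PG to, then let peering take over. A rough sketch, with made-up OSD IDs and PG ID, run while both OSDs are stopped:

    # stop the OSDs before touching their object stores
    systemctl stop ceph-osd@12 ceph-osd@20

    # confirm the PG really lives on the source OSD
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 --op list-pgs

    # export the PG from the OSD that still has it...
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
        --pgid 7.1a --op export --file /tmp/pg-7.1a.export

    # ...and import it into an OSD in the PG's current up/acting set
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-20 \
        --op import --file /tmp/pg-7.1a.export

    systemctl start ceph-osd@12 ceph-osd@20

This is a last-resort recovery path and worth rehearsing on an unimportant PG first; it does not fix whatever made the PG mappings disappear in the first place.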

[ceph-users] How can I recover PGs in state 'unknown', where OSD location seems to be lost?

2020-03-23 Thread Mark S. Holliman
Hi all, I have a large distributed Ceph cluster that recently broke, with all PGs housed at a single site getting marked as 'unknown' after a run of the Ceph Ansible playbook (which was being used to expand the cluster at a third site). Is there a way to recover the location of PGs in this
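
The usual first step with PGs stuck in 'unknown' is to confirm whether the OSDs that should hold them are up and reporting, since 'unknown' means no OSD has told the monitors about the PG (common after a CRUSH map or OSD change such as an expansion playbook). A rough sketch of the checks, with a made-up PG ID:

    ceph health detail                      # lists the stuck PGs and why
    ceph pg dump pgs_brief | grep unknown   # which PGs are unknown
    ceph osd tree                           # are the OSDs at the affected site up and in?
    ceph pg map 7.1a                        # where CRUSH currently maps that PG
    ceph pg 7.1a query                      # may hang or error while the PG is unknown

If the OSDs at the affected site are up but the PGs stay unknown, the CRUSH map (buckets, rules, device classes) is worth diffing against a pre-playbook backup before reaching for ceph-objectstore-tool.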