[ceph-users] Re: Restore OSD disks damaged by deployment misconfiguration

2021-10-07 Thread Phil Merricks
Thanks for the reply Sebastian. Sadly I haven't had any luck so far restoring the OSD drive's superblocks. Any other advice from this group would be welcome before I erase and start again. On Mon, 27 Sept 2021 at 01:38, Sebastian Wagner wrote: > Hi Phil, > > > Am 27.09.21 um 10:06

[ceph-users] Restore OSD disks damaged by deployment misconfiguration

2021-09-27 Thread Phil Merricks
Hey folks, A recovery scenario I'm looking at right now is this: 1: In a clean 3-node Ceph cluster (pacific, deployed with cephadm), the OS Disk is lost from all nodes 2: Trying to be helpful, a self-healing deployment system reinstalls the OS on each node, and rebuilds the ceph services 3:

[ceph-users] Docs on Containerized Mon Maintenance

2021-06-15 Thread Phil Merricks
Hey folks, I'm working through some basic ops drills, and noticed what I think is an inconsistency in the Cephadm Docs. Some Googling appears to show this is a known thing, but I didn't find a clear direction on cooking up a solution yet. On a cluster with 5 mons, 2 were abruptly removed when

[ceph-users] Re: Mon crash when client mounts CephFS

2021-06-15 Thread Phil Merricks
files being off, its easiest to > stop the working node, copy them over , set the user id/group to ceph and > start things up. > > Rob > > -Original Message- > From: Phil Merricks > Sent: Tuesday, June 8, 2021 3:18 PM > To: ceph-users > Subject: [ceph-users

[ceph-users] Mon crash when client mounts CephFS

2021-06-08 Thread Phil Merricks
Hey folks, I have deployed a 3 node dev cluster using cephadm. Deployment went smoothly and all seems well. If I try to mount a CephFS from a client node, 2/3 mons crash however. I've begun picking through the logs to see what I can see, but so far other than seeing the crash in the log itself,

[ceph-users] Re: (Ceph Octopus) Repairing a neglected Ceph cluster - Degraded Data Reduncancy, all PGs degraded, undersized, not scrubbed in time

2020-11-16 Thread Phil Merricks
Thanks for all the replies folks. I think it's testament to the versatility of Ceph that there are some differences of opinion and experience here. With regards to the purpose of this cluster, it is providing distributed storage for stateful workloads of containers. The data produced is

[ceph-users] Re: (Ceph Octopus) Repairing a neglected Ceph cluster - Degraded Data Reduncancy, all PGs degraded, undersized, not scrubbed in time

2020-11-12 Thread Phil Merricks
d possibly the EC crush rule setup? Best regards Phil Merricks On Wed., Nov. 11, 2020, 1:30 a.m. Robert Sander, < r.san...@heinlein-support.de> wrote: > Am 07.11.20 um 01:14 schrieb seffyr...@gmail.com: > > I've inherited a Ceph Octopus cluster that seems like it needs urgent