[ceph-users] Re: Ceph Filesystem recovery with intact pools

2020-08-30 Thread Eugen Block
Hi, how exactly does Ceph report that there's no CephFS? If your MONs were down and you recovered them, is at least one MGR also up and running? Can you share 'ceph -s' and 'ceph fs status'? Quoting cyclic3@gmail.com: Hi, I've had a complete monitor failure, which I have recovered from …
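
The status information being asked for comes from the standard Ceph CLI; a minimal sketch (all read-only, nothing here changes the cluster):

  ceph -s         # overall cluster state: mon quorum, mgr, osd and pg health
  ceph fs status  # CephFS view: ranks, active/standby MDS daemons, pool usage
  ceph fs dump    # raw FSMap, often the clearest view after a mon store rebuild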

[ceph-users] Re: Ceph Filesystem recovery with intact pools

2020-08-30 Thread cyclic3 . git
My ceph -s output is this:

  cluster:
    id:     bfe08dcf-aabd-4cac-ac4f-9e56af3df11b
    health: HEALTH_ERR
            1/3 mons down, quorum omicron-m1,omicron-m2
            6 scrub errors
            Possible data damage: 1 pg inconsistent
            Degraded data redundancy: 626702/20558920 …

[ceph-users] Re: Ceph Filesystem recovery with intact pools

2020-08-30 Thread Eugen Block
There's no MDS running, can you start it? Quoting cyclic3@gmail.com: My ceph -s output is this: cluster: id: bfe08dcf-aabd-4cac-ac4f-9e56af3df11b health: HEALTH_ERR 1/3 mons down, quorum omicron-m1,omicron-m2 6 scrub errors Possible data damage …
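
How the daemon is started depends on the deployment; assuming a package-based install with systemd, and using the host name from the thread as a hypothetical MDS id, the usual pattern is:

  sudo systemctl start ceph-mds@omicron-m1    # unit name follows the ceph-mds@<id> convention
  sudo systemctl status ceph-mds@omicron-m1   # check that it stays up
  ceph fs status                              # the daemon should now appear, at least as standby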

[ceph-users] Re: Ceph Filesystem recovery with intact pools

2020-08-31 Thread Yan, Zheng
On Sun, Aug 30, 2020 at 8:05 PM wrote: > Hi, I've had a complete monitor failure, which I have recovered from with the steps here: https://docs.ceph.com/docs/mimic/rados/troubleshooting/troubleshooting-mon/#monitor-store-failures The data and metadata pools are there and are completely intact …
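
For context, the linked page rebuilds the monitor store from the OSDs, roughly like this (a condensed sketch; paths are placeholders and the real procedure loops over every OSD on every host):

  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
      --op update-mon-db --mon-store-path /tmp/mon-store      # collect cluster maps from each OSD
  ceph-monstore-tool /tmp/mon-store rebuild -- --keyring /path/to/admin.keyring

The same page warns that the rebuilt store does not contain the MDS maps and loses most auth keys, which is exactly what surfaces later in this thread.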

[ceph-users] Re: Ceph Filesystem recovery with intact pools

2020-08-31 Thread cyclic3 . git
I added an MDS, but there was no change in either output (apart from recognising the existence of an MDS)
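
A newly added MDS will simply sit in standby if there is no filesystem in the FSMap for it to serve, which would explain why nothing changes. Two read-only checks (a sketch):

  ceph mds stat   # shows e.g. ' 1 up:standby' when no filesystem exists
  ceph fs dump    # an FSMap with no filesystems means the fs entry itself is gone, not the MDS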

[ceph-users] Re: Ceph Filesystem recovery with intact pools

2020-08-31 Thread cyclic3 . git
This sounds rather risky; will this definitely not lose any of my data?

[ceph-users] Re: Ceph Filesystem recovery with intact pools

2020-08-31 Thread Eugen Block
I don't understand: what happened to the previous MDS? If there are cephfs pools, there was also an old MDS, right? Can you explain that please? Quoting cyclic3@gmail.com: I added an MDS, but there was no change in either output (apart from recognising the existence of an MDS)

[ceph-users] Re: Ceph Filesystem recovery with intact pools

2020-08-31 Thread Cyclic 3
Both the MDS maps and the keyrings are lost as a side effect of the monitor recovery process I mentioned in my initial email, detailed here: https://docs.ceph.com/docs/mimic/rados/troubleshooting/troubleshooting-mon/#monitor-store-failures . On Mon, 31 Aug 2020 at 21:10, Eugen Block wrote: > I don't understand: what happened to the previous MDS? …
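
If the auth keys were lost along with the mon store, the new MDS also needs its keyring recreated before it can authenticate; a sketch, with the daemon id and path as examples only:

  ceph auth get-or-create mds.omicron-m1 \
      mon 'allow profile mds' mgr 'allow profile mds' mds 'allow *' osd 'allow *' \
      -o /var/lib/ceph/mds/ceph-omicron-m1/keyring   # keyring location for a package-based install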

[ceph-users] Re: Ceph Filesystem recovery with intact pools

2020-09-01 Thread Eugen Block
Alright, I didn't realize that the MDS was affected by this as well. In that case there's probably no other way than running the 'ceph fs new ...' command as Yan, Zheng suggested. Do you have backups of your cephfs contents in case that goes wrong? I'm not sure if a pool copy would help in any way …
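
For reference, the sequence usually meant by 'ceph fs new ...' on already-populated pools looks like this (pool and fs names are taken from the error message later in the thread; treat the flags as something to verify against the disaster-recovery docs for your release, not as a guaranteed-safe recipe):

  ceph fs new cephfs cephfs_metadata cephfs_data --force   # --force allows reusing non-empty pools
  ceph fs reset cephfs --yes-i-really-mean-it              # so the MDS uses the existing metadata instead of initialising a fresh root

This only recreates the filesystem entry in the FSMap; if the metadata objects in the pool are inconsistent, further journal/table recovery may still be needed, which is why backups matter here.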

[ceph-users] Re: Ceph Filesystem recovery with intact pools

2020-09-02 Thread cyclic3 . git
With that first command, I get this error: Error EINVAL: pool 'cephfs_metadata' already contains some objects. Use an empty pool instead. What can I do?
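
That EINVAL is the guard against accidentally pointing a new filesystem at a pool that already holds data; when reusing the old metadata pool on purpose, it can normally be bypassed with --force, as in the sketch above, and the result checked with read-only commands:

  ceph fs new cephfs cephfs_metadata cephfs_data --force
  ceph fs ls      # the filesystem should be listed again
  ceph mds stat   # watch for an MDS picking up rank 0 and going active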

[ceph-users] Re: Ceph Filesystem recovery with intact pools

2020-09-02 Thread cyclic3 . git
Nevermind, it works now. Thanks for the help.