[ceph-users] Re: Ceph Filesystem recovery with intact pools

2020-09-02 Thread cyclic3.git
Never mind, it works now. Thanks for the help.

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Re: Ceph Filesystem recovery with intact pools

2020-09-02 Thread cyclic3.git
With that first command, I get this error:

  Error EINVAL: pool 'cephfs_metadata' already contains some objects. Use an empty pool instead.

What can I do?
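[Editorial note: this EINVAL comes from the emptiness check that `ceph fs new` runs on the proposed metadata pool. The CephFS disaster-recovery documentation describes overriding it with `--force` when the goal is to reuse an existing, intact metadata pool rather than initialize a fresh filesystem. A minimal sketch, assuming the pool names from this thread (`cephfs_metadata`, `cephfs_data`) and a filesystem name of `cephfs`:]

  # Reattach the existing pools to a new filesystem entry in the FSMap.
  # --force bypasses the "already contains some objects" check.
  ceph fs new cephfs cephfs_metadata cephfs_data --force

  # Reset the filesystem state so the MDS replays the existing metadata
  # instead of creating a new root; this is the documented follow-up step.
  ceph fs reset cephfs --yes-i-really-mean-it

[These commands rewrite cluster state; they should only be run against pools that genuinely hold the old filesystem's objects, ideally after a backup of the journals via cephfs-journal-tool.]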

[ceph-users] Re: Ceph Filesystem recovery with intact pools

2020-08-31 Thread cyclic3.git
This sounds rather risky; will this definitely not lose any of my data?

[ceph-users] Re: Ceph Filesystem recovery with intact pools

2020-08-30 Thread cyclic3.git
My ceph -s output is this:

  cluster:
    id:     bfe08dcf-aabd-4cac-ac4f-9e56af3df11b
    health: HEALTH_ERR
            1/3 mons down, quorum omicron-m1,omicron-m2
            6 scrub errors
            Possible data damage: 1 pg inconsistent
            Degraded data redundancy: 626702/20558920

[ceph-users] Ceph Filesystem recovery with intact pools

2020-08-30 Thread cyclic3.git
Hi, I've had a complete monitor failure, which I have recovered from with the steps here:
https://docs.ceph.com/docs/mimic/rados/troubleshooting/troubleshooting-mon/#monitor-store-failures

The data and metadata pools are there and are completely intact, but ceph is reporting that there are no
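[Editorial note: after a monitor store is rebuilt from the OSDs, the FSMap is empty, so the pools survive but no filesystem is listed. A hedged sketch of how one might confirm that state before attempting recovery; the pool names below are the ones used later in this thread and are assumptions:]

  # Likely reports "No filesystems enabled" after a monitor store rebuild,
  # even though the backing pools are intact.
  ceph fs ls

  # Confirm the old data and metadata pools still exist and hold objects.
  ceph osd lspools
  rados df

[The fix discussed further up in the thread is to recreate the filesystem entry over these existing pools with `ceph fs new`, per the CephFS disaster-recovery documentation.]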