Hi,
> My question is: do I understand correctly that I need to either update my
> CRUSH rule to select OSDs (which I know is bad) to place objects into PGs,
> or have more OSD hosts available, so that when one of them goes down I
> would still have 3 active hosts and CEPH can redistribute the data?
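For reference, a rule with a host failure domain can be created straight from the CLI rather than by selecting OSDs; a minimal sketch, where the rule and pool names are placeholders:

    # create a replicated rule that separates replicas across hosts, not OSDs
    ceph osd crush rule create-replicated rep-by-host default host
    # point an existing pool at the new rule (pool name is a placeholder)
    ceph osd pool set mypool crush_rule rep-by-host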
Hi
I got hit by the ill-formatted OMAP keys bug discovered recently by Igor
(https://tracker.ceph.com/issues/53062).
I was fortunate and my data pools seem to have recovered.
The MDS metadata pool seems to have some broken objects that crash the MDS
daemon, even after a metadata journal + tables reset.
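(For readers unfamiliar with that step: a journal + tables reset is roughly the sequence below from the CephFS disaster-recovery docs; the filesystem name and rank are placeholders.)

    # salvage what the journal still holds, then reset it
    cephfs-journal-tool --rank=cephfs:0 event recover_dentries summary
    cephfs-journal-tool --rank=cephfs:0 journal reset
    # reset the session table as well
    cephfs-table-tool all reset session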
I'm
Hi everyone,
I have a CEPH cluster with 3 MON/MGR/MDS nodes and 3 OSD nodes, each hosting
two OSDs (2 HDDs, 1 OSD per HDD). My pools are configured with replica x3,
and my osd_pool_default_size is set to 2. So I have 6 OSDs in total and 3
OSD hosts.
My CRUSH map is plain simple - root, then 3
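To see how the per-pool replica count relates to the default mentioned above, something like the following can be used (the pool name is a placeholder):

    # per-pool replica count vs. the cluster-wide default
    ceph osd pool get cephfs_data size
    ceph config get osd osd_pool_default_size
    # raise the replica count of an existing pool if needed
    ceph osd pool set cephfs_data size 3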
I have the exact same problem: I upgraded to 16.2.6 and set
bluestore_fsck_quick_fix_on_mount to true. After a rolling restart of my OSDs,
only 2 of 5 came back (one of them was only recently added and holds very
little data, so in essence there is only 1 OSD really running).
All other OSDs
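For anyone hit by the same conversion bug, a hedged sketch of the usual mitigation, assuming a fixed release is available (the OSD data path is a placeholder): stop further conversions, then attempt an offline repair of the down OSDs.

    # stop further omap conversions at mount time
    ceph config set osd bluestore_fsck_quick_fix_on_mount false
    # offline repair of a down OSD (run on a fixed release; path is a placeholder)
    ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-3 --command repair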