Hi Kevin,
Are your OSDs bluestore or filestore?
-- dan
On Thu, Jul 12, 2018 at 11:30 PM Kevin wrote:
>
> Sorry for the long posting but trying to cover everything
>
> I woke up to find my cephfs filesystem down. This was in the logs
>
> 2018-07-11 05:54:10.398171 osd.1 [ERR] 2.4 full-object read crc 0x6fc2f65a
> != expected 0x1c08241c on 2:292cf221:::200.:head
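For reference, Dan's bluestore-vs-filestore question can be answered from the cluster itself. A minimal sketch, assuming a working `ceph` admin CLI and using osd.1 (the OSD from the log line above) as the example id:

```shell
# Show the objectstore backend ("bluestore" or "filestore") for osd.1
ceph osd metadata 1 | grep osd_objectstore

# Or dump metadata for all OSDs and count how many report bluestore
ceph osd metadata | grep -c '"osd_objectstore": "bluestore"'
```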
Hi Kevin,
On 13.07.2018 at 04:21, Kevin wrote:
> That thread looks exactly like what I'm experiencing. Not sure why my
> repeated googles didn't find it!
maybe the thread was still too "fresh" for Google's indexing.
>
> I'm running 12.2.6 and CentOS 7
>
> And yes, I recently upgraded from j
Hi,
all this sounds an awful lot like:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-July/027992.html
In that case, things started with an update to 12.2.6. Which version are you
running?
Cheers,
Oliver
On 12.07.2018 at 23:30, Kevin wrote:
> Sorry for the long posting but trying to cover everything
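Oliver's question about the running version can be checked directly on the cluster. A minimal sketch, assuming a Luminous-era `ceph` CLI on an admin node:

```shell
# Version of the locally installed ceph binaries
ceph --version

# Versions actually running across all daemons (mon/mgr/osd/mds);
# useful after an upgrade to spot daemons still on an older release
ceph versions
```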
On Thu, Jul 12, 2018 at 3:55 PM, Patrick Donnelly wrote:
>> Recommends fixing error by hand. Tried running deep scrub on pg 2.4, it
>> completes but still have the same issue above
>>
>> The final option is to attempt removing mds.ds27. If mds.ds29 was a standby and
>> has data it should become live.
On Thu, Jul 12, 2018 at 2:30 PM, Kevin wrote:
> Sorry for the long posting but trying to cover everything
>
> I woke up to find my cephfs filesystem down. This was in the logs
>
> 2018-07-11 05:54:10.398171 osd.1 [ERR] 2.4 full-object read crc 0x6fc2f65a
> != expected 0x1c08241c on 2:292cf221:::200.:head
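The deep scrub mentioned above can be triggered and its findings inspected from the command line. A minimal sketch, using pg 2.4 from the log and assuming a working admin keyring:

```shell
# Ask the primary OSD to deep-scrub pg 2.4 (reads and checksums every object)
ceph pg deep-scrub 2.4

# After the scrub completes, list the objects flagged inconsistent
rados list-inconsistent-obj 2.4 --format=json-pretty

# Cluster-wide summary of inconsistent PGs and scrub errors
ceph health detail
```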
Sorry for the long posting but trying to cover everything
I woke up to find my cephfs filesystem down. This was in the logs
2018-07-11 05:54:10.398171 osd.1 [ERR] 2.4 full-object read crc
0x6fc2f65a != expected 0x1c08241c on 2:292cf221:::200.:head
I had one standby MDS, but as far as