I'd suggest you open a tracker issue under the BlueStore component so
someone can take a look. I'd also suggest you include a log generated with
'debug_bluestore=20' added to the ceph-objectstore-tool (COT) command line.
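For example, a minimal sketch of your failing 'info' command re-run with
BlueStore debug logging captured to a file (this assumes ceph-objectstore-tool
accepts Ceph config overrides such as --debug_bluestore and --log-file on its
command line, as the other Ceph tools do; paths and pgid taken from your mail):

    # Re-run the failing command with verbose BlueStore logging written to a file.
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-22 \
        --op info --pgid 2.9f \
        --debug_bluestore=20 --log-file /tmp/cot-bluestore.log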

On Thu, Nov 7, 2019 at 6:56 PM Eugene de Beste <eug...@sanbi.ac.za> wrote:
>
> Hi, does anyone have any feedback for me regarding this?
>
> Here's the log I get when trying to restart the OSD via systemctl: 
> https://pastebin.com/tshuqsLP
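>
> For context, a sketch of how that log was produced (assuming the standard
> systemd unit name for OSD 22, which matches the data path below):
>
>     systemctl restart ceph-osd@22    # restart the crashing OSD
>     journalctl -u ceph-osd@22 -e     # view the resulting log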
> On Mon, 4 Nov 2019 at 12:42, Eugene de Beste <eug...@sanbi.ac.za> wrote:
>
> Hi everyone
>
> I have a cluster that was initially set up with bad defaults in Luminous. 
> After upgrading to Nautilus I've had a few OSDs crash on me, due to errors 
> seemingly related to https://tracker.ceph.com/issues/42223 and 
> https://tracker.ceph.com/issues/22678.
>
> One of my pools has been running with min_size 1 (yes, I know) and I am now
> stuck with incomplete PGs due to the aforementioned OSD crashes.
>
> When trying to use ceph-objectstore-tool to get the PGs out of the OSD, I run
> into the same crashes as when starting the OSD: ceph-objectstore-tool core
> dumps and I can't retrieve the PGs. (A sketch of the export command I'm
> attempting follows below.)
>
> Does anyone have any input on this? I would like to be able to retrieve that 
> data if possible.
>
> Here's the log for the following ceph-objectstore-tool command:
> https://pastebin.com/9aGtAfSv
>
>     ceph-objectstore-tool --debug --data-path /var/lib/ceph/osd/ceph-22 \
>         --skip-journal-replay --skip-mount-omap --op info --pgid 2.9f
>
> Regards and thanks,
> Eugene
>



-- 
Cheers,
Brad
