Re: [ceph-users] BlueStore.cc: 11208: ceph_abort_msg("unexpected error")

2019-08-25 Thread Brad Hubbard
https://tracker.ceph.com/issues/38724

On Fri, Aug 23, 2019 at 10:18 PM Paul Emmerich  wrote:
>
> I've seen that before (but never on Nautilus), there's already an
> issue at tracker.ceph.com but I don't recall the id or title.
>
>
> Paul
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
> On Fri, Aug 23, 2019 at 1:47 PM Lars Täuber  wrote:
> >
> > Hi Paul,
> >
> > a result of fgrep is attached.
> > Can you do something with it?
> >
> > I can't read it. Maybe this is the relevant part:
> > " bluestore(/var/lib/ceph/osd/first-16) _txc_add_transaction error (39) 
> > Directory not empty not handled on operation 21 (op 1, counting from 0)"
> >
> > Later I tried it again and the OSD is working again.
> >
> > It feels like I hit a bug!?
> >
> > Huge thanks for your help.
> >
> > Cheers,
> > Lars
> >
> > Fri, 23 Aug 2019 13:36:00 +0200
> > Paul Emmerich  ==> Lars Täuber  :
> > > Filter the log for "7f266bdc9700" which is the id of the crashed
> > > thread, it should contain more information on the transaction that
> > > caused the crash.
> > >
> > >
> > > Paul
> > >



-- 
Cheers,
Brad
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] How to organize devices

2019-08-25 Thread Martinx - ジェームズ
Hello Ceph Users!

 This is my first message to this list; thanks in advance for any help!

 I've been playing with Ceph Ansible for a couple of months, including
deploying it from within OpenStack Ansible (still using Ceph Ansible,
though), and it works great! But I have a feeling that I don't know exactly
what I'm doing...

 To try to better understand Ceph, I got a new PC with the following
storage devices:

 2 x 1 TB NVMe - /dev/md0 for the root file system (XFS)
 2 x 256 GB SATA SSD
 2 x 1 TB SATA SSHD
 2 x 3 TB SATA HDD

 The two NVMe drives are where I'm running Ubuntu 18.04 on top of an
old-school Linux RAID1 formatted with XFS; it's also where I run a lot of
QEMU virtual machines (including the OpenStack and Ceph controllers) and
LXD containers.

 My idea is to run the Ceph infrastructure pieces (mgrs, metadata servers,
radosgw, etc.) within the VMs on top of the NVMe devices, and the Ceph OSDs
on bare-metal Ubuntu so they control all the other storage devices directly.

 The thing is, I only have a vague idea of what to do with those devices in Ceph!

 I have already decided to use the new Ceph BlueStore backend, which also
involves LVM: for each block device there is one LVM physical volume and
just one LVM logical volume using the whole device.
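
 For comparison, if I understand the ceph-ansible docs correctly, the
simpler alternative is to just list whole devices and let ceph-volume
create that one-PV/one-VG/one-LV layout on each of them automatically. A
rough sketch of what I mean (the /dev/sd* paths are placeholders, not my
real disks):

devices:
  - /dev/sdb   # ceph-volume creates one PV, one VG and one LV spanning the device
  - /dev/sdc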

 What I have in mind is something like this:

 - Use the SATA SSDs to speed up writes.
 - Use the *HDDs for long-term storage.
 - Be able to lose half of the devices and still keep my data safe.
 - Be able to lose all SATA SSDs while keeping my data safe (so I can't
use them as an OSD's DB device, only as a "WAL", is that right?).

 So far, my osds.yml looks like:

---
lvm_volumes:
  - data: ata-Samsung_SSD_860_EVO_250GB_S3YHNX0KB77803H
    data_vg: ata-Samsung_SSD_860_EVO_250GB_S3YHNX0KB77803H

  - data: ata-Samsung_SSD_860_EVO_250GB_S3YHNX0M117675W
    data_vg: ata-Samsung_SSD_860_EVO_250GB_S3YHNX0M117675W

  - data: ata-ST1000DX001-1NS162_Z4YCWM2F
    data_vg: ata-ST1000DX001-1NS162_Z4YCWM2F

  - data: ata-ST1000DX001-1NS162_Z4YE0APL
    data_vg: ata-ST1000DX001-1NS162_Z4YE0APL

  - data: ata-WDC_WD40EZRZ-00GXCB0_WD-WCC7K0DFV6HE
    data_vg: ata-WDC_WD40EZRZ-00GXCB0_WD-WCC7K0DFV6HE

  - data: ata-WDC_WD40EZRZ-00GXCB0_WD-WCC7K7FUATRK
    data_vg: ata-WDC_WD40EZRZ-00GXCB0_WD-WCC7K7FUATRK
---

 As you guys can see, I'm not using "wal" or "db", just "data", but this
isn't going to do what I have in mind... right?!
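
 From what I can tell, getting the WAL onto the SSDs with ceph-ansible
would mean pre-creating a VG and small LVs on each SSD and pointing
"wal"/"wal_vg" at them per OSD. A rough, untested sketch of what I imagine
for one HDD-backed OSD (the ssd0-vg and wal-hdd0 names are made up):

lvm_volumes:
  - data: ata-WDC_WD40EZRZ-00GXCB0_WD-WCC7K0DFV6HE
    data_vg: ata-WDC_WD40EZRZ-00GXCB0_WD-WCC7K0DFV6HE
    wal: wal-hdd0      # hypothetical LV carved out of the first SATA SSD
    wal_vg: ssd0-vg    # hypothetical VG created on that SSD

 Although, from what I've read, I'm not sure this really satisfies my
"lose all SSDs" goal: it sounds like losing an OSD's WAL (or DB) device
makes that OSD unusable anyway, so maybe I have this backwards?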

 NOTE: I'm okay with moving the root file system to the 2 SATA SSDs (off
the NVMe) if you guys think it's better to use the NVMe for the Ceph
OSDs... But I really prefer NVMe for my daily Ubuntu usage and all of its
VMs.

 What about Linux BCache? I mean, I think it would be awesome to use
BCache to create 2 logical devices (2 x (SSD + SSHD + HDD)), where only
the SSD is used by BCache as the cache (I'll ignore the SSHD), and then
present the BCache block devices to Ceph instead of the devices directly.
I'm asking this because, from the little I know about Ceph, with BlueStore
there is no way to use the SSDs as a write-cache for the rest of the
slower disks, is that right? How would Ceph detect bad blocks and failures
if it sits on top of BCache? I'm also thinking about using VDO in between
BCache and Ceph... Lots of ideas!  lol
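
 To make the BCache idea concrete, what I picture is handing the bcache
block devices to ceph-ansible instead of the raw disks, something like the
sketch below (assuming ceph-ansible/ceph-volume accept a bcache device
like any other block device, which I haven't verified; /dev/bcache0 and
/dev/bcache1 would be the two SSD-cached backing devices I'd create):

devices:
  - /dev/bcache0   # HDD backing device, cached by the first SATA SSD
  - /dev/bcache1   # HDD backing device, cached by the second SATA SSD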

Cheers!
Thiago
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com