This is helpful, thanks.  Since the example is only for block.db, does
that imply that the wal should (or can efficiently) live on the same
disk as the data?

 R

On Fri, Aug 17, 2018 at 10:50 AM Alfredo Deza <ad...@redhat.com> wrote:

> On Fri, Aug 17, 2018 at 11:47 AM, Robert Stanford
> <rstanford8...@gmail.com> wrote:
> >
> >  What's more, I was planning on using this single journal device
> > (SSD) for 4 OSDs.  With filestore I simply told each OSD to use this
> > drive, sdb, on the command line, and it would create a new partition
> > on that drive every time I created an OSD.  I thought it would be the
> > same for BlueStore.  So that begs the question, how does one set up an
> > SSD to hold journals for multiple OSDs, both db and wal?  Searching
> > has yielded nothing.
>
> We are working on expanding the tooling to do this for you, but until
> then, it is up to the user to create the LVs manually.
>
> This section might help out a bit on what you would need (for block.db):
>
>
> http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/#block-and-block-db
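>
> As a rough, untested sketch (the VG/LV names, sizes, and device paths
> below are only placeholders), carving one SSD into a block.db LV per
> OSD could look something like this:
>
>     # one VG on the SSD, then one LV per OSD's block.db
>     vgcreate ceph-block-dbs /dev/sdb
>     lvcreate -L 30G -n db-osd0 ceph-block-dbs
>     lvcreate -L 30G -n db-osd1 ceph-block-dbs
>
>     # point each OSD at its own LV using vg/lv notation
>     ceph-volume lvm create --bluestore --data /dev/sdc \
>         --block.db ceph-block-dbs/db-osd0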
> >
> >  R
> >
> >
> > On Fri, Aug 17, 2018 at 9:48 AM David Turner <drakonst...@gmail.com> wrote:
> >>
> >> > ceph-volume lvm create --osd-id 0 --bluestore --data /dev/sdc --block.db /dev/sdb --block.wal /dev/sdb
> >>
> >> That command can't work... You're telling it to use the entire
> >> /dev/sdb device for the db and then again for the wal, but you can
> >> only use the entire device once.  There are two things wrong with
> >> that.  First, if you're putting the db and wal on the same device you
> >> do not need to specify the wal.  Second, if you actually intend to
> >> use a partition on /dev/sdb instead of the entire block device for
> >> this single OSD, then you need to manually create that partition and
> >> supply it to the --block.db option.
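> >>
> >> For what it's worth, creating that partition could look roughly like
> >> this (untested sketch; the 60G size is only an example, adjust for
> >> your SSD and OSD count):
> >>
> >>     # new GPT partition on the SSD; GPT gives it the PARTUUID that
> >>     # ceph-volume expects
> >>     sgdisk -n 1:0:+60G /dev/sdb
> >>     # sanity check that the partition and its PARTUUID show up
> >>     blkid /dev/sdb1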
> >>
> >> Likely the command you want will end up being this after you create
> >> a partition on the SSD for the db/wal:
> >> `ceph-volume lvm create --osd-id 0 --bluestore --data /dev/sdc --block.db /dev/sdb1`
> >>
> >> On Fri, Aug 17, 2018 at 10:24 AM Robert Stanford <rstanford8...@gmail.com> wrote:
> >>>
> >>>
> >>>  I was using the ceph-volume create command, which I understand
> >>> combines the prepare and activate functions.
> >>>
> >>> ceph-volume lvm create --osd-id 0 --bluestore --data /dev/sdc --block.db /dev/sdb --block.wal /dev/sdb
> >>>
> >>>  That is the command context I've found on the web.  Is it wrong?
> >>>
> >>>  Thanks
> >>> R
> >>>
> >>> On Fri, Aug 17, 2018 at 5:55 AM Alfredo Deza <ad...@redhat.com> wrote:
> >>>>
> >>>> On Thu, Aug 16, 2018 at 9:00 PM, Robert Stanford
> >>>> <rstanford8...@gmail.com> wrote:
> >>>> >
> >>>> >  I am following the steps to replace my filestore journal with a
> >>>> > bluestore journal
> >>>> > (http://docs.ceph.com/docs/mimic/rados/operations/bluestore-migration/).
> >>>> > It is broken at ceph-volume lvm create.  Here is my error:
> >>>> >
> >>>> > --> Zapping successful for: /dev/sdc
> >>>> > Preparing sdc
> >>>> > Running command: /bin/ceph-authtool --gen-print-key
> >>>> > Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd
> >>>> > --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd tree -f json
> >>>> > Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd
> >>>> > --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new
> >>>> > ff523216-350d-4ca0-9022-0c17662c2c3b 10
> >>>> > Running command: vgcreate --force --yes
> >>>> > ceph-459b4fbe-e3c4-4f28-b58e-3496bf3ea95a /dev/sdc
> >>>> >  stdout: Physical volume "/dev/sdc" successfully created.
> >>>> >  stdout: Volume group "ceph-459b4fbe-e3c4-4f28-b58e-3496bf3ea95a"
> >>>> > successfully created
> >>>> > Running command: lvcreate --yes -l 100%FREE -n
> >>>> > osd-block-ff523216-350d-4ca0-9022-0c17662c2c3b
> >>>> > ceph-459b4fbe-e3c4-4f28-b58e-3496bf3ea95a
> >>>> >  stdout: Logical volume
> >>>> > "osd-block-ff523216-350d-4ca0-9022-0c17662c2c3b"
> >>>> > created.
> >>>> > --> blkid could not detect a PARTUUID for device: sdb
> >>>> > --> Was unable to complete a new OSD, will rollback changes
> >>>> > --> OSD will be destroyed, keeping the ID because it was provided
> >>>> > with --osd-id
> >>>> > Running command: ceph osd destroy osd.10 --yes-i-really-mean-it
> >>>> >  stderr: destroyed osd.10
> >>>> > -->  RuntimeError: unable to use device
> >>>> >
> >>>> >  Note that sdb is the SSD journal.  It was zapped beforehand.
> >>>>
> >>>> I can't see what the actual command you used is, but I am guessing you
> >>>> did something like:
> >>>>
> >>>> ceph-volume lvm prepare --filestore --data /dev/sdb --journal /dev/sdb
> >>>>
> >>>> Which is not possible. There are a few ways you can do this (see:
> >>>> http://docs.ceph.com/docs/master/ceph-volume/lvm/prepare/#filestore )
> >>>>
> >>>> With a raw device and a pre-created partition (must have a PARTUUID):
> >>>>
> >>>>     ceph-volume lvm prepare --data /dev/sdb --journal /dev/sdc1
> >>>>
> >>>> With LVs:
> >>>>
> >>>>     ceph-volume lvm prepare --data vg/my-data --journal vg/my-journal
> >>>>
> >>>> With an LV for data and a partition:
> >>>>
> >>>>     ceph-volume lvm prepare --data vg/my-data --journal /dev/sdc1
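> >>>>
> >>>> The LVs themselves would have to be created first; a rough sketch
> >>>> (the names match the example above, but the sizes and device paths
> >>>> are just placeholders):
> >>>>
> >>>>     vgcreate vg /dev/sdc /dev/sdb                # data disk + SSD
> >>>>     lvcreate -L 5G -n my-journal vg /dev/sdb     # journal on the SSD
> >>>>     lvcreate -l 100%PVS -n my-data vg /dev/sdc   # data on the HDD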
> >>>>
> >>>> >
> >>>> >  What is going wrong, and how can I fix it?
> >>>> >
> >>>> >  Thank you
> >>>> >  R
> >>>> >
> >>>> >
> >>>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
