Hi Peter

Please remember to include the list address in your reply.
I will not trim, so people on the list can read your answer.


On 29.07.2021 12:43, Peter Childs wrote:
On Thu, 29 Jul 2021 at 10:37, Kai Stian Olstad <ceph+l...@olstad.com> wrote:

A little disclaimer, I have never used multipath with Ceph.

On 28.07.2021 20:19, Peter Childs wrote:
> I have a number of disk trays with 25 SSDs in them. These are attached
> to my servers via a pair of SAS cables, so multipath is used to join
> them together again and maximize speed etc.
>
> Using cephadm, how can I create the OSDs?

You can use the command in the documentation [1]: "ceph orch daemon add
osd <host>:<path to multipath device>"
But you need to configure LVM correctly to make this work.


That was my thought, but it was not working; now it is....

vgcreate test /dev/mapper/mpatha
lvcreate -l 190776 -n testlv test
ceph orch daemon add osd dampwood18:test/testlv
  Created osd(s) 1361 on host 'dampwood18'

I think I can live with that. There is room for improvement here,
but I'm happy with creating the VGs and LVs before I use the disks.....

If you could not run
  cephadm shell ceph orch daemon add osd dampwood18:/dev/mapper/mpatha
I would consider that a bug.


> It looks like it should be possible to use ceph-volume, but I've not
> really worked out yet how to access ceph-volume within cephadm, even
> if I've got to format them with LVM first. (The docs are slightly
> confusing here.)
>
> It looks like the Ceph disk inventory system can't cope with multipath?

If by "ceph disk inventory system" you mean OSD service specification[2]
then yes, I don't think it's possible to use it with multipath.


When you add a disk to Ceph with cephadm, it will use LVM to create a
Physical Volume (PV) on that device, create a Volume Group (VG) on the
disk, and then create a Logical Volume (LV) that uses the whole VG.
The configuration in Ceph references the VG/LV, so Ceph should not
have a problem with multipath.
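
If you want to check what was created on a host, something like the
following shows the mapping; this assumes the LVM tools are available
there:

  lvs -o lv_name,vg_name,devices   # shows which PV each OSD LV sits on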

But since you have multipath, LVM might have a problem with that if it
is not configured correctly.
LVM will scan the disks for LVM signatures and try to create devices
for the LVs it finds.

So you need to make sure that LVM only scans the multipath device
paths and not the individual disks the OS sees.
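
For example, something along these lines in /etc/lvm/lvm.conf; treat it
as a sketch, since the exact filter regex depends on your device naming
and on where your OS disks live:

  devices {
      # Detect and skip the individual paths that belong to a multipath map
      multipath_component_detection = 1
      # Accept the multipath devices, reject the raw /dev/sd* paths
      # (adjust this if your OS disks are also plain /dev/sd* devices)
      filter = [ "a|^/dev/mapper/mpath.*|", "r|^/dev/sd.*|" ]
  }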



Hmm, I think we might have "room for improvement" in this area:

Either the OSD spec needs to include all the options for weird disks
that people might come up with, and allow allocating them to classes as
well,

There are a lot of limitations in the OSD service spec and in how
cephadm handles drives. Just try to replace an HDD that has its DB on
an SSD; that is a pain at the moment.
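
Just as an illustration of what the spec looks like, an OSD service
specification [2] that puts data on HDDs and their DBs on SSDs is
roughly this (service_id and host_pattern are placeholders):

  service_type: osd
  service_id: hdd_with_ssd_db
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      rotational: 1    # the HDDs
    db_devices:
      rotational: 0    # block.db goes on the SSDs

Replacing one of those HDDs afterwards is where it gets painful.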


or all the options available to ceph-volume need to be exposed to
orchestration, which would also work. Currently it feels like some of
the complex options in Ceph are not available to cephadm yet and you
need to work out how to do it yourself.

You have "cephadm ceph-volume" or you could run "cephadm shell" and then run all the ceph commands.


I'm new to Ceph and I like the theory, having come from a Spectrum
Scale background, and I'm still trying to get to grips with how things
work.

My Ceph cluster has got three types of drive: these multipathed 800G
SSDs, disks on nodes with lots of memory (256G between 30 disks), and
disks on nodes with very little memory (48G between 60 disks), hence
why I was trying to get disk specs to work..... I've actually got it
working with a little kernel tuning and must get around to writing it
up so I can share where I've got to......

As mentioned, the OSD service spec has a lot of limitations.

The default memory target for an OSD is 4 GB, so your 48 GB for 60
disks would need some configuration, and I'm not sure it's feasible to
run them with so little memory.
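
If you do try it, the knob is osd_memory_target. Just as an
illustration (the value here is only an example and below what is
normally recommended):

  # ~48 GB across 60 OSDs is roughly 800 MB each; capping the target at
  # about 1 GiB would look like this:
  ceph config set osd osd_memory_target 1073741824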


Thanks

Peter


--
Kai Stian Olstad
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
