Hi Etienne,

Maybe I didn't make myself clear...

When I map an rbd image from my cluster to a /dev/rbd device, ceph wants to 
automatically add that /dev/rbd as an OSD. This is undesirable behavior. Trying 
to add a /dev/rbd that is mapped to an image in the same cluster??? Scary...
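
(To be concrete: all I do is a plain map, something like the lines below; the 
pool and image names are just placeholders.)

  rbd map libvirt-pool/vmdisk01    # shows up as /dev/rbd0
  rbd unmap /dev/rbd0              # when I'm done with it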

Luckily the automatic creation of the OSD fails.

Nevertheless, I would feel better if ceph simply didn't try to add the /dev/rbd 
to the cluster at all.
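
For reference, the spec that keeps grabbing the device is the 
all-available-devices one from the health warning in my first mail (quoted 
below). Its export looks roughly like this (a sketch from memory, the exact 
output may differ per version):

  $ ceph orch ls osd --export
  service_type: osd
  service_id: all-available-devices
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      all: true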

Do I risk a conflict between my operations on a mapped rbd image/device?

Will ceph at some point alter my image unintentionally?

Do I risk ceph actually adding such an image as an OSD?

I can disable the managed feature of the OSD management, but then I lose the 
automatic functions of ceph. Is there a way to tell ceph to exclude /dev/rbd* 
devices from the autodetect/automanage?
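
Right now the only workaround I see is to flip the spec to unmanaged, roughly 
like this (a sketch, the exact flags are in the cephadm docs):

  # stop cephadm from creating OSDs on newly visible devices
  ceph orch apply osd --all-available-devices --unmanaged=true

but then I'm back to adding every new disk by hand, which is exactly what I'd 
like to avoid.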

Greetings,

Dominique.

> -----Original Message-----
> From: Etienne Menguy <etienne.men...@ubisoft.com>
> Sent: Monday, 29 August 2022 13:44
> To: Dominique Ramaekers <dominique.ramaek...@cometal.be>
> CC: ceph-users@ceph.io
> Subject: RE: Automanage block devices
> 
> Hey,
> 
> /usr/sbin/ceph-volume ... lvm batch --no-auto /dev/rbd0
> You want to add an OSD using rbd0?
> 
> To map a block device, just use rbd map (
> https://docs.ceph.com/en/quincy/man/8/rbdmap/ )
> 
> Étienne
> 
> > -----Original Message-----
> > From: Dominique Ramaekers <dominique.ramaek...@cometal.be>
> > Sent: Monday, 29 August 2022 12:32
> > To: ceph-users@ceph.io
> > Subject: [ceph-users] Automanage block devices
> >
> > Hi,
> >
> > I really like the behavior of ceph to auto-manage block devices. But I
> > get ceph status warnings if I map an image to a /dev/rbd
> >
> > Some log output:
> > Aug 29 11:57:34 hvs002 bash[465970]: Non-zero exit code 2 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:43f6e905f3e34abe4adbc9042b9d6f6b625dee8fa8d93c2bae53fa9b61c3df1a -e NODE_NAME=hvs002 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=all-available-devices -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/dd4b0610-b4d2-11ec-bb58-d1b32ae31585:/var/run/ceph:z -v /var/log/ceph/dd4b0610-b4d2-11ec-bb58-d1b32ae31585:/var/log/ceph:z -v /var/lib/ceph/dd4b0610-b4d2-11ec-bb58-d1b32ae31585/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmpke1ihnc_:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpaqbxw8ga:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:43f6e905f3e34abe4adbc9042b9d6f6b625dee8fa8d93c2bae53fa9b61c3df1a lvm batch --no-auto /dev/rbd0 --yes --no-systemd
> >
> > Aug 29 11:57:34 hvs002 bash[465970]: /usr/bin/docker: stderr  stderr: lsblk: /dev/rbd0: not a block device
> >
> > Aug 29 11:57:34 hvs002 bash[465970]: cluster 2022-08-29T09:57:33.973654+0000 mon.hvs001 (mon.0) 34133 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)
> >
> > If I map an image to an rbd device, the automanage feature wants to add it
> > as an OSD. It fails (as it apparently isn't detected as a block device), so
> > I guess my images are untouched, but still I worry because I can't
> > find a lot of information about these warnings.
> >
> > Do I risk a conflict between my operations on a mapped rbd image/device?
> > Will ceph at some point alter my image unintentionally?
> >
> > Do I risk ceph adding such an image as an OSD?
> >
> > I can disable the managed feature of the osd-management, but then I
> > lose automatic functions of ceph. Is there a way to tell ceph to
> > exclude /dev/rbd* devices from the autodetect/automanage?
> >
> > Greetings,
> >
> > Dominique.
> >
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
