Dear Cephers,

I'm trying to migrate our OSDs to Bluestore using this little script:

#!/bin/bash
HOSTNAME=$(hostname -s)

# Collect the IDs of all Filestore OSDs that live on this host
OSDS=$(ceph osd metadata | jq -r '.[] | select((.osd_objectstore | contains("filestore")) and (.hostname | contains("'"${HOSTNAME}"'"))) | .id')
# jq prints one ID per line, so read them into an array line by line
readarray -t OSDARRAY <<< "${OSDS}"

for OSD in "${OSDARRAY[@]}"; do
  # Look up the data device backing this OSD
  DEV=/dev/$(ceph osd metadata | jq -r '.[] | select(.id=='"${OSD}"') | .backend_filestore_dev_node')
  echo "=== Migrating OSD nr ${OSD} on device ${DEV} ==="

  # Drain the OSD and wait until it can be removed without reducing redundancy
  ceph osd out "${OSD}"
  while ! ceph osd safe-to-destroy "${OSD}"; do echo "waiting for full evacuation"; sleep 60; done

  # Tear down the Filestore OSD and recreate it as Bluestore, keeping the same ID
  systemctl stop ceph-osd@"${OSD}"
  umount /var/lib/ceph/osd/ceph-"${OSD}"
  /usr/sbin/ceph-volume lvm zap "${DEV}"
  ceph osd destroy "${OSD}" --yes-i-really-mean-it
  /usr/sbin/ceph-volume lvm create --bluestore --data "${DEV}" --osd-id "${OSD}"
done

Under normal circumstances this works flawlessly. Unfortunately, in our case we have expansion shelves connected to our nodes as multipath devices.

Here, /usr/sbin/ceph-volume lvm zap ${DEV} fails with the following error:

OSD(s) 1 are safe to destroy without reducing data durability.
--> Zapping: /dev/dm-0
Running command: /sbin/cryptsetup status /dev/mapper/
 stdout: /dev/mapper/ is inactive.
Running command: wipefs --all /dev/dm-0
 stderr: wipefs: error: /dev/dm-0: probing initialization failed: Device or resource busy
-->  RuntimeError: command returned non-zero exit status: 1
destroyed osd.1
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd tree -f json
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 74f6ff02-d027-4fc6-9b93-3a96d7535c8f 1
--> Was unable to complete a new OSD, will rollback changes
--> OSD will be destroyed, keeping the ID because it was provided with --osd-id
Running command: ceph osd destroy osd.1 --yes-i-really-mean-it
 stderr: destroyed osd.1

-->  RuntimeError: Cannot use device (/dev/dm-0). A vg/lv path or an existing device is needed
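My guess is that wipefs fails because /dev/dm-0 is the multipath map (or a path underneath it) and device-mapper still has something stacked on top of it. As a rough idea of what I mean (only a sketch, not verified on our shelves, and the names are just examples), I was thinking of resolving the dm-N node to its multipath name and checking its holders before the zap:

# resolve dm-N to its device-mapper / multipath map name (e.g. "mpatha")
MP_NAME=$(dmsetup info -C --noheadings -o name "${DEV}")
# anything still stacked on top (partitions, LVs) keeps the device busy
ls /sys/block/$(basename "${DEV}")/holders/
# show the multipath topology of that map
multipath -ll "${MP_NAME}"
# then hand the stable /dev/mapper name to ceph-volume instead of /dev/dm-0
# (not sure whether ceph-volume lvm zap/create accept a multipath mapper device at all)
/usr/sbin/ceph-volume lvm zap "/dev/mapper/${MP_NAME}"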


Does anybody know how to solve this problem?

Cheers,

Vadim

--
Vadim Bulst

Universität Leipzig / URZ
04109  Leipzig, Augustusplatz 10

phone: +49-341-97-33380
mail:    vadim.bu...@uni-leipzig.de
