Hi,
It seems I found the cause: the disk array had been used for ZFS before and
was never wiped.
I had zapped the disks with sgdisk and via Ceph, but a "zfs_member" signature
was still left somewhere on the disks.
Wiping the disks (wipefs -a -f /dev/mapper/mpatha) and running "ceph osd create
--zap-disk" twice until an entry appeared in "df" and
Hi!
I am currently deploying a small cluster with two nodes. I installed Ceph
Jewel on all nodes and did a basic deployment.
After "ceph osd create..." I am now getting "Failed to start Ceph disk
activation: /dev/dm-18" on boot. All 28 OSDs were never active.
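In case it helps, this is roughly how I have been inspecting the failure so
far (assuming the systemd-based ceph-disk activation that Jewel uses; the
exact units may differ on other setups):

    # list units that failed during boot; the ceph-disk activation units show up here
    systemctl --failed
    # boot log lines from the disk activation attempts
    journalctl -b | grep -i ceph-disk
    # how ceph-disk itself sees the devices and partitions
    ceph-disk list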
This server has a 14-disk JBOD with