Thank you so much, Peter. Running 'udevadm trigger' after 'partprobe' fired
the udev rules, and I've found that the owner is already ceph:ceph even
before the udev ruleset fires.
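For the record, the check looked roughly like this (the device path is just
an example from my setup):

    partprobe                                     # re-read the partition tables
    ls -l /dev/mapper/vg0-journal1                # already ceph:ceph at this point
    udevadm trigger --subsystem-match=block --action=add
    ls -l /dev/mapper/vg0-journal1                # unchanged after the rules run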
I've dug into ceph-disk a little more and found out that there is a
symbolic link of
On 03/15/17 08:43, Gunwoo Gim wrote:
After a reboot, none of the LVM partitions show up in /dev/mapper, nor in
/dev/dm-* or /proc/partitions, though the whole disks do show up; I have to
make the hosts run one 'partprobe' every time they boot so that all the
partitions appear.
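As a stopgap I just run it once at boot; on these hosts that amounts to
something like this (putting it in rc.local is my choice, a systemd unit
would do the same):

    #!/bin/sh
    # /etc/rc.local -- stopgap: re-read the partition tables so the LVM
    # partitions appear, until the underlying udev/LVM issue is fixed
    /sbin/partprobe
    exit 0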
I've found out that the udev rules have

Thank you very much, Peter. I'm sorry for not clarifying the version number;
it's kraken, 11.2.0-1xenial.
I guess the udev rules in this file are supposed to change the owner:
/lib/udev/rules.d/95-ceph-osd.rules
...but the rules' filters don't seem to match the DEVTYPE of the prepared
LVM devices.
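This is how I looked at the DEVTYPE (the dm name is just an example):

    $ udevadm info --query=property --name=/dev/dm-1 | grep DEVTYPE
    DEVTYPE=disk    # device-mapper devices report 'disk', not 'partition'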

Is this Jewel? Do you have some udev rules or anything that changes the
owner on the journal device (e.g. /dev/sdx or /dev/nvme0n1p1) to ceph:ceph?
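A quick way to spot such a rule is to grep the usual rule directories:

    grep -ri 'ceph' /etc/udev/rules.d /lib/udev/rules.d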
On 03/14/17 08:53, Gunwoo Gim wrote:
I'd love to get some help with this; it'd be much appreciated.
Best Wishes,
Nicholas.
On Tue, Mar 14, 2017 at 4:51 PM Gunwoo Gim wrote:
Hello, I'm trying to deploy a Ceph filestore cluster with LVM using the
ceph-ansible playbook. I've been fixing a couple of code blocks in
ceph-ansible and ceph-disk/main.py and have made some progress, but now I'm
stuck again: 'ceph-disk activate' on the OSD fails.
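The failing step boils down to something like this (the VG/LV names here are
just mine):

    ceph-disk prepare /dev/mapper/vg0-osd1 /dev/mapper/vg0-journal1   # data, then journal
    ceph-disk activate /dev/mapper/vg0-osd1                           # this is what fails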
Please let me just show you the error message: