On Sun, Jul 17, 2016 at 5:21 AM, Will Dennis <willard.den...@gmail.com> wrote:
> OK, so nuked everything Ceph-related on my nodes, and started over. Now
> running Jewel (ceph version 10.2.2). Everything went fine until “ceph-deploy
> osd activate” again; now I’m seeing the following --
>
> [ceph_deploy.osd][DEBUG ] activating host ceph2 disk /dev/sdc
> [ceph_deploy.osd][DEBUG ] will use init type: systemd
> [ceph2][DEBUG ] find the location of an executable
> [ceph2][INFO  ] Running command: sudo /usr/sbin/ceph-disk -v activate
> --mark-init systemd --mount /dev/sdc
> [ceph2][WARNIN] main_activate: path = /dev/sdc
> [ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is
> /sys/dev/block/8:32/dm/uuid
> [ceph2][WARNIN] command: Running command: /sbin/blkid -p -s TYPE -o value --
> /dev/sdc
> [ceph2][WARNIN] Traceback (most recent call last):
> [ceph2][WARNIN]   File "/usr/sbin/ceph-disk", line 9, in <module>
> [ceph2][WARNIN]     load_entry_point('ceph-disk==1.0.0', 'console_scripts',
> 'ceph-disk')()
> [ceph2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py",
> line 4994, in run
> [ceph2][WARNIN]     main(sys.argv[1:])
> [ceph2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py",
> line 4945, in main
> [ceph2][WARNIN]     args.func(args)
> [ceph2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py",
> line 3299, in main_activate
> [ceph2][WARNIN]     reactivate=args.reactivate,
> [ceph2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py",
> line 3009, in mount_activate
> [ceph2][WARNIN]     e,
> [ceph2][WARNIN] ceph_disk.main.FilesystemTypeError: Cannot discover
> filesystem type: device /dev/sdc: Line is truncated:
> [ceph2][ERROR ] RuntimeError: command returned non-zero exit status: 1
> [ceph_deploy][ERROR ] RuntimeError: Failed to execute command:
> /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sdc
>
> Still the same sort of error… no output from the blkid command used. Again,
> if I use either “-s PTTYPE” or get rid of the “-s TYPE” altogether, I get a
> value returned (“gpt”)…

I've run into a few issues activating OSDs on CentOS 7 as well.

First, there's an issue with the version of parted in CentOS 7.2:
https://bugzilla.redhat.com/1339705
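
If you want to check whether you're on the affected build, something
like this should tell you (standard CentOS tools, nothing Ceph-specific):

    rpm -q parted
    parted --version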

Second, the disks are now activated by udev. Instead of using activate,
use prepare and let udev handle the rest.
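
For example, something along these lines (host and disk names are just
placeholders, adjust to your cluster):

    ceph-deploy osd prepare ceph2:/dev/sdc

Once prepare has partitioned the disk, the udev rules shipped with Ceph
should mount and start the OSD without an explicit activate step.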

Third, this doesn't work well if you're also using LVM on the host,
since for some reason that causes udev to not send the necessary
add/change events.
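
If you run into that, one thing you could try is re-triggering the
block-device events by hand; a rough, untested sketch (the device name
is just an example):

    # ask udev to replay "add" events for block devices
    sudo udevadm trigger --subsystem-match=block --action=add
    # or just re-read the partition table of the OSD disk
    sudo partprobe /dev/sdc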

Hope this helps,

Ruben
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
