You are trying to use the kernel client to map an RBD image created under
Jewel.  Jewel enables image features by default that require kernel 4.9 or
newer.  You can disable the features that need the newer kernel, but that's
not ideal, since those features are well worth having.  Alternatively, you
can mount the image with rbd-fuse, which is up to date for your Ceph
version.  In your position I would probably go the rbd-fuse route, unless
upgrading your kernel to 4.9 is an option.
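
If you do decide to disable the newer features, something along these
lines should work (pool and image names taken from your output below;
exactly which features a given kernel can handle varies, so treat this as
a sketch and check "rbd info" first):

$ rbd info pool1/pool1-img1        # see which features are enabled
$ rbd feature disable pool1/pool1-img1 deep-flatten fast-diff object-map exclusive-lock
$ sudo rbd map pool1-img1 -p pool1 # only layering remains, which 3.13 supports

That leaves only layering enabled.  For new images you can avoid the
problem up front with "rbd create --image-feature layering ..." or by
setting rbd_default_features = 1 in the client's ceph.conf.

For the rbd-fuse route, a minimal sketch (the mount point is arbitrary;
rbd-fuse exposes each image in the pool as a file under it):

$ sudo mkdir -p /mnt/rbd
$ sudo rbd-fuse -p pool1 /mnt/rbd
$ ls /mnt/rbd                      # pool1-img1 appears as a file here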

On Wed, May 31, 2017 at 9:36 AM Shambhu Rajak <sra...@sandvine.com> wrote:

> Hi Cephers,
>
> I have created a pool and am trying to create an RBD image on the Ceph
> client. Mapping the RBD image fails as follows:
>
> ubuntu@shambhucephnode0:~$ sudo rbd map pool1-img1 -p pool1
> rbd: sysfs write failed
> In some cases useful info is found in syslog - try "dmesg | tail" or so.
> rbd: map failed: (5) Input/output error
>
> So I checked dmesg as suggested:
>
> ubuntu@shambhucephnode0:~$ dmesg | tail
> [788743.741818] libceph: mon2 10.186.210.243:6789 feature set mismatch, my 4a042a42 < server's 2004a042a42, missing 20000000000
> [788743.746352] libceph: mon2 10.186.210.243:6789 socket error on read
> [788753.757934] libceph: mon2 10.186.210.243:6789 feature set mismatch, my 4a042a42 < server's 2004a042a42, missing 20000000000
> [788753.777578] libceph: mon2 10.186.210.243:6789 socket error on read
> [788763.773857] libceph: mon0 10.186.210.241:6789 feature set mismatch, my 4a042a42 < server's 2004a042a42, missing 20000000000
> [788763.780539] libceph: mon0 10.186.210.241:6789 socket error on read
> [788773.790371] libceph: mon1 10.186.210.242:6789 feature set mismatch, my 4a042a42 < server's 2004a042a42, missing 20000000000
> [788773.811208] libceph: mon1 10.186.210.242:6789 socket error on read
> [788783.805987] libceph: mon1 10.186.210.242:6789 feature set mismatch, my 4a042a42 < server's 2004a042a42, missing 20000000000
> [788783.826907] libceph: mon1 10.186.210.242:6789 socket error on read
>
> I am not sure what is going wrong here; my cluster health is HEALTH_OK,
> though.
>
> My configuration details:
>
> Ceph version: ceph version 10.2.7 (50e863e0f4bc8f4b9e31156de690d765af245185)
> OSDs: 12 across 3 storage nodes
> Monitors: 3, running on the 3 OSD nodes
>
> OS:
> No LSB modules are available.
> Distributor ID: Ubuntu
> Description:    Ubuntu 14.04.5 LTS
> Release:        14.04
> Codename:       trusty
>
> Ceph client kernel version:
> Linux version 3.13.0-95-generic (buildd@lgw01-58) (gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.3) )
>
> KRBD:
> ubuntu@shambhucephnode0:~$ /sbin/modinfo rbd
> filename:       /lib/modules/3.13.0-95-generic/kernel/drivers/block/rbd.ko
> license:        GPL
> author:         Jeff Garzik <j...@garzik.org>
> description:    rados block device
> author:         Yehuda Sadeh <yeh...@hq.newdream.net>
> author:         Sage Weil <s...@newdream.net>
> author:         Alex Elder <el...@inktank.com>
> srcversion:     48BFBD5C3D31D799F01D218
> depends:        libceph
> intree:         Y
> vermagic:       3.13.0-95-generic SMP mod_unload modversions
> signer:         Magrathea: Glacier signing key
> sig_key:        51:D5:D7:73:F1:07:BA:1B:C0:9D:33:68:38:C4:3C:DE:74:9E:4E:05
> sig_hashalgo:   sha512
>
> Thanks,
>
> Shambhu Rajak