Hi Rakesh,

That works as well. I also disabled the other features.
rbd feature disable data/data_01 exclusive-lock

Thanks for the response.

On Fri, Jun 24, 2016 at 6:22 AM, Rakesh Parkiti <rakeshpark...@hotmail.com> wrote:
> Hi Ishmael,
>
> Try creating the image with image-feature as layering only:
>
> # rbd create --image pool-name/image-name --size 15G --image-feature layering
> # rbd map --image pool-name/image-name
>
> Thanks
> Rakesh Parkiti
>
> On Jun 23, 2016 19:46, Ishmael Tsoaela <ishmae...@gmail.com> wrote:
>
> Hi All,
>
> I have created an image but cannot map it. Does anybody know what the problem could be?
>
> sudo rbd map data/data_01
>
> rbd: sysfs write failed
> RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable".
> In some cases useful info is found in syslog - try "dmesg | tail" or so.
> rbd: map failed: (6) No such device or address
>
> cluster_master@nodeC:~$ dmesg | tail
> [89572.831725] libceph: client4227 fsid 70cc6b75-9f83-4c67-a1c4-4fe846b4849e
> [89572.832413] libceph: mon0 155.232.195.4:6789 session established
> [89573.042375] libceph: client4229 fsid 70cc6b75-9f83-4c67-a1c4-4fe846b4849e
> [89573.043046] libceph: mon0 155.232.195.4:6789 session established
>
> Command used to create the image:
>
> rbd create data_01 --size 102400 --pool data
>
> cluster_master@nodeC:~$ rbd ls data
> data_01
>
> cluster_master@nodeC:~$ rbd --image data_01 -p data info
> rbd image 'data_01':
>         size 102400 MB in 25600 objects
>         order 22 (4096 kB objects)
>         block_name_prefix: rbd_data.105f2ae8944a
>         format: 2
>         features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
>         flags:
>
> cluster_master@nodeC:~$ ceph status
>     cluster 70cc6b75-9f83-4c67-a1c4-4fe846b4849e
>      health HEALTH_OK
>      monmap e1: 1 mons at {nodeB=155.232.195.4:6789/0}
>             election epoch 3, quorum 0 nodeB
>      osdmap e17: 2 osds: 2 up, 2 in
>             flags sortbitwise
>       pgmap v160: 192 pgs, 2 pools, 6454 bytes data, 5 objects
>             10311 MB used, 1851 GB / 1861 GB avail
>                  192 active+clean
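For anyone else who hits this: the "other features" disabled here were object-map, fast-diff and deep-flatten (the feature list shown by rbd info in the quoted thread above). As a rough sketch only, assuming the same data/data_01 image and a kernel client that supports layering but not the newer features, they can be dropped in one command, dependent features first, and the map retried:

# object-map and fast-diff depend on exclusive-lock, so disable them before it
rbd feature disable data/data_01 deep-flatten fast-diff object-map exclusive-lock
sudo rbd map data/data_01

Alternatively, as Rakesh suggests, a new image can be created with only layering enabled so nothing needs disabling afterwards (data_02 is just a placeholder name):

rbd create data/data_02 --size 102400 --image-feature layering
sudo rbd map data/data_02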
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com