Hi folks-

Was this ever resolved?  I'm not finding a resolution in the email chain;
apologies if I'm missing it.  I'm experiencing the same problem: the cluster
works fine for object traffic, but I can't get rbd mapping to work in 0.78.
It worked fine in 0.72.2 for me.  Running Ubuntu 13.04 with a 3.12 kernel.

$ rbd create rbd/myimage --size 102400
$ sudo rbd map rbd/myimage
rbd: add failed: (5) Input/output error

$ rbd ls rbd
myimage
$
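
If it helps with diagnosis, I assume the client kernel log right after the
failed map names the underlying cause; something like:

$ dmesg | tail   # look for libceph / rbd lines logged by the failed map

should surface the same kind of libceph messages quoted below.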

Thanks,
Joe

From: ceph-users-boun...@lists.ceph.com On Behalf Of Ирек Фасихов
Sent: Tuesday, March 25, 2014 1:59 AM
To: Ilya Dryomov
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

Ilya, I set chooseleaf_vary_r to 0, but rbd images still do not map.

[root@ceph01 cluster]# rbd map rbd/tst
2014-03-25 12:48:14.318167 7f44717f7760  2 auth: KeyRing::load: loaded key file /etc/ceph/ceph.client.admin.keyring
rbd: add failed: (5) Input/output error

[root@ceph01 cluster]# cat /var/log/messages | tail
Mar 25 12:45:06 ceph01 kernel: libceph: osdc handle_map corrupt msg
Mar 25 12:45:06 ceph01 kernel: libceph: mon2 192.168.100.203:6789 session established
Mar 25 12:46:33 ceph01 kernel: libceph: client11240 fsid 10b46114-ac17-404e-99e3-69b34b85c901
Mar 25 12:46:33 ceph01 kernel: libceph: got v 13 cv 11 > 9 of ceph_pg_pool
Mar 25 12:46:33 ceph01 kernel: libceph: osdc handle_map corrupt msg
Mar 25 12:46:33 ceph01 kernel: libceph: mon2 192.168.100.203:6789 session established
Mar 25 12:48:14 ceph01 kernel: libceph: client11313 fsid 10b46114-ac17-404e-99e3-69b34b85c901
Mar 25 12:48:14 ceph01 kernel: libceph: got v 13 cv 11 > 9 of ceph_pg_pool
Mar 25 12:48:14 ceph01 kernel: libceph: osdc handle_map corrupt msg
Mar 25 12:48:14 ceph01 kernel: libceph: mon0 192.168.100.201:6789 session established

I do not really understand this error. The CRUSH map itself looks correct.
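
To double-check that the tunable really took, the installed map can be
decompiled and inspected (a quick sketch, assuming crushtool from the ceph
package is available):

ceph osd getcrushmap -o /tmp/crush       # dump the compiled crushmap
crushtool -d /tmp/crush -o /tmp/crush.txt   # decompile to text
grep chooseleaf_vary_r /tmp/crush.txt    # a "tunable chooseleaf_vary_r 1" line means it is still set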

Thanks.


2014-03-25 12:26 GMT+04:00 Ilya Dryomov <ilya.dryo...@inktank.com>:
On Tue, Mar 25, 2014 at 8:38 AM, Ирек Фасихов <malm...@gmail.com> wrote:
> Hi, Ilya.
>
> I added the files(crushd and osddump) to a folder in GoogleDrive.
>
> https://drive.google.com/folderview?id=0BxoNLVWxzOJWX0NLV1kzQ1l3Ymc&usp=sharing
OK, so this has nothing to do with caching.  You have chooseleaf_vary_r
set to 1 in your crushmap.  This is a new CRUSH tunable, introduced well
after the 3.14 merge window closed.  It will be supported starting with
3.15; until then, you should be able to run

ceph osd getcrushmap -o /tmp/crush
crushtool -i /tmp/crush --set-chooseleaf_vary_r 0 -o /tmp/crush.new
ceph osd setcrushmap -i /tmp/crush.new

to disable it.
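
Once you are on a 3.15 kernel, the same switch should re-enable it:

crushtool -i /tmp/crush --set-chooseleaf_vary_r 1 -o /tmp/crush.new
ceph osd setcrushmap -i /tmp/crush.new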

Thanks,

                Ilya



--
Best regards, Фасихов Ирек Нургаязович
Mobile: +79229045757
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
