Meant to include this – what do these messages indicate?  All systems have 0.78.

[1301268.557820] Key type ceph registered
[1301268.558524] libceph: loaded (mon/osd proto 15/24)
[1301268.579486] rbd: loaded rbd (rados block device)
[1301268.582364] libceph: mon1 10.0.0.102:6789 feature set mismatch, my 4a042a42 < server's 104a042a42, missing 1000000000
[1301268.582462] libceph: mon1 10.0.0.102:6789 socket error on read
[1301278.589461] libceph: mon1 10.0.0.102:6789 feature set mismatch, my 4a042a42 < server's 104a042a42, missing 1000000000
[1301278.589558] libceph: mon1 10.0.0.102:6789 socket error on read
[1301288.607615] libceph: mon1 10.0.0.102:6789 feature set mismatch, my 4a042a42 < server's 104a042a42, missing 1000000000
[1301288.607713] libceph: mon1 10.0.0.102:6789 socket error on read
[1301298.625873] libceph: mon1 10.0.0.102:6789 feature set mismatch, my 4a042a42 < server's 104a042a42, missing 1000000000
[1301298.625970] libceph: mon1 10.0.0.102:6789 socket error on read
[1301308.643936] libceph: mon0 10.0.0.101:6789 feature set mismatch, my 4a042a42 < server's 104a042a42, missing 1000000000
[1301308.644033] libceph: mon0 10.0.0.101:6789 socket error on read
[1301318.662082] libceph: mon0 10.0.0.101:6789 feature set mismatch, my 4a042a42 < server's 104a042a42, missing 1000000000
[1301318.662179] libceph: mon0 10.0.0.101:6789 socket error on read
[1301449.695232] libceph: mon0 10.0.0.101:6789 feature set mismatch, my 4a042a42 < server's 104a042a42, missing 1000000000
[1301449.695329] libceph: mon0 10.0.0.101:6789 socket error on read
[1301459.716235] libceph: mon1 10.0.0.102:6789 feature set mismatch, my 4a042a42 < server's 104a042a42, missing 1000000000
[1301459.716332] libceph: mon1 10.0.0.102:6789 socket error on read
[1301469.734425] libceph: mon1 10.0.0.102:6789 feature set mismatch, my 4a042a42 < server's 104a042a42, missing 1000000000
[1301469.734523] libceph: mon1 10.0.0.102:6789 socket error on read
[1301479.752603] libceph: mon1 10.0.0.102:6789 feature set mismatch, my 4a042a42 < server's 104a042a42, missing 1000000000
[1301479.752700] libceph: mon1 10.0.0.102:6789 socket error on read
[1301489.770773] libceph: mon1 10.0.0.102:6789 feature set mismatch, my 4a042a42 < server's 104a042a42, missing 1000000000
[1301489.770870] libceph: mon1 10.0.0.102:6789 socket error on read
[1301499.788904] libceph: mon1 10.0.0.102:6789 feature set mismatch, my 4a042a42 < server's 104a042a42, missing 1000000000
[1301499.789001] libceph: mon1 10.0.0.102:6789 socket error on read

$ ceph --version
ceph version 0.78 (f6c746c314d7b87b8419b6e584c94bfe4511dbd4)

$ ssh mohonpeak01 'ceph --version'
ceph version 0.78 (f6c746c314d7b87b8419b6e584c94bfe4511dbd4)

$ ssh mohonpeak02 'ceph --version'
ceph version 0.78 (f6c746c314d7b87b8419b6e584c94bfe4511dbd4)

$ ceph health detail
HEALTH_WARN noscrub,nodeep-scrub flag(s) set
noscrub,nodeep-scrub flag(s) set

$ ceph status
    cluster b12ebb71-e4a6-41fa-8246-71cbfa09fb6e
     health HEALTH_WARN noscrub,nodeep-scrub flag(s) set
     monmap e1: 2 mons at {mohonpeak01=10.0.0.101:6789/0,mohonpeak02=10.0.0.102:6789/0}, election epoch 10, quorum 0,1 mohonpeak01,mohonpeak02
     osdmap e216: 18 osds: 18 up, 18 in
            flags noscrub,nodeep-scrub
      pgmap v202112: 2784 pgs, 10 pools, 1637 GB data, 427 kobjects
            2439 GB used, 12643 GB / 15083 GB avail
                2784 active+clean


From: Gruher, Joseph R
Sent: Friday, April 04, 2014 11:44 AM
To: 'Ирек Фасихов'; Ilya Dryomov
Cc: ceph-users@lists.ceph.com; Gruher, Joseph R
Subject: RE: [ceph-users] Ceph RBD 0.78 Bug or feature?

Hi folks-

Was this ever resolved?  I'm not finding a resolution in the email chain, so
apologies if I am missing it.  I am experiencing the same problem: the cluster
works fine for object traffic, but I can't seem to get rbd to map in 0.78,
while it worked fine in 0.72.2.  I'm running Ubuntu 13.04 with a 3.12 kernel.

$ rbd create rbd/myimage --size 102400
$ sudo rbd map rbd/myimage
rbd: add failed: (5) Input/output error

$ rbd ls rbd
myimage
$

Thanks,
Joe

From: ceph-users-boun...@lists.ceph.com On Behalf Of Ирек Фасихов
Sent: Tuesday, March 25, 2014 1:59 AM
To: Ilya Dryomov
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

Ilya, I set "chooseleaf_vary_r 0", but rbd images still will not map.

[root@ceph01 cluster]# rbd map rbd/tst
2014-03-25 12:48:14.318167 7f44717f7760  2 auth: KeyRing::load: loaded key file /etc/ceph/ceph.client.admin.keyring
rbd: add failed: (5) Input/output error

[root@ceph01 cluster]# cat /var/log/messages | tail
Mar 25 12:45:06 ceph01 kernel: libceph: osdc handle_map corrupt msg
Mar 25 12:45:06 ceph01 kernel: libceph: mon2 192.168.100.203:6789 session established
Mar 25 12:46:33 ceph01 kernel: libceph: client11240 fsid 10b46114-ac17-404e-99e3-69b34b85c901
Mar 25 12:46:33 ceph01 kernel: libceph: got v 13 cv 11 > 9 of ceph_pg_pool
Mar 25 12:46:33 ceph01 kernel: libceph: osdc handle_map corrupt msg
Mar 25 12:46:33 ceph01 kernel: libceph: mon2 192.168.100.203:6789 session established
Mar 25 12:48:14 ceph01 kernel: libceph: client11313 fsid 10b46114-ac17-404e-99e3-69b34b85c901
Mar 25 12:48:14 ceph01 kernel: libceph: got v 13 cv 11 > 9 of ceph_pg_pool
Mar 25 12:48:14 ceph01 kernel: libceph: osdc handle_map corrupt msg
Mar 25 12:48:14 ceph01 kernel: libceph: mon0 192.168.100.201:6789 session established

I do not really understand this error. The CRUSH map looks correct to me.

Thanks.


2014-03-25 12:26 GMT+04:00 Ilya Dryomov <ilya.dryo...@inktank.com>:
On Tue, Mar 25, 2014 at 8:38 AM, Ирек Фасихов <malm...@gmail.com> wrote:
> Hi, Ilya.
>
> I added the files(crushd and osddump) to a folder in GoogleDrive.
>
> https://drive.google.com/folderview?id=0BxoNLVWxzOJWX0NLV1kzQ1l3Ymc&usp=sharing
OK, so this has nothing to do with caching.  You have chooseleaf_vary_r
set to 1 in your crushmap.  This is a new crush tunable, which was
introduced long after the 3.14 merge window closed.  It will be supported
starting with 3.15; until then you should be able to do

ceph osd getcrushmap -o /tmp/crush
crushtool -i /tmp/crush --set-chooseleaf_vary_r 0 -o /tmp/crush.new
ceph osd setcrushmap -i /tmp/crush.new

to disable it.
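
If you want to sanity-check the change before (or after) injecting it, you
should be able to decompile the new map and confirm the tunable now reads 0,
e.g. (the /tmp/crush.new.txt output path here is just an example):

crushtool -d /tmp/crush.new -o /tmp/crush.new.txt
grep chooseleaf_vary_r /tmp/crush.new.txt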

Thanks,

                Ilya



--
Best regards, Фасихов Ирек Нургаязович
Mobile: +79229045757
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
