Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

2014-04-04 Thread Ilya Dryomov
On Fri, Apr 4, 2014 at 10:47 PM, Gruher, Joseph R wrote:
> Meant to include this - what do these messages indicate?  All systems have
> 0.78.
>
>
>
> [1301268.557820] Key type ceph registered
>
> [1301268.558524] libceph: loaded (mon/osd proto 15/24)
>
> [1301268.579486] rbd: loaded rbd (rados block device)
>
> [1301268.582364] libceph: mon1 10.0.0.102:6789 feature set mismatch, my
> 4a042a42 < server's 104a042a42, missing 10

That's the CRUSH_V2 feature bit; it's supported starting with kernel 3.14.
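
For reference, the missing bits can be read off the two masks in that dmesg line with a quick shell calculation, e.g.:

printf '%x\n' $(( 0x104a042a42 ^ 0x4a042a42 ))
# prints 1000000000, i.e. 1 << 36 -- feature bit 36, CRUSH_V2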

Thanks,

Ilya


Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

2014-04-04 Thread Gruher, Joseph R
Aha – upgrade of kernel from 3.13 to 3.14 appears to have resolved the problem.
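
For completeness, a quick way to confirm the mapping really works after the upgrade (same image as in the commands quoted below):

$ sudo rbd map rbd/myimage
$ rbd showmapped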

Thanks,
Joe

From: Gruher, Joseph R
Sent: Friday, April 04, 2014 11:48 AM
To: Ирек Фасихов; Ilya Dryomov
Cc: ceph-users@lists.ceph.com; Gruher, Joseph R
Subject: RE: [ceph-users] Ceph RBD 0.78 Bug or feature?

Meant to include this – what do these messages indicate?  All systems have 0.78.

[1301268.557820] Key type ceph registered
[1301268.558524] libceph: loaded (mon/osd proto 15/24)
[1301268.579486] rbd: loaded rbd (rados block device)
[1301268.582364] libceph: mon1 10.0.0.102:6789 feature set mismatch, my 
4a042a42 < server's 104a042a42, missing 10
[1301268.582462] libceph: mon1 10.0.0.102:6789 socket error on read
[1301278.589461] libceph: mon1 10.0.0.102:6789 feature set mismatch, my 
4a042a42 < server's 104a042a42, missing 10
[1301278.589558] libceph: mon1 10.0.0.102:6789 socket error on read
[1301288.607615] libceph: mon1 10.0.0.102:6789 feature set mismatch, my 
4a042a42 < server's 104a042a42, missing 10
[1301288.607713] libceph: mon1 10.0.0.102:6789 socket error on read
[1301298.625873] libceph: mon1 10.0.0.102:6789 feature set mismatch, my 
4a042a42 < server's 104a042a42, missing 10
[1301298.625970] libceph: mon1 10.0.0.102:6789 socket error on read
[1301308.643936] libceph: mon0 10.0.0.101:6789 feature set mismatch, my 
4a042a42 < server's 104a042a42, missing 10
[1301308.644033] libceph: mon0 10.0.0.101:6789 socket error on read
[1301318.662082] libceph: mon0 10.0.0.101:6789 feature set mismatch, my 
4a042a42 < server's 104a042a42, missing 10
[1301318.662179] libceph: mon0 10.0.0.101:6789 socket error on read
[1301449.695232] libceph: mon0 10.0.0.101:6789 feature set mismatch, my 
4a042a42 < server's 104a042a42, missing 10
[1301449.695329] libceph: mon0 10.0.0.101:6789 socket error on read
[1301459.716235] libceph: mon1 10.0.0.102:6789 feature set mismatch, my 
4a042a42 < server's 104a042a42, missing 10
[1301459.716332] libceph: mon1 10.0.0.102:6789 socket error on read
[1301469.734425] libceph: mon1 10.0.0.102:6789 feature set mismatch, my 
4a042a42 < server's 104a042a42, missing 10
[1301469.734523] libceph: mon1 10.0.0.102:6789 socket error on read
[1301479.752603] libceph: mon1 10.0.0.102:6789 feature set mismatch, my 
4a042a42 < server's 104a042a42, missing 10
[1301479.752700] libceph: mon1 10.0.0.102:6789 socket error on read
[1301489.770773] libceph: mon1 10.0.0.102:6789 feature set mismatch, my 
4a042a42 < server's 104a042a42, missing 10
[1301489.770870] libceph: mon1 10.0.0.102:6789 socket error on read
[1301499.788904] libceph: mon1 10.0.0.102:6789 feature set mismatch, my 
4a042a42 < server's 104a042a42, missing 10
[1301499.789001] libceph: mon1 10.0.0.102:6789 socket error on read

$ ceph --version
ceph version 0.78 (f6c746c314d7b87b8419b6e584c94bfe4511dbd4)

$ ssh mohonpeak01 'ceph --version'
ceph version 0.78 (f6c746c314d7b87b8419b6e584c94bfe4511dbd4)

$ ssh mohonpeak02 'ceph --version'
ceph version 0.78 (f6c746c314d7b87b8419b6e584c94bfe4511dbd4)

$ ceph health detail
HEALTH_WARN noscrub,nodeep-scrub flag(s) set
noscrub,nodeep-scrub flag(s) set

$ ceph status
cluster b12ebb71-e4a6-41fa-8246-71cbfa09fb6e
 health HEALTH_WARN noscrub,nodeep-scrub flag(s) set
 monmap e1: 2 mons at 
{mohonpeak01=10.0.0.101:6789/0,mohonpeak02=10.0.0.102:6789/0}, election epoch 
10, quorum 0,1 mohonpeak01,mohonpeak02
 osdmap e216: 18 osds: 18 up, 18 in
flags noscrub,nodeep-scrub
  pgmap v202112: 2784 pgs, 10 pools, 1637 GB data, 427 kobjects
2439 GB used, 12643 GB / 15083 GB avail
2784 active+clean


From: Gruher, Joseph R
Sent: Friday, April 04, 2014 11:44 AM
To: 'Ирек Фасихов'; Ilya Dryomov
Cc: ceph-users@lists.ceph.com; Gruher, Joseph R
Subject: RE: [ceph-users] Ceph RBD 0.78 Bug or feature?

Hi folks-

Was this ever resolved?  I’m not finding a resolution in the email chain, 
apologies if I am missing it.  I am experiencing this same problem.  Cluster 
works fine for object traffic, can’t seem to get rbd to work in 0.78.  Worked 
fine in 0.72.2 for me.  Running Ubuntu 13.04 with 3.12 kernel.

$ rbd create rbd/myimage --size 102400
$ sudo rbd map rbd/myimage
rbd: add failed: (5) Input/output error

$ rbd ls rbd
myimage
$

Thanks,
Joe

From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Ирек Фасихов
Sent: Tuesday, March 25, 2014 1:59 AM
To: Ilya Dryomov
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

Ilya, I set "chooseleaf_vary_r 0", but I still cannot map rbd images.

[root@ceph01 cluster]# rbd map rbd/tst
2014-03-25 12:48:14.318167 7f4

Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

2014-04-04 Thread Gruher, Joseph R
Meant to include this – what do these messages indicate?  All systems have 0.78.

[1301268.557820] Key type ceph registered
[1301268.558524] libceph: loaded (mon/osd proto 15/24)
[1301268.579486] rbd: loaded rbd (rados block device)
[1301268.582364] libceph: mon1 10.0.0.102:6789 feature set mismatch, my 
4a042a42 < server's 104a042a42, missing 10
[1301268.582462] libceph: mon1 10.0.0.102:6789 socket error on read
[1301278.589461] libceph: mon1 10.0.0.102:6789 feature set mismatch, my 
4a042a42 < server's 104a042a42, missing 10
[1301278.589558] libceph: mon1 10.0.0.102:6789 socket error on read
[1301288.607615] libceph: mon1 10.0.0.102:6789 feature set mismatch, my 
4a042a42 < server's 104a042a42, missing 10
[1301288.607713] libceph: mon1 10.0.0.102:6789 socket error on read
[1301298.625873] libceph: mon1 10.0.0.102:6789 feature set mismatch, my 
4a042a42 < server's 104a042a42, missing 10
[1301298.625970] libceph: mon1 10.0.0.102:6789 socket error on read
[1301308.643936] libceph: mon0 10.0.0.101:6789 feature set mismatch, my 
4a042a42 < server's 104a042a42, missing 10
[1301308.644033] libceph: mon0 10.0.0.101:6789 socket error on read
[1301318.662082] libceph: mon0 10.0.0.101:6789 feature set mismatch, my 
4a042a42 < server's 104a042a42, missing 10
[1301318.662179] libceph: mon0 10.0.0.101:6789 socket error on read
[1301449.695232] libceph: mon0 10.0.0.101:6789 feature set mismatch, my 
4a042a42 < server's 104a042a42, missing 10
[1301449.695329] libceph: mon0 10.0.0.101:6789 socket error on read
[1301459.716235] libceph: mon1 10.0.0.102:6789 feature set mismatch, my 
4a042a42 < server's 104a042a42, missing 10
[1301459.716332] libceph: mon1 10.0.0.102:6789 socket error on read
[1301469.734425] libceph: mon1 10.0.0.102:6789 feature set mismatch, my 
4a042a42 < server's 104a042a42, missing 10
[1301469.734523] libceph: mon1 10.0.0.102:6789 socket error on read
[1301479.752603] libceph: mon1 10.0.0.102:6789 feature set mismatch, my 
4a042a42 < server's 104a042a42, missing 10
[1301479.752700] libceph: mon1 10.0.0.102:6789 socket error on read
[1301489.770773] libceph: mon1 10.0.0.102:6789 feature set mismatch, my 
4a042a42 < server's 104a042a42, missing 10
[1301489.770870] libceph: mon1 10.0.0.102:6789 socket error on read
[1301499.788904] libceph: mon1 10.0.0.102:6789 feature set mismatch, my 
4a042a42 < server's 104a042a42, missing 10
[1301499.789001] libceph: mon1 10.0.0.102:6789 socket error on read

$ ceph --version
ceph version 0.78 (f6c746c314d7b87b8419b6e584c94bfe4511dbd4)

$ ssh mohonpeak01 'ceph --version'
ceph version 0.78 (f6c746c314d7b87b8419b6e584c94bfe4511dbd4)

$ ssh mohonpeak02 'ceph --version'
ceph version 0.78 (f6c746c314d7b87b8419b6e584c94bfe4511dbd4)

$ ceph health detail
HEALTH_WARN noscrub,nodeep-scrub flag(s) set
noscrub,nodeep-scrub flag(s) set

$ ceph status
cluster b12ebb71-e4a6-41fa-8246-71cbfa09fb6e
 health HEALTH_WARN noscrub,nodeep-scrub flag(s) set
 monmap e1: 2 mons at 
{mohonpeak01=10.0.0.101:6789/0,mohonpeak02=10.0.0.102:6789/0}, election epoch 
10, quorum 0,1 mohonpeak01,mohonpeak02
 osdmap e216: 18 osds: 18 up, 18 in
flags noscrub,nodeep-scrub
  pgmap v202112: 2784 pgs, 10 pools, 1637 GB data, 427 kobjects
2439 GB used, 12643 GB / 15083 GB avail
2784 active+clean


From: Gruher, Joseph R
Sent: Friday, April 04, 2014 11:44 AM
To: 'Ирек Фасихов'; Ilya Dryomov
Cc: ceph-users@lists.ceph.com; Gruher, Joseph R
Subject: RE: [ceph-users] Ceph RBD 0.78 Bug or feature?

Hi folks-

Was this ever resolved?  I’m not finding a resolution in the email chain, 
apologies if I am missing it.  I am experiencing this same problem.  Cluster 
works fine for object traffic, can’t seem to get rbd to work in 0.78.  Worked 
fine in 0.72.2 for me.  Running Ubuntu 13.04 with 3.12 kernel.

$ rbd create rbd/myimage --size 102400
$ sudo rbd map rbd/myimage
rbd: add failed: (5) Input/output error

$ rbd ls rbd
myimage
$

Thanks,
Joe

From: ceph-users-boun...@lists.ceph.com 
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Ирек Фасихов
Sent: Tuesday, March 25, 2014 1:59 AM
To: Ilya Dryomov
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

Ilya, I set "chooseleaf_vary_r 0", but I still cannot map rbd images.

[root@ceph01 cluster]# rbd map rbd/tst
2014-03-25 12:48:14.318167 7f44717f7760  2 auth: KeyRing::load: loaded key file 
/etc/ceph/ceph.client.admin.keyring
rbd: add failed: (5) Input/output error

[root@ceph01 cluster]# cat /var/log/messages | tail
Mar 25 12:45:06 ceph01 kernel: libceph: osdc handle_map corrupt msg
Mar 25 12:45:06 ceph01 kernel: libceph: mon2 192.168.100.203:6789 session established
Mar 25 12:46:33 ceph01 kernel: libceph: client1124

Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

2014-04-04 Thread Gruher, Joseph R
Hi folks-

Was this ever resolved?  I’m not finding a resolution in the email chain, 
apologies if I am missing it.  I am experiencing this same problem.  Cluster 
works fine for object traffic, can’t seem to get rbd to work in 0.78.  Worked 
fine in 0.72.2 for me.  Running Ubuntu 13.04 with 3.12 kernel.

$ rbd create rbd/myimage --size 102400
$ sudo rbd map rbd/myimage
rbd: add failed: (5) Input/output error

$ rbd ls rbd
myimage
$
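
The EIO from "rbd map" does not say much by itself; the underlying reason usually shows up in the kernel log, so something like the following helps when reproducing:

$ sudo rbd map rbd/myimage || dmesg | tail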

Thanks,
Joe

From: ceph-users-boun...@lists.ceph.com 
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Ирек Фасихов
Sent: Tuesday, March 25, 2014 1:59 AM
To: Ilya Dryomov
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

Ilya, I set "chooseleaf_vary_r 0", but I still cannot map rbd images.

[root@ceph01 cluster]# rbd map rbd/tst
2014-03-25 12:48:14.318167 7f44717f7760  2 auth: KeyRing::load: loaded key file 
/etc/ceph/ceph.client.admin.keyring
rbd: add failed: (5) Input/output error

[root@ceph01 cluster]# cat /var/log/messages | tail
Mar 25 12:45:06 ceph01 kernel: libceph: osdc handle_map corrupt msg
Mar 25 12:45:06 ceph01 kernel: libceph: mon2 192.168.100.203:6789 session established
Mar 25 12:46:33 ceph01 kernel: libceph: client11240 fsid 
10b46114-ac17-404e-99e3-69b34b85c901
Mar 25 12:46:33 ceph01 kernel: libceph: got v 13 cv 11 > 9 of ceph_pg_pool
Mar 25 12:46:33 ceph01 kernel: libceph: osdc handle_map corrupt msg
Mar 25 12:46:33 ceph01 kernel: libceph: mon2 192.168.100.203:6789 session established
Mar 25 12:48:14 ceph01 kernel: libceph: client11313 fsid 
10b46114-ac17-404e-99e3-69b34b85c901
Mar 25 12:48:14 ceph01 kernel: libceph: got v 13 cv 11 > 9 of ceph_pg_pool
Mar 25 12:48:14 ceph01 kernel: libceph: osdc handle_map corrupt msg
Mar 25 12:48:14 ceph01 kernel: libceph: mon0 192.168.100.201:6789 session established

I do not really understand this error. CRUSH correct.

Thanks.


2014-03-25 12:26 GMT+04:00 Ilya Dryomov <ilya.dryo...@inktank.com>:
On Tue, Mar 25, 2014 at 8:38 AM, Ирек Фасихов <malm...@gmail.com> wrote:
> Hi, Ilya.
>
> I added the files(crushd and osddump) to a folder in GoogleDrive.
>
> https://drive.google.com/folderview?id=0BxoNLVWxzOJWX0NLV1kzQ1l3Ymc&usp=sharing
OK, so this has nothing to do with caching.  You have chooseleaf_vary_r
set to 1 in your crushmap.  This is a new crush tunable, which was
introduced long after 3.14 merge window closed.  It will be supported
starting with 3.15, until then you should be able to do

ceph osd getcrushmap -o /tmp/crush
crushtool -i /tmp/crush --set-chooseleaf_vary_r 0 -o /tmp/crush.new
ceph osd setcrushmap -i /tmp/crush.new

to disable it.

Thanks,

Ilya



--
Best regards, Фасихов Ирек Нургаязович
Mob.: +79229045757


Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

2014-03-29 Thread Ирек Фасихов
Thanks, Ilya.


2014-03-29 22:06 GMT+04:00 Ilya Dryomov :

> On Sat, Mar 29, 2014 at 5:15 PM, Ирек Фасихов  wrote:
> > Ilya, hi. Maybe you have the required patches for the kernel?
>
> Hi,
>
> It turned out there was a problem with userspace.  If you grab the
> latest ceph.git master, you should be able to use cache pools with
> 3.14-rc7, which is what I assume you are running.
>
> If you want the specifics, 7a1990b66ec29f4f3e1659df813ef09831a17cfe is
> the commit that merged the fix.
>
> Thanks,
>
> Ilya
>



-- 
Best regards, Фасихов Ирек Нургаязович
Mob.: +79229045757


Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

2014-03-29 Thread Ilya Dryomov
On Sat, Mar 29, 2014 at 5:15 PM, Ирек Фасихов  wrote:
> Ilya, hi. Maybe you have the required patches for the kernel?

Hi,

It turned out there was a problem with userspace.  If you grab the
latest ceph.git master, you should be able to use cache pools with
3.14-rc7, which is what I assume you are running.

If you want the specifics, 7a1990b66ec29f4f3e1659df813ef09831a17cfe is
the commit that merged the fix.
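
To check whether a given ceph.git checkout already contains that merge, something like this (run inside the clone) should do:

git merge-base --is-ancestor 7a1990b66ec29f4f3e1659df813ef09831a17cfe HEAD && echo "fix is merged"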

Thanks,

Ilya


Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

2014-03-29 Thread Ирек Фасихов
Ilya, hi. Maybe you have the required patches for the kernel?


2014-03-25 14:51 GMT+04:00 Ирек Фасихов :

> Yep, so works.
>
>
> 2014-03-25 14:45 GMT+04:00 Ilya Dryomov :
>
> On Tue, Mar 25, 2014 at 12:00 PM, Ирек Фасихов  wrote:
>> > Hmmm, create another image in another pool. Pool without cache tier.
>> >
>> > [root@ceph01 cluster]# rbd create test/image --size 102400
>> > [root@ceph01 cluster]# rbd -p test ls -l
>> > NAME SIZE PARENT FMT PROT LOCK
>> > image 102400M  1
>> > [root@ceph01 cluster]# ceph osd dump | grep test
>> > pool 4 'test' replicated size 3 min_size 2 crush_ruleset 0 object_hash
>> > rjenkins pg_num 100 pgp_num 100 last_change 2049 owner 0 flags
>> hashpspool
>> > stripe_width 0
>> >
>> > Get the same error...
>> >
>> > [root@ceph01 cluster]# rbd map -p test image
>> > rbd: add failed: (5) Input/output error
>> >
>> > Mar 25 13:53:56 ceph01 kernel: libceph: client11343 fsid
>> > 10b46114-ac17-404e-99e3-69b34b85c901
>> > Mar 25 13:53:56 ceph01 kernel: libceph: got v 13 cv 11 > 9 of
>> ceph_pg_pool
>> > Mar 25 13:53:56 ceph01 kernel: libceph: osdc handle_map corrupt msg
>> > Mar 25 13:53:56 ceph01 kernel: libceph: mon0 192.168.100.201:6789session
>> > established
>> >
>> > Maybe I'm doing wrong?
>>
>> No, the problem here is that the pool with hit_set stuff set still
>> exists and therefore is present in the osdmap.  You'll have to remove
>> that pool with something like
>>
>> # I assume "cache" is the name of the cache pool
>> ceph osd tier remove-overlay rbd
>> ceph osd tier remove rbd cache
>> ceph osd pool delete cache cache --yes-i-really-really-mean-it
>>
>> in order to be able to map test/image.
>>
>> Thanks,
>>
>> Ilya
>>
>
>
>
> --
> С уважением, Фасихов Ирек Нургаязович
> Моб.: +79229045757
>



-- 
Best regards, Фасихов Ирек Нургаязович
Mob.: +79229045757


Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

2014-03-25 Thread Ирек Фасихов
Yep, that works.


2014-03-25 14:45 GMT+04:00 Ilya Dryomov :

> On Tue, Mar 25, 2014 at 12:00 PM, Ирек Фасихов  wrote:
> > Hmmm, create another image in another pool. Pool without cache tier.
> >
> > [root@ceph01 cluster]# rbd create test/image --size 102400
> > [root@ceph01 cluster]# rbd -p test ls -l
> > NAME SIZE PARENT FMT PROT LOCK
> > image 102400M  1
> > [root@ceph01 cluster]# ceph osd dump | grep test
> > pool 4 'test' replicated size 3 min_size 2 crush_ruleset 0 object_hash
> > rjenkins pg_num 100 pgp_num 100 last_change 2049 owner 0 flags hashpspool
> > stripe_width 0
> >
> > Get the same error...
> >
> > [root@ceph01 cluster]# rbd map -p test image
> > rbd: add failed: (5) Input/output error
> >
> > Mar 25 13:53:56 ceph01 kernel: libceph: client11343 fsid
> > 10b46114-ac17-404e-99e3-69b34b85c901
> > Mar 25 13:53:56 ceph01 kernel: libceph: got v 13 cv 11 > 9 of
> ceph_pg_pool
> > Mar 25 13:53:56 ceph01 kernel: libceph: osdc handle_map corrupt msg
> > Mar 25 13:53:56 ceph01 kernel: libceph: mon0 192.168.100.201:6789session
> > established
> >
> > Maybe I'm doing wrong?
>
> No, the problem here is that the pool with hit_set stuff set still
> exists and therefore is present in the osdmap.  You'll have to remove
> that pool with something like
>
> # I assume "cache" is the name of the cache pool
> ceph osd tier remove-overlay rbd
> ceph osd tier remove rbd cache
> ceph osd pool delete cache cache --yes-i-really-really-mean-it
>
> in order to be able to map test/image.
>
> Thanks,
>
> Ilya
>



-- 
Best regards, Фасихов Ирек Нургаязович
Mob.: +79229045757


Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

2014-03-25 Thread Ilya Dryomov
On Tue, Mar 25, 2014 at 12:00 PM, Ирек Фасихов  wrote:
> Hmmm, create another image in another pool. Pool without cache tier.
>
> [root@ceph01 cluster]# rbd create test/image --size 102400
> [root@ceph01 cluster]# rbd -p test ls -l
> NAME SIZE PARENT FMT PROT LOCK
> image 102400M  1
> [root@ceph01 cluster]# ceph osd dump | grep test
> pool 4 'test' replicated size 3 min_size 2 crush_ruleset 0 object_hash
> rjenkins pg_num 100 pgp_num 100 last_change 2049 owner 0 flags hashpspool
> stripe_width 0
>
> Get the same error...
>
> [root@ceph01 cluster]# rbd map -p test image
> rbd: add failed: (5) Input/output error
>
> Mar 25 13:53:56 ceph01 kernel: libceph: client11343 fsid
> 10b46114-ac17-404e-99e3-69b34b85c901
> Mar 25 13:53:56 ceph01 kernel: libceph: got v 13 cv 11 > 9 of ceph_pg_pool
> Mar 25 13:53:56 ceph01 kernel: libceph: osdc handle_map corrupt msg
> Mar 25 13:53:56 ceph01 kernel: libceph: mon0 192.168.100.201:6789 session
> established
>
> Maybe I'm doing wrong?

No, the problem here is that the pool with hit_set stuff set still
exists and therefore is present in the osdmap.  You'll have to remove
that pool with something like

# I assume "cache" is the name of the cache pool
ceph osd tier remove-overlay rbd
ceph osd tier remove rbd cache
ceph osd pool delete cache cache --yes-i-really-really-mean-it

in order to be able to map test/image.
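
Once that's done, a quick look at the osdmap, e.g.

ceph osd dump | grep -E 'cache|tier'

should no longer show the cache pool or any tier fields on rbd, and the map should go through.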

Thanks,

Ilya


Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

2014-03-25 Thread Ирек Фасихов
Hmmm, I created another image in another pool, one without a cache tier.

[root@ceph01 cluster]# rbd create test/image --size 102400
[root@ceph01 cluster]# rbd -p test ls -l
NAME SIZE PARENT FMT PROT LOCK
image 102400M  1
[root@ceph01 cluster]# ceph osd dump | grep test
pool 4 'test' replicated size 3 min_size 2 crush_ruleset 0 object_hash
rjenkins pg_num 100 pgp_num 100 last_change 2049 owner 0 flags hashpspool
stripe_width 0

I get the same error...

[root@ceph01 cluster]# rbd map -p test image
rbd: add failed: (5) Input/output error

Mar 25 13:53:56 ceph01 kernel: libceph: client11343 fsid
10b46114-ac17-404e-99e3-69b34b85c901
Mar 25 13:53:56 ceph01 kernel: libceph: got v 13 cv 11 > 9 of ceph_pg_pool
Mar 25 13:53:56 ceph01 kernel: libceph: osdc handle_map corrupt msg
Mar 25 13:53:56 ceph01 kernel: libceph: mon0 192.168.100.201:6789 session
established

Maybe I'm doing something wrong?

Thanks.



2014-03-25 13:34 GMT+04:00 Ilya Dryomov :

> On Tue, Mar 25, 2014 at 10:59 AM, Ирек Фасихов  wrote:
> > Ilya, set "chooseleaf_vary_r 0", but no map rbd images.
> >
> > [root@ceph01 cluster]# rbd map rbd/tst
> > 2014-03-25 12:48:14.318167 7f44717f7760  2 auth: KeyRing::load: loaded
> key
> > file /etc/ceph/ceph.client.admin.keyring
> > rbd: add failed: (5) Input/output error
> >
> > [root@ceph01 cluster]# cat /var/log/messages | tail
> > Mar 25 12:45:06 ceph01 kernel: libceph: osdc handle_map corrupt msg
> > Mar 25 12:45:06 ceph01 kernel: libceph: mon2 192.168.100.203:6789session
> > established
> > Mar 25 12:46:33 ceph01 kernel: libceph: client11240 fsid
> > 10b46114-ac17-404e-99e3-69b34b85c901
> > Mar 25 12:46:33 ceph01 kernel: libceph: got v 13 cv 11 > 9 of
> ceph_pg_pool
> > Mar 25 12:46:33 ceph01 kernel: libceph: osdc handle_map corrupt msg
> > Mar 25 12:46:33 ceph01 kernel: libceph: mon2 192.168.100.203:6789session
> > established
> > Mar 25 12:48:14 ceph01 kernel: libceph: client11313 fsid
> > 10b46114-ac17-404e-99e3-69b34b85c901
> > Mar 25 12:48:14 ceph01 kernel: libceph: got v 13 cv 11 > 9 of
> ceph_pg_pool
> > Mar 25 12:48:14 ceph01 kernel: libceph: osdc handle_map corrupt msg
> > Mar 25 12:48:14 ceph01 kernel: libceph: mon0 192.168.100.201:6789session
> > established
> >
> > I do not really understand this error. CRUSH correct.
>
> Ah, this error means that the kernel received an unsupported version of
> osdmap.  Strictly speaking, kernel client doesn't fully support caching
> yet.  The reason, once again, is that tiering agent and some of the
> hit_set stuff are post 3.14.  It will probably make 3.15, but I'll have
> to get back to you on that.
>
> Sorry for not letting you know earlier, I got confused.
>
> Thanks,
>
> Ilya
>



-- 
Best regards, Фасихов Ирек Нургаязович
Mob.: +79229045757


Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

2014-03-25 Thread Ilya Dryomov
On Tue, Mar 25, 2014 at 10:59 AM, Ирек Фасихов  wrote:
> Ilya, set "chooseleaf_vary_r 0", but no map rbd images.
>
> [root@ceph01 cluster]# rbd map rbd/tst
> 2014-03-25 12:48:14.318167 7f44717f7760  2 auth: KeyRing::load: loaded key
> file /etc/ceph/ceph.client.admin.keyring
> rbd: add failed: (5) Input/output error
>
> [root@ceph01 cluster]# cat /var/log/messages | tail
> Mar 25 12:45:06 ceph01 kernel: libceph: osdc handle_map corrupt msg
> Mar 25 12:45:06 ceph01 kernel: libceph: mon2 192.168.100.203:6789 session
> established
> Mar 25 12:46:33 ceph01 kernel: libceph: client11240 fsid
> 10b46114-ac17-404e-99e3-69b34b85c901
> Mar 25 12:46:33 ceph01 kernel: libceph: got v 13 cv 11 > 9 of ceph_pg_pool
> Mar 25 12:46:33 ceph01 kernel: libceph: osdc handle_map corrupt msg
> Mar 25 12:46:33 ceph01 kernel: libceph: mon2 192.168.100.203:6789 session
> established
> Mar 25 12:48:14 ceph01 kernel: libceph: client11313 fsid
> 10b46114-ac17-404e-99e3-69b34b85c901
> Mar 25 12:48:14 ceph01 kernel: libceph: got v 13 cv 11 > 9 of ceph_pg_pool
> Mar 25 12:48:14 ceph01 kernel: libceph: osdc handle_map corrupt msg
> Mar 25 12:48:14 ceph01 kernel: libceph: mon0 192.168.100.201:6789 session
> established
>
> I do not really understand this error. CRUSH correct.

Ah, this error means that the kernel received an unsupported version of
osdmap.  Strictly speaking, kernel client doesn't fully support caching
yet.  The reason, once again, is that tiering agent and some of the
hit_set stuff are post 3.14.  It will probably make 3.15, but I'll have
to get back to you on that.

Sorry for not letting you know earlier, I got confused.

Thanks,

Ilya


Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

2014-03-25 Thread Ирек Фасихов
I also added a new log in Google Drive.

https://drive.google.com/folderview?id=0BxoNLVWxzOJWX0NLV1kzQ1l3Ymc&usp=sharing


2014-03-25 12:59 GMT+04:00 Ирек Фасихов :

> Ilya, set "chooseleaf_vary_r 0", but no map rbd images.
>
> [root@ceph01 cluster]# *rbd map rbd/tst*
> 2014-03-25 12:48:14.318167 7f44717f7760  2 auth: KeyRing::load: loaded key
> file /etc/ceph/ceph.client.admin.keyring
> rbd: add failed: (5) Input/output error
>
> [root@ceph01 cluster]# *cat /var/log/messages | tail*
> Mar 25 12:45:06 ceph01 kernel: libceph: *osdc handle_map corrupt msg*
> Mar 25 12:45:06 ceph01 kernel: libceph: mon2 192.168.100.203:6789 session
> established
> Mar 25 12:46:33 ceph01 kernel: libceph: client11240 fsid
> 10b46114-ac17-404e-99e3-69b34b85c901
> Mar 25 12:46:33 ceph01 kernel: libceph: got v 13 cv 11 > 9 of ceph_pg_pool
> Mar 25 12:46:33 ceph01 kernel: libceph: osdc handle_map corrupt msg
> Mar 25 12:46:33 ceph01 kernel: libceph: mon2 192.168.100.203:6789 session
> established
> Mar 25 12:48:14 ceph01 kernel: libceph: client11313 fsid
> 10b46114-ac17-404e-99e3-69b34b85c901
> Mar 25 12:48:14 ceph01 kernel: libceph: got v 13 cv 11 > 9 of ceph_pg_pool
> Mar 25 12:48:14 ceph01 kernel: libceph: osdc handle_map corrupt msg
> Mar 25 12:48:14 ceph01 kernel: libceph: mon0 192.168.100.201:6789 session
> established
>
> I do not really understand this error. CRUSH correct.
>
> Thanks.
>
>
>
> 2014-03-25 12:26 GMT+04:00 Ilya Dryomov :
>
> On Tue, Mar 25, 2014 at 8:38 AM, Ирек Фасихов  wrote:
>> > Hi, Ilya.
>> >
>> > I added the files(crushd and osddump) to a folder in GoogleDrive.
>> >
>> >
>> https://drive.google.com/folderview?id=0BxoNLVWxzOJWX0NLV1kzQ1l3Ymc&usp=sharing
>>
>> OK, so this has nothing to do with caching.  You have chooseleaf_vary_r
>> set to 1 in your crushmap.  This is a new crush tunable, which was
>> introduced long after 3.14 merge window closed.  It will be supported
>> starting with 3.15, until then you should be able to do
>>
>> ceph osd getcrushmap -o /tmp/crush
>> crushtool -i /tmp/crush --set-chooseleaf_vary_r 0 -o /tmp/crush.new
>> ceph osd setcrushmap -i /tmp/crush.new
>>
>> to disable it.
>>
>> Thanks,
>>
>> Ilya
>>
>
>
>
> --
> С уважением, Фасихов Ирек Нургаязович
> Моб.: +79229045757
>



-- 
Best regards, Фасихов Ирек Нургаязович
Mob.: +79229045757


Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

2014-03-25 Thread Ирек Фасихов
Ilya, I set "chooseleaf_vary_r 0", but I still cannot map rbd images.

[root@ceph01 cluster]# *rbd map rbd/tst*
2014-03-25 12:48:14.318167 7f44717f7760  2 auth: KeyRing::load: loaded key
file /etc/ceph/ceph.client.admin.keyring
rbd: add failed: (5) Input/output error

[root@ceph01 cluster]# *cat /var/log/messages | tail*
Mar 25 12:45:06 ceph01 kernel: libceph: *osdc handle_map corrupt msg*
Mar 25 12:45:06 ceph01 kernel: libceph: mon2 192.168.100.203:6789 session
established
Mar 25 12:46:33 ceph01 kernel: libceph: client11240 fsid
10b46114-ac17-404e-99e3-69b34b85c901
Mar 25 12:46:33 ceph01 kernel: libceph: got v 13 cv 11 > 9 of ceph_pg_pool
Mar 25 12:46:33 ceph01 kernel: libceph: osdc handle_map corrupt msg
Mar 25 12:46:33 ceph01 kernel: libceph: mon2 192.168.100.203:6789 session
established
Mar 25 12:48:14 ceph01 kernel: libceph: client11313 fsid
10b46114-ac17-404e-99e3-69b34b85c901
Mar 25 12:48:14 ceph01 kernel: libceph: got v 13 cv 11 > 9 of ceph_pg_pool
Mar 25 12:48:14 ceph01 kernel: libceph: osdc handle_map corrupt msg
Mar 25 12:48:14 ceph01 kernel: libceph: mon0 192.168.100.201:6789 session
established

I do not really understand this error. CRUSH correct.

Thanks.



2014-03-25 12:26 GMT+04:00 Ilya Dryomov :

> On Tue, Mar 25, 2014 at 8:38 AM, Ирек Фасихов  wrote:
> > Hi, Ilya.
> >
> > I added the files(crushd and osddump) to a folder in GoogleDrive.
> >
> >
> https://drive.google.com/folderview?id=0BxoNLVWxzOJWX0NLV1kzQ1l3Ymc&usp=sharing
>
> OK, so this has nothing to do with caching.  You have chooseleaf_vary_r
> set to 1 in your crushmap.  This is a new crush tunable, which was
> introduced long after 3.14 merge window closed.  It will be supported
> starting with 3.15, until then you should be able to do
>
> ceph osd getcrushmap -o /tmp/crush
> crushtool -i /tmp/crush --set-chooseleaf_vary_r 0 -o /tmp/crush.new
> ceph osd setcrushmap -i /tmp/crush.new
>
> to disable it.
>
> Thanks,
>
> Ilya
>



-- 
Best regards, Фасихов Ирек Нургаязович
Mob.: +79229045757


Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

2014-03-25 Thread Ilya Dryomov
On Tue, Mar 25, 2014 at 8:38 AM, Ирек Фасихов  wrote:
> Hi, Ilya.
>
> I added the files(crushd and osddump) to a folder in GoogleDrive.
>
> https://drive.google.com/folderview?id=0BxoNLVWxzOJWX0NLV1kzQ1l3Ymc&usp=sharing

OK, so this has nothing to do with caching.  You have chooseleaf_vary_r
set to 1 in your crushmap.  This is a new crush tunable, which was
introduced long after 3.14 merge window closed.  It will be supported
starting with 3.15, until then you should be able to do

ceph osd getcrushmap -o /tmp/crush
crushtool -i /tmp/crush --set-chooseleaf_vary_r 0 -o /tmp/crush.new
ceph osd setcrushmap -i /tmp/crush.new

to disable it.
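
You can double-check the result by decompiling the new map again; the chooseleaf_vary_r line should be gone:

crushtool -d /tmp/crush.new | grep tunable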

Thanks,

Ilya


Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

2014-03-24 Thread Ирек Фасихов
Hi, Ilya.

I added the files (crushd and osddump) to a folder in Google Drive.

https://drive.google.com/folderview?id=0BxoNLVWxzOJWX0NLV1kzQ1l3Ymc&usp=sharing




2014-03-25 0:19 GMT+04:00 Ilya Dryomov :

> On Mon, Mar 24, 2014 at 9:46 PM, Ирек Фасихов  wrote:
> > Kernel module support RBD, Ilya Dryomov?
> >
> > Thanks?
> >
> >
> > 2014-03-24 23:32 GMT+04:00 Michael J. Kidd :
> >
> >> The message 'feature set mismatch' is exactly what Gregory was talking
> >> about.
> >>
> >> This indicates that you are using a CRUSH feature in your Ceph
> environment
> >> that the Kernel RBD client doesn't understand.  So, it's unable to
> >> communicate.  In this instance, it's likely to do with the cache pool
> >> itself, but I'm not terribly familiar with the features support in 3.14
> >> kernel...
> >>
> >> Thanks,
> >>
> >> Michael J. Kidd
> >> Sr. Storage Consultant
> >> Inktank Professional Services
> >>
> >>
> >> On Mon, Mar 24, 2014 at 2:58 PM, Ирек Фасихов 
> wrote:
> >>>
> >>> Hi, Gregory!
> >>> I think that there is no interesting :).
> >>>
> >>> dmesg:
> >>> Key type dns_resolver registered
> >>> Key type ceph registered
> >>> libceph: loaded (mon/osd proto 15/24)
> >>> rbd: loaded (major 252)
> >>> libceph: mon1 192.168.100.202:6789 feature set mismatch, my
> 384a042a42 <
> >>> server's 2384a042a42, missing 200
> >>> libceph: mon1 192.168.100.202:6789 socket error on read
> >>> libceph: mon1 192.168.100.202:6789 feature set mismatch, my
> 384a042a42 <
> >>> server's 2384a042a42, missing 200
> >>> libceph: mon1 192.168.100.202:6789 socket error on read
> >>> libceph: mon2 192.168.100.203:6789 feature set mismatch, my
> 384a042a42 <
> >>> server's 2384a042a42, missing 200
> >>> libceph: mon2 192.168.100.203:6789 socket error on read
> >>> libceph: mon2 192.168.100.203:6789 feature set mismatch, my
> 384a042a42 <
> >>> server's 2384a042a42, missing 200
> >>> libceph: mon2 192.168.100.203:6789 socket error on read
> >>> libceph: mon1 192.168.100.202:6789 feature set mismatch, my
> 384a042a42 <
> >>> server's 2384a042a42, missing 200
> >>> libceph: mon1 192.168.100.202:6789 socket error on read
> >>> libceph: mon1 192.168.100.202:6789 feature set mismatch, my
> 384a042a42 <
> >>> server's 2384a042a42, missing 200
> >>> libceph: mon1 192.168.100.202:6789 socket error on read
> >>> libceph: mon1 192.168.100.202:6789 feature set mismatch, my
> 384a042a42 <
> >>> server's 2384a042a42, missing 200
> >>> libceph: mon1 192.168.100.202:6789 socket error on read
> >>> libceph: mon2 192.168.100.203:6789 feature set mismatch, my
> 384a042a42 <
> >>> server's 2384a042a42, missing 200
> >>> libceph: mon2 192.168.100.203:6789 socket error on read
> >>> libceph: mon2 192.168.100.203:6789 feature set mismatch, my
> 384a042a42 <
> >>> server's 2384a042a42, missing 200
> >>> libceph: mon2 192.168.100.203:6789 socket error on read
> >>> libceph: mon0 192.168.100.201:6789 feature set mismatch, my
> 384a042a42 <
> >>> server's 2384a042a42, missing 200
> >>> libceph: mon0 192.168.100.201:6789 socket error on read
> >>> libceph: mon1 192.168.100.202:6789 feature set mismatch, my
> 384a042a42 <
> >>> server's 2384a042a42, missing 200
> >>> libceph: mon1 192.168.100.202:6789 socket error on read
> >>> libceph: mon2 192.168.100.203:6789 feature set mismatch, my
> 384a042a42 <
> >>> server's 2384a042a42, missing 200
> >>> libceph: mon2 192.168.100.203:6789 socket error on read
>
> Hi,
>
> 200 is bit 41, so it's either crush tunables v3 or primary
> affinity.  Do you have crush chooseleaf_vary_r set to a non-default
> value or primary-affinity adjusted with the 'ceph osd primary-affinity'
> command?
>
> Can you add the output of 'ceph osd dump --format=json-pretty' and the
> output of 'ceph osd getcrushmap -o /tmp/crush; crushtool -d /tmp/crush'
> to your Drive folder so I can check myself?
>
> Thanks,
>
> Ilya
>



-- 
Best regards, Фасихов Ирек Нургаязович
Mob.: +79229045757


Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

2014-03-24 Thread Ilya Dryomov
On Mon, Mar 24, 2014 at 9:46 PM, Ирек Фасихов  wrote:
> Kernel module support RBD, Ilya Dryomov?
>
> Thanks?
>
>
> 2014-03-24 23:32 GMT+04:00 Michael J. Kidd :
>
>> The message 'feature set mismatch' is exactly what Gregory was talking
>> about.
>>
>> This indicates that you are using a CRUSH feature in your Ceph environment
>> that the Kernel RBD client doesn't understand.  So, it's unable to
>> communicate.  In this instance, it's likely to do with the cache pool
>> itself, but I'm not terribly familiar with the features support in 3.14
>> kernel...
>>
>> Thanks,
>>
>> Michael J. Kidd
>> Sr. Storage Consultant
>> Inktank Professional Services
>>
>>
>> On Mon, Mar 24, 2014 at 2:58 PM, Ирек Фасихов  wrote:
>>>
>>> Hi, Gregory!
>>> I think that there is no interesting :).
>>>
>>> dmesg:
>>> Key type dns_resolver registered
>>> Key type ceph registered
>>> libceph: loaded (mon/osd proto 15/24)
>>> rbd: loaded (major 252)
>>> libceph: mon1 192.168.100.202:6789 feature set mismatch, my 384a042a42 <
>>> server's 2384a042a42, missing 200
>>> libceph: mon1 192.168.100.202:6789 socket error on read
>>> libceph: mon1 192.168.100.202:6789 feature set mismatch, my 384a042a42 <
>>> server's 2384a042a42, missing 200
>>> libceph: mon1 192.168.100.202:6789 socket error on read
>>> libceph: mon2 192.168.100.203:6789 feature set mismatch, my 384a042a42 <
>>> server's 2384a042a42, missing 200
>>> libceph: mon2 192.168.100.203:6789 socket error on read
>>> libceph: mon2 192.168.100.203:6789 feature set mismatch, my 384a042a42 <
>>> server's 2384a042a42, missing 200
>>> libceph: mon2 192.168.100.203:6789 socket error on read
>>> libceph: mon1 192.168.100.202:6789 feature set mismatch, my 384a042a42 <
>>> server's 2384a042a42, missing 200
>>> libceph: mon1 192.168.100.202:6789 socket error on read
>>> libceph: mon1 192.168.100.202:6789 feature set mismatch, my 384a042a42 <
>>> server's 2384a042a42, missing 200
>>> libceph: mon1 192.168.100.202:6789 socket error on read
>>> libceph: mon1 192.168.100.202:6789 feature set mismatch, my 384a042a42 <
>>> server's 2384a042a42, missing 200
>>> libceph: mon1 192.168.100.202:6789 socket error on read
>>> libceph: mon2 192.168.100.203:6789 feature set mismatch, my 384a042a42 <
>>> server's 2384a042a42, missing 200
>>> libceph: mon2 192.168.100.203:6789 socket error on read
>>> libceph: mon2 192.168.100.203:6789 feature set mismatch, my 384a042a42 <
>>> server's 2384a042a42, missing 200
>>> libceph: mon2 192.168.100.203:6789 socket error on read
>>> libceph: mon0 192.168.100.201:6789 feature set mismatch, my 384a042a42 <
>>> server's 2384a042a42, missing 200
>>> libceph: mon0 192.168.100.201:6789 socket error on read
>>> libceph: mon1 192.168.100.202:6789 feature set mismatch, my 384a042a42 <
>>> server's 2384a042a42, missing 200
>>> libceph: mon1 192.168.100.202:6789 socket error on read
>>> libceph: mon2 192.168.100.203:6789 feature set mismatch, my 384a042a42 <
>>> server's 2384a042a42, missing 200
>>> libceph: mon2 192.168.100.203:6789 socket error on read

Hi,

200 is bit 41, so it's either crush tunables v3 or primary
affinity.  Do you have crush chooseleaf_vary_r set to a non-default
value or primary-affinity adjusted with the 'ceph osd primary-affinity'
command?

Can you add the output of 'ceph osd dump --format=json-pretty' and the
output of 'ceph osd getcrushmap -o /tmp/crush; crushtool -d /tmp/crush'
to your Drive folder so I can check myself?

Thanks,

Ilya


Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

2014-03-24 Thread Ирек Фасихов
Does the kernel RBD module support this, Ilya Dryomov?

Thanks.


2014-03-24 23:32 GMT+04:00 Michael J. Kidd :

> The message 'feature set mismatch' is exactly what Gregory was talking
> about.
>
> This indicates that you are using a CRUSH feature in your Ceph environment
> that the Kernel RBD client doesn't understand.  So, it's unable to
> communicate.  In this instance, it's likely to do with the cache pool
> itself, but I'm not terribly familiar with the features support in 3.14
> kernel...
>
> Thanks,
>
> Michael J. Kidd
> Sr. Storage Consultant
> Inktank Professional Services
>
>
> On Mon, Mar 24, 2014 at 2:58 PM, Ирек Фасихов  wrote:
>
>> Hi, Gregory!
>> I think that there is no interesting :).
>>
>> *dmesg:*
>> Key type dns_resolver registered
>> Key type ceph registered
>> libceph: loaded (mon/osd proto 15/24)
>> rbd: loaded (major 252)
>> libceph: mon1 192.168.100.202:6789 feature set mismatch, my 384a042a42 <
>> server's 2384a042a42, missing 200
>> libceph: mon1 192.168.100.202:6789 socket error on read
>> libceph: mon1 192.168.100.202:6789 feature set mismatch, my 384a042a42 <
>> server's 2384a042a42, missing 200
>> libceph: mon1 192.168.100.202:6789 socket error on read
>> libceph: mon2 192.168.100.203:6789 feature set mismatch, my 384a042a42 <
>> server's 2384a042a42, missing 200
>> libceph: mon2 192.168.100.203:6789 socket error on read
>> libceph: mon2 192.168.100.203:6789 feature set mismatch, my 384a042a42 <
>> server's 2384a042a42, missing 200
>> libceph: mon2 192.168.100.203:6789 socket error on read
>> libceph: mon1 192.168.100.202:6789 feature set mismatch, my 384a042a42 <
>> server's 2384a042a42, missing 200
>> libceph: mon1 192.168.100.202:6789 socket error on read
>> libceph: mon1 192.168.100.202:6789 feature set mismatch, my 384a042a42 <
>> server's 2384a042a42, missing 200
>> libceph: mon1 192.168.100.202:6789 socket error on read
>> libceph: mon1 192.168.100.202:6789 feature set mismatch, my 384a042a42 <
>> server's 2384a042a42, missing 200
>> libceph: mon1 192.168.100.202:6789 socket error on read
>> libceph: mon2 192.168.100.203:6789 feature set mismatch, my 384a042a42 <
>> server's 2384a042a42, missing 200
>> libceph: mon2 192.168.100.203:6789 socket error on read
>> libceph: mon2 192.168.100.203:6789 feature set mismatch, my 384a042a42 <
>> server's 2384a042a42, missing 200
>> libceph: mon2 192.168.100.203:6789 socket error on read
>> libceph: mon0 192.168.100.201:6789 feature set mismatch, my 384a042a42 <
>> server's 2384a042a42, missing 200
>> libceph: mon0 192.168.100.201:6789 socket error on read
>> libceph: mon1 192.168.100.202:6789 feature set mismatch, my 384a042a42 <
>> server's 2384a042a42, missing 200
>> libceph: mon1 192.168.100.202:6789 socket error on read
>> libceph: mon2 192.168.100.203:6789 feature set mismatch, my 384a042a42 <
>> server's 2384a042a42, missing 200
>> libceph: mon2 192.168.100.203:6789 socket error on read
>>
>>
>>
>> 2014-03-24 21:16 GMT+04:00 Gregory Farnum :
>>
>> I don't remember what features should exist where, but I expect that
>>> the cluster is making use of features that the kernel client doesn't
>>> support yet (despite the very new kernel). Have you checked to see if
>>> there's anything interesting in dmesg?
>>> -Greg
>>> Software Engineer #42 @ http://inktank.com | http://ceph.com
>>>
>>>
>>> On Mon, Mar 24, 2014 at 1:30 AM, Ирек Фасихов  wrote:
>>> > Created cache pool for documentation:
>>> > http://ceph.com/docs/master/dev/cache-pool/
>>> >
>>> > ceph osd pool create cache 100
>>> > ceph osd tier add rbd cache
>>> > ceph osd tier cache-mode cache writeback
>>> > ceph osd tier set-overlay rbd cache
>>> > ceph osd pool set cache hit_set_type bloom
>>> > ceph osd pool set cache hit_set_count 1
>>> > ceph osd pool set cache hit_set_period 600
>>> > ceph osd pool set cache target_max_bytes 100 #10Gb
>>> >
>>> > ceph osd tree:
>>> > # idweight  type name   up/down reweight
>>> > -1  6   root default
>>> > -2  2   host ceph01
>>> > 0   1   osd.0   up  1
>>> > 3   1   osd.3   up  1
>>> > -3  2   host ceph02
>>> > 1   1   osd.1   up  1
>>> > 4   1   osd.4   up  1
>>> > -4  2   host ceph03
>>> > 2   1   osd.2   up  1
>>> > 5   1   osd.5   up  1
>>> >
>>> > rbd -p rbd ls -l
>>> > NAME   SIZE PARENT FMT PROT LOCK
>>> > test 10240M  1
>>> >
>>> > rbd map rbd/test
>>> > rbd: add failed: (5) Input/output error
>>> >
>>> > uname -a
>>> > Linux ceph01.bank-hlynov.ru 3.14.0-rc7-bank-hlynov.ru #1 SMP Mon Mar
>>> 17
>>> > 11:49:22 MSK 2014 x86_64 x86_64 x86_64 GNU/Linux. Linux CentOS 6.5
>>> >
>>> > Debug mode and strace in logs.
>>> >
>>> >
>>> https://drive.google.com/folderview?id=0BxoNLVWxzOJWX0NLV1kzQ1l3Ymc&usp=sharin

Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

2014-03-24 Thread Michael J. Kidd
The message 'feature set mismatch' is exactly what Gregory was talking
about.

This indicates that you are using a CRUSH feature in your Ceph environment
that the Kernel RBD client doesn't understand.  So, it's unable to
communicate.  In this instance, it's likely to do with the cache pool
itself, but I'm not terribly familiar with the features support in 3.14
kernel...

Thanks,

Michael J. Kidd
Sr. Storage Consultant
Inktank Professional Services


On Mon, Mar 24, 2014 at 2:58 PM, Ирек Фасихов  wrote:

> Hi, Gregory!
> I think that there is no interesting :).
>
> *dmesg:*
> Key type dns_resolver registered
> Key type ceph registered
> libceph: loaded (mon/osd proto 15/24)
> rbd: loaded (major 252)
> libceph: mon1 192.168.100.202:6789 feature set mismatch, my 384a042a42 <
> server's 2384a042a42, missing 200
> libceph: mon1 192.168.100.202:6789 socket error on read
> libceph: mon1 192.168.100.202:6789 feature set mismatch, my 384a042a42 <
> server's 2384a042a42, missing 200
> libceph: mon1 192.168.100.202:6789 socket error on read
> libceph: mon2 192.168.100.203:6789 feature set mismatch, my 384a042a42 <
> server's 2384a042a42, missing 200
> libceph: mon2 192.168.100.203:6789 socket error on read
> libceph: mon2 192.168.100.203:6789 feature set mismatch, my 384a042a42 <
> server's 2384a042a42, missing 200
> libceph: mon2 192.168.100.203:6789 socket error on read
> libceph: mon1 192.168.100.202:6789 feature set mismatch, my 384a042a42 <
> server's 2384a042a42, missing 200
> libceph: mon1 192.168.100.202:6789 socket error on read
> libceph: mon1 192.168.100.202:6789 feature set mismatch, my 384a042a42 <
> server's 2384a042a42, missing 200
> libceph: mon1 192.168.100.202:6789 socket error on read
> libceph: mon1 192.168.100.202:6789 feature set mismatch, my 384a042a42 <
> server's 2384a042a42, missing 200
> libceph: mon1 192.168.100.202:6789 socket error on read
> libceph: mon2 192.168.100.203:6789 feature set mismatch, my 384a042a42 <
> server's 2384a042a42, missing 200
> libceph: mon2 192.168.100.203:6789 socket error on read
> libceph: mon2 192.168.100.203:6789 feature set mismatch, my 384a042a42 <
> server's 2384a042a42, missing 200
> libceph: mon2 192.168.100.203:6789 socket error on read
> libceph: mon0 192.168.100.201:6789 feature set mismatch, my 384a042a42 <
> server's 2384a042a42, missing 200
> libceph: mon0 192.168.100.201:6789 socket error on read
> libceph: mon1 192.168.100.202:6789 feature set mismatch, my 384a042a42 <
> server's 2384a042a42, missing 200
> libceph: mon1 192.168.100.202:6789 socket error on read
> libceph: mon2 192.168.100.203:6789 feature set mismatch, my 384a042a42 <
> server's 2384a042a42, missing 200
> libceph: mon2 192.168.100.203:6789 socket error on read
>
>
>
> 2014-03-24 21:16 GMT+04:00 Gregory Farnum :
>
> I don't remember what features should exist where, but I expect that
>> the cluster is making use of features that the kernel client doesn't
>> support yet (despite the very new kernel). Have you checked to see if
>> there's anything interesting in dmesg?
>> -Greg
>> Software Engineer #42 @ http://inktank.com | http://ceph.com
>>
>>
>> On Mon, Mar 24, 2014 at 1:30 AM, Ирек Фасихов  wrote:
>> > Created cache pool for documentation:
>> > http://ceph.com/docs/master/dev/cache-pool/
>> >
>> > ceph osd pool create cache 100
>> > ceph osd tier add rbd cache
>> > ceph osd tier cache-mode cache writeback
>> > ceph osd tier set-overlay rbd cache
>> > ceph osd pool set cache hit_set_type bloom
>> > ceph osd pool set cache hit_set_count 1
>> > ceph osd pool set cache hit_set_period 600
>> > ceph osd pool set cache target_max_bytes 100 #10Gb
>> >
>> > ceph osd tree:
>> > # idweight  type name   up/down reweight
>> > -1  6   root default
>> > -2  2   host ceph01
>> > 0   1   osd.0   up  1
>> > 3   1   osd.3   up  1
>> > -3  2   host ceph02
>> > 1   1   osd.1   up  1
>> > 4   1   osd.4   up  1
>> > -4  2   host ceph03
>> > 2   1   osd.2   up  1
>> > 5   1   osd.5   up  1
>> >
>> > rbd -p rbd ls -l
>> > NAME   SIZE PARENT FMT PROT LOCK
>> > test 10240M  1
>> >
>> > rbd map rbd/test
>> > rbd: add failed: (5) Input/output error
>> >
>> > uname -a
>> > Linux ceph01.bank-hlynov.ru 3.14.0-rc7-bank-hlynov.ru #1 SMP Mon Mar 17
>> > 11:49:22 MSK 2014 x86_64 x86_64 x86_64 GNU/Linux. Linux CentOS 6.5
>> >
>> > Debug mode and strace in logs.
>> >
>> >
>> https://drive.google.com/folderview?id=0BxoNLVWxzOJWX0NLV1kzQ1l3Ymc&usp=sharing
>> >
>> >
>> > --
>> > С уважением, Фасихов Ирек Нургаязович
>> > Моб.: +79229045757
>> >

Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

2014-03-24 Thread Ирек Фасихов
Hi, Gregory!
I think there is nothing interesting there :).

*dmesg:*
Key type dns_resolver registered
Key type ceph registered
libceph: loaded (mon/osd proto 15/24)
rbd: loaded (major 252)
libceph: mon1 192.168.100.202:6789 feature set mismatch, my 384a042a42 <
server's 2384a042a42, missing 200
libceph: mon1 192.168.100.202:6789 socket error on read
libceph: mon1 192.168.100.202:6789 feature set mismatch, my 384a042a42 <
server's 2384a042a42, missing 200
libceph: mon1 192.168.100.202:6789 socket error on read
libceph: mon2 192.168.100.203:6789 feature set mismatch, my 384a042a42 <
server's 2384a042a42, missing 200
libceph: mon2 192.168.100.203:6789 socket error on read
libceph: mon2 192.168.100.203:6789 feature set mismatch, my 384a042a42 <
server's 2384a042a42, missing 200
libceph: mon2 192.168.100.203:6789 socket error on read
libceph: mon1 192.168.100.202:6789 feature set mismatch, my 384a042a42 <
server's 2384a042a42, missing 200
libceph: mon1 192.168.100.202:6789 socket error on read
libceph: mon1 192.168.100.202:6789 feature set mismatch, my 384a042a42 <
server's 2384a042a42, missing 200
libceph: mon1 192.168.100.202:6789 socket error on read
libceph: mon1 192.168.100.202:6789 feature set mismatch, my 384a042a42 <
server's 2384a042a42, missing 200
libceph: mon1 192.168.100.202:6789 socket error on read
libceph: mon2 192.168.100.203:6789 feature set mismatch, my 384a042a42 <
server's 2384a042a42, missing 200
libceph: mon2 192.168.100.203:6789 socket error on read
libceph: mon2 192.168.100.203:6789 feature set mismatch, my 384a042a42 <
server's 2384a042a42, missing 200
libceph: mon2 192.168.100.203:6789 socket error on read
libceph: mon0 192.168.100.201:6789 feature set mismatch, my 384a042a42 <
server's 2384a042a42, missing 200
libceph: mon0 192.168.100.201:6789 socket error on read
libceph: mon1 192.168.100.202:6789 feature set mismatch, my 384a042a42 <
server's 2384a042a42, missing 200
libceph: mon1 192.168.100.202:6789 socket error on read
libceph: mon2 192.168.100.203:6789 feature set mismatch, my 384a042a42 <
server's 2384a042a42, missing 200
libceph: mon2 192.168.100.203:6789 socket error on read



2014-03-24 21:16 GMT+04:00 Gregory Farnum :

> I don't remember what features should exist where, but I expect that
> the cluster is making use of features that the kernel client doesn't
> support yet (despite the very new kernel). Have you checked to see if
> there's anything interesting in dmesg?
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Mon, Mar 24, 2014 at 1:30 AM, Ирек Фасихов  wrote:
> > Created cache pool for documentation:
> > http://ceph.com/docs/master/dev/cache-pool/
> >
> > ceph osd pool create cache 100
> > ceph osd tier add rbd cache
> > ceph osd tier cache-mode cache writeback
> > ceph osd tier set-overlay rbd cache
> > ceph osd pool set cache hit_set_type bloom
> > ceph osd pool set cache hit_set_count 1
> > ceph osd pool set cache hit_set_period 600
> > ceph osd pool set cache target_max_bytes 100 #10Gb
> >
> > ceph osd tree:
> > # idweight  type name   up/down reweight
> > -1  6   root default
> > -2  2   host ceph01
> > 0   1   osd.0   up  1
> > 3   1   osd.3   up  1
> > -3  2   host ceph02
> > 1   1   osd.1   up  1
> > 4   1   osd.4   up  1
> > -4  2   host ceph03
> > 2   1   osd.2   up  1
> > 5   1   osd.5   up  1
> >
> > rbd -p rbd ls -l
> > NAME   SIZE PARENT FMT PROT LOCK
> > test 10240M  1
> >
> > rbd map rbd/test
> > rbd: add failed: (5) Input/output error
> >
> > uname -a
> > Linux ceph01.bank-hlynov.ru 3.14.0-rc7-bank-hlynov.ru #1 SMP Mon Mar 17
> > 11:49:22 MSK 2014 x86_64 x86_64 x86_64 GNU/Linux. Linux CentOS 6.5
> >
> > Debug mode and strace in logs.
> >
> >
> https://drive.google.com/folderview?id=0BxoNLVWxzOJWX0NLV1kzQ1l3Ymc&usp=sharing
> >
> >
> > --
> > С уважением, Фасихов Ирек Нургаязович
> > Моб.: +79229045757
> >
> >
>



-- 
Best regards, Фасихов Ирек Нургаязович
Mob.: +79229045757


Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

2014-03-24 Thread Gregory Farnum
I don't remember what features should exist where, but I expect that
the cluster is making use of features that the kernel client doesn't
support yet (despite the very new kernel). Have you checked to see if
there's anything interesting in dmesg?
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


On Mon, Mar 24, 2014 at 1:30 AM, Ирек Фасихов  wrote:
> Created cache pool for documentation:
> http://ceph.com/docs/master/dev/cache-pool/
>
> ceph osd pool create cache 100
> ceph osd tier add rbd cache
> ceph osd tier cache-mode cache writeback
> ceph osd tier set-overlay rbd cache
> ceph osd pool set cache hit_set_type bloom
> ceph osd pool set cache hit_set_count 1
> ceph osd pool set cache hit_set_period 600
> ceph osd pool set cache target_max_bytes 100 #10Gb
>
> ceph osd tree:
> # idweight  type name   up/down reweight
> -1  6   root default
> -2  2   host ceph01
> 0   1   osd.0   up  1
> 3   1   osd.3   up  1
> -3  2   host ceph02
> 1   1   osd.1   up  1
> 4   1   osd.4   up  1
> -4  2   host ceph03
> 2   1   osd.2   up  1
> 5   1   osd.5   up  1
>
> rbd -p rbd ls -l
> NAME   SIZE PARENT FMT PROT LOCK
> test 10240M  1
>
> rbd map rbd/test
> rbd: add failed: (5) Input/output error
>
> uname -a
> Linux ceph01.bank-hlynov.ru 3.14.0-rc7-bank-hlynov.ru #1 SMP Mon Mar 17
> 11:49:22 MSK 2014 x86_64 x86_64 x86_64 GNU/Linux. Linux CentOS 6.5
>
> Debug mode and strace in logs.
>
> https://drive.google.com/folderview?id=0BxoNLVWxzOJWX0NLV1kzQ1l3Ymc&usp=sharing
>
>
> --
> С уважением, Фасихов Ирек Нургаязович
> Моб.: +79229045757
>
>


[ceph-users] Ceph RBD 0.78 Bug or feature?

2014-03-24 Thread Ирек Фасихов
Created a cache pool following the documentation:
http://ceph.com/docs/master/dev/cache-pool/

*ceph osd pool create cache 100*
*ceph osd tier add rbd cache*

*ceph osd tier cache-mode cache writeback*

*ceph osd tier set-overlay rbd cache*

*ceph osd pool set cache hit_set_type bloom*

*ceph osd pool set cache hit_set_count 1*

*ceph osd pool set cache hit_set_period 600*
*ceph osd pool set cache target_max_bytes 100 #10Gb*
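
As a rough sanity check that the overlay really intercepts client I/O, writing an object to the base pool should make it show up in the cache pool first (object name below is just an example):

rados -p rbd put testobj /etc/hosts
rados -p cache ls | grep testobj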

*ceph osd tree:*
# id    weight  type name       up/down reweight
-1  6   root default
-2  2   host ceph01
0   1   osd.0   up  1
3   1   osd.3   up  1
-3  2   host ceph02
1   1   osd.1   up  1
4   1   osd.4   up  1
-4  2   host ceph03
2   1   osd.2   up  1
5   1   osd.5   up  1

*rbd -p rbd ls -l *
NAME   SIZE PARENT FMT PROT LOCK
test 10240M  1

*rbd map rbd/test*
rbd: add failed: (5) Input/output error

*uname -a*
Linux ceph01.bank-hlynov.ru 3.14.0-rc7-bank-hlynov.ru #1 SMP Mon Mar 17
11:49:22 MSK 2014 x86_64 x86_64 x86_64 GNU/Linux. Linux CentOS 6.5

Debug mode and strace in logs.

https://drive.google.com/folderview?id=0BxoNLVWxzOJWX0NLV1kzQ1l3Ymc&usp=sharing


-- 
Best regards, Фасихов Ирек Нургаязович
Mob.: +79229045757


ceph.conf
Description: Binary data
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com