Re: [ceph-users] rbd mapping fails - maybe solved

2013-08-30 Thread Bernhard Glomm
Thanks Sage,

I just tried various versions from gitbuilder and finally found one that worked ;-)

deb http://gitbuilder.ceph.com/ceph-deb-raring-x86_64-basic/ref/dumpling/   
raring main

It looks like it works perfectly, and at first glance with much better performance 
than cuttlefish.
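
For reference, switching a raring box over to that gitbuilder build can look roughly 
like this (a sketch only: it assumes the repo line lives in 
/etc/apt/sources.list.d/ceph.list and that the gitbuilder autobuild signing key is 
already trusted):

echo "deb http://gitbuilder.ceph.com/ceph-deb-raring-x86_64-basic/ref/dumpling/ raring main" \
    > /etc/apt/sources.list.d/ceph.list
apt-get update
apt-get install --only-upgrade ceph ceph-common
ceph -v    # confirm the installed version afterwards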

Do you need any tests from me for my problem with 0.67.2-16-gd41cf86?
I could do that on Monday.
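
(If it helps, a test run could capture something like the following; the log file 
name is just an example, and --debug-ms/--debug-rbd are the usual client-side debug 
overrides:)

rbd map kvm-pool/atom03.cimg --id admin --debug-ms 1 --debug-rbd 20 2>&1 | tee rbd-map.log
dmesg | tail -n 50    # kernel rbd messages, if the map got that far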

I didn't run udevadm settle or cat /proc/partitions, but I checked
/dev/rbd* -> not present
and
tree /dev/disk
which also showed no hint of a new device other than my hard disk partitions.

Since the dumpling version now seems to work, I would otherwise keep using that
to get more familiar with Ceph.

Bernhard

 Bernhard Glomm
IT Administration

Phone:   +49 (30) 86880 134
Fax: +49 (30) 86880 100
Skype:   bernhard.glomm.ecologic
   
Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | 
Germany
GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: 
DE811963464
Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH

On Aug 30, 2013, at 5:05 PM, Sage Weil  wrote:

> Hi Bernhard,
> 
> On Fri, 30 Aug 2013, Bernhard Glomm wrote:
>> Hi all,
>> 
>> due to a problem with ceph-deploy I currently use
>> 
>> deb http://gitbuilder.ceph.com/ceph-deb-raring-x86_64-basic/ref/wip-4924/
>> raring main
>> (ceph version 0.67.2-16-gd41cf86 (d41cf866ee028ef7b821a5c37b991e85cbf3637f))
>> 
>> Now the initialization of the cluster works like a charm,
>> ceph health is okay,
> 
> Great; this will get backported to dumpling shortly and will be included 
> in the 0.67.3 release.
> 
>> just the mapping of the created rbd is failing.
>> 
>> -
>> root@ping[/1]:~ # ceph osd pool delete kvm-pool kvm-pool
>> --yes-i-really-really-mean-it
>> pool 'kvm-pool' deleted
>> root@ping[/1]:~ # ceph osd lspools
>> 
>> 0 data,1 metadata,2 rbd,
>> root@ping[/1]:~ #
>> root@ping[/1]:~ # ceph osd pool create kvm-pool 1000
>> pool 'kvm-pool' created
>> root@ping[/1]:~ # ceph osd lspools
>> 0 data,1 metadata,2 rbd,4 kvm-pool,
>> root@ping[/1]:~ # ceph osd pool set kvm-pool min_size 2
>> set pool 4 min_size to 2
>> root@ping[/1]:~ # ceph osd dump | grep 'rep size'
>> pool 0 'data' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins
>> pg_num 64 pgp_num 64 last_change 1 owner 0 crash_replay_interval 45
>> pool 1 'metadata' rep size 2 min_size 1 crush_ruleset 1 object_hash rjenkins
>> pg_num 64 pgp_num 64 last_change 1 owner 0
>> pool 2 'rbd' rep size 2 min_size 1 crush_ruleset 2 object_hash rjenkins
>> pg_num 64 pgp_num 64 last_change 1 owner 0
>> pool 4 'kvm-pool' rep size 2 min_size 2 crush_ruleset 0 object_hash rjenkins
>> pg_num 1000 pgp_num 1000 last_change 33 owner 0
>> root@ping[/1]:~ # rbd create atom03.cimg --size 4000 --pool kvm-pool
>> root@ping[/1]:~ # rbd create atom04.cimg --size 4000 --pool kvm-pool
>> root@ping[/1]:~ # rbd ls kvm-pool
>> atom03.cimg
>> atom04.cimg
>> root@ping[/1]:~ # rbd --image atom03.cimg --pool kvm-pool info
>> rbd image 'atom03.cimg':
>> size 4000 MB in 1000 objects
>> order 22 (4096 KB objects)
>> block_name_prefix: rb.0.114d.2ae8944a
>> format: 1
>> root@ping[/1]:~ # rbd --image atom04.cimg --pool kvm-pool info
>> rbd image 'atom04.cimg':
>> size 4000 MB in 1000 objects
>> order 22 (4096 KB objects)
>> block_name_prefix: rb.0.127d.74b0dc51
>> format: 1
>> root@ping[/1]:~ # rbd map atom03.cimg --pool kvm-pool --id admin
>> rbd: '/sbin/udevadm settle' failed! (256)
>> root@ping[/1]:~ # rbd map --pool kvm-pool --image atom03.cimg --id admin
>> --keyring /etc/ceph/ceph.client.admin.keyring
>> ^Crbd: '/sbin/udevadm settle' failed! (2)
>> root@ping[/1]:~ # rbd map kvm-pool/atom03.cimg --id admin --keyring
>> /etc/ceph/ceph.client.admin.keyring
>> rbd: '/sbin/udevadm settle' failed! (256)
>> -
> 
> What happens if you run '/sbin/udevadm settle' from the command line?
> 
> Also, this is the very last step before rbd exits (normally with success), so 
> my guess is that the rbd mapping actually succeeded.  Try cat /proc/partitions 
> or ls /dev/rbd.
> 
> sage
> 
>> 
>> Am I missing something?
>> But I think this set of commands worked perfectly with cuttlefish?
>> 
>> TIA
>> 
>> Bernhard
>> 
>> --
>> 
>> 
>> Bernhard Glomm
>> IT Administration
>> 
>> Phone: +49 (30) 86880 134
>> Fax: +49 (30) 86880 100
>> Skype: bernhard.glomm.ecologic
>> Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin |
>> Germany
>> GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.:
>> DE811963464
>> Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH
>> 
>> 
>> 




Re: [ceph-users] rbd mapping fails

2013-08-30 Thread Sage Weil
On Fri, 30 Aug 2013, Bernhard Glomm wrote:
> Mounting cephfs fails too (I added 3 MDS).
> Does anybody have any ideas how to debug this further?
> 
> I used ceph-deploy to create the cluster;
> the xfs filesystem on the OSDs is okay, I can copy, remove, and open files on
> that partition,
> so I assume it's something inside of Ceph???
> 
> TIA
> 
> Bernhard
> 
> P.S.: Version is
> ceph version 0.67.2-16-gd41cf86 (d41cf866ee028ef7b821a5c37b991e85cbf3637f)
> on up-to-date raring
> 
> --
> root@nuke36[/1]:/etc/ceph # ceph -s
> 2013-08-30 15:03:18.454701 7f3b7cd18700  1 -- :/0 messenger.start
> 2013-08-30 15:03:18.455460 7f3b7cd18700  1 -- :/1003684 -->
> 192.168.242.92:6789/0 -- auth(proto 0 30 bytes epoch 0) v1 -- ?+0
> 0x7f3b7800e8f0 con 0x7f3b7800e4e0
> 2013-08-30 15:03:18.456412 7f3b7c517700  1 -- 192.168.242.36:0/1003684
> learned my addr 192.168.242.36:0/1003684
> 2013-08-30 15:03:18.458069 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 <==
> mon.3 192.168.242.92:6789/0 1  mon_map v1  776+0+0 (3609201999 0 0)
> 0x7f3b6c000c30 con 0x7f3b7800e4e0
> 2013-08-30 15:03:18.458308 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 <==
> mon.3 192.168.242.92:6789/0 2  auth_reply(proto 2 0 Success) v1 
> 33+0+0 (345113272 0 0) 0x7f3b6c0008f0 con 0x7f3b7800e4e0
> 2013-08-30 15:03:18.458612 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 -->
> 192.168.242.92:6789/0 -- auth(proto 2 32 bytes epoch 0) v1 -- ?+0
> 0x7f3b60001af0 con 0x7f3b7800e4e0
> 2013-08-30 15:03:18.459532 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 <==
> mon.3 192.168.242.92:6789/0 3  auth_reply(proto 2 0 Success) v1 
> 206+0+0 (1084599267 0 0) 0x7f3b6c0008f0 con 0x7f3b7800e4e0
> 2013-08-30 15:03:18.459816 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 -->
> 192.168.242.92:6789/0 -- auth(proto 2 165 bytes epoch 0) v1 -- ?+0
> 0x7f3b600020d0 con 0x7f3b7800e4e0
> 2013-08-30 15:03:18.460739 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 <==
> mon.3 192.168.242.92:6789/0 4  auth_reply(proto 2 0 Success) v1 
> 393+0+0 (496062897 0 0) 0x7f3b6c0008f0 con 0x7f3b7800e4e0
> 2013-08-30 15:03:18.460844 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 -->
> 192.168.242.92:6789/0 -- mon_subscribe({monmap=0+}) v2 -- ?+0 0x7f3b7800ed80
> con 0x7f3b7800e4e0
> 2013-08-30 15:03:18.461118 7f3b7cd18700  1 -- 192.168.242.36:0/1003684 -->
> 192.168.242.92:6789/0 -- mon_subscribe({monmap=2+,osdmap=0}) v2 -- ?+0
> 0x7f3b780079f0 con 0x7f3b7800e4e0
> 2013-08-30 15:03:18.461138 7f3b7cd18700  1 -- 192.168.242.36:0/1003684 -->
> 192.168.242.92:6789/0 -- mon_subscribe({monmap=2+,osdmap=0}) v2 -- ?+0
> 0x7f3b7800fa10 con 0x7f3b7800e4e0
> 2013-08-30 15:03:18.461813 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 <==
> mon.3 192.168.242.92:6789/0 5  mon_map v1  776+0+0 (3609201999 0 0)
> 0x7f3b6c0008f0 con 0x7f3b7800e4e0
> 2013-08-30 15:03:18.462016 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 <==
> mon.3 192.168.242.92:6789/0 6  mon_subscribe_ack(300s) v1  20+0+0
> (3156621930 0 0) 0x7f3b6c001340 con 0x7f3b7800e4e0
> 2013-08-30 15:03:18.463931 7f3b7cd18700  1 -- 192.168.242.36:0/1003684 -->
> 192.168.242.92:6789/0 -- mon_command({"prefix": "get_command_descriptions"}
> v 0) v1 -- ?+0 0x7f3b7800b0f0 con 0x7f3b7800e4e0
> 2013-08-30 15:03:18.463966 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 <==
> mon.3 192.168.242.92:6789/0 7  osd_map(34..34 src has 1..34) v3 
> 2483+0+0 (453205619 0 0) 0x7f3b6c0008c0 con 0x7f3b7800e4e0
> 2013-08-30 15:03:18.464694 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 <==
> mon.3 192.168.242.92:6789/0 8  mon_subscribe_ack(300s) v1  20+0+0
> (3156621930 0 0) 0x7f3b6c0010e0 con 0x7f3b7800e4e0
> 2013-08-30 15:03:18.464749 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 <==
> mon.3 192.168.242.92:6789/0 9  osd_map(34..34 src has 1..34) v3 
> 2483+0+0 (453205619 0 0) 0x7f3b6c002720 con 0x7f3b7800e4e0
> 2013-08-30 15:03:18.464765 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 <==
> mon.3 192.168.242.92:6789/0 10  mon_subscribe_ack(300s) v1  20+0+0
> (3156621930 0 0) 0x7f3b6c002b20 con 0x7f3b7800e4e0
> 2013-08-30 15:03:18.468276 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 <==
> mon.3 192.168.242.92:6789/0 11  mon_command_ack([{"prefix":
> "get_command_descriptions"}]=0  v0) v1  72+0+24040 (1092875540 0
> 2922658865) 0x7f3b6c002720 con 0x7f3b7800e4e0
> 2013-08-30 15:03:18.510756 7f3b7cd18700  1 -- 192.168.242.36:0/1003684 -->
> 192.168.242.92:6789/0 -- mon_command({"prefix": "status"} v 0) v1 -- ?+0
> 0x7f3b7800b0d0 con 0x7f3b7800e4e0
> 2013-08-30 15:03:18.512490 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 <==
> mon.3 192.168.242.92:6789/0 12  mon_command_ack([{"prefix":
> "status"}]=0  v0) v1  54+0+497 (1155462804 0 3461792647) 0x7f3b6c001080
> con 0x7f3b7800e4e0
>   cluster f57cdca3-7222-4095-853b-03727461f725
>    health HEALTH_OK
>    monmap e1: 5 mons 
> at{atom01=192.168.242.31:6789/0,atom02=192.168.242.32:6789/0,nuke36=192.168.2
> 42.36:6789/0,ping=192.168.242.92:6789/0,pong=192.168.242.93:6789/0},
> election ep

Re: [ceph-users] rbd mapping fails

2013-08-30 Thread Sage Weil
Hi Bernhard,

On Fri, 30 Aug 2013, Bernhard Glomm wrote:
> Hi all,
> 
> due to a problem with ceph-deploy I currently use
> 
> deb http://gitbuilder.ceph.com/ceph-deb-raring-x86_64-basic/ref/wip-4924/
> raring main
> (ceph version 0.67.2-16-gd41cf86 (d41cf866ee028ef7b821a5c37b991e85cbf3637f))
> 
> Now the initialization of the cluster works like a charm,
> ceph health is okay,

Great; this will get backported to dumpling shortly and will be included 
in the 0.67.3 release.

> just the mapping of the created rbd is failing.
> 
> -
> root@ping[/1]:~ # ceph osd pool delete kvm-pool kvm-pool
> --yes-i-really-really-mean-it
> pool 'kvm-pool' deleted
> root@ping[/1]:~ # ceph osd lspools
> 
> 0 data,1 metadata,2 rbd,
> root@ping[/1]:~ #
> root@ping[/1]:~ # ceph osd pool create kvm-pool 1000
> pool 'kvm-pool' created
> root@ping[/1]:~ # ceph osd lspools
> 0 data,1 metadata,2 rbd,4 kvm-pool,
> root@ping[/1]:~ # ceph osd pool set kvm-pool min_size 2
> set pool 4 min_size to 2
> root@ping[/1]:~ # ceph osd dump | grep 'rep size'
> pool 0 'data' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins
> pg_num 64 pgp_num 64 last_change 1 owner 0 crash_replay_interval 45
> pool 1 'metadata' rep size 2 min_size 1 crush_ruleset 1 object_hash rjenkins
> pg_num 64 pgp_num 64 last_change 1 owner 0
> pool 2 'rbd' rep size 2 min_size 1 crush_ruleset 2 object_hash rjenkins
> pg_num 64 pgp_num 64 last_change 1 owner 0
> pool 4 'kvm-pool' rep size 2 min_size 2 crush_ruleset 0 object_hash rjenkins
> pg_num 1000 pgp_num 1000 last_change 33 owner 0
> root@ping[/1]:~ # rbd create atom03.cimg --size 4000 --pool kvm-pool
> root@ping[/1]:~ # rbd create atom04.cimg --size 4000 --pool kvm-pool
> root@ping[/1]:~ # rbd ls kvm-pool
> atom03.cimg
> atom04.cimg
> root@ping[/1]:~ # rbd --image atom03.cimg --pool kvm-pool info
> rbd image 'atom03.cimg':
>     size 4000 MB in 1000 objects
>     order 22 (4096 KB objects)
>     block_name_prefix: rb.0.114d.2ae8944a
>     format: 1
> root@ping[/1]:~ # rbd --image atom04.cimg --pool kvm-pool info
> rbd image 'atom04.cimg':
>     size 4000 MB in 1000 objects
>     order 22 (4096 KB objects)
>     block_name_prefix: rb.0.127d.74b0dc51
>     format: 1
> root@ping[/1]:~ # rbd map atom03.cimg --pool kvm-pool --id admin
> rbd: '/sbin/udevadm settle' failed! (256)
> root@ping[/1]:~ # rbd map --pool kvm-pool --image atom03.cimg --id admin
> --keyring /etc/ceph/ceph.client.admin.keyring
> ^Crbd: '/sbin/udevadm settle' failed! (2)
> root@ping[/1]:~ # rbd map kvm-pool/atom03.cimg --id admin --keyring
> /etc/ceph/ceph.client.admin.keyring
> rbd: '/sbin/udevadm settle' failed! (256)
> -

What happens if you run '/sbin/udevadm settle' from the command line?

Also, this is the very last step before rbd exits (normally with success), so 
my guess is that the rbd mapping actually succeeded.  Try cat /proc/partitions 
or ls /dev/rbd.
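
A quick check along those lines could be (a sketch using stock commands; the 
/dev/rbd/<pool>/<image> symlinks assume the ceph udev rules are installed):

/sbin/udevadm settle; echo $?        # does settle itself fail, and how?
rbd showmapped                       # images the kernel client currently has mapped
grep rbd /proc/partitions
ls -l /dev/rbd* /dev/rbd/kvm-pool/ 2>/dev/null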

sage

> 
> Am I missing something?
> But I think this set of commands worked perfectly with cuttlefish?
> 
> TIA
> 
> Bernhard
> 
> --
> 
> 
> Bernhard Glomm
> IT Administration
> 
> Phone: +49 (30) 86880 134
> Fax: +49 (30) 86880 100
> Skype: bernhard.glomm.ecologic
> Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin |
> Germany
> GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.:
> DE811963464
> Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH
> 
> 
> 


Re: [ceph-users] rbd mapping fails

2013-08-30 Thread Bernhard Glomm
Mounting cephfs fails too (I added 3 MDS).
Does anybody have any ideas how to debug this further?

I used ceph-deploy to create the cluster;
the xfs filesystem on the OSDs is okay, I can copy, remove, and open files on
that partition,
so I assume it's something inside of Ceph???

TIA

Bernhard

P.S.: Version is 
ceph version 0.67.2-16-gd41cf86 (d41cf866ee028ef7b821a5c37b991e85cbf3637f) on 
up-to-date raring
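
(One way to dig further, sketched with a few assumptions: /mnt/cephfs is just an 
example mount point and 192.168.242.92 is one of the monitors from the log below. 
First check that an MDS is actually active, then try an explicit kernel mount and 
look at dmesg for the client-side error.)

ceph mds stat                        # should show at least one MDS up:active
mkdir -p /mnt/cephfs
mount -t ceph 192.168.242.92:6789:/ /mnt/cephfs \
    -o name=admin,secret=$(ceph auth get-key client.admin)
dmesg | tail -n 20                   # the kernel cephfs client reports errors here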

--
root@nuke36[/1]:/etc/ceph # ceph -s
2013-08-30 15:03:18.454701 7f3b7cd18700  1 -- :/0 messenger.start
2013-08-30 15:03:18.455460 7f3b7cd18700  1 -- :/1003684 --> 
192.168.242.92:6789/0 -- auth(proto 0 30 bytes epoch 0) v1 -- ?+0 
0x7f3b7800e8f0 con 0x7f3b7800e4e0
2013-08-30 15:03:18.456412 7f3b7c517700  1 -- 192.168.242.36:0/1003684 learned 
my addr 192.168.242.36:0/1003684
2013-08-30 15:03:18.458069 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 <== 
mon.3 192.168.242.92:6789/0 1  mon_map v1  776+0+0 (3609201999 0 0) 
0x7f3b6c000c30 con 0x7f3b7800e4e0
2013-08-30 15:03:18.458308 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 <== 
mon.3 192.168.242.92:6789/0 2  auth_reply(proto 2 0 Success) v1  33+0+0 
(345113272 0 0) 0x7f3b6c0008f0 con 0x7f3b7800e4e0
2013-08-30 15:03:18.458612 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 --> 
192.168.242.92:6789/0 -- auth(proto 2 32 bytes epoch 0) v1 -- ?+0 
0x7f3b60001af0 con 0x7f3b7800e4e0
2013-08-30 15:03:18.459532 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 <== 
mon.3 192.168.242.92:6789/0 3  auth_reply(proto 2 0 Success) v1  
206+0+0 (1084599267 0 0) 0x7f3b6c0008f0 con 0x7f3b7800e4e0
2013-08-30 15:03:18.459816 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 --> 
192.168.242.92:6789/0 -- auth(proto 2 165 bytes epoch 0) v1 -- ?+0 
0x7f3b600020d0 con 0x7f3b7800e4e0
2013-08-30 15:03:18.460739 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 <== 
mon.3 192.168.242.92:6789/0 4  auth_reply(proto 2 0 Success) v1  
393+0+0 (496062897 0 0) 0x7f3b6c0008f0 con 0x7f3b7800e4e0
2013-08-30 15:03:18.460844 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 --> 
192.168.242.92:6789/0 -- mon_subscribe({monmap=0+}) v2 -- ?+0 0x7f3b7800ed80 
con 0x7f3b7800e4e0
2013-08-30 15:03:18.461118 7f3b7cd18700  1 -- 192.168.242.36:0/1003684 --> 
192.168.242.92:6789/0 -- mon_subscribe({monmap=2+,osdmap=0}) v2 -- ?+0 
0x7f3b780079f0 con 0x7f3b7800e4e0
2013-08-30 15:03:18.461138 7f3b7cd18700  1 -- 192.168.242.36:0/1003684 --> 
192.168.242.92:6789/0 -- mon_subscribe({monmap=2+,osdmap=0}) v2 -- ?+0 
0x7f3b7800fa10 con 0x7f3b7800e4e0
2013-08-30 15:03:18.461813 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 <== 
mon.3 192.168.242.92:6789/0 5  mon_map v1  776+0+0 (3609201999 0 0) 
0x7f3b6c0008f0 con 0x7f3b7800e4e0
2013-08-30 15:03:18.462016 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 <== 
mon.3 192.168.242.92:6789/0 6  mon_subscribe_ack(300s) v1  20+0+0 
(3156621930 0 0) 0x7f3b6c001340 con 0x7f3b7800e4e0
2013-08-30 15:03:18.463931 7f3b7cd18700  1 -- 192.168.242.36:0/1003684 --> 
192.168.242.92:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 
0) v1 -- ?+0 0x7f3b7800b0f0 con 0x7f3b7800e4e0
2013-08-30 15:03:18.463966 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 <== 
mon.3 192.168.242.92:6789/0 7  osd_map(34..34 src has 1..34) v3  
2483+0+0 (453205619 0 0) 0x7f3b6c0008c0 con 0x7f3b7800e4e0
2013-08-30 15:03:18.464694 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 <== 
mon.3 192.168.242.92:6789/0 8  mon_subscribe_ack(300s) v1  20+0+0 
(3156621930 0 0) 0x7f3b6c0010e0 con 0x7f3b7800e4e0
2013-08-30 15:03:18.464749 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 <== 
mon.3 192.168.242.92:6789/0 9  osd_map(34..34 src has 1..34) v3  
2483+0+0 (453205619 0 0) 0x7f3b6c002720 con 0x7f3b7800e4e0
2013-08-30 15:03:18.464765 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 <== 
mon.3 192.168.242.92:6789/0 10  mon_subscribe_ack(300s) v1  20+0+0 
(3156621930 0 0) 0x7f3b6c002b20 con 0x7f3b7800e4e0
2013-08-30 15:03:18.468276 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 <== 
mon.3 192.168.242.92:6789/0 11  mon_command_ack([{"prefix": 
"get_command_descriptions"}]=0  v0) v1  72+0+24040 (1092875540 0 
2922658865) 0x7f3b6c002720 con 0x7f3b7800e4e0
2013-08-30 15:03:18.510756 7f3b7cd18700  1 -- 192.168.242.36:0/1003684 --> 
192.168.242.92:6789/0 -- mon_command({"prefix": "status"} v 0) v1 -- ?+0 
0x7f3b7800b0d0 con 0x7f3b7800e4e0
2013-08-30 15:03:18.512490 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 <== 
mon.3 192.168.242.92:6789/0 12  mon_command_ack([{"prefix": "status"}]=0  
v0) v1  54+0+497 (1155462804 0 3461792647) 0x7f3b6c001080 con 0x7f3b7800e4e0
  cluster f57cdca3-7222-4095-853b-03727461f725
   health HEALTH_OK
   monmap e1: 5 mons at 
{atom01=192.168.242.31:6789/0,atom02=192.168.242.32:6789/0,nuke36=192.168.242.36:6789/0,ping=192.168.242.92:6789/0,pong=192.168.242.93:6789/0},
 election epoch 42, quorum 0,1,2,3,4 atom01,atom02,nuke36,ping,pong
   osdmap e34: 2 osds: 2 up, 2 in
    pgmap v367: 1192 pgs: 1192 active+clean; 9788 bytes data, 94460 KB used, 
3722 GB

[ceph-users] rbd mapping fails

2013-08-30 Thread Bernhard Glomm
Hi all,

due to a problem with ceph-deploy I currently use

deb http://gitbuilder.ceph.com/ceph-deb-raring-x86_64-basic/ref/wip-4924/ 
raring main
(ceph version 0.67.2-16-gd41cf86 (d41cf866ee028ef7b821a5c37b991e85cbf3637f))

Now the initialization of the cluster works like a charm,
ceph health is okay, 
just the mapping of the created rbd is failing.

-
root@ping[/1]:~ # ceph osd pool delete kvm-pool kvm-pool 
--yes-i-really-really-mean-it
pool 'kvm-pool' deleted
root@ping[/1]:~ # ceph osd lspools

0 data,1 metadata,2 rbd,
root@ping[/1]:~ # 
root@ping[/1]:~ # ceph osd pool create kvm-pool 1000
pool 'kvm-pool' created
root@ping[/1]:~ # ceph osd lspools
0 data,1 metadata,2 rbd,4 kvm-pool,
root@ping[/1]:~ # ceph osd pool set kvm-pool min_size 2
set pool 4 min_size to 2
root@ping[/1]:~ # ceph osd dump | grep 'rep size'
pool 0 'data' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins 
pg_num 64 pgp_num 64 last_change 1 owner 0 crash_replay_interval 45
pool 1 'metadata' rep size 2 min_size 1 crush_ruleset 1 object_hash rjenkins 
pg_num 64 pgp_num 64 last_change 1 owner 0
pool 2 'rbd' rep size 2 min_size 1 crush_ruleset 2 object_hash rjenkins pg_num 
64 pgp_num 64 last_change 1 owner 0
pool 4 'kvm-pool' rep size 2 min_size 2 crush_ruleset 0 object_hash rjenkins 
pg_num 1000 pgp_num 1000 last_change 33 owner 0
root@ping[/1]:~ # rbd create atom03.cimg --size 4000 --pool kvm-pool
root@ping[/1]:~ # rbd create atom04.cimg --size 4000 --pool kvm-pool
root@ping[/1]:~ # rbd ls kvm-pool
atom03.cimg
atom04.cimg
root@ping[/1]:~ # rbd --image atom03.cimg --pool kvm-pool info
rbd image 'atom03.cimg':
    size 4000 MB in 1000 objects
    order 22 (4096 KB objects)
    block_name_prefix: rb.0.114d.2ae8944a
    format: 1
root@ping[/1]:~ # rbd --image atom04.cimg --pool kvm-pool info
rbd image 'atom04.cimg':
    size 4000 MB in 1000 objects
    order 22 (4096 KB objects)
    block_name_prefix: rb.0.127d.74b0dc51
    format: 1
root@ping[/1]:~ # rbd map atom03.cimg --pool kvm-pool --id admin
rbd: '/sbin/udevadm settle' failed! (256)
root@ping[/1]:~ # rbd map --pool kvm-pool --image atom03.cimg --id admin 
--keyring /etc/ceph/ceph.client.admin.keyring 
^Crbd: '/sbin/udevadm settle' failed! (2)
root@ping[/1]:~ # rbd map kvm-pool/atom03.cimg --id admin --keyring 
/etc/ceph/ceph.client.admin.keyring 
rbd: '/sbin/udevadm settle' failed! (256)
-

Am I missing something? 
But I think this set of commands worked perfectly with cuttlefish?

TIA

Bernhard

-- 

Bernhard Glomm
IT Administration

Phone:   +49 (30) 86880 134
Fax:     +49 (30) 86880 100
Skype:   bernhard.glomm.ecologic

Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | Germany
GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: DE811963464
Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

